I am trying to combine two A2L files generated from different applications/processes in the ASAP editor, but CANape recognizes only one device and one port. Can't we load multiple devices in a single A2L file?
I am using TensorRT 6.0.1.5 and a 2080 Ti GPU, and I want to load an engine file.
Since I have two cameras doing real-time detection, below is what I have tried:
Loading the engine once and using the same deserialized engine for both detections:
it eventually crashes.
Loading the engine separately into two variables:
the first camera runs fine and detects objects normally,
but the second camera detects nothing, although it does not crash.
How can I correctly load one engine file and run inference on the streams separately on one machine?
Or should I maybe create a separate execution context for each?
You need to run detection on two separate video streams, right?
If I were you, I'd simply change the batch size of the network when you serialize it to TensorRT, in this case to two.
Then, while running both streams, you can use a single network with the larger batch size. Something like:
tContext->execute(batch_size, inference_buff.data())
where your inference_buff holds the data of both image streams.
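As a rough sketch of that batched setup (this assumes the TensorRT 6 implicit-batch C++ API; the engine path, logger, and buffer handling below are placeholders, not from the question):

    #include <NvInfer.h>
    #include <cuda_runtime_api.h>
    #include <fstream>
    #include <iostream>
    #include <iterator>
    #include <vector>

    // minimal logger required by the TensorRT runtime
    class Logger : public nvinfer1::ILogger {
        void log(Severity severity, const char *msg) override {
            if (severity <= Severity::kWARNING) std::cerr << msg << std::endl;
        }
    };

    int main() {
        // read the serialized engine from disk (path is a placeholder)
        std::ifstream file("detector.engine", std::ios::binary);
        std::vector<char> blob((std::istreambuf_iterator<char>(file)),
                               std::istreambuf_iterator<char>());

        Logger logger;
        nvinfer1::IRuntime *runtime = nvinfer1::createInferRuntime(logger);
        nvinfer1::ICudaEngine *engine =
            runtime->deserializeCudaEngine(blob.data(), blob.size(), nullptr);

        // one engine, one context; the engine must have been built with
        // maxBatchSize >= 2 for a batch of two to work
        nvinfer1::IExecutionContext *context = engine->createExecutionContext();

        const int batchSize = 2;  // slot 0 = camera A, slot 1 = camera B
        std::vector<void *> bindings(engine->getNbBindings());
        // ... cudaMalloc each binding for batchSize * per-image volume and
        // copy both camera frames back to back into the input binding ...

        context->execute(batchSize, bindings.data());
        // the output binding now holds detections for both frames in batch order
        return 0;
    }

(For the alternative you mention, the same deserialized engine can also hand out one execution context per camera via engine->createExecutionContext().)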
My application needs to record video interviews with the ability to pause and resume, and have the multiple segments captured to a single file.
I'm using DirectShow.NET to capture the camera stream to a preview window AND an AVI file, and it works, except that whenever I start recording a new segment, I overwrite the AVI file instead of appending. The relevant code is:
captureGraphBuilder.SetOutputFileName( ref mediaSubType, Filename, out muxFilter, out fileWriterFilter )
How can I create a capture graph so that the capture is appended to a file instead of overwriting it?
Most media files/formats, and AVI specifically, do not support appending. When you record, you populate the media file AND then finalize it on completion. You typically don't have the option to "unfinalize" it and resume recording.
The overwriting you are seeing is a side effect of the writing filter's implementation. There is no append-vs-overwrite mode you can simply switch on.
Your options are basically the following (in order of least to most development effort):
Record a new media file each time, then run an external tool (like FFmpeg) that can concatenate media and produce a new continuous file out of the segments.
Implement a DirectShow filter inserted into the pipeline (in two instances, one for video and one for audio) that implements the pause/resume behavior. While paused, the filter would discard new media data; on resume, it would start passing data through again, modifying time stamps to mimic a continuous stream. The capture graph stays in the running state through all segments and pauses.
Implement a custom multiplexer and/or writer filter that can read an existing file and append new media, so that the file is once again finalized on completion with the old and new segments, continuous.
Item #3 above is technically possible to implement, but I don't think such an implementation exists at all: workarounds are always easier. #2 is arguably the intended way to address the task, but since you are doing C# development with DirectShow.NET, I anticipate it is going to be difficult to tackle the challenge from this angle. #1 is relatively easy to do, and the cost involved is an external tool to use; a minimal example is sketched below.
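For option #1, FFmpeg's concat demuxer can stitch the segments together without re-encoding, provided all segments share the same codecs and parameters (file names here are hypothetical):

    # segments.txt contains one line per segment:
    #   file 'segment1.avi'
    #   file 'segment2.avi'
    ffmpeg -f concat -safe 0 -i segments.txt -c copy interview.avi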
I am currently attempting to pull specific information from the MIBs of a couple of devices.
These will mainly be Cisco devices. I was wondering whether there are any common OIDs I can query across all devices, or whether they would need to be individually hard-coded in a config file. Or is there perhaps a way I can dynamically search for these OIDs?
Correct me if I am wrong, but to my understanding the MIB set for each device type is different, and there are very few common elements within them, most of which are manufacturer-specific?
I am trying to retrieve things like
CPU usage
HDD free space
uptime
etc...
Yes, all devices implement different sets of MIBs. However, many devices will implement the same standard MIBs, so there may be common variables that you could poll from all of them, particularly if they're from a single vendor.
There are two fairly straightforward ways to find out the MIB set implemented by a device:
SNMP-walk the device and compare the output.
Ask the vendors/documentation about which MIBs are implemented.
Some devices also store the MIB files on their file system, making it possible to fetch the canonical list from there, but that's not applicable in 100% of the cases.
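For the specific items you list, the standard MIB-II and HOST-RESOURCES-MIB cover a lot of ground where they are implemented: sysUpTime.0 is 1.3.6.1.2.1.1.3.0, per-processor load is hrProcessorLoad (1.3.6.1.2.1.25.3.3.1.2), and disk usage comes from hrStorageTable (1.3.6.1.2.1.25.2.3). Note that Cisco network gear often reports CPU and memory through its own CISCO-PROCESS-MIB and CISCO-MEMORY-POOL-MIB instead. As a minimal sketch, here is how you could poll sysUpTime with the Net-SNMP C library (host and community string are placeholders):

    #include <net-snmp/net-snmp-config.h>
    #include <net-snmp/net-snmp-includes.h>
    #include <string.h>

    int main(void)
    {
        netsnmp_session session, *ss;
        netsnmp_pdu *pdu, *response = NULL;
        oid uptime_oid[MAX_OID_LEN];
        size_t oid_len = MAX_OID_LEN;

        init_snmp("mib-poller");
        snmp_sess_init(&session);
        session.peername = strdup("192.0.2.1");      /* placeholder host */
        session.version = SNMP_VERSION_2c;
        session.community = (u_char *)"public";      /* placeholder community */
        session.community_len = strlen("public");

        ss = snmp_open(&session);
        if (!ss) return 1;

        /* GET sysUpTime.0 */
        pdu = snmp_pdu_create(SNMP_MSG_GET);
        read_objid("1.3.6.1.2.1.1.3.0", uptime_oid, &oid_len);
        snmp_add_null_var(pdu, uptime_oid, oid_len);

        if (snmp_synch_response(ss, pdu, &response) == STAT_SUCCESS
                && response->errstat == SNMP_ERR_NOERROR) {
            print_variable(response->variables->name,
                           response->variables->name_length,
                           response->variables);
        }
        if (response) snmp_free_pdu(response);
        snmp_close(ss);
        return 0;
    }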
"File Source (Async)" filter supports only one file per it's life.
Is the a way to play two files in a sequence without rebuilding a graph?
File Source (Async) only supplies random access byte stream to the filter graph, there are other components vital for playback: demultiplexers, decoders. No, it is not possible to enqueue another file through File Source (Async) filter.
Playing multiple files seamlessly otherwise is possible but requires to split graph into parts and connect them together in terms of sending data from one graph (reading from file, the one you rebuild with file change) to the other (with renderers, the one being never rebuilt and providing seamless playback user experience).
Read up other questions on bridging graphs:
GMFBridge usage in DirectShow
When changing the file name, recording start is delayed by 3 seconds.
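As a rough illustration of the bridging pattern with GMFBridge (this follows the GMFBridge sample application; the header name, method signatures, and file names below are assumptions to verify against the actual GMFBridge distribution):

    #include <atlbase.h>
    #include <dshow.h>
    #include "GMFBridge.h"   // header generated from the GMFBridge IDL (assumed name)

    void PlayTwoFilesSeamlessly()
    {
        // bridge controller with a single uncompressed video stream
        CComPtr<IGMFBridgeController> pController;
        pController.CoCreateInstance(__uuidof(GMFBridgeController));
        pController->AddStream(TRUE, eUncompressed, FALSE);

        // the render graph is built once and never rebuilt
        CComPtr<IGraphBuilder> pRenderGraph;
        pRenderGraph.CoCreateInstance(CLSID_FilterGraph);

        // source graph for the first file, terminated by the bridge sink
        CComPtr<IGraphBuilder> pSourceGraph;
        pSourceGraph.CoCreateInstance(CLSID_FilterGraph);

        CComPtr<IUnknown> pSink, pSource;
        pController->CreateSourceGraph(CComBSTR(L"first.avi"), pSourceGraph, &pSink);
        pController->CreateRenderGraph(pSink, pRenderGraph, &pSource);

        // run both graphs, then connect them through the bridge
        pController->BridgeGraphs(pSink, pSource);

        // when first.avi ends: stop and discard the source graph, build a new
        // one for the next file, and call BridgeGraphs again; the render graph
        // keeps running, so playback appears seamless to the user
    }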
I am using WMP in my Windows application. I want to change the playback rate.
It is possible for some types of files, e.g. AVI, but not for others, e.g. WMV, MPEG, etc. Is there any other way to change the rate? Please, it's urgent. Thanks in advance.
It's possible, but your choice of Windows Media Player will limit your options. Windows Media Player uses a very simple filter graph to control playback, which makes it impossible to change the rate for formats that require more complex filters. The general way to change the rate is to either repeat or drop frames in the video.
I am not sure about WMV, but if memory serves me right, WMV is just a container format like AVI, so the filter graph that is used varies from file to file.
MPEG has three kinds of frames, and only the I-frame is complete; the P- and B-frames are not, so you can't easily repeat or drop frames.
I don't know how to help you further with this, but you will have better options if you use DirectShow, so that you can change the filter graph to duplicate/drop frames.
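If you do switch to DirectShow, the rate change itself is exposed through the graph's IMediaSeeking interface. A minimal C++ sketch (file path is a placeholder; error handling omitted; whether SetRate succeeds still depends on the filters in the graph, which is exactly the limitation described above):

    #include <dshow.h>
    #pragma comment(lib, "strmiids.lib")

    int main()
    {
        CoInitialize(NULL);

        // build a default playback graph for the file
        IGraphBuilder *pGraph = NULL;
        CoCreateInstance(CLSID_FilterGraph, NULL, CLSCTX_INPROC_SERVER,
                         IID_IGraphBuilder, (void **)&pGraph);
        pGraph->RenderFile(L"C:\\clips\\sample.avi", NULL);

        IMediaControl *pControl = NULL;
        IMediaSeeking *pSeeking = NULL;
        pGraph->QueryInterface(IID_IMediaControl, (void **)&pControl);
        pGraph->QueryInterface(IID_IMediaSeeking, (void **)&pSeeking);

        pSeeking->SetRate(2.0);   // request double-speed playback
        pControl->Run();

        // ... wait for playback, then clean up ...
        pSeeking->Release();
        pControl->Release();
        pGraph->Release();
        CoUninitialize();
        return 0;
    }

(Windows Media Player itself exposes a rate property on its settings object, but it is subject to the same per-format limitation the question describes.)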