I was using a callback mechanism to grab webcam frames in my media application. It worked, but was slow because of additional buffer-processing functions performed inside the callback itself.
Now I am trying the other way to get frames: call a method and grab the frame directly (instead of using a callback). I used a sample on CodeProject which makes use of IVMRWindowlessControl9::GetCurrentImage.
I encountered the following issues.
With a Microsoft webcam, the preview didn't render (only a black screen) on Windows 7, but the same camera rendered the preview on XP.
My question here is: do the VMR-specific functionalities depend on the camera drivers on different platforms? If not, how could this difference happen?
Wherever the sample application worked, I observed that the biBitCount member of the resulting BITMAPINFOHEADER structure is 32.
Is this value set by the application, or is it a driver setting for VMR operations? How is it configured?
Finally, which is the better method to grab webcam frames: a callback approach or a direct approach?
Thanks in advance,
IVMRWindowlessControl9::GetCurrentImage is intended for occasional snapshots, not for regular image grabbing.
Quote from MSDN:
This method can be called at any time, no matter what state the filter
is in, whether running, stopped or paused. However, frequent calls to
this method will degrade video playback performance.
This method reads back from video memory, which is slow in the first place. It also performs a conversion (slow again) to the RGB color space, because that format is most suitable for non-streaming apps and causes fewer compatibility issues.
All in all, you can use it for occasional image grabbing, however this is not what you are supposed to do. To capture at streaming rate you need to use a filter in the pipeline, or the Sample Grabber with a callback, as sketched below.
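For reference, a minimal sketch of the Sample Grabber callback route, assuming the graph, camera filter and Sample Grabber filter are created elsewhere; the class and variable names here are illustrative, not from the original sample:

#include <dshow.h>
#include <qedit.h>   // ISampleGrabber / ISampleGrabberCB (deprecated, but still widely used)

// Receives every buffer that streams through the Sample Grabber.
class FrameCallback : public ISampleGrabberCB
{
public:
    STDMETHODIMP BufferCB(double sampleTime, BYTE* pBuffer, long bufferLen)
    {
        // Copy or process pBuffer quickly; heavy work here stalls the graph.
        return S_OK;
    }
    STDMETHODIMP SampleCB(double, IMediaSample*) { return E_NOTIMPL; }
    STDMETHODIMP QueryInterface(REFIID riid, void** ppv)
    {
        if (riid == IID_IUnknown || riid == IID_ISampleGrabberCB) { *ppv = this; return S_OK; }
        return E_NOINTERFACE;
    }
    STDMETHODIMP_(ULONG) AddRef()  { return 2; }   // object has static lifetime in this sketch
    STDMETHODIMP_(ULONG) Release() { return 1; }
};

FrameCallback g_callback;

// After adding the Sample Grabber filter to the graph and before running it:
//   pSampleGrabber->SetMediaType(&mt);            // e.g. request MEDIASUBTYPE_RGB24
//   pSampleGrabber->SetBufferSamples(FALSE);      // no internal copy needed
//   pSampleGrabber->SetCallback(&g_callback, 1);  // 1 = use BufferCB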
In Vulkan,
A semaphore(A) and a fence(X) can be passed to vkAcquireNextImageKHR. That semaphore(A) is subsequently passed to vkQueueSubmit, to wait until the image is released by the Presentation Engine (PE). A fence(Y) can also be passed to vkQueueSubmit. Client code can check when the submission has completed by checking fence(Y).
When fence(Y) signals, this means the PE can display the image.
My question:
How do I know when the PE has finished using the image after a call to vkQueuePresentKHR? To me, it doesn't seem that it would be by checking fence(X), because that is for client code to know when the image can be written to by vkQueueSubmit, isn't it? After the image is sent to vkQueueSubmit, it seems the usefulness of fence(X) is done. Or, can the same fence(X) be used to query the image availability after the call to vkQueuePresentKHR?
I don't know when the image is available again after a call to vkQueuePresentKHR, without having to call vkAcquireNextImageKHR.
The reason this is causing trouble for me is that in an asynchronous, 60 fps, triple-buffered app (throwaway learning code), things get out of whack like this:
1. Send an initial framebuffer to the PE. This framebuffer is now unavailable for 16 milliseconds.
2. Within the 16 ms, acquire a second image/framebuffer, submit commands, but don't present.
3. Do the same as #2 for a third image. We submit it before the 16 ms are up.
4. The 16 ms have gone by, so we vkQueuePresentKHR the second image.
Now, if I call vkAcquireNextImageKHR, the whole thing can fail if image #1 is not yet done being used, because I have acquired three images at this point.
How to know if image #1 is available again without calling vkAcquireNextImageKHR?
How do I know when the PE has finished using the image after a call to vkQueuePresentKHR?
You usually do not need to know.
Either you need to acquire a new VkImage, or you don't. Whether PE has finished or not does not even enter that decision.
The only reason to want to know is if you want to measure presentation times. There's a special extension for that: VK_GOOGLE_display_timing.
After the image is sent to vkQueueSubmit, it seems the usefulness of fence(X) is done.
Well, you can reuse the fence. But the implementation stops using it as soon as it is signaled and will not change its state again, if that's what you are asking (so you are free to vkDestroy it or do other things with it).
I don't know when the image is available again after a call to vkQueuePresentKHR, without having to call vkAcquireNextImageKHR.
Hopefully I cover it below, but I am not precisely sure what the problem here is. I don't know how to eat soup without a spoon either. Simply use a spoon, I mean vkAcquireNextImageKHR.
Now, if I call vkAcquireNextImageKHR, the whole thing can fail if image #1 is not yet done being used, because I have acquired 3 images at this point.
How to know if image #1 is available again without calling vkAcquireNextImageKHR?
How is it any different than image #1 and #2?
Yes, you may have already acquired all the images the swapchain has to offer, or the PE is "not ready" to give away an image even if it has two.
In the first case the spec advises against calling vkAcquireNextImageKHR with timeout of UINT64_MAX. It is a simple matter of counting the successful vkAcquireNextImageKHR calls vs the vkQueuePresentKHRs. One way may be to simply do one vkAcquireNextImageKHR and then do one vkQueuePresentKHR.
In the second case you can simply call vkAcquireNextImageKHR and you will eventually get the image.
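A rough sketch of the counting idea, assuming the application tracks acquiredCount, presentedCount and swapchainImageCount itself (these names are illustrative):

if (acquiredCount - presentedCount < swapchainImageCount)
{
    uint32_t imageIndex = 0;
    // Zero timeout: never block; the call simply reports whether an image is available.
    VkResult res = vkAcquireNextImageKHR(device, swapchain, 0,
                                         imageAvailableSemaphore, VK_NULL_HANDLE, &imageIndex);
    if (res == VK_SUCCESS || res == VK_SUBOPTIMAL_KHR)
    {
        ++acquiredCount;
        // record commands, submit and later present imageIndex ...
    }
    else if (res == VK_NOT_READY)
    {
        // The presentation engine has nothing to hand out yet; try again next iteration.
    }
}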
In order to use a swapchain image, You need to acquire it. After that, the actual availability of the image for rendering purposes is signaled by the semaphore (A) or the fence (X). You can either use the semaphore (A) during the submission as a wait semaphore, or wait on the CPU for the fence (X) and submit after that. For performance reasons the semaphore is the preferred way.
Now when You present an image, You give it back to the Presentation Engine. From now on You cannot use that image for any purpose. There is no way to check when that image is available again for You so You can render into it; You simply cannot do that. If You want to render into a swapchain image again, You need to acquire another image, and during this operation You once again provide a semaphore or a fence (probably different from those provided when You previously acquired a swapchain image). There is no other way to check when an image is again available than through calling the vkAcquireNextImageKHR() function.
And when You want to implement triple buffering, You should just select the appropriate presentation mode (mailbox mode is the closest match). You shouldn't wait for a specific time before You present an image; You should just present it when You are done rendering into it. Your synchronization should be based entirely on the acquire and present commands and the semaphores or fences provided during these operations and during submission. The appropriate present mode should do the rest. A detailed explanation of the different present modes is available in Intel's tutorial.
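As a concrete illustration of the acquire / submit / present pattern described above, here is a minimal per-frame sketch with one frame in flight; the device, swapchain, queue, command buffers, the two semaphores and the fence are assumed to have been created elsewhere (the fence in the signaled state):

uint32_t imageIndex = 0;

// Wait until the previous submission that used this frame's resources has finished.
vkWaitForFences(device, 1, &inFlightFence, VK_TRUE, UINT64_MAX);
vkResetFences(device, 1, &inFlightFence);

// The only way to get a swapchain image back is to acquire it again.
vkAcquireNextImageKHR(device, swapchain, UINT64_MAX,
                      imageAvailableSemaphore, VK_NULL_HANDLE, &imageIndex);

VkPipelineStageFlags waitStage = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
VkSubmitInfo submit{};
submit.sType                = VK_STRUCTURE_TYPE_SUBMIT_INFO;
submit.waitSemaphoreCount   = 1;
submit.pWaitSemaphores      = &imageAvailableSemaphore;  // wait until the PE releases the image
submit.pWaitDstStageMask    = &waitStage;
submit.commandBufferCount   = 1;
submit.pCommandBuffers      = &commandBuffers[imageIndex];
submit.signalSemaphoreCount = 1;
submit.pSignalSemaphores    = &renderFinishedSemaphore;  // signaled when rendering is done
vkQueueSubmit(queue, 1, &submit, inFlightFence);

VkPresentInfoKHR present{};
present.sType              = VK_STRUCTURE_TYPE_PRESENT_INFO_KHR;
present.waitSemaphoreCount = 1;
present.pWaitSemaphores    = &renderFinishedSemaphore;   // PE waits for rendering to finish
present.swapchainCount     = 1;
present.pSwapchains        = &swapchain;
present.pImageIndices      = &imageIndex;
vkQueuePresentKHR(queue, &present);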
Normally, seek commands executed on a filter graph are called on the renderers in the graph and are passed upstream from filter to filter until a filter that can handle the seek performs the actual seek operation.
Could an individual filter seek the upstream filters connected to one or more of its input pins in the same way, without affecting the downstream portion of the graph in unexpected ways? I wouldn't expect any graph state changes to be caused by calling IMediaSeeking.SetPositions upstream.
I'm assuming that all upstream filters are connected to the rest of the graph via this filter only.
Obviously the filter would need to be prepared to handle the resulting BeginFlush, EndFlush and NewSegment calls coming from upstream appropriately, and to distinguish samples that arrived before and after the seek operation. It would also need to set new times on its output samples so that they have consistent presentation times. Any other issues?
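To make the question concrete, a hedged sketch of the kind of upstream-only seek being described; pUpstreamOutPin is assumed to be the output pin connected to one of the filter's input pins, and clipStart/clipStop are application-defined times in 100 ns units:

IMediaSeeking* pSeek = NULL;
HRESULT hr = pUpstreamOutPin->QueryInterface(IID_IMediaSeeking, (void**)&pSeek);
if (SUCCEEDED(hr))
{
    REFERENCE_TIME rtStart = clipStart;
    REFERENCE_TIME rtStop  = clipStop;
    // Seek only this upstream branch; expect BeginFlush/EndFlush and a NewSegment
    // with the new times to arrive on this branch afterwards.
    hr = pSeek->SetPositions(&rtStart, AM_SEEKING_AbsolutePositioning,
                             &rtStop,  AM_SEEKING_AbsolutePositioning);
    pSeek->Release();
}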
It is perfectly feasible to do what you require. I used this approach to build video and audio mixer filters for a video editor. A full description of the code is given in BBC White Papers 129 and 138, available from http://www.bbc.co.uk/rd
A rather ancient version of the code can be found on www.SourceForge.net if you search for AAFEditPack. The code is written in Delphi using DSPack to get access to the DirectShow headers. I did this because it makes it easier to handle COM object lifetimes, since it implements smart pointers by default. It should be fairly straightforward to transfer the ideas to a C++ implementation if that is what you use.
The filters keep lists of the sub-graphs (sections of the graph that run in the same FilterGraph as the mixers). The filters implement a custom version of TBCPosPassThru which knows about the output pins of the sub-graph for each media clip. It handles passing on the seek commands to get each clip ready for replay when its point in the timeline is reached. The mixers handle the BeginFlush, EndFlush, NewSegment and EndOfStream calls for each sub-graph so they are kept happy. The editor uses only one FilterGraph, which houses both the video and audio graphs. Seeking commands are made by the graph on both the video and audio renderers, and these commands are passed upstream to the mixers, which implement them.
Sub-graphs that are not currently active are blocked by the mixer holding references to the samples they have delivered. This does not cause any problems for the FilterGraph because, as Roman R says, downstream filters only care about getting a consecutive stream of samples and do not know about what happens upstream.
Some key points you need to make sure of to avoid wasted debugging time are:
Your decoder filters need to be able to cue to the exact media frame or audio time. This is not as easy to do as you might expect, especially with compressed formats such as MPEG-2, which was designed for transmission and has no frame index in the files. If you do not do this, the filter may wait indefinitely to get a NewSegment call with the correct media times.
Your sub graphs need to present a NewSegment time equal to the value you asked for in your seek command before delivering samples. Some decoders may seek to the nearest key frame, which is a bit unhelpful and some are a bit arbitrary about the timings of their NewSegment and the following samples.
The start and stop times of each clip need to be within the duration of the file. It's probably not a good idea to police this in the DirectShow filter, because you would probably want to construct a timeline without needing to run the filter first. I did this in the component that manages the FilterGraph.
If you want to add sections from the same source file consecutively in the timeline, and have effects that span the transition, you need two instances of the sub-graph for that file; and if you have more than one transition for the same source file, your list needs to alternate the graphs for successive clips. This is because each sub-graph should only play monotonically: making lots of SetPositions calls would waste CPU cycles and would not work well with compressed files.
The filter's output pins define the entire seeking behaviour of the graph. The output sample time stamps (IMediaSample.SetTime) are set by the filter, so you need to get them correct and without any missing time stamps. You can also set the MediaTime (IMediaSample.SetMediaTime) values if you like, although you have to be careful to get them correct or the graph may drop samples or stall.
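As a rough illustration of the time stamping described above (not the actual BBC code), a mixer-style filter might rebase each incoming sample onto its output timeline like this; CMixerFilter and m_rtStreamOffset are assumed names:

HRESULT CMixerFilter::StampOutputSample(IMediaSample* pIn, IMediaSample* pOut)
{
    REFERENCE_TIME rtStart = 0, rtStop = 0;
    HRESULT hr = pIn->GetTime(&rtStart, &rtStop);
    if (FAILED(hr))
        return hr;                       // never emit samples with missing time stamps

    // Shift the upstream (clip-local) times onto the downstream timeline so the
    // output stream stays monotonic across seeks and clip changes.
    rtStart += m_rtStreamOffset;
    rtStop  += m_rtStreamOffset;
    hr = pOut->SetTime(&rtStart, &rtStop);
    if (FAILED(hr))
        return hr;

    // Media times are optional; only set them if you can keep them consistent too.
    LONGLONG llMediaStart = 0, llMediaStop = 0;
    if (SUCCEEDED(pIn->GetMediaTime(&llMediaStart, &llMediaStop)))
        pOut->SetMediaTime(&llMediaStart, &llMediaStop);

    return S_OK;
}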
Good luck with your development. If you need any more information please contact me through StackOverflow or DTSMedia.co.uk
I'm using the DirectShow SampleGrabber in callback mode to capture video frames from a source file and do some processing. I would also like to maintain the current playback rate of the video, and I need to support random, forward and backward seeking. For this I'm also doing some local buffering in a different thread.
I'm running the graph with the sync source set to NULL so as to get maximum speed. However, when I pause the graph after a fixed amount of buffering, the SampleGrabber callback keeps getting called even though the graph is paused. This is affecting my frame indexing and tracking. I want to resume the graph from exactly the position at which it was paused. If I run the graph with the default clock it works fine, but then my playback is affected, and I want the buffering thread to finish as soon as possible.
How can I make sure that the callback is not called while the graph is paused? Any thoughts or suggestions would be of great help.
Thanks in advance
Pradeep
A paused graph typically has all the same streaming going on internally (the filters are in an active state), with the exception that the renderers block the streaming, esp. as soon as enough data has been received for a preview banner. Since you removed the clock from the graph, your renderer is likely not to block execution, because it does not hold any clock to pause against. In your case the problem comes from your intent to reuse the same graph both for quick parsing through the file and for playback. A separate-graph design looks like it has a better chance of doing well.
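For reference, a minimal sketch of the "no clock" configuration described in the question, which is why the renderer has nothing to pause against (pGraph is assumed to be the application's IGraphBuilder pointer):

IMediaFilter* pMediaFilter = NULL;
HRESULT hr = pGraph->QueryInterface(IID_IMediaFilter, (void**)&pMediaFilter);
if (SUCCEEDED(hr))
{
    pMediaFilter->SetSyncSource(NULL);   // no reference clock: data flows as fast as upstream delivers it
    pMediaFilter->Release();
}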
I'm developing a mobile app using Flex and I have run into some problems using BlazeDS. Some users request a (relatively) large amount of data from my server, which returns in about 2 seconds. The data consists of some pretty simple objects (Client, which has a name/phone/email and a few other properties, some of which are other nested objects with more properties). The largest requests consist of no more than about 10,000 of these objects, which is only a few MB in size.

The problem I am running into is that as soon as the server sends its response, the mobile screen locks up while the data is being processed. For 10,000 objects, this can take several minutes and sometimes even crash the device, and at best leaves the user with a frozen screen the entire time. For the average user, it is at least 2-5 seconds of frozen screen. This is not only an issue for devices with limited capabilities; it also happens on my PC (i5 processor, 8GB RAM).

From what I can tell, this downtime is taking place somewhere between when the device receives the response and when I can access the data. Setting a breakpoint on the first line of the following RemoteObject result handler shows the screen locking up BEFORE it reaches the breakpoint:
protected function myResultHandler(event:ResultEvent):void
{
    var result:ArrayCollection = event.result as ArrayCollection;
    // Do other stuff here
}
I know very little about BlazeDS and AMF, so my only guess is that the freezing happens while the objects are being created on the device. Is there any way to speed up this process at all? Should I normally expect to see really poor performance like this? Any help would be greatly appreciated.
After a couple of hours digging around, I found the solution to my problem: on the server side, the objects I was sending had a ton of extraneous properties unrelated to the information I needed in the mobile app. In addition, there were helper methods on those classes of the form getMyHelper() which would attempt to generate a property on the Flex side. This resulted in a huge list of reference errors being thrown during the download, since no properties with those names existed in the ActionScript objects. I created stripped-down 'lite' versions of the objects with no extra properties or methods, and the massive lists now display nearly instantaneously after receiving the response from the server.
I have a DirectShow webcam application. I use the Sample Grabber to get buffer callbacks and IVideoWindow to control the display coordinates for the preview. I have preview and capture streams, which I render as below.
g_pBuild->RenderStream(&PIN_CATEGORY_CAPTURE, &MEDIATYPE_Video, cam, g_pGrabberF, pNullRenderer2);
g_pBuild->RenderStream(&PIN_CATEGORY_PREVIEW, &MEDIATYPE_Video, cam, NULL, NULL);
On certain on-board cameras, IMediaControl::Run followed by IMediaControl::Stop followed by IMediaControl::Run doesn't switch the camera back on.
External USB cameras work properly here. How can I diagnose this further? Any pointers would be appreciated.
Maybe it's specific to a certain hardware issue in the unit.
Do a quick test by adding a sleep of 1 second between the calls.
If that helps, then you need to find a way to know whether the unit is in an idle state or not.
There are two important parts of the question which you did not provide:
Filter graph topologies
HRESULTs of the method calls
A problem you might be having is that one of the filters in the topology does not handle state transitions well and fails somewhere between states. Presumably your second Run reaches it while it is still trying to complete the Stop. You might get an HRESULT there which indicates the issue (better for you), or the filter may fail silently.
The filter graph itself is an unlikely source of the bug. Chances are high that it does everything flawlessly; however, since it internally distributes the calls between the filters, one of the filters is letting you down.
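A hedged sketch of the kind of checks that can narrow this down: log every HRESULT and wait for each state transition to complete before issuing the next call (pControl is assumed to be the graph's IMediaControl*):

HRESULT hr = pControl->Run();
printf("Run:  0x%08lx\n", hr);                    // S_FALSE means the graph is still transitioning

OAFilterState state = State_Stopped;
hr = pControl->GetState(5000, &state);            // wait up to 5 s for the transition to finish
printf("GetState after Run:  0x%08lx, state=%ld\n", hr, (long)state);

hr = pControl->Stop();
printf("Stop: 0x%08lx\n", hr);
hr = pControl->GetState(5000, &state);
printf("GetState after Stop: 0x%08lx, state=%ld\n", hr, (long)state);

hr = pControl->Run();                             // the second Run that fails on some cameras
printf("Run again: 0x%08lx\n", hr);
hr = pControl->GetState(5000, &state);
printf("GetState after second Run: 0x%08lx, state=%ld\n", hr, (long)state);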