GMFBridge DirectShow filter SetLiveTiming effect

I am using the excellent GMFBridge directshow family of filters to great effect, allowing me to close a video recording graph and open a new one, with no data-loss.
My original source graph was capturing live video from standard video and audio inputs.
There is an undocumented method on the GMFBridgeController filter named SetLiveTiming(). From the name, I figured it should be set to true when capturing from a live graph (not from a file), as in my case. I set this value to true and everything worked as expected.
The same capture hardware allows me to capture live TV signals (ATSC in my case), so I created a new version of the graph using the BDA architecture filters, for tuning purposes. Once the data flows out from the MPEG demuxer, the rest of the graph is virtually the same as my original graph.
However, on this occasion my muxing graph (on the other side of the bridge) was not working. Data flowed from the BridgeSource filter (video and audio) and reached an MP4 muxer filter, but no data flowed from the muxer output feeding the FileWriter filter.
After several hours I traced the problem to the SetLiveTiming() setting. I turned it off, everything began working as expected, and the muxer filter began producing an output file; however, the audio was not synchronized to the video.
Can someone enlighten me on the real purpose of the SetLiveTiming() setting and perhaps, why one graph works with the setting enabled, while the other fails?
UPDATE
I managed to compile the GMFBridge Project, and it seems that the filter is dropping every received sample because of a negative timestamp computation. However I am completely baffled at the results I am seeing after enabling the filter log.
UPDATE 2: The dropped samples were introduced by the way I launched the secondary (muxer) graph. I inspected a sample using a SampleGrabber (thus inside a streaming thread) as a trigger point and used a .NET Task.Run() call to instantiate the muxer graph. This somehow messed up the clocks and I ended up with a 'reference start point' in the future; when the bridge attempted to fix the timestamp by subtracting the reference start point, it produced a negative timestamp. Once I corrected this and spawned the graph from the application thread (by posting a graph event), the problem was fixed.
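For illustration, here is a minimal C++ sketch of that pattern: the streaming-thread callback only posts a message, and the graph is built on the application thread inside the window procedure. The window handle, message ID and BuildAndRunMuxerGraph helper are illustrative names, not taken from the original code.

// Sketch: defer muxer-graph construction to the application thread instead of
// building it inside a streaming callback. Names here are illustrative.
#include <windows.h>

static const UINT WM_APP_BUILD_MUXER_GRAPH = WM_APP + 1;

HWND g_hwndApp;                   // application window that runs the message loop
void BuildAndRunMuxerGraph();     // hypothetical helper that creates and runs the muxer graph

// Called from the SampleGrabber callback (streaming thread): do not build the
// graph here, just ask the application thread to do it.
void OnTriggerSample()
{
    PostMessage(g_hwndApp, WM_APP_BUILD_MUXER_GRAPH, 0, 0);
}

// Application window procedure (application thread).
LRESULT CALLBACK AppWndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    if (msg == WM_APP_BUILD_MUXER_GRAPH)
    {
        BuildAndRunMuxerGraph();
        return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}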
Unfortunately, my multiplexed video (regardless of the SetLiveTiming() setting) is still out of sync.
I read that the GMFBridge filter can have trouble when the InfTee filter is being used; however, I think my graph shouldn't have this problem, as no instance of the InfTee filter is directly connected to the bridge sink.
Here is my current source graph:
[NetworkProvider]-->[DigitalTuner]-->[DigitalCapture]-->[demux]
    [demux]-->[TIF]
    [demux]-->[Mpeg Tables]
    [demux]-->[lavAudioDec]-->[tee]
        [tee]-->[audioConvert]-->[sampleGrabber]-->[NULL]
        [tee]-->[aacEncoder]-->[*Bridge Sink*]
    [demux]-->[VideoDecoder]-->[sampleGrabber]-->[x264Enc]-->[*Bridge Sink*]
Here is my muxer graph:
...[bridge source]--(video)-->[MP4 muxer]--->[fileWriter]
...[bridge source]--(audio)-->[MP4 muxer]
All the sample grabbers in the graph are read-only. If I mux the output file without bridging (by placing the muxer in the capture graph), the output file remains in sync (this turned out not to be true; the out-of-sync problem was introduced by a latency setting in the H264 encoder), but then I can't avoid losing a few seconds between releasing the current capture graph and running the new one (with the updated file name).
UPDATE 3:
The out-of-sync problem was inadvertently introduced by me several days ago, when I switched off a "Zero-latency" setting in the x264vfw encoder. I hadn't noticed that this setting had desynchronized my already-working graphs too, and I was blaming the bridge filter.
In summary, I screwed things up by:
- Launching the muxer graph from a thread other than the application thread (the thread processing the graph's event loop).
- A latency switch in an upstream filter that was probably delaying things too much for the muxer to be able to keep up.

Author's comment:
// using this option, you can share a common clock
// and avoid any time mapping (essential if audio is in mux graph)
[id(13), helpstring("Live Timing option")]
HRESULT SetLiveTiming([in] BOOL bIsLiveTiming);
The method enables a special mode of operation that addresses live data. In this mode, sample times are converted between the graphs relative to their respective clock start times. Otherwise, the default mode expects time stamps to be reset to zero with graph changes.
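As a point of reference, here is a minimal sketch of calling the method, assuming the header generated from the GMFBridge project's IDL (IGMFBridgeController / GMFBridgeController); stream setup and the actual bridging calls are omitted, and the names are taken on the assumption that they match the generated GMFBridge.h.

// Minimal sketch, assuming the MIDL-generated GMFBridge.h from the project's IDL.
#include <dshow.h>
#include "GMFBridge.h"

HRESULT CreateLiveBridgeController(IGMFBridgeController **ppController)
{
    IGMFBridgeController *pController = NULL;
    HRESULT hr = CoCreateInstance(__uuidof(GMFBridgeController), NULL,
                                  CLSCTX_INPROC_SERVER,
                                  __uuidof(IGMFBridgeController),
                                  (void **)&pController);
    if (FAILED(hr))
        return hr;

    // For a live capture source, share a common clock between the graphs and
    // avoid time mapping (per the IDL comment quoted above).
    hr = pController->SetLiveTiming(TRUE);

    // Stream setup (AddStream etc.) and graph bridging are omitted here.
    *ppController = pController;
    return hr;
}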

Related

Is it OK for a DirectShow filter to seek the filters upstream from itself?

Normally, seek commands executed on a filter graph get called on the renderers in the graph, and the calls are passed upstream by filters until a filter that can handle the seek does the actual seek operation.
Could an individual filter seek the upstream filters connected to one or more of its input pins in the same way, without affecting the downstream portion of the graph in unexpected ways? I wouldn't expect there to be any graph state changes caused by calling IMediaSeeking::SetPositions upstream.
I'm assuming that all upstream filters are connected to the rest of the graph via this filter only.
Obviously the filter would need to be prepared to handle the resulting BeginFlush, EndFlush and NewSegment calls coming from upstream appropriately and distinguish samples that arrived before and after the seek operation. It would also need to set new sample times on its output samples so that the output samples had consistent sample presentation times. Any other issues?
It is perfectly feasible to do what you require. I used this approach to build video and audio mixer filters for a video editor. A full description of the code is available in BBC White Papers 129 and 138 at http://www.bbc.co.uk/rd
A rather ancient version of the code can be found on www.SourceForge.net if you search for AAFEditPack. The code is written in Delphi using DSPack to get access to the DirectShow headers. I did this because it makes it easier to handle COM object lifetimes, by implementing smart pointers by default. It should be fairly straightforward to transfer the ideas to a C++ implementation if that is what you use.
The filters keep lists of the sub-graphs (sections of a graph running in the same FilterGraph as the mixers). The filters implement a custom version of TBCPosPassThru which knows about the output pins of the sub-graph for each media clip. It handles passing on the seek commands to get each clip ready for replay when its point in the timeline is reached. The mixers handle the BeginFlush, EndFlush, NewSegment and EndOfStream calls for each sub-graph so they are kept happy. The editor uses only one FilterGraph that houses both video and audio graphs. Seeking commands are made by the graph on both the video and audio renderers, and these commands are passed upstream to the mixers which implement them.
Sub-graphs that are not currently active are blocked by the mixer holding references to the samples they have delivered. This does not cause any problems for the FilterGraph because, as Roman R says, downstream filters only care about getting a consecutive stream of samples and do not know about what happens upstream.
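A minimal sketch of the basic mechanism under discussion: a filter asking the upstream branch connected to one of its input pins to seek, via the IMediaSeeking implementation exposed by that branch's output pin. The helper name is illustrative.

// Sketch: seek the upstream branch connected to one of our input pins.
// pUpstreamOutputPin is the IPin our input pin is connected to.
#include <dshow.h>

HRESULT SeekUpstreamBranch(IPin *pUpstreamOutputPin,
                           REFERENCE_TIME rtStart, REFERENCE_TIME rtStop)
{
    IMediaSeeking *pSeek = NULL;
    HRESULT hr = pUpstreamOutputPin->QueryInterface(IID_IMediaSeeking, (void **)&pSeek);
    if (FAILED(hr))
        return hr;   // upstream branch does not support seeking

    // Expect BeginFlush/EndFlush and a NewSegment(rtStart, rtStop, rate) on our
    // input pin as a consequence of this call.
    hr = pSeek->SetPositions(&rtStart, AM_SEEKING_AbsolutePositioning,
                             &rtStop,  AM_SEEKING_AbsolutePositioning);
    pSeek->Release();
    return hr;
}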
Some key points you need to make sure of to avoid wasted debugging time are:
Your decoder filters need to be able to cue to the exact media frame or audio time. This is not as easy as you might expect, especially with compressed formats such as MPEG-2, which was designed for transmission and has no frame index in the files. If you do not do this, the filter may wait indefinitely to get a NewSegment call with the correct media times.
Your sub-graphs need to present a NewSegment time equal to the value you asked for in your seek command before delivering samples. Some decoders may seek to the nearest key frame, which is a bit unhelpful, and some are a bit arbitrary about the timings of their NewSegment and the following samples.
The start and stop times of each clip need to be within the duration of the file. It's probably not a good idea to police this in the DirectShow filter because you would probably want to construct a timeline without needing to run the filter first. I did this in the component that manages the FilterGraph.
If you want to add sections from the same source file consecutively in the timeline, and have effects that span the transition, you need to have two instances of the sub-graph for that file, and if you have more than one transition for the same source file, your list needs to alternate the graphs for successive clips. This is because each sub-graph should only play monotonically: making lots of SetPositions calls would waste CPU cycles and would not work well with compressed files.
The filter's output pins define the entire seeking behaviour of the graph. The output sample time stamps (IMediaSample::SetTime) are implemented by the filter, so you need to get them correct without any missing time stamps. You can also set the MediaTime (IMediaSample::SetMediaTime) values if you like, although you have to be careful to get them correct or the graph may drop samples or stall.
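A minimal sketch of the re-stamping idea, assuming a filter that keeps a running offset for the output timeline so that downstream filters always see continuous, monotonic time stamps across seeks; the variable names are illustrative.

// Sketch: rewrite each delivered sample's times onto a continuous output timeline.
#include <dshow.h>

REFERENCE_TIME g_rtOutputOffset = 0;  // where the current clip starts on the output timeline
REFERENCE_TIME g_rtSegmentStart = 0;  // start time of the current upstream segment

HRESULT RestampSample(IMediaSample *pSample)
{
    REFERENCE_TIME rtStart = 0, rtStop = 0;
    HRESULT hr = pSample->GetTime(&rtStart, &rtStop);
    if (FAILED(hr))
        return hr;   // samples without time stamps are left alone in this sketch

    // Map the upstream (segment-relative) times onto the continuous output timeline.
    rtStart = (rtStart - g_rtSegmentStart) + g_rtOutputOffset;
    rtStop  = (rtStop  - g_rtSegmentStart) + g_rtOutputOffset;
    return pSample->SetTime(&rtStart, &rtStop);
}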
Good luck with your development. If you need any more information please contact me through StackOverflow or DTSMedia.co.uk

The operation could not be performed because the filter is in the wrong state GetCurrentBuffer

The operation could not be performed because the filter is in the wrong state
I am getting this error when attempting to run hr = m_pGrabber->GetCurrentBuffer(&cbBuffer, NULL);.
The strange part is that it initially worked when I stopped the graph; now it fails whether the graph is running or stopped.
So, what state should it be in?
The sample grabber code in MSDN that I copied does not say whether the graph should be stopped or running to get the buffer size, but the way it is presented, the graph is running. I assume the graph should be running to fill the buffer, but I am not getting past sizing the buffer.
The graph is OK; all filters are connected and render as required, in my app and in GraphEdit.
I am trying to save the captured still frame into a bitmap file, so I need the captured data in the buffer.
Buffering and GetCurrentBuffer expose a copy of the last known media sample. Hence, you might hit the conditions "no media sample available yet to copy from" and "last known media sample was released due to transition to stopped state". In both cases the request in question might fail. Copy data from SampleCB instead of using buffered mode and it is going to be one hundred percent reliable.
See also: ISampleGrabber::GetCurrentBuffer() returning VFW_E_WRONG_STATE
Using GetCurrentBuffer is a bad idea in most cases. The proper way to use the sample grabber is to set your callback and receive data in SampleCB.
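A minimal sketch of the callback approach with the ISampleGrabberCB interface from qedit.h; the actual data copy is left as a comment and the class is meant to be statically allocated, hence the trivial reference counting.

// Sketch: receive data in SampleCB instead of polling GetCurrentBuffer.
#include <dshow.h>
#include <qedit.h>

class SampleGrabberCallback : public ISampleGrabberCB
{
public:
    // IUnknown (minimal, for a statically allocated instance).
    STDMETHODIMP_(ULONG) AddRef()  { return 2; }
    STDMETHODIMP_(ULONG) Release() { return 1; }
    STDMETHODIMP QueryInterface(REFIID riid, void **ppv)
    {
        if (riid == IID_IUnknown || riid == IID_ISampleGrabberCB) { *ppv = this; return S_OK; }
        *ppv = NULL;
        return E_NOINTERFACE;
    }

    // Called on the streaming thread for every sample when SetCallback(pCB, 0) is used.
    STDMETHODIMP SampleCB(double SampleTime, IMediaSample *pSample)
    {
        BYTE *pData = NULL;
        if (SUCCEEDED(pSample->GetPointer(&pData)))
        {
            long cb = pSample->GetActualDataLength();
            // Copy pData (cb bytes) to your own buffer here; do not block for long.
        }
        return S_OK;
    }

    // Not used in this mode.
    STDMETHODIMP BufferCB(double SampleTime, BYTE *pBuffer, long BufferLen) { return E_NOTIMPL; }
};

// Usage: pGrabber->SetCallback(&g_callback, 0);  // 0 selects SampleCB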

Why does GetDeliveryBuffer block with an INTERLEAVE_CAPTURE mode AVI Mux?

I'm trying to use a customized filter to receive video and audio data from an RTSP stream and deliver samples downstream in the graph.
It seems that this filter was modified from the SDK source.cpp sample (CSource), and it implements two output pins, one for audio and one for video.
When the filter is directly connected to an avi mux filter with INTERLEAVE_NONE mode, it works fine.
However, when the interleave mode of the AVI mux is set to INTERLEAVE_CAPTURE, the video output pin of this filter hangs on the GetDeliveryBuffer method (in DoBufferProcessingLoop) after several samples have been sent, while the audio output pin still works well.
Moreover, when I inserted an infinite pin tee filter into one of the paths between the AVI mux and this source filter, the graph arbitrarily went into the stopped state after some samples had been sent (one to three samples or so). And when I put a filter that is just an empty transform-in-place filter which does nothing after the infinite tee, the graph went back to the first case: it never goes to the stopped state, but hangs on GetDeliveryBuffer.
(An image in the original post shows the connections described above.)
So here are my questions:
1: What could be the reasons that the video output pin hangs on GetDeliveryBuffer? My guess is that the AVI mux holds these sample buffers and does not release them until it has enough of them for interleaving, but even when I set the number of video buffers to 30 in DecideBufferSize it still hangs. If that is indeed the reason, how do I decide the buffer size of the pin for a downstream AVI muxer? Creating more than 50 buffers on a video pin is likely not guaranteed to work, because the required memory cannot be promised. :(
2: Why does the graph go to the stopped state when the infinite pin tee is inserted? And why does a no-operation filter overcome it?
Any answer or suggestion is appreciated, or I hope someone can at least give me some directions. Thanks.
A blocked GetDeliveryBuffer means that the allocator you are requesting a buffer from does not [yet] have anything for you. All media samples are outstanding and have not yet been returned to the allocator.
An obvious workaround is to request more buffers at the pin connection and memory allocator negotiation stage. This, however, just postpones the issue, which can very much similarly appear later for the same reason.
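For illustration, a sketch of requesting a larger buffer count during allocator negotiation in a DirectShow base-classes output pin; CVideoOutputPin and the buffer numbers are illustrative, not taken from the original filter.

// Sketch: ask for more buffers in the output pin's allocator negotiation.
#include <streams.h>   // DirectShow base classes

// CVideoOutputPin is an illustrative CBaseOutputPin-derived class.
HRESULT CVideoOutputPin::DecideBufferSize(IMemAllocator *pAlloc, ALLOCATOR_PROPERTIES *pRequest)
{
    const long cBuffers = 30;               // illustrative buffer count
    const long cbFrame  = 1024 * 1024;      // illustrative worst-case frame size

    if (pRequest->cBuffers < cBuffers)
        pRequest->cBuffers = cBuffers;
    if (pRequest->cbBuffer < cbFrame)
        pRequest->cbBuffer = cbFrame;

    ALLOCATOR_PROPERTIES actual;
    HRESULT hr = pAlloc->SetProperties(pRequest, &actual);
    if (FAILED(hr))
        return hr;

    // The allocator may grant less than requested; fail if it is not enough.
    if (actual.cBuffers < pRequest->cBuffers || actual.cbBuffer < pRequest->cbBuffer)
        return E_FAIL;
    return S_OK;
}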
A typical issue with the topology in question is related to threading. A multiplexer filter which has two inputs will have to match the input streams to produce a joint file. Quite often at runtime it will be holding media samples on one leg while expecting more media samples to come on the other leg, on another thread. It assumes that the upstream branches providing media samples are running independently, so that a lock on one leg is not locking the other. This is why the multiplexer can freely both block IMemInputPin::Receive calls and/or hold media samples inside. In the topology above it is not clear how exactly the source filter is doing its threading. The fact that it has two pins makes me assume it might have threading issues and is not taking into account that there might be a lock downstream on the multiplexer.
Presumably the source filter is yours and you have its source code. You want to make sure the audio pin is sending media samples on a separate thread, for example through an asynchronous queue.
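One way to do this with the DirectShow base classes is COutputQueue, which delivers samples to the downstream input pin from its own worker thread, so a block downstream on one leg cannot stall the other. A minimal sketch; the helper names are illustrative and error handling is omitted.

// Sketch: decouple delivery on the audio output pin with COutputQueue.
#include <streams.h>   // DirectShow base classes

// Created when the pin goes active; delivers samples on its own worker thread.
COutputQueue *g_pAudioQueue = NULL;

HRESULT StartAudioQueue(IPin *pDownstreamAudioInputPin)
{
    HRESULT hr = S_OK;
    g_pAudioQueue = new COutputQueue(pDownstreamAudioInputPin, &hr,
                                     FALSE,   // bAuto: do not decide automatically
                                     TRUE,    // bQueue: always use the worker thread
                                     1,       // lBatchSize
                                     FALSE,   // bBatchExact
                                     10);     // lListSize: queued samples before blocking
    return hr;
}

// Called from the source filter's streaming code instead of pPin->Deliver(pSample).
HRESULT DeliverAudioSample(IMediaSample *pSample)
{
    return g_pAudioQueue->Receive(pSample);
}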

SampleGrabber callback is getting called spuriously even when graph is paused

I'm using the DirectShow SampleGrabber in callback mode to capture video frames from a source file and do some processing. I would also like to maintain the current playback rate of the video, and I need to support random, forward and backward seeking. For this I'm also doing some local buffering in a different thread.
I'm running the graph with the sync source set to NULL, so as to get maximum speed. However, when I pause the graph after a fixed amount of buffering, the SampleGrabber callback still gets called spuriously even though the graph is paused. This is affecting my frame indexing and tracking. I want to resume the graph from exactly the same position at which it was paused. If I run the graph with the default clock it works fine, but then my playback is affected. I want the buffering thread to finish as soon as possible.
How can I make sure that the callback is not called when the graph is paused? Any thoughts or suggestions would be of great help.
Thanks in advance
Pradeep
A paused graph typically has all the same streaming going on internally (active state), with the exception that renderers block streaming, especially once enough data has been received for a preview banner. Since you removed the clock from the graph, your renderer is likely not to block execution because it does not hold any clock to pause against. In your case the problem comes from your intent to reuse the same graph both for quick parsing through the file and for playback. A separate-graph design looks like it has a better chance of doing well.

Generating silent audio track

I'm using a simple DirectShow graph to convert some videos to WMV format, which is working fine. I'm now trying to use a filter based on the Synth Filter sample to supply a silent audio track to the videos and I'm running into some problems.
Essentially, I don't know how to stop the graph when this filter (the synth filter) is connected. I guess because it just provides samples forever until somebody tells it to stop, the usual approach of calling IMediaEvent::WaitForCompletion on the filter graph doesn't work (the graph never stops). What I want, of course, is for it to stop as soon as the video source filter is finished.
I've tried tracking the position of the graph with IMediaSeeking::GetPositions and then manually stopping the graph when this exceeds the duration of the source file, but the accuracy of the stop time with this approach isn't great.
Can anyone think of a better way to do this? Do I need to have another filter that monitors the output from the video source and also has a pointer to the audio source so it can stop it as soon as the video source delivers EndOfStream? Is there no way to accomplish this from purely application-side code?
I've done something not too different myself in the past. I added support for IMediaSeeking to the silence generator filter; you then need to make sure that you set start and stop times for the conversion (even if it's just 0 and the duration), so that the silence generator can generate the right amount of audio and then send EOS.
G
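A minimal sketch of that approach for a CSourceStream-based silence pin, assuming illustrative members m_rtPosition, m_rtStop and m_nBytesPerSec: keep filling zeroed PCM buffers until the requested stop time, then return S_FALSE so the base class delivers EndOfStream and the graph can complete.

// Sketch: FillBuffer for an illustrative CSourceStream-derived silence pin.
#include <streams.h>   // DirectShow base classes

HRESULT CSilencePin::FillBuffer(IMediaSample *pSample)
{
    if (m_rtPosition >= m_rtStop)
        return S_FALSE;               // tells CSourceStream to send EOS and stop

    BYTE *pData = NULL;
    HRESULT hr = pSample->GetPointer(&pData);
    if (FAILED(hr))
        return hr;

    long cb = pSample->GetSize();
    ZeroMemory(pData, cb);            // zeros are silence for 16-bit PCM
    pSample->SetActualDataLength(cb);

    // Time-stamp the buffer and advance our position by its duration.
    REFERENCE_TIME rtStart    = m_rtPosition;
    REFERENCE_TIME rtDuration = (REFERENCE_TIME)cb * 10000000LL / m_nBytesPerSec; // 100-ns units
    REFERENCE_TIME rtStop     = min(rtStart + rtDuration, m_rtStop);
    pSample->SetTime(&rtStart, &rtStop);
    m_rtPosition = rtStop;
    return S_OK;
}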

Resources