Why do I get these artifacts when using the Microsoft MPEG-TS Demultiplexer and H.264 decoder on an H.264 MPEG-TS stream? - directshow

I have written a live push filter that receives an MPEG-TS stream containing H.264 video and AAC audio. I set up a DirectShow pipeline and configure the output pins. I can render the H.264 stream, but I get artifacts in the rendering, as can be seen in this screenshot, when streaming from GStreamer using a videotestsrc with the "ball" pattern. The screenshot should contain only one white dot on a black background; the two additional dots are "leftovers" that appear while the animation plays.
If I stream MPEG-2 and change the pipeline accordingly, the pattern renders without error. I have tried the pin settings described on MSDN, both with H264 and with AVC1 (explicitly providing the sequence header), and so on. I still get the same kind of artifacts.
One interesting observation is that the artifacts mostly appear at the same frequency as the I-frames arrive, and if we send only I-frames (key-int-max=1), the artifacts disappear completely.
Also, the errors appear in the top half of the image when the I-frame interval is 60, i.e. one I-frame every 2 seconds. When we change to one I-frame every second frame (key-int-max=2), the artifacts appear only in a narrow strip at the top of the image.
The following gstreamer pipeline produced the video stream:
videotestsrc live-source=true pattern=ball ! video/x-raw-yuv,format=(fourcc)I420,width=1366,height=768,framerate=30/1 ! timeoverlay halign=left valign=bottom shaded-background=true ! x264enc bitrate=4096 tune=zerolatency ! h264parse ! queue ! mux. audiotestsrc wave=ticks volume=0.2 ! voaacenc ! mux. mpegtsmux name=mux ! udpsink host=<ip> port=<port>
This is what the pipeline looks like:
The configuration in this example is majortype = MEDIATYPE_Video, subtype = MEDIASUBTYPE_H264, formattype = FORMAT_MPEG2Video, with no sequence header explicitly provided, etc.
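For illustration, here is a minimal C++ sketch (my assumption, not code from the question) of how a media type along those lines can be filled in; the width, height and frame rate come from the GStreamer pipeline above, and MEDIASUBTYPE_H264 is assumed to be available from wmcodecdsp.h or an equivalent GUID definition:

    // Sketch only: byte-stream H.264 over FORMAT_MPEG2Video, as described above.
    #include <dshow.h>
    #include <dvdmedia.h>   // MPEG2VIDEOINFO, FORMAT_MPEG2Video

    MPEG2VIDEOINFO mvi = {};
    mvi.hdr.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    mvi.hdr.bmiHeader.biWidth       = 1366;
    mvi.hdr.bmiHeader.biHeight      = 768;
    mvi.hdr.bmiHeader.biCompression = MAKEFOURCC('H','2','6','4');
    mvi.hdr.AvgTimePerFrame         = 333333;      // 30 fps, in 100-ns units
    // dwProfile, dwLevel, dwFlags and cbSequenceHeader/dwSequenceHeader stay zero:
    // no sequence header is provided, matching the configuration in the question.

    AM_MEDIA_TYPE mt = {};
    mt.majortype            = MEDIATYPE_Video;
    mt.subtype              = MEDIASUBTYPE_H264;   // Annex B byte stream, start codes in-band
    mt.formattype           = FORMAT_MPEG2Video;
    mt.bFixedSizeSamples    = FALSE;
    mt.bTemporalCompression = TRUE;
    mt.cbFormat             = sizeof(mvi);
    mt.pbFormat             = (BYTE*)&mvi;         // real code would CoTaskMemAlloc a copy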
So the question is, are these kinds of artifacts symptoms of some common configuration problem?

You are losing data in your transmission, which can lead to decoding errors. The issue is that I-pictures have a higher bitrate than P-pictures; if you have a constant-bitrate (CBR) network, you can potentially see this issue.
Why the artifacts appear this way:
Now, if your video has only one white ball and the rest of the background is black, then when you lose frames the reference pictures are lost, and the output appears garbled because the decoder may simply reuse data from the last valid frame for display. Since your last valid frame is almost entirely black, the rest of the screen looks fine; only the part where the white ball is still shows up with errors.
Replace this pattern with a different one and you will see what I mean more clearly.
With I-pictures only there are no reference pictures at all, hence you get clean output.
One way to check whether the transmission is OK is to dump your output to a file and read that file on the other side instead of the network stream. If that works fine, you know your transmission is losing data.
Also, since you have the clock overlay at the bottom, you should see the clock jumping if frames are being lost.

It turns out that the MPEG-2 Demultiplexer is not designed to handle H.264 content. That is why these effects appear.

Related

GMFBridge DirectShow filter SetLiveTiming effect

I am using the excellent GMFBridge directshow family of filters to great effect, allowing me to close a video recording graph and open a new one, with no data-loss.
My original source graph was capturing live video from standard video and audio inputs.
There is an undocumented method on the GMFBridgeController filter named SetLiveTiming(). From the name, I figured it should be set to true when capturing from a live graph (not from a file), as in my case. I set this value to true and everything worked as expected.
The same capture hardware allows me to capture live TV signals (ATSC in my case), so I created a new version of the graph using the BDA architecture filters, for tuning purposes. Once the data flows out from the MPEG demuxer, the rest of the graph is virtually the same as my original graph.
However, on this occasion my muxing graph (on the other side of the bridge) was not working. Data flowed from the BridgeSource filter (video and audio) and reached an MP4 muxer filter, but no data was flowing from the muxer output into the FileWriter filter.
After several hours I traced the problem to the SetLiveTiming() setting. I turned it off and the muxer filter began producing an output file; however, the audio was not synchronized to the video.
Can someone enlighten me on the real purpose of the SetLiveTiming() setting and perhaps, why one graph works with the setting enabled, while the other fails?
UPDATE
I managed to compile the GMFBridge project, and it seems that the filter is dropping every received sample because of a negative timestamp computation. However, I am completely baffled by the results I am seeing after enabling the filter log.
UPDATE 2: The dropped samples were caused by the way I launched the secondary (muxer) graph. I used a SampleGrabber (thus inside a streaming thread) as the trigger point and a .NET Task.Run() call to instantiate the muxer graph. This somehow messed up the clocks and I ended up with a 'reference start point' in the future; when the bridge attempted to fix the timestamps by subtracting that reference start point, it produced negative timestamps. Once I corrected this and spawned the graph from the application thread (by posting a graph event), the problem was fixed.
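For what it's worth, here is a rough C++ sketch of the "post a graph event, build the muxer graph on the application thread" idea (the real application is .NET; EC_APP_START_MUXER, CGrabberCB, m_pEventSink and BuildAndRunMuxerGraph are placeholder names I made up):

    // Streaming thread: the SampleGrabber callback only signals the app thread.
    const long EC_APP_START_MUXER = EC_USER + 1;       // app-defined event code

    STDMETHODIMP CGrabberCB::SampleCB(double /*SampleTime*/, IMediaSample* /*pSample*/)
    {
        // Do not build or run graphs here; just post an event to the graph manager.
        m_pEventSink->Notify(EC_APP_START_MUXER, 0, 0); // IMediaEventSink of the capture graph
        return S_OK;
    }

    // Application thread: the normal graph-event loop picks the event up.
    void CApp::OnGraphEvent(IMediaEvent* pEvent)
    {
        long code;
        LONG_PTR p1, p2;
        while (SUCCEEDED(pEvent->GetEvent(&code, &p1, &p2, 0)))
        {
            if (code == EC_APP_START_MUXER)
                BuildAndRunMuxerGraph();               // build/run the muxer graph here
            pEvent->FreeEventParams(code, p1, p2);
        }
    }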
Unfortunately, my multiplexed video (regardless of the SetLiveTiming() setting) is still out of sync.
I have read that the GMFBridge filter can have trouble when the InfTee filter is used; however, I don't think my graph has this problem, as no instance of the InfTee filter is directly connected to the bridge sink.
Here is my current source graph:
                                                                 -->[TIF]
                                                                 |
[NetworkProvider]-->[DigitalTuner]-->[DigitalCapture]-->[demux]--|-->[Mpeg Tables]
                                                                 |
                                                                 |-->[lavAudioDec]-->[tee]-->[audioConvert]-->[sampleGrabber]-->[NULL]
                                                                 |                     |
                                                                 |                     |
                                                                 |                      ->[aacEncoder]----------------
                                                                 |                                                   |--->[*Bridge Sink*]
                                                                  -->[VideoDecoder]-->[sampleGrabber]-->[x264Enc]-----
Here is my muxer graph:
                     video
... |bridge source|-------->[MP4 muxer]--->[fileWriter]
           |                     ^
           |        audio        |
            ---------------------
All the sample grabbers in the graph are read-only. If I mux the output file without bridging (by placing the muxer in the capture graph), the output file remains in sync (this turned out not to be true; the out-of-sync problem was introduced by a latency setting in the H264 encoder), but then I can't avoid losing a few seconds between releasing the current capture graph and running the new one (with the updated file name).
UPDATE 3:
The out-of-sync problem was inadvertently introduced by me several days ago, when I switched off a "zero-latency" setting in the x264vfw encoder. I hadn't noticed that this setting had desynchronized my already-working graphs too, and I was blaming the bridge filter.
In summary, I screwed things up by:
Launching the muxer graph from a thread other than the application thread (the thread processing the graph's event loop).
Flipping a latency switch in an upstream filter, which was probably delaying things too much for the muxer to be able to keep up.
Author's comment:
// using this option, you can share a common clock
// and avoid any time mapping (essential if audio is in mux graph)
[id(13), helpstring("Live Timing option")]
HRESULT SetLiveTiming([in] BOOL bIsLiveTiming);
The method enables a special mode of operation that addresses live data. In this mode, sample times are converted between the graphs relative to the respective clock start times. Otherwise, the default mode expects the time stamps to be reset to zero with graph changes.
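As a usage sketch (interface and coclass names are taken from the GMFBridge IDL; treat the exact setup as an assumption), the call is made on the controller before the graphs are built and bridged:

    // Sketch: enable live timing on the bridge controller up front.
    // CComPtr comes from atlbase.h.
    CComPtr<IGMFBridgeController> pController;
    HRESULT hr = pController.CoCreateInstance(__uuidof(GMFBridgeController));
    if (SUCCEEDED(hr))
    {
        // TRUE: both graphs share a common clock and sample times stay relative
        // to the respective stream times, i.e. the live mode described above.
        hr = pController->SetLiveTiming(TRUE);
    }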

My YUY2 output does not work with the Video Renderer filter

I have a basic AVStream driver (based on the avshws sample).
When testing the YUY2 output I get different results based on which renderer I use:
Video Renderer: blank image
VMR-7: scrambled image (due to the renderer using a buffer with a larger stride)
VMR-9: perfect render
I don't know why the basic Video Renderer (used by AMCap) won't work. I have examined the graph of a webcam outputting the same format, and I cannot see any difference apart from the renderer output.
I'm assuming you're writing your own filter based on avshws. I'm not familiar with this particular sample but generally you need to ensure two things:
Make sure your filter checks that any proposed media types are acceptable. In the DirectShow base classes, the video renderer calls the output pin's IPin::QueryAccept, which calls whichever base-class member you're using, e.g. CBasePin::CheckMediaType or CTransformFilter::CheckTransform.
Make sure you call IMediaSample::GetMediaType on every output sample and respond appropriately, e.g. by calling CTransformFilter::SetMediaType and changing the format/stride of your output. It's too late to negotiate at this point (you've already accepted the change), and if you really can't continue you have to abort streaming, return an error HRESULT and notify EC_ERRORABORT or EC_ERRORABORTEX. Unless it's buggy, the downstream filter should have called your output pin's QueryAccept and received S_OK before it sends a sample with a media type change attached (I've seen occasional filters that attach duplicate media types to the first sample without asking).
See Handling Format Changes from the Video Renderer
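A hedged sketch of point 2 in base-classes terms (the DirectShow calls are real; the surrounding pin class and method name are placeholders):

    // Called from the output pin's delivery path for every sample about to be filled.
    HRESULT CMyOutputPin::HandleDynamicFormatChange(IMediaSample* pSample)
    {
        AM_MEDIA_TYPE* pmt = NULL;
        if (pSample->GetMediaType(&pmt) == S_OK && pmt != NULL)
        {
            // The renderer attached a new type, typically with a new stride in
            // the VIDEOINFOHEADER (biWidth and/or rcTarget).
            CMediaType newType(*pmt);
            DeleteMediaType(pmt);

            if (CheckMediaType(&newType) != S_OK)
                return VFW_E_TYPE_NOT_ACCEPTED;   // abort streaming, notify EC_ERRORABORT

            m_mt = newType;   // adopt the new stride for this and subsequent frames
        }
        return S_OK;
    }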
I have figured out the problem. I was missing one line to update the remaining bytes in the stream pointer structure:
Leading->OffsetOut.Remaining = 0;
This caused certain filters (AVI/MJPEG Decompressor, Dump) to drop my samples, which meant that certain graph configurations would simply not render anything.
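For context, a hedged sketch of where such a line sits in an avshws-style capture routine; only the KS calls and structure members are real API, while Pin and FrameSize are placeholders:

    PKSSTREAM_POINTER Leading =
        KsPinGetLeadingEdgeStreamPointer(Pin, KsStreamPointerState_Locked);

    if (Leading)
    {
        // ... copy one complete YUY2 frame into Leading->OffsetOut.Data ...

        Leading->StreamHeader->DataUsed = FrameSize;   // bytes actually written
        Leading->OffsetOut.Remaining    = 0;           // the missing line: buffer fully consumed

        KsStreamPointerAdvance(Leading);               // move the leading edge past this frame
    }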

Why does GetDeliveryBuffer block with an INTERLEAVE_CAPTURE mode AVI Mux?

I'm trying to use a customized filter to receive video and audio data from a RTSP stream, and deliver samples downstream the graph.
It seems that this filter was modified from the SDK's source.cpp sample (CSource) and implements two output pins, for audio and video.
When the filter is connected directly to an AVI Mux filter in INTERLEAVE_NONE mode, it works fine.
However, when the interleave mode of the AVI Mux is set to INTERLEAVE_CAPTURE, the video output pin hangs in the GetDeliveryBuffer call (in DoBufferProcessingLoop) of this filter after several samples have been sent, while the audio output pin still works fine.
Moreover, when I inserted an infinite pin tee filter into one of the paths between the AVI Mux and this source filter, the graph arbitrarily went into the stopped state after some samples had been sent (one to three samples or so).
And when I put a filter that is just an empty transform-in-place filter (which does nothing) after the infinite tee, the graph went back to the first behaviour: it never stops, but hangs in GetDeliveryBuffer.
(Here is an image that shows the connections I've mentioned.)
So here are my questions:
1: What could be the reason that the video output pin hangs in GetDeliveryBuffer?
My guess is that the AVI Mux holds on to these sample buffers and does not release them until it has enough of them for interleaving, but even when I set the number of video buffers to 30 in DecideBufferSize it still hangs. If that is indeed the reason, how do I decide the buffer size of the pin for a downstream AVI muxer?
Creating more than, say, 50 buffers on a video pin is likely not guaranteed to work, because that much memory cannot be promised. :(
2: Why does the graph go to the stopped state when the infinite pin tee is inserted? And why does a no-operation filter overcome it?
Any answer or suggestion is appreciated. Or hope someone just give me some directions. Thanks.
A blocked GetDeliveryBuffer means the allocator you are requesting a buffer from does not [yet] have anything for you: all media samples are outstanding and have not yet been returned to the allocator.
An obvious workaround is to request more buffers at the pin connection and memory allocator negotiation stage. However, this just postpones the issue, which can reappear later for exactly the same reason.
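For example, in a base-classes output pin the request is made in DecideBufferSize during allocator negotiation; a sketch (the buffer counts and m_maxFrameSize are placeholders, and as noted this only delays the problem):

    HRESULT CMyVideoOutputPin::DecideBufferSize(IMemAllocator* pAlloc,
                                                ALLOCATOR_PROPERTIES* pRequest)
    {
        if (pRequest->cBuffers < 30)
            pRequest->cBuffers = 30;                   // more samples can be outstanding at the mux
        if (pRequest->cbBuffer < (long)m_maxFrameSize)
            pRequest->cbBuffer = (long)m_maxFrameSize; // largest frame we will deliver

        ALLOCATOR_PROPERTIES actual = {};
        HRESULT hr = pAlloc->SetProperties(pRequest, &actual);
        if (FAILED(hr))
            return hr;

        // The allocator may grant less than requested; fail if it is unusable.
        return (actual.cBuffers < 1 || actual.cbBuffer < (long)m_maxFrameSize) ? E_FAIL : S_OK;
    }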
A typical issue with the topology in question is threading. A multiplexer filter with two inputs has to match up the input streams to produce a joint file. Quite often at runtime it will hold media samples on one leg while expecting more media samples to arrive on the other leg, on another thread. It assumes that the upstream branches providing the media samples run independently, so that a lock on one leg does not lock the other. This is why a multiplexer can freely block in IMemInputPin::Receive and/or hold media samples inside. In the topology above it is not clear how exactly the source filter handles threading. The fact that it has two pins makes me suspect it has threading issues and does not take into account that there might be a lock downstream in the multiplexer.
Presumably the source filter is yours and you have its source code. You want to make sure the audio pin sends its media samples on a separate thread, for example through an asynchronous queue.
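One way to do that with the DirectShow base classes is the COutputQueue helper (outputq.h), which delivers samples downstream on its own worker thread; a hedged sketch, with the pin variables as placeholders:

    // Created once the audio output pin is connected; pAudioInputOfMux is the
    // IPin* of the AVI Mux audio input.
    HRESULT hr = S_OK;
    COutputQueue* pAudioQueue = new COutputQueue(
        pAudioInputOfMux, // pin we deliver to
        &hr,
        FALSE,            // bAuto: do not auto-detect whether a thread is needed...
        TRUE,             // bQueue: ...always deliver on a separate thread
        1,                // lBatchSize
        FALSE,            // bBatchExact
        10);              // lListSize: how many samples may queue up

    // On the audio streaming path, instead of delivering to the mux directly:
    pAudioQueue->Receive(pSample);   // returns quickly; delivery happens on the queue thread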

Different approaches on getting captured video frames in DirectShow

I was using a callback mechanism to grab webcam frames in my media application. It worked, but it was slow due to certain additional buffer operations that were performed within the callback itself.
Now I am trying the other way to get frames: call a method and grab the frame, instead of using a callback. I used a CodeProject sample that makes use of IVMRWindowlessControl9::GetCurrentImage.
I encountered the following issues.
With a Microsoft webcam, the preview did not render (only a black screen) on Windows 7, but the same camera rendered the preview fine on XP.
My doubt here is: do the VMR-specific functionalities depend on the camera drivers on different platforms? Otherwise, how could this difference happen?
Wherever the sample application worked, I observed that the biBitCount member of the resulting BITMAPINFOHEADER structure is 32.
Is this a value set by the application, or a driver setting for VMR operations? How is it configured?
Finally, which is the best method to grab webcam frames: a callback approach, or a direct approach?
Thanks in advance,
IVMRWindowlessControl9::GetCurrentImage is intended for occasional snapshots, not for regular image grabbing.
Quote from MSDN:
This method can be called at any time, no matter what state the filter is in, whether running, stopped or paused. However, frequent calls to this method will degrade video playback performance.
This method reads back from video memory, which is slow in the first place. It also performs a conversion (slow again) to the RGB color space, because that format is the most suitable for non-streaming apps and gives the fewest compatibility issues.
All in all, you can use it for occasional image grabbing, but this is not how regular capture is supposed to be done. To capture at streaming rate you need to use a filter in the pipeline, or a Sample Grabber with a callback.
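A hedged sketch of the Sample Grabber route (the qedit.h interfaces are deprecated and missing from newer SDKs, so the include, pGraph and the callback object g_grabberCB are assumptions about your build environment):

    #include <dshow.h>
    #include <atlbase.h> // CComPtr / CComQIPtr
    #include <qedit.h>   // ISampleGrabber / ISampleGrabberCB (legacy SDKs)

    // g_grabberCB is your ISampleGrabberCB implementation; its BufferCB() gets
    // called for every frame on a streaming thread.
    CComPtr<IBaseFilter> pGrabberF;
    pGrabberF.CoCreateInstance(CLSID_SampleGrabber);
    pGraph->AddFilter(pGrabberF, L"Sample Grabber");

    CComQIPtr<ISampleGrabber> pGrabber(pGrabberF);

    AM_MEDIA_TYPE mt = {};
    mt.majortype = MEDIATYPE_Video;
    mt.subtype   = MEDIASUBTYPE_RGB24;        // ask the graph to deliver uncompressed RGB
    pGrabber->SetMediaType(&mt);

    pGrabber->SetBufferSamples(FALSE);        // no internal copy, we only use the callback
    pGrabber->SetCallback(&g_grabberCB, 1);   // 1 = BufferCB(double, BYTE*, long)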

High Resolution Capture and Encoding

I'm using two custom push filters to inject audio and video (uncompressed RGB) into a DirectShow graph. I'm making a video capture application, so I'd like to encode the frames as they come in and store them in a file.
Up until now, I've used the ASF Writer to encode the input to a WMV file, but it appears the renderer is too slow to process high-resolution input (such as 1920x1200x32): FillBuffer() only seems to be able to process around 6-15 FPS, which obviously isn't fast enough.
I've tried increasing the cBuffers count in DecideBufferSize(), but that only pushes the problem to a later point, of course.
What are my options to speed up the process? What's the right way to do live high res encoding via DirectShow? I eventually want to end up with a WMV video, but maybe that has to be a post-processing step.
You already have good answers posted to your question here: High resolution capture and encoding too slow. The task is too complex for the CPU in your system, which is simply not fast enough to perform real-time video encoding in the configuration you set it up for.

Resources