I'm trying to encode raw data (both video frames and audio samples) into an .asf file, using the ASF Writer filter in DirectShow.
My filter graph structure:
raw_send_filter -> asf writer filter
raw_send_filter implements CBaseFilter and CBaseOutputPin. It acts as a source filter that gets raw data and delivers it to the ASF Writer filter. The process follows these steps (sketched below):
Get a delivery buffer (returned in "sample"), using CBaseOutputPin::GetDeliveryBuffer
sample->GetPointer(&buffer);
Set the time stamp (frame rate = 30 fps)
Deliver the sample
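A rough sketch of that delivery path (not the poster's actual code: the free function, the CBaseOutputPin-derived pin and the rtNextFrame counter are hypothetical names, and error handling is trimmed):

#include <streams.h>   // DirectShow base classes: CBaseOutputPin, IMediaSample
#include <cstring>

// Hypothetical helper: pPin is your output pin (derived from CBaseOutputPin),
// rtNextFrame is a running time stamp kept by the caller.
HRESULT DeliverRawFrame(CBaseOutputPin* pPin, const BYTE* pData, long cbData,
                        REFERENCE_TIME& rtNextFrame)
{
    IMediaSample* pSample = NULL;

    // 1. Get a delivery buffer from the downstream allocator.
    HRESULT hr = pPin->GetDeliveryBuffer(&pSample, NULL, NULL, 0);
    if (FAILED(hr))
        return hr;

    // 2. Copy the raw frame into the sample buffer.
    BYTE* pBuffer = NULL;
    pSample->GetPointer(&pBuffer);
    memcpy(pBuffer, pData, cbData);
    pSample->SetActualDataLength(cbData);

    // 3. Time stamp at 30 fps (REFERENCE_TIME is in 100 ns units).
    const REFERENCE_TIME rtFrameLength = 10000000 / 30;
    REFERENCE_TIME rtStart = rtNextFrame;
    REFERENCE_TIME rtStop  = rtStart + rtFrameLength;
    pSample->SetTime(&rtStart, &rtStop);
    pSample->SetSyncPoint(TRUE);
    rtNextFrame = rtStop;

    // 4. Deliver downstream; this call can block inside the multiplexer.
    hr = pPin->Deliver(pSample);
    pSample->Release();
    return hr;
}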
The problem is that after encoding some raw data, I cannot deliver any more samples.
I can encode an .avi file this way, using the AVI Mux filter. Can you tell me why I cannot deliver samples after encoding some?
Thanks.
Possibly the ASF multiplexer is waiting for more data. Check whether you are sending audio and video at the same rate.
Related
I have 2 input streams of data that are being passed to a Haali Muxer (mp4 format).
Currently I stream these to Haali directly in a DirectShow graph without a clock. I wondered if I should be trying to write these to the muxer synchronised, or whether it happily accepts a stream of audio data that stops before the video data stream stops. (I have issues with the output file not playing audio after seeking, and I'm not sure why this could occur)
I can't find much in the way of documentation for muxing with the Haali muxer; does anyone know the best place to look for info on this filter?
To have the streams multiplexed into a single MP4 file you need a single instance of the multiplexer (Haali, GDCL, a commercial one, a wrapper over the mp4v2 library, over a Media Foundation sink, etc.) with two (or more) input pins connected to the respective sources, which in turn are going to be written as tracks.
The filter graph clock does not matter. The clock is for presentation, and file writers accept incoming data and write it as soon as possible anyway. It is more accurate to remove the clock, as you seem to be doing already, but having the standard clock is not going to make a difference.
Data is synchronized using the time stamps on individual media samples, the parts of the media streams. The multiplexer builds an internal queue for every stream and then consumes data from the streams to build a single file in which the original stream data is interleaved. If one stream supplies too much data, that is, if data is available too early while another stream supplies data slowly, the multiplexer blocks further data reception on this particular stream by not returning from the respective processing call (IPin::Receive), expecting that during this wait the slow stream provides additional input. Ultimately, what the multiplexer looks at when matching data from different streams is the data time stamps.
To obtain synchronized data in the resulting MP4 file, you thus need to make sure the payload data is properly time stamped. The multiplexer will take care of the rest.
This also means that the time stamps should be monotonically increasing within a stream, and that key frames/splice points are indicated accordingly. Otherwise some multiplexers might fail immediately, while others would produce an output file that might have playback issues (especially with seeking).
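As a small illustration of the point about time stamps (a hedged sketch, not code from the answer; the helper name is made up), stamping a sample before it reaches the multiplexer amounts to:

#include <streams.h>   // IMediaSample, REFERENCE_TIME

// Hypothetical helper: rtStart/rtStop are stream times in 100 ns units and
// must increase monotonically within each stream; bKeyFrame marks key
// frames / splice points so the multiplexer can treat them accordingly.
HRESULT StampSample(IMediaSample* pSample,
                    REFERENCE_TIME rtStart, REFERENCE_TIME rtStop,
                    BOOL bKeyFrame)
{
    HRESULT hr = pSample->SetTime(&rtStart, &rtStop);
    if (FAILED(hr))
        return hr;
    pSample->SetSyncPoint(bKeyFrame);     // key frame / splice point
    pSample->SetDiscontinuity(FALSE);     // no gap in this stream
    return S_OK;
}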
I am attempting to publish some MP4 files with the GDCL Mpeg-4 Multiplexor, but it's not accepting the input from my camera (QuickCam Orbit/Sphere AF).
I see that the subtype is set to MEDIASUBTYPE_NULL.
I can't seem to figure out a set of filters that will successfully link the pins. What do I need to do to adapt from my capture pin to the multiplexor?
The GDCL Mpeg-4 Multiplexor multiplexes compressed data, and your camera captures raw (uncompressed) video. You need to insert a compressor in between in order to deliver MPEG-4 compatible video into the multiplexer. That is, an MPEG-4 Part 2 or MPEG-4 Part 10 (AKA H.264) video compressor. The multiplexer filter itself does not do data compression/encoding.
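A hedged sketch of what that graph wiring could look like in code (the filter pointers are assumed to have been created already, e.g. via CoCreateInstance with CLSIDs that depend on which encoder and mux are installed, and added to the graph):

#include <dshow.h>
#pragma comment(lib, "strmiids.lib")

// Route capture -> H.264 compressor -> MP4 mux; RenderStream inserts the
// compressor between the capture pin and the mux and negotiates the types.
HRESULT ConnectCaptureThroughEncoder(IGraphBuilder* pGraph,
                                     IBaseFilter* pCamera,
                                     IBaseFilter* pH264Encoder,
                                     IBaseFilter* pMp4Mux)
{
    ICaptureGraphBuilder2* pBuild = NULL;
    HRESULT hr = CoCreateInstance(CLSID_CaptureGraphBuilder2, NULL,
                                  CLSCTX_INPROC_SERVER,
                                  IID_ICaptureGraphBuilder2, (void**)&pBuild);
    if (FAILED(hr))
        return hr;
    pBuild->SetFiltergraph(pGraph);

    hr = pBuild->RenderStream(&PIN_CATEGORY_CAPTURE, &MEDIATYPE_Video,
                              pCamera, pH264Encoder, pMp4Mux);

    pBuild->Release();
    return hr;
}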
I have an application in which I will receive a big array of encoded bytes. I have to decode them and render the result. For decoding, I am using a custom decoder class. After decoding, how can I construct a DirectShow graph that will receive input data from the decoder? Please give some direction/samples on this.
Have a look at the PushSource sample in the DirectShow SDK. This sample shows you how to create a source filter that can be rendered. It is all about setting the output media type of your filter correctly so that the rest of the graph can be rendered. The sample also shows you how to feed media samples to the rest of the media pipeline. In your case what do you decode to? The PushSource sample outputs RGB24 IIRC.
Also, it sounds like you're decoding in the same filter as you're receiving the bytes in? Typically in DirectShow you would write a source filter that is able to receive bytes from the network and outputs samples in the encoded format. You would then connect this filter to a custom decoder filter, which then outputs either RGB24 or some raw media format that is understood by DirectShow. Similarly for audio, you could output, say, PCM.
Edit:
I have used the same approach (CSource, CSourceStream). That is correct, the DoBufferProcessingLoop calls FillBuffer. My general approach has been to use the producer-consumer pattern. The network-reading thread populates the queue with samples, and in my overridden DoBufferProcessingLoop I check whether the queue has any data, calling FillBuffer if there is data. You can of course try other methods, such as waiting on events (frame availability). To see the approach I used you can download the source code of an example RTSP source filter at http://sourceforge.net/projects/videoprocessing/ and see if that suits you. The best thing I would say is to just try stuff and learn as you go along.
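For illustration only, a stripped-down version of that idea using the event-waiting variant mentioned above (CNetSourcePin and its members are hypothetical, and the usual CSourceStream boilerplate such as the constructor, GetMediaType and DecideBufferSize is omitted):

#include <streams.h>
#include <cstring>
#include <queue>
#include <vector>

class CNetSourcePin : public CSourceStream
{
    CCritSec                        m_queueLock;  // guards m_queue
    std::queue<std::vector<BYTE> >  m_queue;      // filled by the network thread
    HANDLE                          m_evData;     // signaled when m_queue is non-empty

public:
    // ... other CSourceStream overrides omitted ...
    HRESULT FillBuffer(IMediaSample* pSample);
};

HRESULT CNetSourcePin::FillBuffer(IMediaSample* pSample)
{
    // Wait until the producer (network) thread has queued at least one frame.
    WaitForSingleObject(m_evData, INFINITE);

    CAutoLock lock(&m_queueLock);
    if (m_queue.empty())
        return S_FALSE;                            // treat as end of stream

    std::vector<BYTE>& frame = m_queue.front();

    BYTE* pDest = NULL;
    pSample->GetPointer(&pDest);
    long cb = (long)frame.size();
    if (cb > pSample->GetSize())
        cb = pSample->GetSize();
    memcpy(pDest, frame.data(), cb);
    pSample->SetActualDataLength(cb);

    m_queue.pop();
    if (m_queue.empty())
        ResetEvent(m_evData);                      // next call waits again
    return S_OK;
}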
I just created a simple graph:
SourceFilter(*.mp4 file format) ---> GDCL MPEG 4 Mux Filter ---> File writer Filter
It works fine. But when the source is in H.264 file format:
SourceFilter( *.h264 file format) ---> GDCL MPEG 4 Mux Filter---> File writer Filter
It records a file, but the recorded file does not play in VLC, QuickTime, BS Player, or Windows Media Player.
What am I doing wrong? Any ideas on how to record an H.264 video source? Do I need an H.264 mux?
Best Wishes
PS: I just want to record video, by the way... why do I need a mux?
There are two H.264 formats used by DirectShow filters. One is Byte Stream Format, in which each NALU is preceded by a start code 00 00 01. The other is the format used within MP4 files, in which each start code is preceded by a length (the media type or the MP4 file metadata specifies how many bytes are used in the length field). The problem is that some FOURCCs are used for both formats.
The MP4 mux sample accepts either BSF or length-preceded data, depending on the subtype given. It does not attempt to work out which it is. Most likely, when you are feeding the H.264 elementary stream, you are giving the mux a FOURCC or media type that the mux thinks means length-prepended, when you are actually giving BSF data. Check in TypeHandler::CanSupport.
If you just want to save H.264 video to a file, you can use a Dump filter to just write the bits to a file. If you are saving BSF, this is a valid H.264 elementary stream file. If you want support for the majority of players, or if you want seeking support, then you will want to write the elementary stream into a container with an index, such as MP4. In this case, you need a mux, not for the multiplexing, but for the indexing and metadata creation.
G
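To make the framing difference concrete, here is a hedged stand-alone helper (not part of the GDCL mux sample) that converts one H.264 access unit from Byte Stream Format to the 4-byte length-prefixed layout used inside MP4 tracks:

#include <cstdint>
#include <vector>

// Find the next 00 00 01 start-code pattern at or after 'from'.
static size_t FindStartCode(const uint8_t* p, size_t size, size_t from)
{
    for (size_t i = from; i + 2 < size; ++i)
        if (p[i] == 0 && p[i + 1] == 0 && p[i + 2] == 1)
            return i;
    return size;
}

// Simplified sketch, not a full Annex B parser: rewrite each NALU with a
// 4-byte big-endian length prefix instead of its start code.
std::vector<uint8_t> AnnexBToLengthPrefixed(const uint8_t* p, size_t size)
{
    std::vector<uint8_t> out;
    size_t sc = FindStartCode(p, size, 0);
    while (sc < size) {
        size_t nalBegin = sc + 3;
        size_t next = FindStartCode(p, size, nalBegin);
        size_t nalEnd = next;
        // A 4-byte start code (00 00 00 01) leaves one extra zero byte in
        // front of the 00 00 01 pattern; keep it out of this NALU.
        if (nalEnd < size && nalEnd > nalBegin && p[nalEnd - 1] == 0)
            --nalEnd;

        uint32_t len = (uint32_t)(nalEnd - nalBegin);
        out.push_back((uint8_t)(len >> 24));
        out.push_back((uint8_t)(len >> 16));
        out.push_back((uint8_t)(len >> 8));
        out.push_back((uint8_t)len);
        out.insert(out.end(), p + nalBegin, p + nalEnd);

        sc = next;
    }
    return out;
}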
I made a console application, using DirectShow, that records from a live source (now a webcam, later a TV capture card), adds the current date and time as an overlay, and then saves audio and video as .asf.
Now I want the output file to change every 60 minutes without stopping the graph. I must not lose even a second of the live stream.
The graph is something like this one:
http://imageshack.us/photo/my-images/543/graphp.jpg/
I took a look at the GMFBridge but I have some compiling problem with their examples.
I am wondering if there is a way to split what exists after the overlay filter and the audio source, connect them to another ASF writer (paused), and then switch them every 60 minutes.
The paused ASF writer's file name must change (pp.asf, pp2.asf, pp4.asf ...). Something like this:
http://imageshack.us/photo/my-images/546/graph1f.jpg/
with pp1 paused. I found some people on the internet who say that the ASF writer deletes the current file if the graph does not go into stop mode.
Well, I have a product (http://www.videophill.com) that does exactly what you described (it's used for broadcast compliance recording purposes), and I found that the only way to do that is this:
create a DirectShow graph that will be used only to capture the audio and video
then, at the end of the graph, insert sample grabber filters, for both audio and video
then, use IWMWriter to create and save the WMV file, using samples fetched from the sample grabber filters
when the time comes, close one IWMWriter and create another one.
That way, you won't lose a single frame when switching the output files.
Of course, there is also the question of queueing and storing the samples (when switching the writers) and properly re-aligning the audio/video timestamps, but from my research that's the only 'normal' way to do it, and I have used it in practice.
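A hedged sketch of that approach (names such as CWriterSwitcher are made up; the profile setup, the audio input, and the queueing/timestamp re-alignment during the switch are deliberately left out):

#include <dshow.h>
#include <wmsdk.h>
#include <cstring>
#pragma comment(lib, "wmvcore.lib")

// Feeds the currently active IWMWriter with samples copied out of the
// sample grabber callback; SwitchFile() closes the old writer and opens
// a new one when the 60 minutes are up.
class CWriterSwitcher
{
    IWMWriter* m_pWriter = NULL;
    DWORD      m_dwVideoInput = 0;   // assumed input number for video

public:
    HRESULT OpenFile(const WCHAR* pszFile, IWMProfile* pProfile)
    {
        HRESULT hr = WMCreateWriter(NULL, &m_pWriter);
        if (FAILED(hr)) return hr;
        m_pWriter->SetProfile(pProfile);          // profile must match the grabbed formats
        m_pWriter->SetOutputFilename(pszFile);
        return m_pWriter->BeginWriting();
    }

    // Called from the sample grabber callback (e.g. ISampleGrabberCB::BufferCB).
    HRESULT WriteVideo(const BYTE* pData, DWORD cbData, QWORD cnsTime)
    {
        INSSBuffer* pBuf = NULL;
        HRESULT hr = m_pWriter->AllocateSample(cbData, &pBuf);
        if (FAILED(hr)) return hr;
        BYTE* pDest = NULL;
        DWORD cbMax = 0;
        pBuf->GetBufferAndLength(&pDest, &cbMax);
        memcpy(pDest, pData, cbData);
        pBuf->SetLength(cbData);
        hr = m_pWriter->WriteSample(m_dwVideoInput, cnsTime, 0, pBuf);
        pBuf->Release();
        return hr;
    }

    // Every 60 minutes: finish the current file and start the next one.
    HRESULT SwitchFile(const WCHAR* pszNextFile, IWMProfile* pProfile)
    {
        if (m_pWriter) { m_pWriter->EndWriting(); m_pWriter->Release(); m_pWriter = NULL; }
        return OpenFile(pszNextFile, pProfile);
    }
};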
The solution is to write a custom DirectShow filter, in your case with two input pins: one for the audio stream and the other for the video stream. Inside that filter (it doesn't have to be inside from an architectural point of view, because you can also use callbacks, for example, and do the job somewhere else) you should create the ASF files. While switching files, A/V data would be stored in a cache (e.g. a big enough circular buffer). You can also watch and adjust A/V sync in that filter. For writing ASF files I would recommend the Windows Media Format SDK. You can also add output pins if you would like to pass A/V data further, if necessary, for preview, parallel streaming, etc.
GMFBridge is a viable but complicated solution. A more direct approach I have implemented in the past is querying your ASF Writer for the IWMWriterAdvanced2 interface and setting a custom sink. That interface has methods to remove and add sinks on your ASF writer. The sink connected automatically will write to the file that you specified. One way to write wherever you want is:
1.) remove all default sinks:
pWriterAdv->RemoveSink(NULL);
2.) register a custom sink:
pWriterAdv->AddSink((IWMWriterSink*)&streamSink);
The custom sink can be a class that implements IWMWriterSink, which requires implementing callback methods that are called e.g. when the ASF header is written (OnHeader(/* [in] */ INSSBuffer *pHeader);) and when a data packet is written (OnDataUnit(/* [in] */ INSSBuffer *pDataUnit);). In your implementation you can then write them wherever you want, for example by offering additional methods on this class where you can specify the file name you want to write to.
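For orientation, a skeleton of such a sink might look like the following. This is a hedged sketch, not a complete implementation: reference counting is minimal, the file handling is simplified, and whether returning E_NOTIMPL from AllocateDataUnit is acceptable depends on the writer.

#include <wmsdk.h>
#include <cstdio>

class CStreamSink : public IWMWriterSink
{
    LONG  m_cRef = 1;
    FILE* m_pFile = NULL;          // current output file, swappable at runtime

public:
    void SetFile(FILE* pFile) { m_pFile = pFile; }

    // IWMWriterSink
    STDMETHODIMP OnHeader(INSSBuffer* pHeader)
    {
        BYTE* p = NULL; DWORD cb = 0;
        pHeader->GetBufferAndLength(&p, &cb);
        if (m_pFile) fwrite(p, 1, cb, m_pFile);      // ASF header
        return S_OK;
    }
    STDMETHODIMP OnDataUnit(INSSBuffer* pDataUnit)
    {
        BYTE* p = NULL; DWORD cb = 0;
        pDataUnit->GetBufferAndLength(&p, &cb);
        if (m_pFile) fwrite(p, 1, cb, m_pFile);      // ASF data packet
        return S_OK;
    }
    STDMETHODIMP OnEndWriting() { return S_OK; }
    STDMETHODIMP IsRealTime(BOOL* pfRealTime) { *pfRealTime = FALSE; return S_OK; }
    STDMETHODIMP AllocateDataUnit(DWORD cbDataUnit, INSSBuffer** ppDataUnit)
    {
        // Assumption: let the writer allocate its own buffers.
        return E_NOTIMPL;
    }

    // IUnknown (minimal)
    STDMETHODIMP QueryInterface(REFIID riid, void** ppv)
    {
        if (riid == IID_IUnknown || riid == IID_IWMWriterSink) { *ppv = this; AddRef(); return S_OK; }
        *ppv = NULL; return E_NOINTERFACE;
    }
    STDMETHODIMP_(ULONG) AddRef()  { return InterlockedIncrement(&m_cRef); }
    STDMETHODIMP_(ULONG) Release() { ULONG c = InterlockedDecrement(&m_cRef); if (!c) delete this; return c; }
};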
Note that this solution does not quite get you where you want to be if you need to write out the header information in each of the 60-minute files: after the initial header you will only get ASF packet data. A workaround could be to re-write the initial header before any packet data of each file; however, this will produce an unindexed (non-seekable) ASF file.