GDCL Mpeg-4 Multiplexor - Filters can't agree on connection - directshow

I am attempting to publish some mp4 files with the GDCL Mpeg-4 Multiplexor, but it's not accepting the input from my camera (QuickCam Orbit/Sphere AF).
I see that the subtype is set to MEDIASUBTYPE_NULL.
I can't seem to figure out a set of filters that will successfully link the pins. What do I need to do to adapt from my capture pin to the multiplexor?

The GDCL Mpeg-4 Multiplexor multiplexes compressed data, and your camera captures raw (uncompressed) video. You need to insert a compressor in between in order to deliver MPEG-4-compatible video to the multiplexer, that is, an MPEG-4 Part 2 or MPEG-4 Part 10 (AKA H.264) video compressor. The multiplexer filter itself does not do any compression/encoding.
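A minimal sketch of the graph construction (COM initialization and error handling omitted; CLSID_SomeMpeg4Encoder and CLSID_GdclMp4Mux are placeholders for whatever encoder and mux CLSIDs are registered on your system). Passing the encoder as the compressor argument of ICaptureGraphBuilder2::RenderStream inserts it between the capture pin and the mux, which is exactly the adapter that is missing:

// Sketch: capture -> MPEG-4/H.264 encoder -> GDCL MP4 mux (-> file writer).
// Requires <dshow.h> and <atlbase.h>; pCam is the camera's capture filter,
// already added to the graph.
CComPtr<IGraphBuilder> pGraph;
pGraph.CoCreateInstance(CLSID_FilterGraph);

CComPtr<ICaptureGraphBuilder2> pBuild;
pBuild.CoCreateInstance(CLSID_CaptureGraphBuilder2);
pBuild->SetFiltergraph(pGraph);

CComPtr<IBaseFilter> pEnc, pMux;
pEnc.CoCreateInstance(CLSID_SomeMpeg4Encoder);   // placeholder encoder CLSID
pMux.CoCreateInstance(CLSID_GdclMp4Mux);         // placeholder mux CLSID
pGraph->AddFilter(pEnc, L"MPEG-4 Encoder");
pGraph->AddFilter(pMux, L"MP4 Mux");

// Route capture pin -> encoder -> mux; connect the mux output to a
// File Writer filter configured via IFileSinkFilter::SetFileName.
pBuild->RenderStream(&PIN_CATEGORY_CAPTURE, &MEDIATYPE_Video,
                     pCam, pEnc, pMux);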

Related

Contradiction in the Media Foundation AAC specification regarding LATM/LOAS streams

Reading the official documentation for the Microsoft Media Foundation AAC decoder, there is the following section in the middle of the document:
The AAC decoder does not support any of the following:
Main profile, Sample-Rate Scalable (SRS) profile, or Long Term Prediction (LTP) profile.
Audio data interchange format (ADIF).
LATM/LOAS transport streams.
Coupling channel elements (CCEs). The decoder will skip audio frames with CCEs.
AAC-LC with a 960-sample frame size. Only 1024-sample frames are supported.
But in the beginning of the same page the following is stated:
Starting in Windows 8, the AAC decoder also supports decoding MPEG-4 audio transport streams with a multiplex layer (LATM) and synchronization layer (LOAS). It can also convert an LATM/LOAS stream to ADTS.
I assume, and am fairly sure, that LATM/LOAS transport streams are supported and that the part saying LATM/LOAS is not supported was simply not removed when support for it was added. But is this assumption correct?
What I am afraid of is that I am somehow missing some nuance here and that both statements are true, but in different cases.
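For context, a LATM/LOAS input would be declared to the decoder via MF_MT_AAC_PAYLOAD_TYPE on the input media type; a minimal sketch (3 is the LOAS/LATM payload type, and the other format attributes must still match the actual stream):

// Sketch: build an input media type announcing a LOAS/LATM AAC payload.
#include <mfapi.h>
#include <mfidl.h>

IMFMediaType* MakeLoasInputType(UINT32 sampleRate, UINT32 channels)
{
    IMFMediaType* pType = nullptr;
    MFCreateMediaType(&pType);
    pType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Audio);
    pType->SetGUID(MF_MT_SUBTYPE, MFAudioFormat_AAC);
    pType->SetUINT32(MF_MT_AAC_PAYLOAD_TYPE, 3);              // 3 = LOAS/LATM
    pType->SetUINT32(MF_MT_AUDIO_SAMPLES_PER_SECOND, sampleRate);
    pType->SetUINT32(MF_MT_AUDIO_NUM_CHANNELS, channels);
    return pType;                                             // caller releases
}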

Video Processor MFT and deinterlacing

The MSDN page for the Video Processor MFT mentions that the MFT can be used to deinterlace interlaced video.
I set the output media type to the same as the input, plus MF_MT_INTERLACE_MODE set to progressive on the output media type.
But the output samples are still interlaced.
I can't test the Video Processor MFT because it needs Windows 8/10, but I will say two things:
The documentation says it is GPU-accelerated, but does not say whether it falls back to software processing. So if it is only GPU-accelerated and your GPU does not support deinterlacing, that could explain why your frames are still interlaced. You can check DXVAHD_PROCESSOR_CAPS.
For correct deinterlacing, each sample needs to have some of these attributes set: MFSampleExtension_Interlaced, MFSampleExtension_BottomFieldFirst, MFSampleExtension_RepeatFirstField, and so on (see Sample Attributes). So check whether the parser/decoder correctly sets those values; if not, the Video Processor MFT will not be able to do the deinterlacing.
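For illustration, a rough sketch (error handling omitted; the helper names are made up) of setting those per-sample attributes and requesting a progressive output type from the MFT:

// Sketch: mark an input sample as interlaced (normally the parser/decoder
// does this) and ask the Video Processor MFT for progressive output.
#include <mfapi.h>
#include <mfidl.h>
#include <mftransform.h>

void MarkInterlaced(IMFSample* pSample)
{
    pSample->SetUINT32(MFSampleExtension_Interlaced, TRUE);
    pSample->SetUINT32(MFSampleExtension_BottomFieldFirst, FALSE); // assume top field first
    pSample->SetUINT32(MFSampleExtension_RepeatFirstField, FALSE);
}

void RequestProgressiveOutput(IMFTransform* pVP, IMFMediaType* pInType)
{
    IMFMediaType* pOutType = nullptr;
    MFCreateMediaType(&pOutType);
    pInType->CopyAllItems(pOutType);                    // same as the input type...
    pOutType->SetUINT32(MF_MT_INTERLACE_MODE,
                        MFVideoInterlace_Progressive);  // ...but progressive
    pVP->SetOutputType(0, pOutType, 0);
    pOutType->Release();
}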

v4l2 -> QByteArray(?) -> QWebsocket -> internet -> {PC, Android, web}

As you can guess from the title, I would like to broadcast a webcam stream to different clients. I know that there are many solutions (such as motion), but I already have a working infrastructure based on a Qt server and a websocket as the connection to the outside world.
I have read the source code of other Linux applications like Kopete and motion to find the most efficient way, but have not come to a good conclusion. Another goal is to keep the websocket stream in a format that can be decoded by e.g. JavaScript in a browser.
The source, a v4l2 device, is already accessed. There are different formats (YUV, MJPEG, ...), but I don't know which (standard) format to choose when it comes to streaming. Another requirement is to save the stream to a hard drive and to process the stream (OpenCV?) to detect motion. So the question is: should I transmit a zlib-compressed QByteArray, or use MJPEG, which I don't know how to use? The used webcam is a uvcvideo device:
ioctl: VIDIOC_ENUM_FMT
Index : 0
Type : Video Capture
Pixel Format: 'MJPG' (compressed)
Name : MJPEG
Index : 1
Type : Video Capture
Pixel Format: 'YUYV'
Name : YUV 4:2:2 (YUYV)
To be honest, I am not sure how motion does this in detail, but that might be the way to go.
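For illustration, a minimal sketch of pushing MJPEG frames as binary websocket messages (assuming frame already holds one complete JPEG image dequeued from the v4l2 buffers, and clients is the list of connected QWebSocket pointers):

// Sketch: broadcast one MJPEG frame to all connected websocket clients.
// MJPEG frames are already JPEG-compressed, so no extra zlib pass is applied.
#include <QByteArray>
#include <QList>
#include <QWebSocket>

void broadcastFrame(const QByteArray& frame, const QList<QWebSocket*>& clients)
{
    for (QWebSocket* client : clients)
        client->sendBinaryMessage(frame);   // one binary message per frame
}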
Thanks
small

Encoding videos for use with Adobe Live Streaming

I have an original video coded at 20Mbps, 1920x1080, 30fps and want to convert it down to be 640x480 30fps at a range of (3 different) bitrates for use by Adobe Live Streaming.
Should I use ffmpeg to resize and encode at the 3 bitrates and then use f4fpackager to create the f4m, f4f and f4x files, or just use ffmpeg to reduce the resolution and then f4fpackager to encode the relevant bitrates?
I've had several tries so far, but the encoded videos seem to play at a much higher bitrate than they were encoded at. For example, if I set up the OSMF to play from my web server, I'd expect my best encoded video to play at 1,500 kbps, but it's way above that.
Has anyone had any experience of encoding for use like this?
I'm using the following options to f4fpackager
--bitrate=1428 --segment-duration 30 --fragment-duration 2
f4fpackager doesn't do any encoding, it does 2 things:
- fragment the mp4 files (mp4 -> f4f)
- generate a Manifest (f4m) file referencing all your fragmented files (f4f)
So the process is:
- transcode your source file into all the sizes/bitrates that you want to provide (e.g. 1920x1080 @ 4 Mbps, 1280x720 @ 2 Mbps, etc.)
- use f4fpackager to convert the mp4 to f4f (this is the fragmentation step)
- use f4fpackager to generate the Manifest.f4m referencing the files that you generated in the previous step
The --bitrate option of f4fpackager should match the value that you used with ffmpeg; this parameter is used to generate the manifest file with the correct bitrate value for each quality.
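For example, a rough sketch of one rendition (the codec flags are illustrative, and --input-file is assumed to be how your f4fpackager build takes its input):
ffmpeg -i source.mp4 -s 640x480 -r 30 -c:v libx264 -b:v 1500k -c:a aac -b:a 128k out_1500k.mp4
f4fpackager --input-file=out_1500k.mp4 --bitrate=1500 --segment-duration 30 --fragment-duration 2
Repeat the pair of steps for each bitrate, then generate the Manifest.f4m referencing every fragmented output.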

GDCL Mpeg-4 Multiplexor Problem

I just created a simple graph:
SourceFilter(*.mp4 file format) ---> GDCL MPEG 4 Mux Filter ---> File writer Filter
It works fine. But when the source is an H.264 file:
SourceFilter( *.h264 file format) ---> GDCL MPEG 4 Mux Filter---> File writer Filter
It records a file, but the recorded file does not play in VLC, QuickTime, BS Player, or Windows Media Player.
What am I doing wrong? Any ideas on how to record an H.264 video source? Do I need an H.264 mux?
Best Wishes
PS: I just want to record video, by the way... why do I need a mux?
There are two H.264 formats used by DirectShow filters. One is Byte Stream Format, in which each NALU is preceded by a 00 00 01 start code. The other is the format used within MP4 files, in which each NALU is preceded by a length field (the media type or the MP4 file metadata specifies how many bytes are used in the length field). The problem is that some FOURCCs are used for both formats.
The MP4 mux sample accepts either BSF or length-preceded data, depending on the subtype given. It does not attempt to work out which it is. Most likely, when you are feeding it the H.264 elementary stream, you are giving the mux a FOURCC or media type that the mux thinks means length-prepended, when you are actually giving it BSF data. Check TypeHandler::CanSupport.
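A quick sanity check on the buffers you are feeding the mux: BSF data begins with a 00 00 01 (or 00 00 00 01) start code, while length-prepended data does not. A rough sketch:

// Sketch: heuristic check of an H.264 buffer. Returns true if the data
// begins with an Annex B (Byte Stream Format) start code.
#include <cstddef>
#include <cstdint>

bool LooksLikeByteStreamFormat(const uint8_t* data, size_t len)
{
    if (len >= 4 && data[0] == 0 && data[1] == 0 && data[2] == 0 && data[3] == 1)
        return true;                       // 00 00 00 01 start code
    if (len >= 3 && data[0] == 0 && data[1] == 0 && data[2] == 1)
        return true;                       // 00 00 01 start code
    return false;                          // likely length-prepended (MP4-style)
}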
If you just want to save H.264 video to a file, you can use a Dump filter to just write the bits to a file. If you are saving BSF, this is a valid H.264 elementary stream file. If you want support for the majority of players, or if you want seeking support, then you will want to write the elementary stream into a container with an index, such as MP4. In this case, you need a mux, not for the multiplexing, but for the indexing and metadata creation.
G
