DirectShow RGB-YUV filter - directshow

I would like to encode video in my app with VP8. I use the RGB24 format in my app, but the VP8 DirectShow filter accepts only YUV formats (http://www.webmproject.org/tools/#directshow_filters).
I've googled for an "RGB to YUV DirectShow filter" with no success. I don't want to write such a filter from scratch, so I would appreciate any information on where to find one.
Thanks!

You could try Geraint Davies' YUV transform filter to see if it supports the conversion.

Starting with Vista you can use the Color Converter DSP; does this help?
If you know how to implement a transform filter, I have a fast YUV to RGB algorithm somewhere. I used DirectShow a looong time ago, so I can't be of any more help than this :P
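For reference, the core of such a YUV to RGB algorithm is a per-pixel matrix multiply. Below is a minimal integer-math sketch using the common BT.601 coefficients; the function name and the particular fixed-point constants are my own choices, not code from any specific filter:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

// Clamp an intermediate value into the 0..255 byte range.
inline uint8_t Clamp8(int v) {
    return static_cast<uint8_t>(std::min(255, std::max(0, v)));
}

// Convert one BT.601 YCbCr pixel (as carried in YUY2/YV12 data) to RGB.
// Fixed-point (x256) approximation of:
//   R = 1.164*(Y-16) + 1.596*(Cr-128)
//   G = 1.164*(Y-16) - 0.391*(Cb-128) - 0.813*(Cr-128)
//   B = 1.164*(Y-16) + 2.018*(Cb-128)
void YCbCrToRgb(uint8_t y, uint8_t cb, uint8_t cr,
                uint8_t& r, uint8_t& g, uint8_t& b) {
    const int c = (static_cast<int>(y) - 16) * 298;   // 1.164 * 256
    const int d = static_cast<int>(cb) - 128;
    const int e = static_cast<int>(cr) - 128;
    r = Clamp8((c + 409 * e + 128) >> 8);             // 1.596 * 256
    g = Clamp8((c - 100 * d - 208 * e + 128) >> 8);   // 0.391, 0.813
    b = Clamp8((c + 516 * d + 128) >> 8);             // 2.018 * 256
}
```

Inside a transform filter you would run this over every pixel of the input sample; a lookup table or SIMD version is the usual next step if this is too slow.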

Related

How to display decoded YUV420P format frames into Qt in an efficient way but without OpenGL?

I've been trying to render video decoded from ffmpeg into Qt in several ways. I tried using QAbstractVideoBuffer here: How to map a decoded buffer from ffmpeg into QVideoFrame? but ALL the code examples I find construct a QImage and paint it on the screen, which I think is very inefficient.
I've found here: https://stackoverflow.com/a/12925009/10116440 that OpenGL can also be used in Qt, but I think it is a bit of an overkill, because OpenGL is for rendering intense graphics.
I'm sure there must be a way but I couldn't find anywhere.
So: how to display decoded YUV420P format frames into Qt in an efficient way but without OpenGL?
I just need a guide, as https://doc.qt.io/qt-5/videooverview.html#working-with-low-level-video-frames won't help me at all!
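A note on the conversion step itself: if you end up doing it on the CPU, YUV420P to RGB32 is a straightforward per-pixel transform over the three planes. The sketch below is plain C++ with no Qt dependency (function names are mine); the resulting 0xAARRGGBB buffer matches the layout QImage::Format_RGB32 expects, so it could be wrapped in a QImage without a further copy:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

static uint8_t Clamp8(int v) {
    return static_cast<uint8_t>(std::min(255, std::max(0, v)));
}

// YUV420P: a full-resolution Y plane followed by quarter-resolution U and V
// planes (one chroma sample per 2x2 block of luma samples).
std::vector<uint32_t> Yuv420pToRgb32(const uint8_t* y, const uint8_t* u,
                                     const uint8_t* v, int w, int h) {
    std::vector<uint32_t> out(static_cast<size_t>(w) * h);
    for (int row = 0; row < h; ++row) {
        for (int col = 0; col < w; ++col) {
            const int Y = y[row * w + col];
            const int U = u[(row / 2) * (w / 2) + col / 2] - 128;
            const int V = v[(row / 2) * (w / 2) + col / 2] - 128;
            const int c = (Y - 16) * 298;  // BT.601, fixed point x256
            const uint8_t r = Clamp8((c + 409 * V + 128) >> 8);
            const uint8_t g = Clamp8((c - 100 * U - 208 * V + 128) >> 8);
            const uint8_t b = Clamp8((c + 516 * U + 128) >> 8);
            out[row * w + col] = 0xFF000000u | (r << 16) | (g << 8) | b;
        }
    }
    return out;
}
```

sws_scale from ffmpeg's libswscale does the same job with SIMD and is normally the faster choice; this sketch just shows what the conversion involves.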

How can I convert avi to mp4 using graphedit and ffdshow?

I'm working on an application based on DirectShow that has to convert an AVI source file to an mp4 file that can be played back with QuickTime.
Since 3ivx, according to my web research the most popular way to fulfill this task, has become commercial (and my budget is quite limited), I decided to use a solution based on ffdshow.
I created a simple graph in GraphEdit, using LAME for audio encoding and the GDCL MPEG-4 Multiplexor for the muxing, but every time I try to play the movie with QuickTime, I get an error indicating a wrong "sample description".
Playback with Windows Media Player works, except that there is no sound.
My guess is that there's a problem with the muxer, because every time I try to add audio encoding, GraphEdit automatically adds a decoder after the encoding unit (see picture link).
http://imageshack.us/photo/my-images/39/graphjrgr.png/
Any ideas on how to integrate ffdshow in a better way, tips for alternative mp4 muxers, or a complete different approach are appreciated!
The GDCL muxer supports a limited number of audio formats; you should probably check the muxer's source code to see whether the formats you are using are in fact supported. Basically, you need to choose an audio encoder whose output the muxer recognizes as valid. It might be possible to use GraphEdit to choose different properties for the encoder filter that allow things to work better.
I have had some luck with the Monogram x264 (video) and AAC (audio) encoders. See http://blog.monogram.sk/janos/directshow-filters/
Finally, try the debug version of the GDCL mp4 muxer.
Also, be aware of the MPEG LA licensing requirements for x264: http://www.mpegla.com/main/programs/AVC/Pages/FAQ.aspx

image format best for display

I am working on an image processing application that has to display an image sequence. I would like to avoid any extra overhead for (internal) format conversions.
I believe RGB should be the optimal format for display, but SDL accepts various YUV formats and has no native (to SDL) support for RGB, whereas Qt does not accept YUV formats at all. X natively accepts the RGBX format. The images can be generated in any desired format for display, but CPU/GPU cycles for format conversion should be avoided. Any suggestion on the right way to display image sequences would be great.
The output format is ARGB. SDL works with RGB surfaces, so I don't understand your claim that "there is no native (to SDL) support for RGB".
The native video acceleration interface of X, however, only supports YUV input. The YUV->RGB conversion on the GPU comes for free if you use the video acceleration interface, so no "cycles" are wasted there.
Perhaps you should go into more detail about your purposes. What framerate are we dealing with here?
I think you should use any uncompressed image + QPixmap.

Is there any example to show how to write a DirectShow transform filter?

I want to capture the current frame and the previous one, do some analysis, and produce a new frame to display. Does that mean I must write a DirectShow transform filter? I am a newbie to DirectShow and was confused by MSDN's many documents, so I wonder if there is a simple example showing how to do it.
Thanks.
Cook
The DirectShow samples that come with the Platform SDK, at least, always USED to include examples of how to make all sorts of filters. I can't believe they would have removed them. They made DirectShow almost usable :)
This may help:
Writing Transform Filters
EZRGB24 Filter Sample
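The EZRGB24 sample is the usual starting point: you derive from CTransformFilter and do the work in its Transform() override. For the current-vs-previous-frame analysis described in the question, the filter would keep a copy of the last frame in a member buffer and combine it with the incoming one. The per-byte core of that is independent of DirectShow; a sketch (the function name and the choice of absolute difference as the "analysis" are mine):

```cpp
#include <cassert>
#include <cstdint>
#include <cstdlib>

// Produce an output frame from the current frame and the previous one.
// Here the analysis is an absolute per-byte difference, a simple form of
// motion detection. Inside CTransformFilter::Transform() you would call
// this with the input sample's buffer, then copy the input into the
// member buffer holding the previous frame, ready for the next call.
void DiffFrames(const uint8_t* cur, const uint8_t* prev,
                uint8_t* out, size_t n) {
    for (size_t i = 0; i < n; ++i)
        out[i] = static_cast<uint8_t>(
            std::abs(static_cast<int>(cur[i]) - static_cast<int>(prev[i])));
}
```

The first frame has no predecessor, so Transform() needs a special case (e.g. pass it through unchanged and just store it).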

Automatic YUV -> RGB in DirectShow for custom decoder

After hours of searching the net, I'm quite desperate to find a solution for this. I have an up-and-running OGG Theora decoder in DirectShow which outputs the YV12 and YUY2 color models.
Now I want to make an RGB pixel manipulation filter for this output and pass the result on to the video renderer.
According to this and this, it should be really easy and transparent, but it isn't.
For example, I implemented this check in CheckInputType():
if (IsEqualGUID(*mtIn->Type(), MEDIATYPE_Video)
    && IsEqualGUID(*mtIn->Subtype(), MEDIASUBTYPE_RGB565))
{
    return S_OK;
}
and I would expect it to insert that MSYUV filter between Theora and my decoder and do the job for me (i.e. convert the input to RGB). The problem is that I get an error every time (in the GraphEdit application), and I'm 100% sure the input is YV12 (checked in the debugger). The only explanation I can think of is the mention of the AVI Decompressor, but there's no further info about it.
Does that mean I have to use the AVI container if I want to get this automatic functionality?
The strange thing is that it works, for example, for WMV videos (with YUV on their output); only this OGG decoder has a problem with it. So the question is probably: what is this OGG decoder missing?
Too bad the MSYUV filter doesn't work like the Color Space Converter, i.e. visible and directly usable in GraphEdit...
I'd appreciate any hint on this; programming my own YV12 -> RGB converter is my last resort.
There is no YUV to RGB colorspace converter built into DirectShow. The reason WMV files work for you is that the WMV decoder filter will output RGB or YUV data depending on the type of filter you connect it to.
The best you can do here is write a colorspace converter filter yourself, or just convert the YUV data after you get it.
Fourcc.org has a nice article on converting from YUV to RGB. Also, the book Video Demystified by Keith Jack has all the details on colorspace conversions.
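One detail worth knowing before writing that converter: YV12 has the same layout as I420/YUV420P except that the V plane comes before the U plane. Computing the plane pointers from the single contiguous buffer a media sample gives you might look like this (struct and function names are mine, for illustration):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// YV12 layout in one contiguous buffer:
//   Y plane (w*h bytes), then V plane (w/2 * h/2), then U plane.
// Note V before U -- the opposite order from I420/YUV420P.
struct Yv12Planes {
    const uint8_t* y;
    const uint8_t* v;
    const uint8_t* u;
};

Yv12Planes Yv12FromBuffer(const uint8_t* buf, int w, int h) {
    const size_t lumaSize   = static_cast<size_t>(w) * h;
    const size_t chromaSize = lumaSize / 4;  // half width, half height
    Yv12Planes p;
    p.y = buf;
    p.v = buf + lumaSize;
    p.u = buf + lumaSize + chromaSize;
    return p;
}
```

Mixing up the U and V planes is a classic bug here; the symptom is an image with red and blue swapped-looking tints.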
