image format best for display - qt

I am working on an image processing application and have to display an image sequence. I would like to avoid any extra overhead from (internal) format conversions.
I believe RGB should be the optimal format for display. But SDL accepts various YUV formats and there is no native (to SDL) support for RGB, whereas Qt does not accept YUV formats at all. X accepts the RGBX format natively. The images can be generated in any desired format for display, but CPU/GPU cycles for format conversion should be avoided. Any suggestion on the right way to display image sequences would be great.

The output format is ARGB. SDL works with RGB surfaces, so I don't understand your claim that "there is no native (to SDL) support for RGB".
The native video acceleration interface of X only supports YUV input, however. The YUV->RGB conversion comes for free on the GPU if you use the video acceleration interface, so no "cycles" are wasted there.
Perhaps you should go into more detail about your purposes. What is the framerate we are dealing with here?
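If SDL is an option, here is a minimal sketch of that overlay path, assuming the classic SDL 1.2 YUV overlay API (SDL 2 replaced it with streaming textures) and a YV12 frame; the flat gray frame is just a stand-in for real image data:

    #include <SDL/SDL.h>
    #include <cstring>

    // Push one YV12 frame through an SDL 1.2 YUV overlay; under X11 the overlay
    // maps to the XVideo extension, so the YUV->RGB step runs on the GPU.
    int main()
    {
        const int w = 640, h = 480;
        SDL_Init(SDL_INIT_VIDEO);
        SDL_Surface *screen = SDL_SetVideoMode(w, h, 0, SDL_HWSURFACE);
        SDL_Overlay *overlay = SDL_CreateYUVOverlay(w, h, SDL_YV12_OVERLAY, screen);

        SDL_LockYUVOverlay(overlay);
        std::memset(overlay->pixels[0], 128, overlay->pitches[0] * h);        // Y plane (flat gray test frame)
        std::memset(overlay->pixels[1], 128, overlay->pitches[1] * (h / 2));  // V plane
        std::memset(overlay->pixels[2], 128, overlay->pitches[2] * (h / 2));  // U plane
        SDL_UnlockYUVOverlay(overlay);

        SDL_Rect rect;
        rect.x = 0;
        rect.y = 0;
        rect.w = w;
        rect.h = h;
        SDL_DisplayYUVOverlay(overlay, &rect);
        SDL_Delay(2000);

        SDL_FreeYUVOverlay(overlay);
        SDL_Quit();
        return 0;
    }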

I think you should use any uncompressed image format together with QPixmap.
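For example, a minimal Qt sketch, assuming a raw 32-bit ARGB frame buffer (the size and fill color here are made up), that wraps the data in a QImage and shows it via QPixmap:

    #include <QApplication>
    #include <QImage>
    #include <QLabel>
    #include <QPixmap>
    #include <vector>

    int main(int argc, char *argv[])
    {
        QApplication app(argc, argv);

        // Hypothetical frame: 640x480, 32-bit ARGB, one quint32 per pixel.
        const int width = 640, height = 480;
        std::vector<quint32> frame(width * height, 0xFF336699); // solid-color stand-in

        // Wrap the raw buffer without copying; the buffer must outlive the QImage.
        QImage image(reinterpret_cast<const uchar *>(frame.data()),
                     width, height, QImage::Format_ARGB32);

        QLabel label;
        label.setPixmap(QPixmap::fromImage(image)); // converted/uploaded per frame
        label.show();

        return app.exec();
    }

For a sequence, you would update the label's pixmap on a timer or whenever a new frame arrives.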

Related

How to assert that a TIFF file conforms to the standard?

Today I use FreeImage 3.15.4.0 to generate TIFF images. One of my users tells me that he cannot read such images because his library (C++/Qt, I think) can't read them.
The images generated are readable with ImageJ and some other image processing tools.
So I wonder:
How can I be sure that my images respect the standard (FreeImage relies on libtiff 4.0.3)?
Are my images too complex? (32-bit float images)
Does a simple standard format exist for 32-bit float images?
EDIT
I checked by hand that my image follows the format described by Adobe: http://partners.adobe.com/public/developer/en/tiff/TIFF6.pdf.
So, is there a comparison of libraries showing which parts of the specification are supported by which library?
Answer to the 3 questions: yes.
The fact is that Qt, like others, is not able to display 32-bit float images: this is not supported by the QImage class (http://doc.qt.io/qt-5/qimage.html#Format-enum).
So the user will have to convert the image into something Qt (the system, in fact) is able to display.
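As an illustration, a minimal sketch of such a conversion, assuming a non-empty single-channel float buffer of width*height pixels that is normalized to its own min/max range:

    #include <QImage>
    #include <algorithm>
    #include <vector>

    // Map a single-channel 32-bit float image onto an 8-bit grayscale QImage
    // (Format_Grayscale8, Qt 5.5+), since QImage has no floating-point formats.
    QImage floatToQImage(const std::vector<float> &pixels, int width, int height)
    {
        const auto mm = std::minmax_element(pixels.begin(), pixels.end());
        const float lo = *mm.first;
        const float range = (*mm.second > lo) ? (*mm.second - lo) : 1.0f;

        QImage image(width, height, QImage::Format_Grayscale8);
        for (int y = 0; y < height; ++y) {
            uchar *line = image.scanLine(y);
            for (int x = 0; x < width; ++x) {
                const float v = (pixels[y * width + x] - lo) / range; // normalize to [0, 1]
                line[x] = static_cast<uchar>(v * 255.0f + 0.5f);
            }
        }
        return image;
    }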

How to view a floating-point image on the web with ASP.NET?

Is there any way to display a floating-point image on a web form?
Alternatively, I am looking for an algorithm to convert a floating-point image into PNG or JPEG format.
I am looking for an open source project.
It depends very much on the format of the floating point imagery.
If you have straight uncompressed RGBA floats, you can simply iterate over them using unsafe methods, clamp the float values to byte values, and fill up the pixels of a .NET bitmap that has been locked using the LockBits method.
For more information on LockBits, see this article by Bob Powell.
If the floating point data is not uncompressed or not in a known or accessible format, you might try an open source library such as FreeImage.NET to open the image.
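The clamp-and-pack loop itself is language-agnostic. Here is a minimal C++ sketch of the same idea, assuming [0,1]-range RGBA floats; in .NET the destination would be the Scan0 pointer returned by Bitmap.LockBits rather than a vector, and the loop body would sit in an unsafe block:

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // Clamp [0,1] float RGBA pixels to 8-bit BGRA (the byte order a 32bpp GDI+
    // bitmap expects); in .NET the destination would be the Scan0 pointer from
    // Bitmap.LockBits instead of a std::vector.
    std::vector<uint8_t> floatRgbaToBgra8(const std::vector<float> &rgba,
                                          int width, int height)
    {
        std::vector<uint8_t> out(static_cast<size_t>(width) * height * 4);
        const auto toByte = [](float v) {
            return static_cast<uint8_t>(std::clamp(v, 0.0f, 1.0f) * 255.0f + 0.5f);
        };
        for (size_t i = 0, n = static_cast<size_t>(width) * height; i < n; ++i) {
            out[i * 4 + 0] = toByte(rgba[i * 4 + 2]); // B
            out[i * 4 + 1] = toByte(rgba[i * 4 + 1]); // G
            out[i * 4 + 2] = toByte(rgba[i * 4 + 0]); // R
            out[i * 4 + 3] = toByte(rgba[i * 4 + 3]); // A
        }
        return out;
    }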

How can I convert AVI to MP4 using GraphEdit and ffdshow?

I'm working on an application based on DirectShow that has to convert an AVI source file to an MP4 file that can be played back with QuickTime.
Since 3ivx, according to my web research the most popular way to fulfill this task, has become commercial (and my budget is quite limited), I decided to use a solution based on ffdshow.
I created a simple graph in GraphEdit, using LAME for audio encoding and the GDCL MPEG-4 Multiplexor for the muxing, but every time I try to play the movie with QuickTime, I get an error indicating a wrong "sample description".
Playback with Windows Media Player is working, except that there is no sound.
My guess is that there's a problem with the muxer, because every time I try to add audio encoding, GraphEdit automatically adds a decoder after the encoding unit (see picture link).
http://imageshack.us/photo/my-images/39/graphjrgr.png/
Any ideas on how to integrate ffdshow in a better way, tips for alternative MP4 muxers, or a completely different approach are appreciated!
The GDCL muxer supports only a limited number of audio formats; you should probably check the muxer's source code to see whether the formats you are using are in fact supported. Basically, you need to choose an audio encoder that the muxer recognizes as valid. It might be possible to use GraphEdit to set different properties on the encoder filter so that things work better.
I have had some luck with the Monogram x264 (video) and AAC (audio) encoders. See http://blog.monogram.sk/janos/directshow-filters/
Finally, try the debug version of the GDCL mp4 muxer.
Also, you must be aware of the MPEG LA licensing requirements for x264: http://www.mpegla.com/main/programs/AVC/Pages/FAQ.aspx

DirectShow RGB-YUV filter

I would like to encode video in my app with VP8. I use the RGB24 format in my app, but the VP8 DirectShow filter accepts only YUV formats (http://www.webmproject.org/tools/#directshow_filters).
I've googled for an "RGB to YUV DirectShow filter" but with no success. I don't want to write this filter myself from scratch, so I would appreciate any information on where to find such a filter.
Thanks!
You could try Geraint Davies' YUV transform filter to see if it supports the conversion.
Starting from Vista, you can use the Color Converter DSP; does this help?
If you know how to implement a transform filter, I have a fast YUV to RGB algorithm somewhere. I used DirectShow a looong time ago, so I can't be of any more help than this :P
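If none of those filters work out, the conversion itself is small enough to do in your own code before feeding the encoder. A minimal sketch, assuming the encoder takes planar I420/YV12-style input, even frame dimensions, and a top-down buffer in R,G,B byte order (DirectShow RGB24 samples are usually bottom-up BGR, so a real filter would adjust the indexing), using the common BT.601 integer approximation:

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    struct I420Frame {
        std::vector<uint8_t> y, u, v; // Y = w*h, U and V = (w/2)*(h/2) each
    };

    static uint8_t clamp8(int v) { return static_cast<uint8_t>(std::min(std::max(v, 0), 255)); }

    // BT.601 integer approximation; width and height assumed even.
    I420Frame rgb24ToI420(const uint8_t *rgb, int width, int height)
    {
        I420Frame f;
        f.y.resize(static_cast<size_t>(width) * height);
        f.u.resize((width / 2) * (height / 2));
        f.v.resize((width / 2) * (height / 2));

        for (int j = 0; j < height; ++j) {
            for (int i = 0; i < width; ++i) {
                const int r = rgb[(j * width + i) * 3 + 0];
                const int g = rgb[(j * width + i) * 3 + 1];
                const int b = rgb[(j * width + i) * 3 + 2];
                f.y[j * width + i] = clamp8(((66 * r + 129 * g + 25 * b + 128) >> 8) + 16);
                if ((i % 2 == 0) && (j % 2 == 0)) { // subsample chroma over 2x2 blocks
                    const int c = (j / 2) * (width / 2) + (i / 2);
                    f.u[c] = clamp8(((-38 * r - 74 * g + 112 * b + 128) >> 8) + 128);
                    f.v[c] = clamp8(((112 * r - 94 * g - 18 * b + 128) >> 8) + 128);
                }
            }
        }
        return f;
    }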

Automatic YUV -> RGB in DirectShow for custom decoder

After hours of searching on the net, I'm quite desperate to find a solution for this. I have an up-and-running Ogg Theora decoder in DirectShow which outputs the YV12 and YUY2 color models.
Now I want to make an RGB pixel-manipulation filter for this output and feed the result into the video renderer.
According to this and this, it should be really easy and transparent, but it isn't.
For example, I implemented this check in CheckInputType():
if( IsEqualGUID(*mtIn->Type(), MEDIATYPE_Video )
    && IsEqualGUID(*mtIn->Subtype(), MEDIASUBTYPE_RGB565 ) )
{
    return S_OK;
}
and I would expect it to insert the MSYUV filter between Theora and my filter and do the job for me (i.e. convert to RGB). The problem is that I get an error every time (in the GraphEdit application). And I'm 100% sure the input is YV12 (checked in the debugger). The only explanation I could think of is that mention of the AVI Decompressor, but there's no further info about it.
Does that mean I have to use the AVI container if I want to get this automatic functionality?
The strange thing is that it works, for example, for WMV videos (with YUV on their output); only this Ogg decoder has a problem with it. So it's probably a question of what this Ogg decoder is missing.
Too bad the MSYUV filter doesn't work like the Color Space Converter, i.e. visible and directly usable in GraphEdit...
I'd appreciate any hint on this; programming my own YV12 -> RGB converter is my last resort.
There is no YUV to RGB colorspace converter built into DirectShow. The reason WMV files work for you is that the WMV decoder filter will output RGB or YUV data depending on the type of filter you connect it to.
The best you can do here is write a colorspace converter filter yourself, or just convert the YUV data after you get it.
Fourcc.org has a nice article on converting from YUV to RGB. The book Video Demystified by Keith Jack also has all the details on colorspace conversions.
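For reference, a minimal YV12 -> RGB24 sketch along the lines of the fourcc.org formulas (BT.601 integer approximation; even width and height assumed):

    #include <algorithm>
    #include <cstdint>

    static uint8_t clip8(int v) { return static_cast<uint8_t>(std::min(std::max(v, 0), 255)); }

    // One YV12 frame (planar Y, then V, then U, chroma subsampled 2x2) to
    // packed RGB24.
    void yv12ToRgb24(const uint8_t *src, int width, int height, uint8_t *dst)
    {
        const uint8_t *yPlane = src;
        const uint8_t *vPlane = yPlane + width * height;              // V precedes U in YV12
        const uint8_t *uPlane = vPlane + (width / 2) * (height / 2);

        for (int j = 0; j < height; ++j) {
            for (int i = 0; i < width; ++i) {
                const int c = yPlane[j * width + i] - 16;
                const int d = uPlane[(j / 2) * (width / 2) + i / 2] - 128;
                const int e = vPlane[(j / 2) * (width / 2) + i / 2] - 128;
                uint8_t *p = dst + (j * width + i) * 3;
                p[0] = clip8((298 * c + 409 * e + 128) >> 8);           // R
                p[1] = clip8((298 * c - 100 * d - 208 * e + 128) >> 8); // G
                p[2] = clip8((298 * c + 516 * d + 128) >> 8);           // B
            }
        }
    }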
