Convert a raw V4L2 buffer to QVideoFrame in Qt

I get raw video data from the V4L2 driver using VIDIOC_DQBUF, and I want to render each frame in Qt using QVideoFrame (which constructs a video frame) and QLabel/QPainter (for rendering the frame).
QVideoFrame::QVideoFrame(QAbstractVideoBuffer *buffer, const QSize &size, QVideoFrame::PixelFormat format)
Constructs a video frame from a buffer with the given pixel format and size in pixels.
QVideoFrame, from the Qt documentation
At the moment I use QImage to render RGB24 frames, and QImage only supports RGB formats. However, the raw video frames received from VIDIOC_DQBUF can come in various other color formats, and QVideoFrame supports most of them.
Queries:
How to use QVideoFrame::QVideoFrame(QAbstractVideoBuffer *buffer, const QSize &size, QVideoFrame::PixelFormat format) for a V4L2 buffer?
How can I use the map(), bits() and mappedBytes() functions so that I can get a QVideoFrame constructed for the given raw video data?
How can I use QPainter/QLabel to render a QVideoFrame?

Let's reverse the order.
How can I use QPainter/QLabel to render a QVideoFrame?
You cannot. You need to use a QAbstractVideoSurface-derived class; in QML, this is VideoOutput. If you only want a single image, then QVideoFrame is not the correct class to use with QPainter/QLabel. A minimal widget-backed surface is sketched below.
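As a rough sketch (Qt 5), assuming you paint the frames onto an ordinary QWidget from its paintEvent(); the class name, member names and the two advertised pixel formats are just illustrative:

#include <QAbstractVideoSurface>
#include <QAbstractVideoBuffer>
#include <QVideoFrame>
#include <QWidget>
#include <QPainter>

class WidgetVideoSurface : public QAbstractVideoSurface
{
public:
    explicit WidgetVideoSurface(QWidget *target, QObject *parent = nullptr)
        : QAbstractVideoSurface(parent), m_target(target) {}

    QList<QVideoFrame::PixelFormat> supportedPixelFormats(
        QAbstractVideoBuffer::HandleType type = QAbstractVideoBuffer::NoHandle) const override
    {
        if (type != QAbstractVideoBuffer::NoHandle)
            return {};
        return { QVideoFrame::Format_RGB32, QVideoFrame::Format_RGB24 };
    }

    bool present(const QVideoFrame &frame) override
    {
        m_frame = frame;        // keep the latest frame
        m_target->update();     // schedule a repaint of the target widget
        return true;
    }

    // Call this from the target widget's paintEvent().
    void paint(QPainter *painter)
    {
        QVideoFrame frame(m_frame);                       // shallow copy so we can map it
        if (!frame.map(QAbstractVideoBuffer::ReadOnly))
            return;
        const QImage image(frame.bits(), frame.width(), frame.height(),
                           frame.bytesPerLine(),
                           QVideoFrame::imageFormatFromPixelFormat(frame.pixelFormat()));
        painter->drawImage(m_target->rect(), image);
        frame.unmap();
    }

private:
    QWidget *m_target;
    QVideoFrame m_frame;
};

The widget's paintEvent() constructs a QPainter and calls surface->paint(&painter), and your capture loop feeds frames in via present(). Remember to start() the surface with a matching QVideoSurfaceFormat before presenting frames.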
How can I use the map(), bits() and mappedBytes() functions so that I can get a QVideoFrame constructed for the given raw video data?
These functions are your interface to the QAbstractVideoSurface. It depends on how you want to store the V4L2 buffer: are you copying/translating it, or mapping it directly? Then there are ownership issues, which this API attempts to address.
How to use QVideoFrame::QVideoFrame(QAbstractVideoBuffer *buffer, const QSize &size, QVideoFrame::PixelFormat format) for a V4L2 buffer?
You need to subclass QAbstractVideoBuffer, either copying/translating the data and keeping it inside the class, or holding a reference to the driver's memory if you are using some form of zero-copy; see the sketch below.
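For example, a zero-copy wrapper around an already mmap()'ed V4L2 buffer could look like this sketch (Qt 5). The pointer, length and stride arguments and the UYVY pixel format are assumptions about your capture setup; use whatever VIDIOC_QUERYBUF and VIDIOC_S_FMT actually gave you:

#include <QAbstractVideoBuffer>
#include <QVideoFrame>

// Wraps an already mmap()'ed V4L2 buffer without copying it.
class V4L2VideoBuffer : public QAbstractVideoBuffer
{
public:
    V4L2VideoBuffer(uchar *data, int length, int bytesPerLine)
        : QAbstractVideoBuffer(NoHandle),
          m_data(data), m_length(length), m_bytesPerLine(bytesPerLine) {}

    MapMode mapMode() const override { return ReadOnly; }

    uchar *map(MapMode, int *numBytes, int *bytesPerLine) override
    {
        if (numBytes)
            *numBytes = m_length;
        if (bytesPerLine)
            *bytesPerLine = m_bytesPerLine;
        return m_data;              // hand out the driver's memory directly
    }

    void unmap() override {}        // nothing to release; the driver owns the memory

private:
    uchar *m_data;
    int m_length;
    int m_bytesPerLine;
};

// After VIDIOC_DQBUF, wrap the dequeued buffer in a QVideoFrame.  QVideoFrame
// takes ownership of the QAbstractVideoBuffer, so the underlying V4L2 buffer
// must not be re-queued while the frame is still in use.
QVideoFrame makeFrame(uchar *mmapPtr, int length, int stride, int width, int height)
{
    return QVideoFrame(new V4L2VideoBuffer(mmapPtr, length, stride),
                       QSize(width, height),
                       QVideoFrame::Format_UYVY);  // pick the format matching your V4L2 pixelformat
}

The resulting QVideoFrame can then be handed to your QAbstractVideoSurface via present(). If the frame may outlive the dequeued buffer, copy the data into the subclass instead of pointing at the driver's memory.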
By default, QML Camera and QCamera will find and use /dev/videoX (a V4L2 device) via GStreamer. That pipeline should already do the right thing to supply a VideoOutput element.
See: Qt Video overview

Related

OpenGL / Qt: need help converting from QImage to an OpenGL format and rendering the pixels

We are migrating from old OpenGL to modern OpenGL. I am trying to port two functions that use Qt/OpenGL to modern OpenGL. The QImage content should be converted to an OpenGL format; then I want to read the pixels of the QImage and render them in OpenGL. How do I do this in modern OpenGL? I know glCopyPixels()/glDrawPixels() are deprecated. Any pointers? I have the following code, but it is in old OpenGL. Basically the whole idea is writing to the back buffer and restoring the back buffer, then rendering the pixels to avoid a redraw. I am using the QOpenGLWidget class provided by the Qt framework (Qt 5.1). I have tried many things to convert from QImage to an OpenGL format, but it did not work. Need your help. Thanks in advance.
QImage _savedBackBuffer;

void SaveBackBuffer()
{
    glReadBuffer(GL_BACK);
    QImage buf = this->grabFramebuffer();
    _savedBackBuffer = convertToGLFormat(buf); // convertToGLFormat is not available in the QOpenGLWidget class
}

void restoreBackBuffer()
{
    glDrawBuffer(GL_BACK);
    glDrawPixels(_savedBackBuffer.width(), _savedBackBuffer.height(),
                 GL_RGBA, GL_UNSIGNED_BYTE, _savedBackBuffer.bits()); // glDrawPixels is deprecated -- how do I handle this call?
}

void flush()
{
    glReadBuffer(GL_BACK);
    glDrawBuffer(GL_FRONT);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glCopyPixels(0, 0, _scrWidth, _scrHeight, GL_COLOR); // glCopyPixels is deprecated
    ...
    glFlush();
}
I have added the code below to grab the framebuffer, but I still get an empty QImage. Is anything wrong with my code?
void saveBackBuffer()
{
    _bSavingBackBuffer = true;
    QString fileName("C:\\Users\\ey617e\\Desktop\\yourFile.png");
    QFile file(fileName);
    file.open(QIODevice::WriteOnly);

    glReadBuffer(GL_BACK);
    makeCurrent();

    QOpenGLFramebufferObjectFormat format;
    format.setAttachment(QOpenGLFramebufferObject::CombinedDepthStencil);
    QOpenGLFramebufferObject *fbo =
        new QOpenGLFramebufferObject(_scrWidth, _scrHeight, format);
    fbo->bind();

    paintGL();

    _savedBackBuffer = fbo->toImage();
    _savedBackBuffer.save(file.fileName(), "PNG");

    fbo->release();
}
void paintGL()
{
    QOpenGLPaintDevice fboPaintDev(_scrWidth, _scrHeight);
    QPainter painter(&fboPaintDev);
    painter.setRenderHints(QPainter::Antialiasing | QPainter::TextAntialiasing);

    painter.beginNativePainting();
    drawDisplayLists(_underIllusDisplayLists);
    drawDisplayLists(_illusDisplayLists);
    painter.endNativePainting();

    painter.drawText(20, 40, "Foo");
    painter.end();
}
You can create a QOpenGLTexture object directly from a QImage: https://doc.qt.io/qt-5/qopengltexture.html#QOpenGLTexture-1
You can then use that texture directly for any image-related OpenGL operations, for example:
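A minimal sketch; the helper name and the filtering choices are just illustrative:

#include <QOpenGLTexture>
#include <QImage>

QOpenGLTexture *createTexture(const QImage &image)
{
    // QOpenGLTexture converts the QImage into a GL-compatible layout itself;
    // mirroring compensates for QImage's top-left vs. OpenGL's bottom-left origin.
    auto *texture = new QOpenGLTexture(image.mirrored());
    texture->setMinificationFilter(QOpenGLTexture::LinearMipMapLinear);
    texture->setMagnificationFilter(QOpenGLTexture::Linear);
    return texture;   // bind() it and draw a textured quad instead of calling glDrawPixels()
}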
Basically the whole idea is writing to back buffer and restoring the back buffer and render pixels to avoid redraw.
Don't do that! It will actually impair performance, since drawing on top of previously rendered content introduces implicit synchronization points, thereby eliminating options to render new contents in parallel to advancing the presentation swap chain.
As "counterintuitive" as it may sound, just redraw the whole thing, each and every frame. If your codebase is that old, the complexity of what you're drawing is very likely so low that you could easily render thousands of frames per second.
On the other hand, retaining the contents of the back buffer constitutes a cache, and thus introduces the complexity of deciding on cache invalidation.
I bet that just redrawing with modern methods (geometry in buffer objects, index buffers, untangled sync points) and simplifying the rendering code path by simply eliminating the code that decides when to redraw portions of the picture will vastly outperform anything you had before.

Display image using QImage without using pixmap in Qt?

I have a requirement to read pixel values from the picture displayed on a QGraphicsScene layout. How can I display an image using QImage, without using a pixmap, in Qt so that I am able to read the pixel values?
On most platforms, a QPixmap is a thin wrapper around a QImage. The conversions between the two are cheap - especially the pixmap-to-image conversion. Thus, you can use the QGraphicsPixmapItem and use item->pixmap().toImage() without much worry. To confirm that QPixmap is indeed a wrapper, the following check will do:
bool isPixmapThin(const QPixmap &pix) {
    auto const a = pix.toImage();
    auto const b = pix.toImage();
    return a.bits() == b.bits();
}
In all cases, ensure that the image you take from the pixmap won't detach, i.e. always make it const (as in the code example above).
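Reading a pixel back might then look like this small sketch, where item stands in for whichever QGraphicsPixmapItem you added to the scene and pixelAt() is a made-up helper:

#include <QGraphicsPixmapItem>
#include <QImage>
#include <QColor>

QColor pixelAt(const QGraphicsPixmapItem *item, int x, int y)
{
    const QImage image = item->pixmap().toImage();  // cheap with the raster backend
    return image.pixelColor(x, y);                  // const image: reading never detaches
}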

QOpenGLFramebufferObject toImage efficiency

The toImage() method seems to be very slow for real-time rendering. I render to the framebuffer using OpenGL, but I then need to fetch the colour data back efficiently. Is there a better way to do this than with the Qt FBO's toImage() method? I do want to end up with a QImage.
Note: using the toImage() function I get ~10 fps, while my target is at least 30 fps. The buffer holds RGBA data for a texture that is 1200*674 (so 1200*674*4 bytes are fetched).

How to get the image file format from a QPixmap?

In my program, the user chooses and loads an image into a QPixmap in one class. After some work on the loaded QPixmap, it is passed into another class, where I want to save it to a file, but I don't know the QPixmap's format!
How can we get the image file format from a QPixmap?
A pixmap is conceptually system-specific, has no format per se, and may well lose data from the image that you've loaded. Also note that the image format and file format are two different things.
To preserve the image format, you must use the QImage class.
To preserve the file format, you must explicitly use QImageReader to read the image. The file format is available through the reader's format() method. It needs to be stored separately from the image, and used when saving the image later.
Finally, you might wish to preserve the file's name.
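A rough sketch of the QImageReader approach above; the helper name and the out-parameter are made up for illustration:

#include <QImageReader>
#include <QImage>
#include <QByteArray>
#include <QString>

QImage loadImageKeepingFormat(const QString &path, QByteArray *fileFormat)
{
    QImageReader reader(path);
    if (fileFormat)
        *fileFormat = reader.format();  // e.g. "png", "jpeg"; store this alongside the image
    return reader.read();
}

// Later, when saving, reuse the stored format (and possibly the original name):
//   image.save(newPath, fileFormat.constData());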
As a matter of implementation detail: with Qt's default raster backend, a QPixmap is a thin wrapper around QImage. But your intent is that of an image, not a pixmap, so you should use the image class.
QImage img = pixmap.toImage();
img.save("/media/mmcblk0p1/xx.jpeg"); // without an explicit format, save() deduces it from the ".jpeg" suffix

Qt: send QPixmap in QDrag's QMimeData?

I create a drag object from a QListWidgetItem.
I can send text as mime data in this drag object.
How can I send a pixmap and retrieve it from the mime data?
Would it even be possible to create a QGraphicsItem and retrieve it?
I try to drag & drop from the QListWidget into a QGraphicsView.
There are multiple ways to send a QPixmap through QMimeData:
1. By encoding it into a file format such as PNG and sending that with mime-type image/png (QMimeData has built-in support for that, cf. QMimeData::imageData()).
2. By serialising the QPixmap into a QByteArray using a QDataStream and sending the serialisation under an app-specific mime-type such as application/x-app-name.
3. By writing the image data to a file on disk and sending a file URL for it with mime-type text/uri-list (QMimeData has built-in support for this, cf. QMimeData::urls()). This allows such images to be dragged onto a file manager or the desktop.
4. Similar to (2) above, you can also create a QGraphicsItem, stuff its address into a QByteArray and send that under an app-specific mime-type. This doesn't work if the drag ends in another process, of course (the receiving side can test for this, because QDropEvent::source() returns 0 in that case), and it requires special care to handle the graphics item's lifetime.
Seeing as QMimeData allows you to pass several formats at once, these options are non-exclusive. You should, however, sort the formats you return from your reimplementation of QMimeData::formats() in order of decreasing specificity, i.e. your app-private formats come first, and text/uri-list comes last.
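A small sketch of options (1) and (2); the application/x-myapp-pixmap mime type is a made-up app-private name:

#include <QMimeData>
#include <QPixmap>
#include <QByteArray>
#include <QDataStream>
#include <QIODevice>

QMimeData *mimeDataForPixmap(const QPixmap &pixmap)
{
    auto *mime = new QMimeData;

    // (1) Standard image data: receivers can read it back with
    //     qvariant_cast<QImage>(mime->imageData()).
    mime->setImageData(pixmap.toImage());

    // (2) App-private serialisation of the pixmap itself.
    QByteArray bytes;
    QDataStream out(&bytes, QIODevice::WriteOnly);
    out << pixmap;
    mime->setData(QStringLiteral("application/x-myapp-pixmap"), bytes);

    return mime;
}

// On the drop side, check for the same mime type and decode it again:
//   QPixmap pixmap;
//   QDataStream in(mimeData->data("application/x-myapp-pixmap"));
//   in >> pixmap;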
