QOpenGLFramebufferObject toImage efficiency - qt

The toImage() method seems to be very slow for real-time rendering. I render to the framebuffer using OpenGL, but I then need to fetch the colour data back efficiently. Is there a better way to do this than with the Qt FBO's toImage() method? I do want to return a QImage.
Note: using the toImage() function I get ~10 fps, when my target is at least 30 fps. The buffer holds RGBA data for a 1200*674 texture (so 1200*674*4 bytes are fetched per frame).
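One workaround to consider, sketched below under the assumption of a current Qt 5 (≥ 5.2) OpenGL context: call glReadPixels() yourself into a preallocated QImage, skipping toImage()'s extra allocation and conversion. The helper name readFboToImage is made up for illustration; for sustained readback, asynchronous transfers through a pixel buffer object (PBO) are usually the next step.

#include <QImage>
#include <QOpenGLContext>
#include <QOpenGLFramebufferObject>
#include <QOpenGLFunctions>

// Hypothetical helper: reads an RGBA framebuffer into a QImage without toImage().
QImage readFboToImage(QOpenGLFramebufferObject *fbo)
{
    QImage image(fbo->size(), QImage::Format_RGBA8888);   // matches GL_RGBA / GL_UNSIGNED_BYTE
    fbo->bind();
    QOpenGLContext::currentContext()->functions()->glReadPixels(
        0, 0, fbo->width(), fbo->height(),
        GL_RGBA, GL_UNSIGNED_BYTE, image.bits());
    fbo->release();
    return image.mirrored();   // OpenGL rows are bottom-up; QImage expects top-down
}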

Related

OpenGL / Qt: Need help converting from QImage to OpenGL format and rendering the pixels

We are migrating from old OpenGL to modern OpenGL. I am trying to port two functions that use Qt/OpenGL to modern OpenGL. The QImage content should be converted to an OpenGL format; then I want to read the pixels of the QImage and render them in OpenGL. How can I do this in modern OpenGL? I know glCopyPixels()/glDrawPixels() are deprecated. Any pointers? I have the following code, but it is in old OpenGL. Basically the whole idea is writing to the back buffer and restoring the back buffer, and rendering the pixels to avoid a redraw. I am using the QOpenGLWidget class provided by the Qt framework (Qt 5.1). I have tried many ways of converting a QImage to an OpenGL format, but it did not work. I need your help. Thanks in advance.
QImage _savedBackBuffer;

void SaveBackBuffer()
{
    glReadBuffer(GL_BACK);
    QImage buf = this->grabFramebuffer();
    _savedBackBuffer = convertToGLFormat(buf); // convertToGLFormat is not available in the QOpenGLWidget class
}

void restoreBackBuffer()
{
    glDrawBuffer(GL_BACK);
    glDrawPixels(_savedBackBuffer.width(), _savedBackBuffer.height(),
                 GL_RGBA, GL_UNSIGNED_BYTE, _savedBackBuffer.bits()); // glDrawPixels is deprecated -- how do I handle this call?
}

void flush()
{
    glReadBuffer(GL_BACK);
    glDrawBuffer(GL_FRONT);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glCopyPixels(0, 0, _scrWidth, _scrHeight, GL_COLOR); // glCopyPixels is deprecated
    ...
    glFlush();
}
I have added the code below to grab the framebuffer, but I am still getting an empty QImage. Is anything wrong with my code?
void saveBackBuffer()
{
    _bSavingBackBuffer = true;

    QString fileName("C:\\Users\\ey617e\\Desktop\\yourFile.png");
    QFile file(fileName);
    file.open(QIODevice::WriteOnly);

    glReadBuffer(GL_BACK);
    makeCurrent();

    QOpenGLFramebufferObjectFormat format;
    format.setAttachment(QOpenGLFramebufferObject::CombinedDepthStencil);
    QOpenGLFramebufferObject *fbo =
        new QOpenGLFramebufferObject(_scrWidth, _scrHeight, format);
    fbo->bind();

    paintGL();

    _savedBackBuffer = fbo->toImage();
    _savedBackBuffer.save(file.fileName(), "PNG");

    fbo->release();
}
void paintGL()
{
    QOpenGLPaintDevice fboPaintDev(_scrWidth, _scrHeight);
    QPainter painter(&fboPaintDev);
    painter.setRenderHints(QPainter::Antialiasing | QPainter::TextAntialiasing);

    painter.beginNativePainting();
    drawDisplayLists(_underIllusDisplayLists);
    drawDisplayLists(_illusDisplayLists);
    painter.endNativePainting();

    painter.drawText(20, 40, "Foo");
    painter.end();
}
You can create a QOpenGLTexture object directly from a QImage: https://doc.qt.io/qt-5/qopengltexture.html#QOpenGLTexture-1
You can then use that texture directly for any image-related OpenGL operations.
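A minimal sketch of that approach, assuming a current OpenGL context (the helper name makeTexture is made up for illustration):

#include <QImage>
#include <QOpenGLTexture>

QOpenGLTexture *makeTexture(const QImage &image)
{
    // QOpenGLTexture performs the pixel-format conversion itself, so
    // convertToGLFormat() is no longer needed; mirrored() flips the image
    // into OpenGL's bottom-up row order.
    QOpenGLTexture *texture = new QOpenGLTexture(image.mirrored());
    texture->setMinificationFilter(QOpenGLTexture::Linear);
    texture->setMagnificationFilter(QOpenGLTexture::Linear);
    return texture;
}

// While rendering, bind the texture and draw a textured quad with your shader program:
// texture->bind();
// glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);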
Basically the whole idea is writing to the back buffer and restoring the back buffer, and rendering the pixels to avoid a redraw.
Don't do that! It will actually impair performance, since drawing on top of previously rendered content introduces implicit synchronization points, thereby eliminating options to render new contents in parallel to advancing the presentation swap chain.
As "counterintuitive" as it may sound, just redraw the whole thing, each and every frame. If your codebase is that old, then the complexity of what you're drawing very likely is going to be so low, that you could easily render thousands of frames per second.
On the other hand retaining the contents of the backbuffer constitutes a cache and thus introduces the complexity of deciding upon cache invalidation.
I bet that just redrawing using modern methods (geometry in buffer objects, index buffers, untangling of sync points) and simplifying the rendering code path by merely eliminating the code that's responsible for determining when to actually redraw portions of the picture will vastly outperform anything you had before.

Convert raw v4l2 buffer to QVideoFrame in Qt

I get raw video data from the V4L2 driver using VIDIOC_DQBUF, and I want to render these frames in Qt using QVideoFrame (which constructs a video frame) and QLabel/QPainter (for rendering the video frame).
QVideoFrame::QVideoFrame(QAbstractVideoBuffer *buffer, const QSize &size, QVideoFrame::PixelFormat format)
Constructs a video frame from a buffer with the given pixel format and size in pixels.
QVideoFrame from Qt
As of now, I'm using QImage to render RGB24, and QImage supports only RGB formats. However, the raw video frames received from VIDIOC_DQBUF come in various colour formats, and QVideoFrame supports most of them.
Queries:
How do I use QVideoFrame::QVideoFrame(QAbstractVideoBuffer *buffer, const QSize &size, QVideoFrame::PixelFormat format) for a v4l2 buffer?
How can I use the map(), bits() and mappedBytes() functions so that I get a QVideoFrame constructed for the given raw video data?
How can I use QPainter/QLabel to render a QVideoFrame?
Regards,
Kulakrni
Let's reverse the order.
How can I use QPainter/QLabel to render a QVideoFrame?
You cannot. You need to use a QAbstractVideoSurface-derived class. In QML, this is VideoOutput. If you want a single image, then QVideoFrame is not the correct class to use with QPainter/QLabel.
How can I use the map(), bits() and mappedBytes() functions so that I get a QVideoFrame constructed for the given raw video data?
These functions are your interface to the QAbstractVideoSurface. It depends on how you want to store the V4L2 buffer: are you copying/translating it, or are you mapping it directly? Either way there are ownership issues that this API attempts to address.
How do I use QVideoFrame::QVideoFrame(QAbstractVideoBuffer *buffer, const QSize &size, QVideoFrame::PixelFormat format) for a v4l2 buffer?
You need to subclass QAbstractVideoBuffer, either copying/translating the data and keeping it inside the class, or holding a reference to it if you are using zero-copy of some sort (see the sketch below).
By default, QML Camera and QCamera will find and use /dev/videoX, which is a v4l2 device, via GStreamer. Those classes should already do the right thing to supply a VideoOutput widget.
See: Qt Video overview
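A rough zero-copy sketch of such a subclass follows; the class name V4l2VideoBuffer and the YUYV pixel format are assumptions for illustration, and the mmap()ed memory remains owned by the driver:

#include <QAbstractVideoBuffer>
#include <QVideoFrame>

class V4l2VideoBuffer : public QAbstractVideoBuffer
{
public:
    V4l2VideoBuffer(uchar *data, int length, int bytesPerLine)
        : QAbstractVideoBuffer(NoHandle),
          m_data(data), m_length(length), m_bytesPerLine(bytesPerLine) {}

    MapMode mapMode() const override { return ReadOnly; }

    uchar *map(MapMode, int *numBytes, int *bytesPerLine) override
    {
        if (numBytes)
            *numBytes = m_length;
        if (bytesPerLine)
            *bytesPerLine = m_bytesPerLine;
        return m_data;        // zero-copy: hand out the mmap()ed pointer
    }

    void unmap() override {}  // nothing to release; the driver owns the memory

private:
    uchar *m_data;
    int m_length;
    int m_bytesPerLine;
};

// After VIDIOC_DQBUF, wrap the dequeued buffer and hand the frame to a video surface:
// QVideoFrame frame(new V4l2VideoBuffer(ptr, len, stride),
//                   QSize(width, height), QVideoFrame::Format_YUYV);
// surface->present(frame);  // surface is a QAbstractVideoSurface*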

OpenGL glBufferData usage

I am using glBufferData to save some information for rendering.
glBufferData(GL_ARRAY_BUFFER, vertex_size * sizeof(VertexData), vertices, GL_DYNAMIC_DRAW);
where vertices holds the data for each vertex. I later change the vertex data to render a different image; however, it still shows the original one. I believed changing GL_STATIC_DRAW to GL_DYNAMIC_DRAW would solve the problem, but it didn't. What should I do?
To update your entire buffer, you should call glBufferData() once again:
glBufferData(GL_ARRAY_BUFFER, vertex_size * sizeof(VertexData), vertices, GL_DYNAMIC_DRAW);
Furthermore, it is possible to update only part of the data using the glBufferSubData() call:
glBufferSubData(GLenum target, GLintptr offset, GLsizeiptr size, const GLvoid* data);
glBufferSubData() is faster, since it does not reallocate the underlying buffer.
Each time you update your array, you need to call glBindBuffer with the corresponding buffer-object handle to make that array buffer or element buffer active. In addition, if your new array is larger than the old buffer you need to call glBufferData again; otherwise calling glBufferSubData is enough.
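Putting the two answers together, a minimal update sketch (vbo, vertices and vertex_size are assumed to come from the surrounding code):

glBindBuffer(GL_ARRAY_BUFFER, vbo);                          // make the buffer object active
glBufferSubData(GL_ARRAY_BUFFER, 0,                          // overwrite the contents from offset 0
                vertex_size * sizeof(VertexData), vertices);
// If the new data is larger than the original allocation, call glBufferData again instead.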

Delphi/C++Builder TBitmap: How to determine scanline order?

Delphi's TBitmap type is basically a wrapper over a GDI BITMAP and PALETTE, and can support both top-down and bottom-up scanline ordering.
I have a TBitmap which I need to convert to GDI+ Bitmap, in order to rotate and composite it.
My bitmap is 32-bit ARGB, which Windows supports but the VCL doesn't natively 'understand'.
TBitmap *bmp;
...
When I use the following constructor, the alpha channel doesn't work for compositing, but otherwise everything works.
Gdiplus::Bitmap b(bmp->Handle, NULL);
So, I tried the constructor below, which takes size, pixel data and format params.
Gdiplus::Bitmap b(bmp->Width, bmp->Height, bmp->Width * 4, PixelFormat32bppARGB,
                  (BYTE*) bmp->ScanLine[bmp->Height - 1]); // bottom-up storage
This gets the alpha, but the bitmap is upside down, so I tried this:
Gdiplus::Bitmap b(bmp->Width, bmp->Height, -bmp->Width * 4, PixelFormat32bppARGB,
                  (BYTE*) bmp->ScanLine[0]); // negative stride for bottom-up bitmaps?!
Now, that works, but of course I'm hard-coded into bottom-up bitmaps. However, I can't find a way of determining if the TBitmap is top-down or bottom-up. They're stored internally with negative height but the height value is massaged before it's passed back to user code.
How can I find out the scanline ordering, or - better yet - is there another way of creating a GDIPlus bitmap from a TBitmap?
The TBitmap::ScanLine property accounts for top-down and bottom-up. For a bottom-up bitmap, ScanLine[0] returns the last row, and ScanLine[Height-1] returns the first row, of the raw pixel data. For a top-down bitmap, ScanLine[0] returns the first row, and ScanLine[Height-1] returns the last row, of the raw pixel data.
To determine whether a TBitmap is bottom-up or top-down, you have to manually retrieve its BITMAPINFOHEADER structure, which TBitmap does not natively expose. You can use the Win32 API GetObject() function to retrieve a DIBSECTION structure, which has a BITMAPINFOHEADER member.
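A sketch of that check for C++Builder (it assumes bmp is a valid 32-bit TBitmap* whose handle is a DIB section, and that GDI+ is already initialised):

DIBSECTION ds = {0};
GetObject(bmp->Handle, sizeof(ds), &ds);   // fills dsBmih for DIB sections

// A positive biHeight means bottom-up storage; a negative one means top-down.
const bool bottomUp = ds.dsBmih.biHeight > 0;
const INT stride = bottomUp ? -(bmp->Width * 4) : (bmp->Width * 4);

// ScanLine[0] always returns the first visual row, wherever it sits in memory,
// so it can serve as scan0 for both orderings when paired with the matching stride.
Gdiplus::Bitmap b(bmp->Width, bmp->Height, stride,
                  PixelFormat32bppARGB,
                  (BYTE*) bmp->ScanLine[0]);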

FileReference.save() duplicates ByteArray

I've encountered a memory problem using FileReference.save(). My Flash application generates a lot of data in real time and needs to save this data to a local file. As I understand it, Flash 10 (as opposed to AIR) does not support streaming to a file. What's even worse is that FileReference.save() duplicates all the data before saving it. I was looking for a workaround to this doubled memory usage and thought about the following approach:
What if I passed a custom subclass of ByteArray as an argument to FileReference.save(), where this ByteArray subclass overrides all the read*() methods? The overridden read*() methods would wait for a piece of data to be generated by my application, return that piece of data, and immediately remove it from memory. I know how much data will be generated, so I could also override the length/bytesAvailable members.
Would that be possible? Could you give me a hint on how to do it? I've created a subclass of ByteArray, registered an alias for it, and passed an instance of this subclass to FileReference.save(), but FileReference.save() seems to treat it as a plain ByteArray instance and doesn't call any of my overridden methods...
Thanks a lot for any help!
It's not something I've tried before, but could you try sending the data out to a PHP script that would handle saving the ByteArray on the server, much like saving an image to the server? You'd then use URLLoader.data instead, using something like this:
http://www.zedia.net/2008/sending-bytearray-and-variables-to-server-side-script-at-the-same-time/
It's an interesting idea. Perhaps to start you should just add traces in your extended ByteArray to see how FileReference#save() works internally.
If it has some kind of
while( originalByteArray.bytesAvailable )
writeToSaveBuffer( originalByteArray.readByte() );
functionality, the overrides could just truncate the original buffer on every read like you say, something like:
override public function readByte() : int {
    var b : int = super.readByte();
    // Truncate the bytes (assuming bytesAvailable = length - removedBytes)
    length = length - bytesAvailable;
    return b;
}
On the other hand, if this now works I guess the original byte array would not be available afterwards in the application anymore.
(I haven't tested this myself; truncating might require more work than the example.)
