How to use glReadPixels properly to write the data into a QImage in Linux - qt

Summary
I want to write the OpenGL pixels (GL_RGB) obtained with glReadPixels into a QImage. This renders correctly, but when I resize the window, it scales weirdly and distorts my shape (a triangle).
What I tried
I tried (QImage)img.scale(width(),height(),Qt::KeepAspectRatio)
but it didn't solve the problem.
I also played with how I write the pixel buffer from glReadPixels into the QImage, but that didn't work either.
Should I read the pixels into three buffers (GLubyte *rpixel, *gpixel, *bpixel) or into one (GLubyte **pixels)? Which one is easier, given that I will have to resize the arrays whenever I resize my window (so I want dynamic arrays)?
Some code
I have uploaded minimal code reproducing the bug/weird behaviour on GitHub. Download it and compile with Qt Creator.
https://github.com/rivenblades/GlReadPixelsQT/tree/master
Pictures
Here is how I wanted it to look (it works when not resizing)
Here is what happens after resizing (weird behaviour)
As you can see, when resizing, the image gets split at the right edge and continues at the left, probably on another row. So I am guessing the size of the image is wrong (it needs more width?).

By default, the start of each row of an image is assumed to be aligned to 4 bytes. This is because the GL_PACK_ALIGNMENT and GL_UNPACK_ALIGNMENT parameters default to 4; see glPixelStore.
When a framebuffer is read by glReadPixels, the GL_PACK_ALIGNMENT parameter is applied.
If you want to read the image into tightly packed memory, with no alignment padding at the start of each line, then you have to change the GL_PACK_ALIGNMENT parameter to 1 before reading the color planes of the framebuffer:
glPixelStorei(GL_PACK_ALIGNMENT, 1);
glReadPixels(0,0,unchangable_w, unchangable_h, GL_RED, GL_UNSIGNED_BYTE, tga.rpic);
glReadPixels(0,0,unchangable_w, unchangable_h, GL_GREEN, GL_UNSIGNED_BYTE, tga.gpic);
glReadPixels(0,0,unchangable_w, unchangable_h, GL_BLUE, GL_UNSIGNED_BYTE, tga.bpic);
If that is missed, each line of the image is shifted, except when the length of a line of the image in bytes happens to be divisible by 4.
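For the QImage case specifically, here is a minimal sketch (not code from the linked repository; w and h stand for the current framebuffer size) that avoids the row-padding issue altogether by reading a 4-byte-per-pixel format, whose rows are always a multiple of 4 bytes and match QImage's scanline layout:
QImage img(w, h, QImage::Format_RGBA8888);        // scanlines match GL_RGBA / GL_UNSIGNED_BYTE rows
glPixelStorei(GL_PACK_ALIGNMENT, 1);              // harmless here; required for 1- or 3-byte-per-pixel formats
glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, img.bits());
img = img.mirrored();                             // OpenGL's origin is the bottom-left corner, QImage's is top-left
If you stay with GL_RGB or the separate GL_RED/GL_GREEN/GL_BLUE planes, the glPixelStorei(GL_PACK_ALIGNMENT, 1) call is the part that matters.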

Related

glReadPixels on DEPTH_COMPONENT throws GL_INVALID_OPERATION

I'm having a problem reading the depth buffer of an offscreen framebuffer rendering pass. When using OpenGL 4.5 it works as intended, but on OpenGL ES 2.0 (ANGLE) I get an error on my glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, depthBuffer) call. The error is 0x502, which is a GL_INVALID_OPERATION error code.
Some more background: I'm working in a Qt environment, which has a main rendering routine, and now I'm doing some offscreen rendering. Usually we use the desktop OpenGL implementation, but on some machines we are experiencing problems with bad OpenGL versions. Therefore I'm currently working on making the whole setup more robust. One thing I did was to use ANGLE instead. So I'm just trying to get ANGLE to work, which SHOULD correspond to using OpenGL ES 2.0.
Well, here is the framebuffer creation:
glGenFramebuffers(1, &fboId);
glBindFramebuffer(GL_FRAMEBUFFER, fboId);
glGenRenderbuffers(1, &cboId);
glBindRenderbuffer(GL_RENDERBUFFER, cboId);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, cboId);
glGenRenderbuffers(1, &dboId);
glBindRenderbuffer(GL_RENDERBUFFER, dboId);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, dboId);
The framebuffer is complete and no error is thrown. The code between the previous and the following final piece does not throw any errors either.
glBindFramebuffer(GL_FRAMEBUFFER, fboId);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, colorData);
//No error up to this point
glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, depthData);
//This call throws 0x502 (GL_INVALID_OPERATION)
Usually I use GL_DEPTH_COMPONENT24, which is not working for some reason, so I used GL_DEPTH_COMPONENT16 instead. Maybe that's a hint as to what is wrong. FYI, the main framebuffer uses a 24-bit depth and an 8-bit stencil buffer. (I tried the GL_DEPTH24_STENCIL8 format as well, with no success on the glReadPixels call.)
Using a texture or a pbuffer instead does not work, because the functions needed for that workaround (glGetTexImage(...), glMapBuffer(...)) are not implemented in the GL version I'm stuck with.
According to the Khronos specification:
format
Specifies the format of the pixel data. The following symbolic values are accepted: GL_ALPHA, GL_RGB, and GL_RGBA.
type
Specifies the data type of the pixel data. Must be one of GL_UNSIGNED_BYTE, GL_UNSIGNED_SHORT_5_6_5, GL_UNSIGNED_SHORT_4_4_4_4, or GL_UNSIGNED_SHORT_5_5_5_1.
GL_INVALID_OPERATION is generated if type is GL_UNSIGNED_SHORT_5_6_5 and format is not GL_RGB.
GL_INVALID_OPERATION is generated if type is GL_UNSIGNED_SHORT_4_4_4_4 or GL_UNSIGNED_SHORT_5_5_5_1 and format is not GL_RGBA.
GL_INVALID_OPERATION is generated if format and type are neither GL_RGBA and GL_UNSIGNED_BYTE, respectively, nor the format/type pair returned by querying GL_IMPLEMENTATION_COLOR_READ_FORMAT and GL_IMPLEMENTATION_COLOR_READ_TYPE.
Neither the format nor the type in your code meets these requirements.
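Besides GL_RGBA / GL_UNSIGNED_BYTE, ES 2.0 only guarantees one additional, implementation-defined format/type pair, which you can query at runtime. A short sketch of that query (GL_DEPTH_COMPONENT / GL_FLOAT will never show up there, which is why the depth read-back fails):
GLint readFormat = 0, readType = 0;
glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_FORMAT, &readFormat);   // e.g. GL_RGB
glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_TYPE, &readType);       // e.g. GL_UNSIGNED_BYTE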
As #xeed has already pointed out, #Reaper has answered the "why" part of the question. As to the "how" of getting depth information in OpenGL ES, after a lot of searching online, as well as talking to some peers, I have only seen two methods proposed:
(ES 3+ only) Attach a depth texture to the framebuffer you render your scene in. Then render a "full-screen" quad, sample the depth texture in the fragment shader for this quad, and output the depth texture values as your colour values, e.g. in your red channel. Finally, use glReadPixels on this colour information to get the depth values back. See: https://stackoverflow.com/a/35041374/11295586
Get the values with gl_FragCoord.z. See: https://stackoverflow.com/a/6140714/11295586
I have tried both options, and both seem to work, though I still have a depth-inversion problem with the second one that, while easy to fix (just subtract the value from 1.0f), I'm still trying to decipher the cause of. Also, the details of the second option depend on when you need the values and what colour information you need for your original scene. E.g. in my case, I can just put gl_FragCoord.z in my alpha channel, because I'm not actually displaying my render result. If you do need the alpha channel of your original render intact, though, perhaps multiple render targets, one of which gets the gl_FragCoord.z value, would be the solution?
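For completeness, the read-back side of option 1 might look roughly like this (a sketch only, assuming the full-screen quad's fragment shader wrote the sampled depth into the red channel; note that a single 8-bit channel only gives 256 depth levels, so packing the value across several channels is needed for more precision):
std::vector<GLubyte> rgba(width * height * 4);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, rgba.data());
std::vector<float> depth(width * height);
for (int i = 0; i < width * height; ++i)
    depth[i] = rgba[i * 4] / 255.0f;   // red channel back to [0, 1]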

QPainter::drawImage prints different size than QImage::save and print from Photoshop

I'm scaling a QImage, currently as so (I understand there may be more elegant ways):
img.setDotsPerMeterX(img.dotsPerMeterX() * 2);
img.setDotsPerMeterY(img.dotsPerMeterY() * 2);
When I save:
img.save("c:\\users\\me\\desktop\\test.jpg");
and subsequently open and print the image from Photoshop, it is, as expected, half of the physical size of the same image without the scaling applied.
However, when I simply print the scaled QImage, directly from code:
myQPainter.drawImage(0,0,img);
the image prints at the original physical size - not scaled to half the physical size.
I'm using the same printer in each case; and, as far as I can tell, the settings are consistent between both print cases.
Am I misunderstanding something? The end goal is to successfully scale and print the scaled image directly from code.
If we look at the documentation for setDotsPerMeterX, it states:
Together with dotsPerMeterY(), this number defines the intended scale and aspect ratio of the image, and determines the scale at which QPainter will draw graphics on the image. It does not change the scale or aspect ratio of the image when it is rendered on other paint devices.
I expect that the reason the latter case prints at the original size is that the image content had already been drawn before the calls that set the dots per meter.
In contrast, when saving, the dots-per-meter values you have set are stored with the image file, and Photoshop honours them when it prints.
I would expect that creating a second QImage, setting its dots per meter, and then copying from the original into that second image would achieve the result you're looking for. Alternatively, you may just be able to set the dots per meter on the original QImage before loading its content.
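A literal sketch of that first suggestion (illustrative names; not tested against the printing path described in the question):
QImage rescaled(original.size(), original.format());
rescaled.setDotsPerMeterX(original.dotsPerMeterX() * 2);   // set the resolution before any drawing happens
rescaled.setDotsPerMeterY(original.dotsPerMeterY() * 2);
QPainter copier(&rescaled);
copier.drawImage(0, 0, original);                          // copy the pixel content across unchanged
copier.end();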

How to get the length of the left/right offsets in a QSlider?

I'm trying to make a subclass of QSlider that allows the user to insert "bookmarks" so they can remember significant locations on the slider. I've run into a problem painting the tabs on the slider - using QStyle.sliderPositionFromValue, I get a value but it is slightly inaccurate. If I'm on the left side of the slider, the tab is painted too far left, and on the right side it is painted too far right. I believe this is because QSlider.width() returns the width of the whole object, including the small offsets at the left and right. So width() might return 630 pixels, when the length of the slider itself is really 615.
This is the code I'm using to get the pixel position and draw a line across the slider.
pos = QStyle.sliderPositionFromValue(self.minimum(),self.maximum(),sliderIndex,self.width())
painter.drawLine(pos,0,pos,self.height())
I've been looking at the Qt source here, starting at line 2699, and it seems like I need to be using the PixelMetric values from QStyle. I've tried using:
self.style().pixelMetric(QStyle.PM_SliderSpaceAvailable)
But that returns 0, which is clearly not the value I need.
I'd appreciate any advice.
Edit: As suggested in the comments, I changed the call to:
self.style().pixelMetric(QStyle.PM_SliderSpaceAvailable, QStyleOption(1,QStyleOption.SO_Slider),self)
This, however, returns -14, which also doesn't match the size of the offsets (I tried using self.width()-14 but the offset remains).
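One thing that may help (sketched here in C++ Qt terms, since the calls map one-to-one to PyQt; this is an assumption, not a verified fix for this exact style): PM_SliderSpaceAvailable generally needs a fully initialised QStyleOptionSlider, which a QSlider subclass can fill in via initStyleOption(); offsetting by half the handle length then gives the groove position:
// Hedged sketch inside a QSlider subclass
QStyleOptionSlider opt;
initStyleOption(&opt);   // fills opt.rect, orientation, etc., unlike a bare QStyleOption
const int available = style()->pixelMetric(QStyle::PM_SliderSpaceAvailable, &opt, this);
const int handle = style()->pixelMetric(QStyle::PM_SliderLength, &opt, this);
const int pos = QStyle::sliderPositionFromValue(minimum(), maximum(), sliderIndex, available) + handle / 2;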

Qt/PyQt - frequently drawing pixmap to widget, partly not drawing correctly

I'm working on a Qt based application (actually in PyQt but I don't think that's relevant here), part of which involves plotting a potentially continuous stream of data onto a graph in real time.
I've implemented this by creating a class derived from QWidget which buffers incoming data, and plots the graph every 30ms (by default). In __init__(), a QPixmap is created, and on every tick of a QTimer, (1) the graph is shifted to the left by the number of pixels that the new data will take up, (2) a rectangle painted in the space, (3) the points plotted, and (4) update() called on the widget, as follows (cut down):
# Amount of pixels to scroll
scroll=penw*len(points)
# The first point is not plotted now, so don't shift the graph for it
if (self.firstPoint()):
    scroll-=1
p=QtGui.QPainter(pm)
# Brush setup would be here...
pm.scroll(0-scroll, 0, scroll, 0, pm.width()-scroll, pm.height())
p.drawRect(pm.width()-scroll, 0, scroll, pm.height())
# pen setup etc happens here...
offset=scroll
for point in points:
    yValNew = self.graphHeight - (self.scalePoint(point))
    # Skip first point
    if (not(self.firstPoint())):
        p.drawLine(pm.width()-offset-penw, self.yVal, pm.width()-offset, yValNew)
    self.yVal = yValNew
    offset-=penw
self.update()
Finally, the paintEvent simply draws the pixmap onto the widget:
p = QtGui.QPainter(self)
p.drawPixmap(0, 0, self.graphPixmap)
As far as I can see, this should work correctly, however, when data is received very fast (i.e. the entire graph is being plotted on each tick), and the widget is larger than a certain size (approx 700px), everything to the left of the 700px area lags considerably. This is perhaps best demonstrated in this video: http://dl.dropbox.com/u/1362366/keep/Graph_bug.swf.html (the video is a bit laggy due to the low frame rate, but the effect is visible)
Any ideas what could be causing this or things I could try?
Thanks.
I'm not 100% sure if this is the problem or not, but I thought I might make at least some contribution.
self.update() is an asynchronous call, which will cause a paint event at some point later when the main event loop is reached again. So it makes me wonder if your drawing is having problems because of a sync issue between when you are modifying your pixmap and when it's actually being used in paintEvent. It almost seems like what you would need for this exact code to work is a lock in your paintEvent, but that sounds pretty naughty.
For a quick test, you might try forcing the event loop to flush right after your call to update:
self.update()
QtGui.QApplication.processEvents()
Not sure that will fix it, but it's worth a try.
This might actually be a proper situation for using repaint(), which causes an immediate, direct paint event, since you are doing an "animation" at a controlled framerate: self.repaint()
I noticed a similar question to yours, by someone trying to graph a heart monitor in real time: http://qt-project.org/forums/viewthread/10677
Maybe you could try restructuring your code similarly to that. Instead of splitting the painting into two stages, he uses a QLabel as the display widget, sets the pixmap into the QLabel, and paints the entire graph immediately instead of relying on calls to widget.update().

DirectShow: IVMRWindowlessControl::SetVideoPosition stride(?)

I have my own video source and I am using VMR7. When I use 24-bit color depth, my graph contains the Color Space Converter filter, which converts 24 bits to ARGB32, and everything works fine. When I use 32-bit color depth, my image looks disintegrated. In this case my source produces RGB32 images and passes them directly to VMR7 without color conversion. While resizing the window I noticed that, as the destination height changes, the image becomes "integrated" (normal) at certain specific values of the destination height. I do not know where the problem is. Here are example photos: http://talbot.szm.com/desintegrated.jpg and http://talbot.szm.com/integrated.jpg
Thank you for your help.
You need to check for a MediaType change in your FillBuffer method.
HRESULT hr = pSample->GetMediaType((AM_MEDIA_TYPE**)&pmt);
if (S_OK == hr)
{
    SetMediaType(pmt);
    DeleteMediaType(pmt);
}
Depending on your graphics hardware you can get a different width (stride) for your buffer. This means you connect with an image width of 1000 pixels, but with the first sample you get a new width for your buffer. In my example it was 1024 px.
Now you have the new image size in BitmapInfoHeader.biWidth and the old size in VideoInfoHeader.rcSource. So one line of your image has a size of 1024 pixels and not 1000 pixels. If you don't take this into account, you can sometimes get pictures like yours.
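A sketch of the resulting row-by-row copy (variable names here are made up for illustration): with a 1024-pixel buffer stride and a 1000-pixel picture width, each destination row has to be copied from the wider source row:
const int srcStride = bih.biWidth * 4;    // row length of the delivered buffer, e.g. 1024 * 4 bytes for RGB32
const int dstStride = pictureWidth * 4;   // row length of the visible picture, e.g. 1000 * 4 bytes
for (int y = 0; y < pictureHeight; ++y)
    memcpy(dstPixels + y * dstStride, srcPixels + y * srcStride, dstStride);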
