glReadPixels on DEPTH_COMPONENT throws GL_INVALID_OPERATION - qt

I'm having a problem reading the depth buffer of an offscreen framebuffer rendering pass. When using OpenGL 4.5 it works as intended, but on OpenGL ES 2.0 (ANGLE) I get an error on my glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, depthBuffer) call. The error is 0x502, which is the GL_INVALID_OPERATION error code.
Some more background: I'm working in a Qt environment that has a main rendering routine, and I'm now adding some offscreen rendering. Usually we use the desktop OpenGL implementation, but on some machines we are experiencing problems with bad OpenGL versions, so I'm currently working on making the whole setup more robust. One thing I did was to use ANGLE instead. So I'm just trying to get ANGLE to work, which SHOULD correspond to using OpenGL ES 2.0.
Here is the framebuffer creation:
glGenFramebuffers(1, &fboId);
glBindFramebuffer(GL_FRAMEBUFFER, fboId);
glGenRenderbuffers(1, &cboId);
glBindRenderbuffer(GL_RENDERBUFFER, cboId);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, cboId);
glGenRenderbuffers(1, &dboId);
glBindRenderbuffer(GL_RENDERBUFFER, dboId);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, dboId);
The framebuffer is complete and no error is thrown. The code between the previous snippet and the final piece below does not throw any errors either.
glBindFramebuffer(GL_FRAMEBUFFER, fboId);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, colorData);
//No error up to this point
glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, depthData);
//This call throws 0x502 (GL_INVALID_OPERATION)
Usually I use GL_DEPTH_COMPONENT24, which is not working for some reason, so I used GL_DEPTH_COMPONENT16 instead. Maybe that's a hint at what is wrong. FYI, the main framebuffer uses a 24-bit depth and an 8-bit stencil buffer. (I tried the GL_DEPTH24_STENCIL8 format as well, with no success on the glReadPixels call.)
Using a texture or a pbuffer instead does not work either, because the functions needed for that workaround (glGetTexImage(...), glMapBuffer(...)) are not implemented in the GL version I'm stuck with.

According to the Khronos specification:
format
Specifies the format of the pixel data. The following symbolic values are accepted: GL_ALPHA, GL_RGB, and GL_RGBA.
type
Specifies the data type of the pixel data. Must be one of GL_UNSIGNED_BYTE, GL_UNSIGNED_SHORT_5_6_5, GL_UNSIGNED_SHORT_4_4_4_4, or GL_UNSIGNED_SHORT_5_5_5_1.
GL_INVALID_OPERATION is generated if type is GL_UNSIGNED_SHORT_5_6_5 and format is not GL_RGB.
GL_INVALID_OPERATION is generated if type is GL_UNSIGNED_SHORT_4_4_4_4 or GL_UNSIGNED_SHORT_5_5_5_1 and format is not GL_RGBA.
GL_INVALID_OPERATION is generated if format and type are neither GL_RGBA and GL_UNSIGNED_BYTE, respectively, nor the format/type pair returned by querying GL_IMPLEMENTATION_COLOR_READ_FORMAT and GL_IMPLEMENTATION_COLOR_READ_TYPE.
Neither the format nor the type in your code meets these requirements.
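For reference, the implementation-defined format/type pair mentioned in the last rule can be queried at runtime. A minimal sketch (raw GL calls, the variable names are mine):
GLint readFormat = 0, readType = 0;
// The one additional pair this implementation accepts for glReadPixels besides GL_RGBA / GL_UNSIGNED_BYTE.
glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_FORMAT, &readFormat);
glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_TYPE, &readType);
// These queries only ever describe a colour format, never GL_DEPTH_COMPONENT, which is
// why the depth read raises GL_INVALID_OPERATION on ES 2.0 / ANGLE.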

As @xeed has already pointed out, @Reaper has answered the "why" part of the question. As for the "how" of getting depth information in OpenGL ES: after a lot of searching online, as well as talking to some peers, I have only seen two methods proposed:
1. (ES 3+ only) Attach a depth texture to the framebuffer you render your scene in. Then render a "full-screen" quad, sample the depth texture in the fragment shader for this quad, and output the depth texture values as your colour values, e.g. in your red channel. Finally, use glReadPixels on this colour information to get the depth values back. See: https://stackoverflow.com/a/35041374/11295586
2. Get the values via gl_FragCoord.z. See: https://stackoverflow.com/a/6140714/11295586
I have tried both options, and both seem to work, though I still have a depth-inversion problem with the second one that, while easy to fix (just subtract the value from 1.0f), I'm still trying to track down the cause of. Also, the details of the second option depend on when you need the values and what colour information you need for your original scene. E.g. in my case I can just put gl_FragCoord.z as my alpha channel value, because I'm not actually displaying my render result. If you do need the alpha channel of your original render intact, though, perhaps multiple render targets, one of which gets the gl_FragCoord.z value, would be the solution?
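For illustration, here is a rough sketch of the second approach (packing gl_FragCoord.z into the alpha channel and reading it back through a plain GL_RGBA read). The shader string, buffer names, and unpacking loop are illustrative assumptions, not code from either linked answer, and a current ES 2.0 context is assumed:
#include <vector>

// Hypothetical ES 2.0 fragment shader: write the fragment depth into the alpha channel.
static const char *depthInAlphaFrag =
    "precision highp float;\n"
    "void main() {\n"
    "    gl_FragColor = vec4(0.0, 0.0, 0.0, gl_FragCoord.z);\n"
    "}\n";

// After rendering the scene with this shader, read the colour buffer and unpack the alpha bytes.
std::vector<unsigned char> rgba(width * height * 4);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, rgba.data());

std::vector<float> depth(width * height);
for (int i = 0; i < width * height; ++i)
    depth[i] = rgba[i * 4 + 3] / 255.0f;   // a single byte gives only 256 depth levels
Note that one byte per value is coarse; if you need more precision, packing the depth across all four channels, or using the first (ES 3+) approach, is preferable.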

Related

How to glReadPixels properly to write the data into QImage in Linux

Summary
I want to write the OpenGL pixels (GL_RGB) obtained with glReadPixels into a QImage. This renders correctly, but when I resize the window it scales strangely and distorts my shape (a triangle).
What I tried
I tried (QImage)img.scale(width(), height(), Qt::KeepAspectRatio), but it didn't solve the problem.
I also played with how I write the pixel buffer from glReadPixels into the QImage, but that didn't work either.
Should I read the pixels into three buffers (GLubyte *rpixel, *gpixel, *bpixel) or into one (GLubyte **pixels)? Which one is easier, given that I will resize the array whenever I resize my window (so I want dynamic arrays)?
Some code
I have uploaded minimal code recreating the weird behaviour on GitHub. Download and compile it with Qt Creator:
https://github.com/rivenblades/GlReadPixelsQT/tree/master
Pictures
Here is how I wanted it (it works when not resizing).
Here is what happens after resizing (weird behaviour).
As you can see, when resizing, the image gets split at the right edge and continues on the left, probably on another row. So I am guessing the size of the image is wrong (needs more width?).
By default, the start of each row of an image is assumed to be aligned to 4 bytes. This is because the GL_PACK_ALIGNMENT and GL_UNPACK_ALIGNMENT parameters default to 4; see glPixelStore.
When a framebuffer is read by glReadPixels, the GL_PACK_ALIGNMENT parameter is the one that applies.
If you want to read the image into tightly packed memory, with no alignment at the start of each line, then you have to set the GL_PACK_ALIGNMENT parameter to 1 before reading the colour planes of the framebuffer:
glPixelStorei(GL_PACK_ALIGNMENT, 1);
glReadPixels(0,0,unchangable_w, unchangable_h, GL_RED, GL_UNSIGNED_BYTE, tga.rpic);
glReadPixels(0,0,unchangable_w, unchangable_h, GL_GREEN, GL_UNSIGNED_BYTE, tga.gpic);
glReadPixels(0,0,unchangable_w, unchangable_h, GL_BLUE, GL_UNSIGNED_BYTE, tga.bpic);
If that is missed, it causes a shift effect on each line of the image, unless the length of a line of the image in bytes happens to be divisible by 4.
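Since the goal here is to get the pixels into a QImage, an alternative that sidesteps the row-padding issue entirely is to read GL_RGBA (4 bytes per pixel, so every row length is already a multiple of 4) straight into the image's buffer. A minimal sketch, assuming a current OpenGL context; the helper name is mine:
#include <QImage>
#include <QOpenGLContext>
#include <QOpenGLFunctions>

// Hypothetical helper: read the currently bound framebuffer into a QImage.
QImage grabFramebuffer(int w, int h)
{
    QOpenGLFunctions *f = QOpenGLContext::currentContext()->functions();
    QImage img(w, h, QImage::Format_RGBA8888);
    f->glPixelStorei(GL_PACK_ALIGNMENT, 1);   // only strictly needed for 3-byte GL_RGB reads
    f->glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, img.bits());
    return img.mirrored();                    // OpenGL rows start at the bottom-left corner
}
The read size must match the current framebuffer size, so this has to be re-run after every resize.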

Add a QTextureMaterial to a custom mesh

I have a custom mesh (created in blender) that I insert into Qt3D using the following code:
QMesh *mesh = new QMesh(rootEntity);
mesh->setSource(QUrl::fromLocalFile(baseUrl+"mesh.obj"));
This works fine; I can add it to an entity with a material and everything.
Then I create a custom material using a texture loaded from a .png. I do this using the following code:
Qt3DRender::QTextureLoader *loader = new Qt3DRender::QTextureLoader(rootEntity);
Qt3DExtras::QTextureMaterial *material = new Qt3DExtras::QTextureMaterial(rootEntity);
loader->setSource(QUrl::fromLocalFile(baseUrl+"pattern.jpg"));
material->setTexture(loader);
This also works fine. When I add this material to a built-in Qt mesh (e.g. QPlaneMesh or QSphereMesh) it shows perfectly on the surface as one would expect.
However - now comes the problem - if I add it with the QMesh specified above, the mesh just gets one homogeneous color which seems to be the average over the colors in the pattern. Here you can see what I mean: both objects have the same material. The top one is inserted externally while the bottom one is a QPlaneMesh.
Can someone explain to me why that is the case? And is there a way to successfully add textures to custom meshes?
Note: I have tried this with 2D and 3D meshes and the outcome is the same.
Note 2: I have also tried it with different images, and it still just shows one homogeneous average color.
UPDATE: Following the suggestion in the answer, I tried to add a texture attribute to the geometry of my imported mesh, as follows:
Qt3DCore::QEntity *entity = new Qt3DCore::QEntity(rootEntity);
QMesh *mesh = new QMesh(entity);
mesh->setSource(QUrl::fromLocalFile(baseUrl+"mesh.obj"));
const int stride = (3 + 2 + 3 + 4) * sizeof(float);
QSize resolution = QSize(2,2);
const int nVerts = resolution.width() * resolution.height();
QAttribute *texCoordAttr = new QAttribute(mesh->geometry());
Qt3DRender::QBuffer *vertexBuffer = new Qt3DRender::QBuffer(mesh->geometry());
texCoordAttr->setName(QAttribute::defaultTextureCoordinate1AttributeName());
texCoordAttr->setVertexBaseType(QAttribute::Float);
texCoordAttr->setVertexSize(2);
texCoordAttr->setAttributeType(QAttribute::VertexAttribute);
texCoordAttr->setBuffer(vertexBuffer);
texCoordAttr->setByteStride(stride);
texCoordAttr->setByteOffset(3*sizeof(float));
texCoordAttr->setCount(nVerts);
vertexBuffer->setDataGenerator(QSharedPointer<PlaneVertexBufferFunctor>::create(1.0f,1.0f,resolution, false)); //these input values (width, height, resolution, mirrored) are probably the cause of the problem
mesh->geometry()->addAttribute(texCoordAttr); //it crashes here
entity->addComponent(mesh);
entity->addComponent(transform);
entity->addComponent(material);
I created the functor for setDataGenerator as in the QPlaneMesh code. Now I suspect the segmentation fault is caused by a size mismatch. So how can I get the correct width and height of an external mesh from its QGeometry? And what else might be wrong here?
It looks like the mesh is missing the texture coordinates. When you open the file with a text editor, do you see the key vt somewhere? Those are the texture coordinates. You can read about the format here.
If you still want to use the obj file that you have, you have to add texture coordinates if it doesn't have any. It's probably best to open the file in Blender and use its texture mapper, at least for more complex meshes. Guessing which vertex needs which texture coordinate is not really feasible.
The texture coordinates work as follows:
If you have an image of, say 500 by 400 pixels, the texture coordinate (0.7, 0.3) is (500 * 0.7, 400 * 0.3) = (350, 120), meaning that the vertex which has that texture coordinate will receive the color value of the pixel at (350, 120). Values inside a triangle will get interpolated.
If your obj file comes with an mtl file, then it probably already has texture coordinates. If you want to load this mtl file, use QSceneLoader and add it to its parent QEntity to display everything.
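If you go the QSceneLoader route, the setup is roughly as follows. This is a sketch reusing rootEntity and baseUrl from the question; the loader builds the sub-entities, materials, and texture coordinates itself:
Qt3DCore::QEntity *sceneEntity = new Qt3DCore::QEntity(rootEntity);
Qt3DRender::QSceneLoader *loader = new Qt3DRender::QSceneLoader(sceneEntity);
// QSceneLoader parses the .obj together with its .mtl and creates the entity tree under sceneEntity.
loader->setSource(QUrl::fromLocalFile(baseUrl + "mesh.obj"));
sceneEntity->addComponent(loader);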

Copying a single layer of a 2D Texture Array from GPU to CPU

I'm using a 2D texture array to store some data. As I often want to bind single layers of this 2D texture array, I create individual GL_TEXTURE_2D texture views for each layer:
for (int l(0); l < m_layers; l++)
{
    QOpenGLTexture *view_texture = m_texture.createTextureView(QOpenGLTexture::Target::Target2D,
                                                               m_texture_format,
                                                               0, 0,
                                                               l, l);
    view_texture->setMinMagFilters(QOpenGLTexture::Filter::Linear, QOpenGLTexture::Filter::Linear);
    view_texture->setWrapMode(QOpenGLTexture::WrapMode::MirroredRepeat);
    assert(view_texture != 0);
    m_texture_views.push_back(view_texture);
}
These 2D texture views work fine. However, if I want to retrieve the 2D texture data from the GPU side using such a texture view, it doesn't work.
In other words, the following copies no data (but throws no GL errors):
glGetTexImage(GL_TEXTURE_2D, 0, m_pixel_format, m_pixel_type, (GLvoid*) m_raw_data[layer]);
However, retrieving the entire GL_TEXTURE_2D_ARRAY does work:
glGetTexImage(GL_TEXTURE_2D_ARRAY, 0, m_pixel_format, m_pixel_type, (GLvoid*) data );
There would obviously be a performance loss if I need to copy across all layers of the 2D texture array when only data for a single layer has been modified.
Is there a way to copy GPU->CPU only a single layer of a GL_TEXTURE_2D_ARRAY? I know there is for the opposite (i.e CPU->GPU) so I would be surprised if there wasn't.
Looks like you found a solution using glGetTextureSubImage() from OpenGL 4.5. There is also a simple solution that works with OpenGL 3.2 or higher.
You can set the texture layer as an FBO attachment, and then use glReadPixels():
GLuint fboId = 0;
glGenFramebuffers(1, &fboId);
glBindFramebuffer(GL_READ_FRAMEBUFFER, fboId);
glFramebufferTextureLayer(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
textureId, 0, layer);
glReadPixels(...);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
What version of GL are you working with?
You are probably not going to like this, but... GL 4.5 introduces glGetTextureSubImage (...) to do precisely what you want. That is a pretty hefty version requirement for something so simple; it is also available in extension form, but that extension is relatively new as well.
There is no special hardware requirement for this functionality, but it requires a very recent driver.
I would not despair just yet, however.
You can copy the entire texture array into a PBO and then read a sub-rectangle of that PBO back using the buffer object API (e.g. glGetBufferSubData (...)). That requires extra memory on the GPU-side, but will allow you to transfer a single slice of this 2D array.
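A minimal sketch of that PBO round-trip, reusing the member names from the question (width, height, bytesPerPixel, and textureId are assumptions about the surrounding code):
// Pack the whole 2D texture array into a pixel pack buffer on the GPU, then pull
// back only the bytes of the layer we are interested in.
GLsizeiptr layerBytes = (GLsizeiptr)width * height * bytesPerPixel;
GLuint pbo = 0;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, layerBytes * m_layers, nullptr, GL_STREAM_READ);

glBindTexture(GL_TEXTURE_2D_ARRAY, textureId);
glGetTexImage(GL_TEXTURE_2D_ARRAY, 0, m_pixel_format, m_pixel_type, nullptr); // packs into the bound PBO

// Only this transfer actually crosses the bus to the CPU.
glGetBufferSubData(GL_PIXEL_PACK_BUFFER, layer * layerBytes, layerBytes, m_raw_data[layer]);

glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
glDeleteBuffers(1, &pbo);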

How many lines can Qt draw on-screen?

I am currently working on a Qt application to draw maps. I am trying to draw 400,000+ lines, and it crashes after using ~2 GB even though I still have memory left on my machine. I am wondering if I am hitting some limit inside Qt that is causing the problem. Does anyone know if there is a limit to the number of things you can draw, or if you can change this limit?
If it is helpful, I am coding in C++ with a class that has a member function to draw the lines. The code is roughly as follows:
QPointF fromPoint;
QPointF toPoint;
fromPoint = foo( x );
toPoint = foo( y );
m_Painter.drawLine(fromPoint, toPoint );
//m_Painter is a QPainter
Edit: It turns out the problem was somewhere else in the code; it had to do with some custom caching that was being done. I am still interested, though, in whether there is a limit to how many lines Qt can draw. Does anyone know?
QPainter executes its underlying graphics through QPaintEngine, which has several implementations (like qpaintengine_mac.cpp, qpaintengine_x11.cpp, or qpaintengine_preview.cpp).
Some paint devices are raster-based... and are likely drawing each line into an image buffer and throwing away the endpoints after that drawing is done. There should be no limit to the number of lines you can draw in that case.
If the target device is OpenGL, or a printer producing some kind of PostScript-like output, then the limitations of that particular paint engine may well be a factor. You'd have to look at the specific one.
For example: if you trace down the X11 implementation of drawLine you'll see it passes through to drawPolygon() down through strokePolygon_dev()...and bottoms out at a call to XDrawLines:
XDrawLines(dpy, hd, gc, pts, numberPoints, CoordModeOrigin);
So there you have another abstraction layer... and the question becomes whether the X11 display is guaranteed to be raster. (My guess would be that it is.)
Anyway, the answer is: "unlimited if raster; possibly limited otherwise, but the limitations (if any) are probably coming from the underlying device for the paint engine, not from Qt."
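For what it's worth, the raster path is easy to stress-test by painting a large batch of lines into an off-screen QImage. A small, self-contained sketch (the sizes and line count are arbitrary):
#include <QGuiApplication>
#include <QImage>
#include <QPainter>
#include <QPointF>
#include <QRandomGenerator>

int main(int argc, char **argv)
{
    QGuiApplication app(argc, argv);

    // Draw 400,000 random lines into a raster paint device. Each line is rasterised
    // immediately, so memory usage stays flat regardless of the line count.
    QImage canvas(2048, 2048, QImage::Format_ARGB32_Premultiplied);
    canvas.fill(Qt::white);

    QPainter painter(&canvas);
    for (int i = 0; i < 400000; ++i) {
        painter.drawLine(QPointF(QRandomGenerator::global()->bounded(2048.0),
                                 QRandomGenerator::global()->bounded(2048.0)),
                         QPointF(QRandomGenerator::global()->bounded(2048.0),
                                 QRandomGenerator::global()->bounded(2048.0)));
    }
    painter.end();
    return canvas.save("lines.png") ? 0 : 1;
}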

XNA FBX model drawing problems, it's either my code, the way I export models or the way content is exported

So basically, when I try to draw a mesh inside an FBX file, its orientation is always removed and it's scaled down. I'm not sure if the issue is caused by code or by the way I'm exporting the FBX files. I have been trying to narrow down the cause, and I am fairly sure it's not the way I export the FBX (but I could be wrong), so it's either the XNA content pipeline or my drawing code.
Here are some pics I took to show my problem, where the gray background is 3ds Max as I see it and the red background is XNA:
This is how it appears in 3D Studio Max: http://i.stack.imgur.com/e0oW4.png
This is how it appears in XNA: http://i.stack.imgur.com/1vOcx.png
Both are being viewed from the same angle and direction but varying distances.
Now, what is really odd is that if I create another mesh in Max, say a box, and export that (along with the original model), it works fine: http://i.stack.imgur.com/SIDg9.png
As long as there is more than one mesh in the FBX model, it draws properly (though I'm still suspicious about whether the proper scaling is applied, i.e. if it is 1 unit long in Max, it becomes something like 1.27 units long in XNA). If there is only one mesh, the orientation I applied to it in 3D Studio Max is removed when I draw it.
This is how I draw the model:
model.CopyAbsoluteBoneTransformsTo(boneTransforms);
foreach (ModelMesh mesh in model.Meshes)
{
    foreach (BasicEffect effect in mesh.Effects)
    {
        effect.World = boneTransforms[mesh.ParentBone.Index];
        Vector3 cameraPosition = Camera.Get.Position; // new Vector3(0, 0, 0);
        //cameraPosition.X = -Camera.Get.PosX;
        //cameraPosition.Y = Camera.Get.PosY;
        effect.View = Camera.Get.View; // Matrix.CreateLookAt(cameraPosition, cameraPosition + Camera.Get.LookDir, Camera.Get.Up);
        effect.Projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4,
            BaseGame.Get.GraphicsDevice.Viewport.AspectRatio,
            0.01f, 1000000); //Matrix.CreateOrthographic(800 / 1, 480 / 1, 0, 1000000);
        //effect.TextureEnabled = true;
        effect.LightingEnabled = true;
        effect.PreferPerPixelLighting = true;
        //effect.SpecularColor = new Vector3(1, 0, 0);
    }
    mesh.Draw();
}
Obviously mesh.Draw() is called twice when there is more than one mesh in the FBX file.
Generally if you are having a problem with the position or scale of the mesh while rendering, then it's likely to be related to the matrices. Not necessarily the exporting, but rather how you use them in the code.
I use Blender for modelling, but I know that Blender actually defines different spaces when you are creating meshes within the editor. For example, if you create a mesh while in 'object' mode, the position/rotation/scale of the object in the scene will not be exported (because that object becomes the root of a new tree, centred around 0,0,0). So I would check for a similar situation in 3ds Max: make sure you are transforming the vertices in Max relative to 0,0,0, or else you may lose the 'initial' translation, and when you render in XNA all the objects will be rendered around your 0,0,0 (i.e. they will appear mixed together).
Failing that (and I can't remember exactly off the top of my head), I think you may need to multiply the current mesh's absolute matrix transform by the parent's world matrix transform. Although it's been a while, so I'm not too sure.
