glReadPixels GL_DEPTH_COMPONENT does not work in mousePressEvent - Qt

I am using Qt's QOpenGLWidget and I want to unproject my mouse click position back into 3D, so I used glReadPixels. (I also read the source code of Pangolin, a very good rotation/translation/zoom example, and it uses glReadPixels as well.)
Here's part of my simple code:
void myGLWidget::initializeGL()
{
glClearColor(0.2, 0.2, 0.2, 1.0); //background color
glClearDepthf(1.0); // set the depth clear value
glEnable(GL_DEPTH_TEST); //enable depth test
}
void myGLWidget::paintGL()
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); //clear color and depth buffer
glMatrixMode(GL_MODELVIEW);
glLoadMatrixf(cameraView_.data()); // cameraView_ is a QMatrix4x4
drawingTeapot();
// reading pixels in paintGL works well!!! returns lots of 1s
GLfloat zs[10 * 10];
glReadPixels(0, 0, 10, 10, GL_DEPTH_COMPONENT, GL_FLOAT, &zs);
}
void myGLWidget::mousePressEvent(QMouseEvent *event)
{
// glReadBuffer(GL_FRONT); // also tried this, nothing works
GLfloat zs[10 * 10];
glReadPixels(0, 0, 10, 10, GL_DEPTH_COMPONENT, GL_FLOAT, &zs);
GLenum e = glGetError(); // this gives 1282 err code!!!
}
I'm using macOS Sierra. Pangolin works perfectly on my laptop; however, my Qt project does not work?!
By "not working" I mean that the output variable zs keeps random values like 0 and 123123e-315, and it never changes before and after the glReadPixels call.
Why does glReadPixels only work in the paintGL() function?
I also tried a Python version, which gives me an error that says:
File "errorchecker.pyx", line 53, in OpenGL_accelerate.errorchecker._ErrorChecker.glCheckError (src/errorchecker.c:1218)
OpenGL.error.GLError: GLError(
err = 1282,
description = b'invalid operation',
baseOperation = glReadPixels,
which might be this case from the documentation:
GL_INVALID_OPERATION is generated if format is GL_DEPTH_COMPONENT and there is no depth buffer.
But I still don't know what to do

OpenGL operations should be performed only while an OpenGL context is current. That is the case in the paintGL() method, because the framework makes the widget's context current for you before calling it. You cannot assume the context is current in other methods, such as event handlers and callbacks like mousePressEvent(), because those can also run on a different thread or at a time when the OpenGL context is not current.
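A minimal sketch of the usual fix, assuming a QOpenGLWidget subclass: call makeCurrent() before issuing GL calls in the event handler (it also binds the widget's backing framebuffer), and make sure the widget actually has a depth buffer (e.g. requested via QSurfaceFormat::setDepthBufferSize()). The exact coordinates are illustrative:
void myGLWidget::mousePressEvent(QMouseEvent *event)
{
    makeCurrent();   // make this widget's context (and its backing FBO) current

    GLfloat z = 1.0f;
    // OpenGL's window origin is bottom-left while Qt's is top-left, so flip y.
    // (On high-DPI screens the coordinates may also need scaling by devicePixelRatio().)
    glReadPixels(event->x(), height() - event->y() - 1, 1, 1,
                 GL_DEPTH_COMPONENT, GL_FLOAT, &z);

    doneCurrent();
    // ... unproject the click using z ...
}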

Related

Performance Issue while using QGLWidget with Qt5

I'm trying to develop an application for visualizing 3D objects and their simulations. I have to draw 'n' objects (a triangle, a rectangle, or some other non-convex polygon) with individual color shades. For this I'm using QGLWidget in Qt5 (OS: Windows 7/8/10).
structure used for populating objects information:
typedef struct {
QList<float> r,g,b;
QList<double> x,y,z;
}objectData;
The number of objects and their corresponding coordinate values will be read from a file.
paintGL function:
void paintGL() {
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(25, GLWidget::width()/(float)GLWidget::height(), 0.1, 100);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(0,0,5, 0,0,0, 0,1,0);
glRotatef(140, 0.0, 0.0, 1.0);
glRotatef(95, 0.0, 1.0, 0.0);
glRotatef(50, 1.0, 0.0, 0.0);
glTranslated(-1.0, 0.0, -0.6);
drawObjects(objData, 1000);
}
Drawing of Objects Function:
void drawObjects(objectData objData,int objCnt) {
glPushMatrix();
glBegin(GL_POLYGON);
for(int i = 0; i < objCnt; i++) {
glColor3f(objData.r[i],objData.g[i],objData.b[i] );
glVertex3d(objData.x[i],objData.y[i],objData.z[i]);
}
glEnd();
glFlush();
glPopMatrix();
}
Issue:
Now, when the number of objects to be drawn exceeds a certain value (say n = 5000), the application gradually slows down. I'm unable to use QThread since my class already inherits QGLWidget.
Please suggest how to improve the performance of the application when the object count is high. I don't know where I'm making a mistake.
Screenshot of the sample: [image showing a large number of objects in mesh view]
You are using the fixed-function pipeline instead of the programmable one, in which you tell each stage of the rendering process exactly what should be done and nothing more. There are other noticeable differences that I encourage you to research (search for "modern OpenGL", which will lead you to OpenGL 3.3 and above).
The old fixed pipeline is terribly inefficient: the CPU has to talk to the graphics card for every piece of geometry while rendering. By contrast, the modern programmable pipeline lets you push the model data into VRAM once, from where it is accessed directly during rendering (very fast memory accesses).
You also get rid of the generic ways of "doing stuff", which are mechanically slower than customized ones.
Also, I encourage you to use QOpenGLWidget instead of the older QGLWidget class. As mentioned at http://doc.qt.io/qt-5/qglwidget.html, that class is obsolete.
Modern OpenGL quick start:
http://www.opengl-tutorial.org/
So, you are not doing anything "wrong". You are just not using the current technology. Have fun!
You are using OpenGL's immediate mode, which is very slow for large numbers of vertices and should almost never be used. Use the retained mode instead. See this answer for more detail: https://stackoverflow.com/a/6734071
Thank you #dave and #Zedka9. It works fine for me now that I've switched from immediate mode to vertex arrays (retained mode) in OpenGL. I have modified the drawObjects function like this:
Drawing of Objects Function:
After organizing and copying the vertices and colors to these buffers
GLfloat vertices[1024*1024],colors[1024*1024];
int vertArrayCnt; // number of vertices
void drawObjects(void) {
glEnableClientState(GL_COLOR_ARRAY);
glEnableClientState(GL_VERTEX_ARRAY);
glColorPointer(3, GL_FLOAT, 0, colors);
glVertexPointer(3, GL_FLOAT, 0, vertices);
glPushMatrix();
glDrawArrays(GL_TRIANGLES, 0, vertArrayCnt);
glPopMatrix();
glDisableClientState(GL_VERTEX_ARRAY); // disable vertex arrays
glDisableClientState(GL_COLOR_ARRAY);
}
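For completeness, a minimal sketch of the next step the answers recommend: uploading the same data into vertex buffer objects once, so the per-frame draw no longer streams the arrays from the CPU. The buffer names are illustrative, and the GL 1.5+ buffer functions must be available in your context (e.g. through QOpenGLFunctions or GLEW):
// One-time setup (e.g. in initializeGL), after filling vertices/colors:
GLuint vboVertices = 0, vboColors = 0;
glGenBuffers(1, &vboVertices);
glBindBuffer(GL_ARRAY_BUFFER, vboVertices);
glBufferData(GL_ARRAY_BUFFER, vertArrayCnt * 3 * sizeof(GLfloat), vertices, GL_STATIC_DRAW);
glGenBuffers(1, &vboColors);
glBindBuffer(GL_ARRAY_BUFFER, vboColors);
glBufferData(GL_ARRAY_BUFFER, vertArrayCnt * 3 * sizeof(GLfloat), colors, GL_STATIC_DRAW);

// Per frame: source the pointers from the bound buffers (offset 0) instead of CPU arrays.
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, vboVertices);
glVertexPointer(3, GL_FLOAT, 0, 0);
glBindBuffer(GL_ARRAY_BUFFER, vboColors);
glColorPointer(3, GL_FLOAT, 0, 0);
glDrawArrays(GL_TRIANGLES, 0, vertArrayCnt);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);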

QOpenGLWidget with custom framebuffer and multiple render targets

Related to my other question, I'm trying to render a segmentation mask to enable object picking. But I am not able to achieve the desired result.
Option 1 did not work at all. I was not able to retrieve the content of color attachment 1, or check if it existed at all (I created the attachment using only native OpenGL calls).
Using this post, I was able to reproduce the green.png and red.png images by creating a custom frame buffer with a second color attachment which is then bound and drawn to (all in paintGL()).
Somehow I had to use that person's framebuffer creation code, because when I created the framebuffer myself there was always a warning saying "toImage called for missing color attachment", even though I had attached the color attachment and textures() called on the framebuffer returned two objects. I then tried to insert my rendering code after
GLfloat red[4] = { 1.0f, 0.0f, 0.0f, 1.0f };
f->glClearBufferfv(GL_COLOR, 0, red);
GLfloat green[4] = { 0.0f, 1.0f, 0.0f, 1.0f };
f->glClearBufferfv(GL_COLOR, 1, green);
but this still resulted in the red and green image. The code renders fine, however, when using the normal default framebuffer. I adapted the shader (short version for testing purposes):
void main() {
gl_FragData[0] = vec4(1.0, 1.0, 1.0, 1.0);
gl_FragData[1] = vec4(0.0, 0.0, 0.0, 1.0);
}
Since I was able to produce the red and green image, I'm assuming there must be a way to retrieve the frag data with this custom framebuffer. The solution I have right now is a complete (!) copy of the program and another dedicated fragment shader whose sole purpose is to render the segmentation, and all OpenGL draw calls are performed a second time. As you can guess, this is a somewhat ugly solution, although the scene is not that large and my computer handles it easily. Has anyone got an idea/link?
If you want to write to multiple render targets in a Fragment shader, then you have to declare multiple output variables:
#version 330
layout(location = 0) out vec4 fragData0;
layout(location = 1) out vec4 fragData1;
void main()
{
fragData0 = vec4(1.0, 1.0, 1.0, 1.0);
fragData1 = vec4(0.0, 0.0, 0.0, 1.0);
}
From GLSL version 1.1 (#version 110, OpenGL 2.0) to GLSL version 1.5 (#version 150, OpenGL 3.2), the same can be achieved by writing to the built-in fragment shader output variable gl_FragData:
void main()
{
gl_FragData[0] = vec4(1.0, 1.0, 1.0, 1.0);
gl_FragData[1] = vec4(0.0, 0.0, 0.0, 1.0);
}
See also Fragment Shader Outputs - Through The Ages
To use multiple render targets in Qt, a second color attachment has to be added to the framebuffer, and the list of color buffers has to be specified with glDrawBuffers:
QOpenGLShaderProgram *program;
QOpenGLFramebufferObject *fb;
int fb_width;
int fb_height;
fb = new QOpenGLFramebufferObject( fb_width, fb_height );
fb->addColorAttachment( fb_width, fb_height );
glViewport(0, 0, fb_width, fb_height);
fb->bind();
glClearColor(0, 0, 0, 1);
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
GLenum buffers[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers(2, buffers);
program->bind();
// ..... do the drawing
program->release();
fb->release();
The OpenGL texture objects attached to the framebuffer can be accessed with:
QVector<GLuint> fb_textures = fb->textures();
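A sketch of reading that second attachment back afterwards, assuming Qt 5.6 or later (where QOpenGLFramebufferObject::toImage() gained a colorAttachmentIndex parameter); on older versions you can read it with native GL calls instead:
// Qt >= 5.6: grab color attachment 1 directly as a QImage.
QImage segmentation = fb->toImage(true, 1);

// Native GL alternative: select the attachment and read it while the FBO is bound.
fb->bind();
glReadBuffer(GL_COLOR_ATTACHMENT1);
QVector<uchar> pixels(fb_width * fb_height * 4);
glReadPixels(0, 0, fb_width, fb_height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
glReadBuffer(GL_COLOR_ATTACHMENT0);   // restore the default read buffer
fb->release();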

Create Cubemap from QOpenGLFramebuffer

I want to implement cubemap convolution for IBL using a Qt widget.
When implementing conversion from an equirectangular map to a cubemap I ran into an error I do not understand:
Here is how I create my renderbuffer:
QOpenGLFramebufferObjectFormat format;
format.setAttachment(QOpenGLFramebufferObject::CombinedDepthStencil);
format.setInternalTextureFormat(GL_RGBA32F_ARB);
envTarget = new QOpenGLFramebufferObject(QSize(256, 256), format);
Here is how I create my cubemap texture:
envCubemap = new QOpenGLTexture(QOpenGLTexture::TargetCubeMap);
envCubemap->create();
envCubemap->bind();
envCubemap->setSize(256, 256, 4);
envCubemap->setFormat(QOpenGLTexture::RGBAFormat);
envCubemap->allocateStorage(QOpenGLTexture::RGB, QOpenGLTexture::Float32);
envCubemap->setMinMagFilters(QOpenGLTexture::Nearest, QOpenGLTexture::Linear);
I then proceed to render the different cubemap views to the corresponding parts of the texture:
envCubemap->bind(9);
glViewport(0, 0, 256, 256);
envTarget->bind();
for (unsigned int i = 0; i < 6; ++i)
{
ActiveScene->ActiveCamera->View = captureViews[i];
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 9, 0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
drawBackground();
}
envTarget->release();
The drawBackground() method draws an environment sphere which works fine with my default buffer.
The OpenGL error I get is 1282, and it becomes 0 if I comment out the glFramebufferTexture2D line. 1282 corresponds to GL_INVALID_OPERATION, which has multiple possible causes according to the glFramebufferTexture2D documentation.
What did I get wrong? I tried iterating over each parameter in order to solve this error but did not come up with a solution. As this should be fairly standard stuff I hope to find a solution here :D Help?
You need to actually tell the framebuffer which texture to render to by passing its ID, not '9':
glFramebufferTexture2D(
GL_FRAMEBUFFER,
GL_COLOR_ATTACHMENT0,
GL_TEXTURE_CUBE_MAP_POSITIVE_X + i,
envCubemap->textureId(), // <--- The change
0);
The same goes for envCubemap->bind(9);, which can be simply removed.
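As an additional sanity check (not something the answer relies on), it helps to verify completeness right after attaching each face, before clearing and drawing:
// After glFramebufferTexture2D(...):
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE)
    qWarning("Cubemap FBO incomplete for face %u, status 0x%x", i, status);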

OpenGL + QT: render to texture and display it back

After some trouble I've managed to correctly render to a texture inside a framebuffer object in a Qt 4.8 application: I can open an OpenGL context with a QGLWidget, render to an FBO, and use it as a texture.
Now I need to display the rendered texture in a QPixmap and show it in some other widget in the GUI. But... nothing is shown.
Here are some pieces of code:
// generate texture, FBO, RBO in the initializeGL
glGenTextures(1, &textureId);
glBindTexture(GL_TEXTURE_2D, textureId);
glGenFramebuffers(1, &fboId);
glBindFramebuffer(GL_FRAMEBUFFER, fboId);
glGenRenderbuffers(1, &rboId);
glBindRenderbuffer(GL_RENDERBUFFER, rboId);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, TEXTURE_WIDTH, TEXTURE_HEIGHT);
glBindRenderbuffer(GL_RENDERBUFFER, 0);
// now in paintGL
glBindFramebuffer(GL_FRAMEBUFFER, fboId);
// .... render into texture code ....
if(showTextureInWidget==false) {
showTextureInWidget = true;
char *pixels;
pixels = new char[TEXTURE_WIDTH * TEXTURE_HEIGHT * 4];
glReadPixels(0, 0, TEXTURE_WIDTH, TEXTURE_HEIGHT, GL_RGB, GL_UNSIGNED_BYTE, pixels);
QPixmap qp = QPixmap(pixels);
QLabel *l = new QLabel();
// /* TEST */ l->setText(QString::fromStdString("dudee"));
l->setPixmap(qp);
QWidget *d = new QWidget;
l->setParent(d);
d->show();
}
glBindFramebuffer(GL_FRAMEBUFFER, 0); // unbind
// now draw the scene with the rendered texture
I see the widget open, but there is nothing inside it. If I uncomment the test line I see the "dudee" string, so I know the QLabel is there, but there is no image from the QPixmap.
I know that the original data is unsigned char and I'm using char, and I've tried different color parameters (GL_RGBA, GL_RGB, etc.), but I don't think that is the point... the point is that I don't see anything.
Any advice? If I have to post more code I will do it!
Edit:
I haven't posted all the code, but the fact I'd like to make clear is that the texture is correctly rendered as a texture on a cube. I'm just not able to get it back from the GPU to the CPU.
Edit 2:
Thanks to peppe's answer I found out the problem: I needed a Qt class whose constructor accepts raw pixel data. Here is the complete snippet:
uchar *pixels;
pixels = new uchar[TEXTURE_WIDTH * TEXTURE_HEIGHT * 4];
for(int i=0; i < (TEXTURE_WIDTH * TEXTURE_HEIGHT * 4) ; i++ ) {
pixels[i] = 0;
}
glBindFramebuffer(GL_FRAMEBUFFER, fboId);
glReadPixels( 0,0, TEXTURE_WIDTH, TEXTURE_HEIGHT, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
qi = QImage(pixels, TEXTURE_WIDTH, TEXTURE_HEIGHT, QImage::Format_ARGB32);
qi = qi.rgbSwapped();
QLabel *l = new QLabel();
l->setPixmap(QPixmap::fromImage(qi));
QWidget *d = new QWidget;
l->setParent(d);
d->show();
Given that that's not all of your code and -- as you say -- the texture is correctly filled, there's a little mistake going on here:
glReadPixels(0, 0, TEXTURE_WIDTH, TEXTURE_HEIGHT, GL_RGB, GL_UNSIGNED_BYTE, pixels);
QPixmap qp = QPixmap(pixels);
The QPixmap(const char *) ctor wants an XPM image, not raw pixels. You need to use one of the QImage ctors to create a valid QImage. (You can also pass ownership to the QImage, which solves the fact that you're currently leaking pixels...)
Once you do that, you'll figure out that
the image is flipped vertically, as OpenGL has its origin in the bottom-left corner, growing upwards/rightwards, while Qt assumes the origin in the top left, growing downwards/rightwards;
the channels might be swapped -- i.e. OpenGL is returning data with the wrong endianness. I don't remember in this case whether glPixelStorei(GL_PACK_SWAP_BYTES) or using GL_UNSIGNED_INT_8_8_8_8 as the type helps; you may eventually need to resort to a CPU-side loop to fix your pixel data :)
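A sketch of both fixes on the Qt side, assuming the pixels were read back with GL_RGBA / GL_UNSIGNED_BYTE as in the edit above (the label variable is just an illustrative QLabel):
QImage img(pixels, TEXTURE_WIDTH, TEXTURE_HEIGHT, QImage::Format_ARGB32);
img = img.rgbSwapped();           // fix the red/blue channel order
img = img.mirrored(false, true);  // flip vertically: GL origin is bottom-left, Qt's is top-left
label->setPixmap(QPixmap::fromImage(img));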

How to draw QGLFrameBufferObject onto the painter from within QGraphicsItem::paint()

Small version of my question
In a QGraphicsItem::paint() function I have a QGLFramebufferObject. How do I get it onto the paint device of the painter that is passed as an argument? (Provided that the QGraphicsItem is in a scene that is being rendered by a QGraphicsView with a QGLWidget as its viewport, so the painter is using the OpenGL paint engine.)
QGraphicsItem::paint(QPainter* painter, ...)
{
QGLFramebufferObject fbo;
QPainter p(&fbo);
... // Some painting code on the fbo
p.end();
// What now? How to get the fbo content drawn on the painter?
}
I have looked at the framebufferobject and pbuffer examples provided with Qt. There the fbo/pbuffer is drawn in a QGLWidget using custom OpenGL code. Is it possible to do the same thing within the paint() method of a QGraphicsItem and take the position of the QGraphicsItem in the scene/view into account?
Big version of my question
Situation sketch
I have a QGraphicsScene. In it is an item that has a QGraphicsEffect (own implementation by overriding draw() from QGraphicsEffect). The scene is rendered by a QGraphicsView that has a QGLWidget as viewport.
In the QGraphicsEffect::draw(QPainter*) I have to generate some pixmap which I then want to draw using the painter provided (the painter has the QGLWidget as paintdevice). Constructing the pixmap is a combination of some draw calls and I want these to be done in hardware.
Simplified example: (I don't call sourcepixmap in my draw() method as it is not needed to demonstrate my problem)
class OwnGraphicsEffect: public QGraphicsEffect
{
virtual void draw(QPainter* painter);
};
void OwnGraphicsEffect::draw(QPainter* painter)
{
QRect rect(0,0,100,100);
QGLPixelBuffer pbuffer(rect.size(), QGLFormat(QGL::Rgba));
QPainter p(&pbuffer);
p.fillRect(rect, Qt::transparent);
p.end();
painter->drawImage(QPoint(0,0), pbuffer.toImage(), rect);
}
Actual problem
My concern is with the last line of my code: pbuffer.toImage(). I don't want to use this; I don't want a QImage conversion, for performance reasons. Is there a way to get a pixmap from my QGLPixelBuffer and then use painter->drawPixmap()?
I know I can also copy the pbuffer to a texture by using:
GLuint dynamicTexture = pbuffer.generateDynamicTexture();
pbuffer.updateDynamicTexture(dynamicTexture);
but I have no idea on how to get this texture onto the "painter".
Extending leemes' answer, here is a solution which can also handle multisample framebuffer objects.
First, if you want to draw on a QGLWidget, you can simply use the OpenGL commands leemes suggested in his answer. Note that there is a ready-to-use drawTexture() command available, which simplifies this code to the following:
void Widget::drawFBO(QPainter &painter, QGLFramebufferObject &fbo, QRect target)
{
painter.beginNativePainting();
drawTexture(target, fbo.texture());
painter.endNativePainting();
}
To draw multisample FBOs, you can convert them into non-multisample ones using QGLFramebufferObject::blitFramebuffer (note that not every hardware / driver combination supports this feature!):
if(fbo.format().samples() > 1)
{
QGLFramebufferObject texture(fbo.size()); // the non-multisampled fbo
QGLFramebufferObject::blitFramebuffer(
&texture, QRect(0, 0, fbo.width(), fbo.height()),
&fbo, QRect(0, 0, fbo.width(), fbo.height()));
drawTexture(targetRect, texture.texture());
}
else
drawTexture(targetRect, fbo.texture());
However, as far as I know, you can't draw using OpenGL commands on a non-OpenGL context. For this, you first need to convert the framebuffer to a (software) image, like a QImage using fbo.toImage(), and draw this using your QPainter instead of the fbo directly.
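A minimal sketch of that software fallback, mirroring the drawFBO signature above (the function name is illustrative):
void Widget::drawFBOSoftware(QPainter &painter, QGLFramebufferObject &fbo, QRect target)
{
    // No GL context behind this painter: read the FBO back into a QImage (slow)
    // and let QPainter draw it onto whatever paint device it targets.
    painter.drawImage(target, fbo.toImage());
}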
I think I figured it out. I use QPainter::beginNativePainting() to mix OpenGL commands into a paintEvent:
void Widget::drawFBO(QPainter &painter, QGLFramebufferObject &fbo, QRect target)
{
painter.beginNativePainting();
glEnable(GL_TEXTURE_2D);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
glBindTexture(GL_TEXTURE_2D, fbo.texture());
glBegin(GL_QUADS);
glTexCoord2d(0.0,1.0); glVertex2d(target.left(), target.top());
glTexCoord2d(1.0,1.0); glVertex2d(target.right() + 1, target.top());
glTexCoord2d(1.0,0.0); glVertex2d(target.right() + 1, target.bottom() + 1);
glTexCoord2d(0.0,0.0); glVertex2d(target.left(), target.bottom() + 1);
glEnd();
painter.endNativePainting();
}
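A minimal usage sketch, assuming a QGLWidget subclass with an fbo member (names are illustrative):
void Widget::paintEvent(QPaintEvent *)
{
    QPainter painter(this);
    // ... regular QPainter drawing ...
    drawFBO(painter, *fbo, rect());   // blit the FBO texture over the widget area
    // ... more QPainter drawing ...
}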
I hope this will also work in the paint() method of a QGraphicsItem (where the QGraphicsView uses a QGLWidget as the viewport), since I only tested it directly in QGLWidget::paintEvent.
There is, however, still the following problem: I don't know how to paint a multisample framebuffer. The documentation of QGLFramebufferObject::texture() says:
If a multisample framebuffer object is used then the value returned from this function will be invalid.
