glDeleteTextures, leaking? - qt

I found a rather disgusting behaviour of glDeleteTextures: it deletes only part of the acquired memory (on the GPU, and in RAM, where textures are cached back for the sake of speed). In my case this is a showstopper bug, as my program eats up all available memory.
I don't want or require you to read all of the code; it's just a demo. I'd rather know how to actually use glDeleteTextures so it does not leak any memory.
The example code requires Qt 4.5 or later to compile:
glleak.pro
QT += opengl
SOURCES += main.cpp \
glleak.cpp
HEADERS += glleak.h
main.cpp
#include <QtOpenGL>
#include <QtGui>
#include "glleak.h"
int main(int argc, char** argv){
QApplication app(argc, argv);
glleak gll(0);
gll.show();
return app.exec();
}
glleak.h
#ifndef GLLEAK_H
#define GLLEAK_H
#include <QGLWidget>
#include <QMouseEvent>
#include <QDebug>
#include <QList>
class glleak : public QGLWidget
{
Q_OBJECT
public:
glleak(QWidget* parent = 0);
virtual ~glleak();
protected:
void initializeGL();
void paintGL();
void resizeGL(int w, int h);
void drawScene(GLenum mode);
void wheelEvent(QWheelEvent* event);
void hardcoreTexturing();
private:
QList<GLuint> texels;
};
#endif // GLLEAK_H
glleak.cpp
#include "glleak.h"
glleak::glleak(QWidget* parent) :
QGLWidget(parent)
{
}
glleak::~glleak()
{
}
void glleak::initializeGL(){
glClearColor(0.0f,0.0f,0.0f,0.0f);
glEnable(GL_TEXTURE_2D);
glEnable(GL_MULTISAMPLE);
glLineWidth (1.5f);
glPointSize(4.5f);
glEnable (GL_BLEND);
glBlendFunc (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
}
void glleak::resizeGL(int w, int h){
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-w/2.0, w/2.0, h/2.0, -h/2.0, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glViewport(0, 0, w, h);
glLoadIdentity();
}
void glleak::paintGL(){
glPushMatrix();
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
glColor3f(1.0f,1.0f,1.0f);
drawScene(GL_RENDER);
glPopMatrix();
}
void glleak::drawScene(GLenum mode){
qDebug() << "drawed #" << texels.count() << " Textures";
hardcoreTexturing();
}
void glleak::hardcoreTexturing(){
glEnable(GL_TEXTURE_2D);
for ( int i(0); i<texels.count(); ++i){
glPushMatrix();
glTranslatef(1.1f*i, 2.2f*i, 0.0f);
glBindTexture(GL_TEXTURE_2D, texels.at(i));
glBegin(GL_QUADS);
{
glTexCoord2i(0,0);
glVertex2i(-128,-128);
glTexCoord2i(0,1);
glVertex2i(-128,128);
glTexCoord2i(1,1);
glVertex2i(128,128);
glTexCoord2i(1,0);
glVertex2i(128,-128);
}
glEnd();
glPopMatrix();
}
glDisable(GL_TEXTURE_2D);
}
void glleak::wheelEvent(QWheelEvent* event){
glEnable(GL_TEXTURE_2D);
int n(50);
if (event->delta()>0){
qDebug() << "gen textures";
for (int i(0); i<n; ++i){
QImage t("./ballmer_peak.png","png");
GLuint tex(0);
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D( GL_TEXTURE_2D, 0, 3, t.width(), t.height(), 0, GL_RGBA, GL_UNSIGNED_BYTE, t.bits() );
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
texels.append(tex);
}
}
else{
qDebug() << "del textures";
for (QList<GLuint>::iterator i(texels.begin()); i!=texels.end();){
glDeleteTextures(1, &(*i));
i = texels.erase(i);
if (--n <= 0)
break;
}
}
glDisable(GL_TEXTURE_2D);
updateGL();
}
ballmer_peak.png
An image to load and render
Note: To compile the demo, just put it all in one folder, rename your image to ballmer_peak.png, then run qmake, make, ./glleak
Note: Demo usage: use the mouse wheel to generate or delete 50 textures at once
If I am using glDeleteTextures completely wrong, please tell me how to use it correctly.
I am way out of ideas, as my usage complies with the official OpenGL documentation for glDeleteTextures.

This may or may not be the reason for your leak, but for starters you are using glGenTextures incorrectly.
1) You should not put it inside the for loop that initializes the textures. Put it before the loop and call it ONCE, with the number of textures required as the first parameter. Say n == 50:
glGenTextures(50, tex);
2) tex should be a static array of n GLuints and should be persisted (not an automatic variable, as you have it!) until glDeleteTextures has been called, again ONCE, not in a loop:
glDeleteTextures(50, tex);
Think of tex as a repository of texture ids. It is important that you bind textures through it, and not through a separate QList as you have done, since (as the OpenGL reference specifies) there is no guarantee that the texture ids form a contiguous set of integers. I imagine your leak happens because OpenGL internally loses the original pointer to the local (automatic) variable you used when generating each texture, so the texture memory becomes orphaned.
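A minimal sketch of that batch usage (the array size is illustrative; the upload and parameter calls stay exactly as in the question):
// Sketch: one glGenTextures and one glDeleteTextures call for all ids.
// The id array persists (static) until the textures are deleted.
static const int n = 50;
static GLuint tex[n];

glGenTextures(n, tex);              // generate all n ids at once
for (int i = 0; i < n; ++i) {
    glBindTexture(GL_TEXTURE_2D, tex[i]);
    // ... glTexImage2D / glTexParameteri as in the question ...
}

// later, release all n textures with a single call:
glDeleteTextures(n, tex);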
Hope this helps!

I did not run your example code, but I see a similar thing on Windows 7 64-bit. Using per-texture glGenTextures() and glDeleteTextures(), memory might leak, but what I observe is my process's handle count increasing (in Task Manager, for example, but I can also check it from the source).
It seems glDeleteTextures() does not release a handle. Perhaps it would do so later, but 24-hour tests indicate it never releases the handle. Looks like a leak inside the driver (nVidia GTX285, driver 270.61).
Eventually the program indeed runs out of memory. I'm beginning to think it's a driver issue...

There's nothing that looks wrong in your code. So... what makes you think you have a memory leak? What makes you think it's specifically textures that leak?
It is possible, but highly unlikely, that the OpenGL implementation you use leaks. That would be implementation specific.
Whatever mechanism you use to look at memory leaks: what happens once you free the OpenGL context?

You may need to call makeCurrent() at the top of wheelEvent.
For paintEvent, resizeEvent, etc., Qt provides an implementation that handles this before calling paintGL/resizeGL/etc., but for other events such as wheelEvent you have to do it yourself.
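A minimal sketch of the fix, applied to the wheelEvent from the question (everything else unchanged):
void glleak::wheelEvent(QWheelEvent* event){
    makeCurrent(); // make this widget's GL context current before any GL calls
    glEnable(GL_TEXTURE_2D);
    // ... generate or delete the textures exactly as in the question ...
    glDisable(GL_TEXTURE_2D);
    updateGL();
}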

I might be doing this wrong, but when I compiled and ran your code I didn't run into any problems. Going up to 650 textures (I can't increase further: I get a 'killed' message then) and back, my RAM usage goes up from 1% to 24% and back to 1%. Going up to about 200 and back down repeatedly also doesn't cause problems: RAM usage is still 1% at the end. From what I understand, this should have caused massive leaks on your system? Ubuntu 10.10 here (Qt 4.7.0).

Your test eats memory on my system as well, and does not release it immediately when I delete all the textures, but if I wait for some time the memory is returned to the system.
It seems the OpenGL driver uses some lazy memory-releasing algorithm.

for (QList<GLuint>::iterator i(texels.begin()); i!=texels.end();)
switch to
for (QList<GLuint>::iterator i(texels.end()); i!=texels.begin();)

Related

Using QGLFramebufferObject and shaders without QGLWidget

I want to create an OpenGL data processor using QGLFunctions, shaders and framebuffers. I don't need any widgets. But to create valid shader and framebuffer instances, I need a valid QGLContext with support for the appropriate GL extensions.
With a null context, of course, nothing works, and the same goes for a context on a null QPaintDevice. With a QPixmap as the device, a valid context is created, but it lacks the GL extensions needed for shaders and framebuffers.
#include <QGLFramebufferObject>
#include <QGLShaderProgram>
#include <QtOpenGL/QGLFunctions>
// ...
void GLProcessor::init()
{
auto format = QGLFormat::defaultFormat();
if (!context()){
m_context = new QGLContext(format, new QPixmap(1, 1));
bool ok = m_context->create();
qDebug() << "CREATING CONTEXT "<< ok;
Q_ASSERT(context()->isValid());
}
context()->makeCurrent();
initializeGLFunctions(context());
m_binFBO = new QGLFramebufferObject(lowsize, lowsize, QGLFramebufferObject::NoAttachment, GL_TEXTURE_2D, GL_RED);
m_outFBO = new QGLFramebufferObject(lowsize, 1, QGLFramebufferObject::NoAttachment, GL_TEXTURE_2D, GL_RED);
setupShaders();
// ...
}
There is, of course, the usual option of getting the context from a QGLWidget and hiding the widget, but that is somewhat inelegant. P.S. I don't need CUDA, OpenCL, AMP and so on; for my tasks I need OpenGL.
How do I use shaders and framebuffers in Qt 4 without creating a QGLWidget?

glReadPixels GL_DEPTH_COMPONENT does not work in mousePressEvent

I am using Qt's QOpenGLWidget. I want to unproject my mouse click position back into 3D, so I used glReadPixels. (I also read the source code of Pangolin, a very good rotation, translation and zoom example; it uses glReadPixels as well.)
Here's part of my simple code:
void myGLWidget::initializeGL()
{
glClearColor(0.2, 0.2, 0.2, 1.0); //background color
glClearDepthf(1.0); //set the depth buffer clear value
glEnable(GL_DEPTH_TEST); //enable depth test
}
void myGLWidget::paintGL()
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); //clear color and depth buffer
glMatrixMode(GL_MODELVIEW);
glLoadMatrixf(cameraView_.data()); // cameraView_ is a QMatrix4x4
drawingTeapot();
// reading pixels in paintGL works well!!! returns lots of 1s
GLfloat zs[10 * 10];
glReadPixels(0, 0, 10, 10, GL_DEPTH_COMPONENT, GL_FLOAT, &zs);
}
void myGLWidget::mousePressEvent(QMouseEvent *event)
{
// glReadBuffer(GL_FRONT); // also tried this, nothing works
GLfloat zs[10 * 10];
glReadPixels(0, 0, 10, 10, GL_DEPTH_COMPONENT, GL_FLOAT, &zs);
GLenum e = glGetError(); // this gives 1282 err code!!!
}
I'm using macOS Sierra. Pangolin works perfectly on my laptop; however, my Qt project does not work?!
By saying not working, I mean that the output variable zs keeps random values like 0 and 123123e-315, and its contents are the same before and after the glReadPixels call.
Why does glReadPixels work only in the paintGL function?
I also tried a Python version, which gives me an error saying:
File "errorchecker.pyx", line 53, in OpenGL_accelerate.errorchecker._ErrorChecker.glCheckError (src/errorchecker.c:1218)
OpenGL.error.GLError: GLError(
err = 1282,
description = b'invalid operation',
baseOperation = glReadPixels,
which might be this case:
GL_INVALID_OPERATION is generated if format is GL_DEPTH_COMPONENT and there is no depth buffer. reference from document
But I still don't know what to do
OpenGL operations should be performed only while an OpenGL context is current. This is true in the paintGL() method because the framework sets the context up for you before calling it. You can't assume the context is current in other methods, such as event handlers and callbacks like mousePressEvent(), because those may run without the context being current (or even on a different thread where it is not current).
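A minimal sketch of that, using QOpenGLWidget's own makeCurrent()/doneCurrent(). Note that QOpenGLWidget renders into an internal framebuffer object, so the depth read additionally requires that framebuffer to have a depth buffer:
void myGLWidget::mousePressEvent(QMouseEvent *event)
{
    makeCurrent(); // bind this widget's GL context to the current thread
    GLfloat zs[10 * 10];
    glReadPixels(0, 0, 10, 10, GL_DEPTH_COMPONENT, GL_FLOAT, zs);
    doneCurrent(); // release the context again
}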

QProcess dies for no obvious reason

While coding a seemingly simple part of a Qt application that would run a subprocess and read data from its standard output, I have stumbled upon a problem that has me really puzzled. The application should read blocks of data (raw video frames) from the subprocess and process them as they arrive:
start a QProcess
gather data until there is enough for one frame
process the frame
return to step 2
The idea was to implement the processing loop using signals and slots – this might look silly in the simple, stripped-down example that I provide below, but seemed entirely reasonable within the framework of the original application. So here we go:
app::app() {
process.start("cat /dev/zero");
buffer = new char[frameLength];
connect(this, SIGNAL(wantNewFrame()), SLOT(readFrame()), Qt::QueuedConnection);
connect(this, SIGNAL(frameReady()), SLOT(frameHandler()), Qt::QueuedConnection);
emit wantNewFrame();
}
I start here a trivial process (cat /dev/zero) so that we can be confident that it will not run out of data. I also make two connections: one starts the reading when a frame is needed and the other calls a data handling function upon the arrival of a frame. Note that this trivial example runs in a single thread so the connections are made to be of the queued type to avoid infinite recursion. The wantNewFrame() signal initiates the acquisition of the first frame; it gets handled when the control returns to the event loop.
bool app::readFrame() {
qint64 bytesNeeded = frameLength;
qint64 bytesRead = 0;
char* ptr = buffer;
while (bytesNeeded > 0) {
process.waitForReadyRead();
bytesRead = process.read(ptr, bytesNeeded);
if (bytesRead == -1) {
qDebug() << "process state" << process.state();
qDebug() << "process error" << process.error();
qDebug() << "QIODevice error" << process.errorString();
QCoreApplication::quit();
break;
}
ptr += bytesRead;
bytesNeeded -= bytesRead;
}
if (bytesNeeded == 0) {
emit frameReady();
return true;
} else
return false;
}
Reading the frame: basically, I just stuff the data into a buffer as it arrives. The frameReady() signal at the end announces that the frame is ready and in turn causes the data handling function to run.
void app::frameHandler() {
static qint64 frameno = 0;
qDebug() << "frame" << frameno++;
emit wantNewFrame();
}
A trivial data processor: it just counts the frames. When it is done, it emits wantNewFrame() to start the reading cycle anew.
This is it. For completeness, I'll also post the header file and main() here.
app.h:
#include <QDebug>
#include <QCoreApplication>
#include <QProcess>
class app : public QObject
{
Q_OBJECT
public:
app();
~app() { delete[] buffer; }
signals:
void wantNewFrame();
void frameReady();
public slots:
bool readFrame();
void frameHandler();
private:
static const quint64 frameLength = 614400;
QProcess process;
char* buffer;
};
main.cpp:
#include "app.h"
int main(int argc, char** argv)
{
QCoreApplication coreapp(argc, argv);
app foo;
return coreapp.exec();
}
And now for the bizarre part. This program processes a random number of frames just fine (I've seen anything from fifteen to more than a thousand) but eventually stops and complains that the QProcess has crashed:
$ ./app
frame 1
...
frame 245
frame 246
frame 247
process state 0
process error 1
QIODevice error "Process crashed"
Process state 0 means "not running" and process error 1 means "crashed". I investigated and found out that the child process receives a SIGPIPE, i.e., the parent closed the pipe on it. But I have absolutely no idea where and why this happens. Does anybody have an idea?
The code looks a bit weird (not using the readyRead signal and instead relying on delayed signals/slots). As you pointed out in the discussion, you've already seen the thread on the qt-interest ML where I asked about a similar problem. I've just realized that I, too, used a QueuedConnection at that time. I cannot explain why it is wrong; the queued signals "should work", in my opinion. A blind shot is that the invokeMethod used by Qt's implementation somehow races with your signal delivery, so that you empty your read buffer before Qt gets a chance to process the data. This would mean that Qt ultimately reads zero bytes and (correctly) interprets that as an EOF, closing the pipe.
I cannot find the referenced "Qt task 217111" anymore, but there are a couple of reports in their Jira about waitForReadyRead not working as users expect, see e.g. QTBUG-9529.
I'd bring this up on Qt's "interest" mailing list and stay clear of the waitFor... family of methods. I agree that their documentation deserves updating.
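For reference, a rough sketch of the readyRead-based shape (untested; the QByteArray member 'accumulated' and the processFrame() helper are illustrative names, not from the question):
// In app's constructor, instead of the queued wantNewFrame() loop:
connect(&process, SIGNAL(readyRead()), this, SLOT(onReadyRead()));

// Slot: append whatever has arrived, then consume full frames.
void app::onReadyRead() {
    accumulated.append(process.readAll());
    while (accumulated.size() >= int(frameLength)) {
        processFrame(accumulated.left(int(frameLength)));
        accumulated.remove(0, int(frameLength));
    }
}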

Qt OpengGL Shader Program fails

I am trying to write a modern OpenGL (programmable pipeline) program using the Qt SDK. The Qt OpenGL examples show only the fixed-pipeline implementation, and the documentation on how to initialize a shader program is very poor. This is the best example they have on how to set up a shader program and load shaders: http://doc.trolltech.com/4.6/qglshaderprogram.html#details
This is not very descriptive, as one can see.
I tried to follow this doc and can't get the shader program working. I get a segmentation fault when the program tries to assign attributes to the shaders. I think the problem is that I access the context in the wrong way, but I can't find any reference on how to set up or retrieve the rendering context. My code goes like this:
static GLfloat const triangleVertices[] = {
60.0f, 10.0f, 0.0f,
110.0f, 110.0f, 0.0f,
10.0f, 110.0f, 0.0f
};
QColor color(0, 255, 0, 255);
int vertexLocation =0;
int matrixLocation =0;
int colorLocation =0;
QGLShaderProgram *pprogram=0;
void OpenGLWrapper::initShaderProgram(){
QGLContext context(QGLFormat::defaultFormat());
QGLShaderProgram program(context.currentContext());
pprogram=&program;
program.addShaderFromSourceCode(QGLShader::Vertex,
"attribute highp vec4 vertex;\n"
"attribute mediump mat4 matrix;\n"
"void main(void)\n"
"{\n"
" gl_Position = matrix * vertex;\n"
"}");
program.addShaderFromSourceCode(QGLShader::Fragment,
"uniform mediump vec4 color;\n"
"void main(void)\n"
"{\n"
" gl_FragColor = color;\n"
"}");
program.link();
program.bind();
vertexLocation= pprogram->attributeLocation("vertex");
matrixLocation= pprogram->attributeLocation("matrix");
colorLocation= pprogram->uniformLocation("color");
}
And here is the rendering loop:
void OpenGLWrapper::paintGL()
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
QMatrix4x4 pmvMatrix;
pmvMatrix.ortho(rect());
pprogram->enableAttributeArray(vertexLocation);
pprogram->setAttributeArray(vertexLocation, triangleVertices, 3);
pprogram->setUniformValue(matrixLocation, pmvMatrix);
pprogram->setUniformValue(colorLocation, color);
glDrawArrays(GL_TRIANGLES, 0, 3);
pprogram->disableAttributeArray(vertexLocation);
}
Can anybody help with this setup? Thanks a lot.
You create a local program variable and let your pprogram pointer point to its address. But when initShaderProgram returns, the local program's lifetime ends, and pprogram points to garbage; hence the segfault when you try to use it. You should rather create the program dynamically and let Qt handle the memory management:
pprogram = new QGLShaderProgram(context.currentContext(), this);
This assumes OpenGLWrapper derives somehow from QObject; if not, you need to delete the program manually in the destructor (or use some smart pointer, or whatever).
Otherwise your initialization code looks quite reasonable. Your matrix variable should be a uniform and not an attribute, but I'm willing to classify this as a typo. You should also not keep the program bound for its whole lifetime, as binding is equivalent to a call to glUseProgram. Rather, use bind() (and release(), which does glUseProgram(0)) in your render routine.
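A sketch of the render routine with an explicit bind/release pair (otherwise identical to the question's code):
void OpenGLWrapper::paintGL()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    pprogram->bind();                    // glUseProgram(program)
    QMatrix4x4 pmvMatrix;
    pmvMatrix.ortho(rect());
    pprogram->enableAttributeArray(vertexLocation);
    pprogram->setAttributeArray(vertexLocation, triangleVertices, 3);
    pprogram->setUniformValue(matrixLocation, pmvMatrix);
    pprogram->setUniformValue(colorLocation, color);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    pprogram->disableAttributeArray(vertexLocation);
    pprogram->release();                 // glUseProgram(0)
}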
In my experience the Qt wrappers for OpenGL objects are rather poor and limited, so I just made a thin wrapper around straight OpenGL objects (made cross-platform and easy via GLEW) and issued the usual OpenGL calls in a QGLWidget. That worked without problems, after I had struggled for a while with Qt's equivalents.

Memory leak in a simple opengl program

I've written a simple OpenGL program to run some tests. Here is the program:
#include <QApplication>
#include <QGLWidget>
#include <QTimer>
#include <glut.h>
class Ren : public QGLWidget
{
public:
Ren() : QGLWidget()
{
timer = new QTimer(this);
connect(timer, SIGNAL(timeout()),
this, SLOT(updateGL()));
}
void startUpdateTimer()
{
timer->start(40);
}
void initializeGL()
{
glShadeModel(GL_SMOOTH);
glClearColor(0.5f, 0.5f, 0.5f, 0.0f);
glClearDepth(1.0f);
glEnable(GL_DEPTH_TEST);
}
void resizeGL(int width, int height)
{
if(height == 0){
height = 1;
}
glViewport(0, 0, width, height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
GLfloat aspectRatio = (GLfloat)width / (GLfloat)height;
gluPerspective(60.0, aspectRatio, 0.01, 10000.0);
glMatrixMode(GL_MODELVIEW);
}
void paintGL()
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
gluLookAt(0, 0, 1, 0, 0, 0, 0, 1, 0);
glColor3d(1, 0, 0);
glutSolidCube(0.3);
}
QTimer *timer;
};
int main(int argc, char **argv)
{
QApplication app(argc, argv);
Ren r;
r.show();
r.startUpdateTimer();
return app.exec();
}
The problem is that the application leaks memory while the timer is active.
For leak detection I used the Windows Task Manager.
Since Ren is a subclass, you must declare a virtual destructor. Otherwise you have memory leaks, and you can get heap corruption if you delete your Ren object while using it as a QGLWidget.
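For illustration, a sketch of that declaration:
class Ren : public QGLWidget
{
public:
    Ren() : QGLWidget() { /* ... */ }
    virtual ~Ren() {} // virtual, so deleting through a base class pointer is safe
    // ...
};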
Edit: Removed part:
In the constructor, you are allocating memory for the timer but you never release it. You need to delete the timer pointer.
Sorry, I was wrong. There is no leaking. The 'leaking' stops after some time (about a minute); I think it's some kind of OpenGL or Qt job. I've looked at some Qt examples and saw the same thing in the Textures example. The same thing happens if I change other examples to draw something else in paintGL() (depending on what is drawn, the 'leaking' time differs).
Please give us more detail about how much memory your program is leaking. I highly doubt that your code is leaking, because you allocate just one QTimer object. Even if you never delete the timer, this wouldn't be a problem, because the OS releases the memory anyway. That is ugly, of course, but not a leak in the strict sense.
If the allocated memory grows steadily over time, then there's a memory leak. If that's the case, it's not your code's fault, since there's simply nothing you allocate except the timer, once.
GLUT primitives are supposed to be used in GLUT programs running the GLUT main loop, which cleans up the quadrics allocated by the primitives. By calling GLUT primitives outside of a GLUT program you bypass those clean-up tasks, and then you have leaked memory.
If you used a memory-leak detection program (Valgrind on Linux or Instruments on the Mac) you would see that the leaked blocks come from gluNewQuadric.
Remove the call to glutSolidCube and the leak will disappear.
An easy solution is to use FreeGLUT's primitives instead of the GLUT shipping with your OS.
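Alternatively, if you only need the cube, you can draw it yourself in immediate mode. A sketch (normals omitted, so this is only suitable for unlit rendering):
// Hand-rolled replacement for glutSolidCube(s): six quads around the
// origin, no GLU quadrics involved.
void drawSolidCube(GLdouble s)
{
    const GLdouble h = s / 2.0;
    static const int faces[6][4] = {
        {0,1,2,3}, {3,2,6,7}, {7,6,5,4}, {4,5,1,0}, {5,6,2,1}, {7,4,0,3}
    };
    const GLdouble v[8][3] = {
        {-h,-h,-h}, {-h,-h, h}, {-h, h, h}, {-h, h,-h},
        { h,-h,-h}, { h,-h, h}, { h, h, h}, { h, h,-h}
    };
    glBegin(GL_QUADS);
    for (int f = 0; f < 6; ++f)
        for (int i = 0; i < 4; ++i)
            glVertex3dv(v[faces[f][i]]);
    glEnd();
}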
