I am using Qt 5.4 and setting up the projection matrix and viewport as follows in my resizeGL function override:
glViewport(_off_x, _off_y, _width, _height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, _width, 0, _height, -1, 1);
I can verify this: when I print the projection matrix as follows, it shows the correct values:
GLdouble projection[16];
glGetDoublev(GL_PROJECTION_MATRIX, projection );
// printing this shows the correct projection matrix.
However, somewhere this is getting overridden. When I print the projection matrix in the paintGL() function, it comes back as the identity matrix.
Interestingly, I switched to the old QGLWidget and it performs as expected.
However, somewhere this is getting overridden. When I print the projection matrix in the paintGL() function, it comes back as the identity matrix.
And you're surprised exactly why? Qt 5 may use OpenGL to draw its own widgets, which means that Qt has to set the state of the OpenGL context according to its needs.
What you observed is to be expected, so don't be surprised.
I am using Qt 5.4 and setting up the projection matrix and viewport as follows in my resizeGL function override:
You should not be doing that. As with every state-based system, it's essential to set the state to what you need right before you need it, or to keep track of all the state changes, which is much more difficult.
Do the right thing and move everything you did in resizeGL to where it belongs: paintGL. The sole purpose of resizeGL is to update resources like FBO renderbuffers to reflect the new size. Don't use it to set drawing-related OpenGL state.
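A minimal sketch of that arrangement, reusing the member variables from the question (MyGLWidget is a placeholder class name):
void MyGLWidget::paintGL()
{
    // Set all drawing-related state right before it is needed, every frame.
    glViewport(_off_x, _off_y, _width, _height);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, _width, 0, _height, -1, 1);

    // ... actual drawing goes here ...
}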
Related
I am currently building a Qt application that streams data into a Qt window with native OpenGL.
I have two widgets inherited from QOpenGLWidget; one has a parent, the other doesn't. They both work well individually (just show() one widget at a time). However, when I try to render them simultaneously, one of the textures I bind via glBindTexture() appears in the wrong window. It's as if they were using the same context(). But since they inherit from QOpenGLWidget, they should have two different contexts.
In my code, I just override initializeGL, paintGL and resizeGL as usual:
void initializeGL() override {
    initializeOpenGLFunctions();
    // generate buffers, allocate storage, compile shaders...
}
void paintGL() override {
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    // bind VAO, bind texture, glDrawElements...
}
Basically, my second window (window2) is a "video player" that plays an image sequence from memory. But it appears in window1. I have also set a QSurfaceFormat by:
QSurfaceFormat format;
format.setRenderableType(QSurfaceFormat::OpenGL);
format.setVersion(3,3);
format.setProfile(QSurfaceFormat::CoreProfile);
setFormat(format);
in the constructor.
Could someone tell me what might be wrong here? I think the context() objects the two windows use are different, so how could a glBindTexture() call in window2 apply to window1? If this information is not enough, please tell me. Thanks.
Platform: Ubuntu 16.04, Qt 5.6.2, OpenGL 3.3
Update:
I have the same issue as this post: OpenGL multiple window rendering. However, mine is inside the Qt 5 environment, so theoretically it should work.
I did some research into whether it might be possible to use QtTest to test some of my custom Qt widgets.
I was able to build and run tests and I was also able to simulate events and check them with QSignalSpy.
The widgets I'm going to test do not expose their internal subwidgets, so I have to simulate the positions of my mouse clicks relative to their parent widget.
For some reason I'm failing with this approach. The following snippet shows what I'm trying to achieve.
auto button=new QPushButton("Hello");
auto grpBox = new QGroupBox("Group Box");
grpBox->setLayout(new QVBoxLayout);
grpBox->layout()->addWidget(button);
QSignalSpy spy(button, &QPushButton::clicked);
grpBox->show();
QTest::mouseClick(button, Qt::MouseButton::LeftButton);
QTest::mouseClick(grpBox, Qt::MouseButton::LeftButton, Qt::KeyboardModifier::NoModifier, QPoint(250,70));
QCOMPARE(spy.count(), 2); // 1!=2
The first click is registered correctly, whereas the second one vanishes somehow. Why is that?
I'm wondering if I really understood how to use the framework correctly, as dealing with mouse positions seems too tedious and fragile for a practical test framework.
Revision:
It's obvious that using coordinates in a GUI test is very fragile. Hence, I found a solution utilizing findChild that achieves the same thing.
auto button=new QPushButton("Hello");
button->setObjectName("PushButton");
auto grpBox = new QGroupBox("Group Box");
grpBox->setLayout(new QVBoxLayout);
grpBox->layout()->addWidget(button);
QSignalSpy spy(button, &QPushButton::clicked);
grpBox->show();
QTest::mouseClick(button, Qt::MouseButton::LeftButton);
if (auto btn = grpBox->findChild<QPushButton*>("PushButton")) {
QTest::mouseClick(btn, Qt::MouseButton::LeftButton, Qt::KeyboardModifier::NoModifier);
} else {
QVERIFY(false);
}
QCOMPARE(spy.count(), 2);
This combines two advantages. Firstly, it is no longer necessary to deal with coordinates; secondly, you still don't need to touch the code of the widgets you are going to test.
For this approach it seems advantageous if every GUI element has a unique objectName(), supporting an easy search mechanism.
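A minimal sketch of that naming convention (window and "OkButton" are hypothetical names):
// Give every element you want to reach from a test a unique objectName.
auto okButton = new QPushButton("OK");
okButton->setObjectName("OkButton");

// Any test can then locate it without knowing the widget hierarchy.
if (auto btn = window->findChild<QPushButton*>("OkButton"))
    QTest::mouseClick(btn, Qt::MouseButton::LeftButton);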
I'm currently developing a small vector drawing program in which you can create lines and modify them after creation (the lines are based on a custom QGraphicsItem). For instance, the picture below shows what happens when the leftmost (marked yellow) point of the line is dragged to the right of the screen, effectively lengthening the line:
Everything works fine when the point is moved slowly; however, when it is moved rapidly, some visual artifacts appear:
The piece of code I'm using to request a repaint is located in the redefined mouseMoveEvent method, which holds the following lines:
QRectF br = boundingRect();
x2 = static_cast<int>(event->scenePos().x()-x());
y2 = static_cast<int>(event->scenePos().y()-y());
update(br);
There's apparently no problem with my boundingRect definition, since adding painter->drawRect(boundingRect()) in the paint method shows this :
And there is also no problem when the line is simply moved (the QGraphicsItem::ItemIsMovable flag is set), even rapidly.
Does anyone know what is happening here? My guess is that update is not being called immediately, so mouseMoveEvent can be called multiple times before a repaint occurs, maybe canceling previous calls? I'm not sure.
Of course the easy fix is to set the viewport update mode of the QGraphicsView object holding the line to QGraphicsView::FullViewportUpdate, but that is ugly (and slow).
Without seeing the full function for how you're updating the line, I would guess that you've omitted to call prepareGeometryChange() before changing the bounding rect of the item.
As the docs state:
Prepares the item for a geometry change. Call this function before changing the bounding rect of an item to keep QGraphicsScene's index up to date.
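A minimal sketch of that fix applied to the mouseMoveEvent snippet from the question (LineItem is a placeholder class name):
void LineItem::mouseMoveEvent(QGraphicsSceneMouseEvent *event)
{
    prepareGeometryChange(); // tell the scene the bounding rect is about to change
    x2 = static_cast<int>(event->scenePos().x() - x());
    y2 = static_cast<int>(event->scenePos().y() - y());
    update();                // repaint with the new geometry
}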
I am trying to code an OpenGL project with Qt (v5.1.1) on OS X 10.9, in the manner of the modern pipeline implementation. The program is supposed to be a multi-agent based system or particle system. However, I lack an understanding of how to draw something from another class.
In Cinder there were simple drawThisAndThat() commands you could call. I read the 6th edition of the OpenGL SuperBible. From this and several tutorials, all examples seem to cover only programs where all modifications are made from the class that initializes OpenGL.
I would like to instantiate some objects moving on a grid and draw pixels to display their positions. I know I have to call void glVertexAttrib4fv(GLuint index, const GLfloat *vi); but this is not sufficient.
Do I need to call glEnableVertexAttribArray(1); and glDrawArrays(GL_POINTS, 0, 3); as well, and what else?
Am I right to instantiate the class controlling the particles after instantiating OpenGL and before the main loop?
How do I manage that each particle draws itself while erasing the position where it was drawn before?
The program is based on this code.
To answer your questions completely I would have to write a wall of text, so I will try to point out only the most important aspects. I hope this helps you enough to use your knowledge, and probably further reading, to get it to work.
all modifications are made from the class that initializes OpenGL
You can encapsulate update(time) and draw() methods for your objects, which you then call in your main loop.
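A minimal sketch of that structure, with hypothetical names (Particle, pos, dir):
class Particle {
public:
    void update(float time) {
        // advance the position along the direction vector
        for (int i = 0; i < 3; ++i)
            pos[i] += dir[i] * time;
    }
    void draw() const {
        // issue the GL calls for this object here
    }
private:
    GLfloat pos[3] = {0.0f, 0.0f, 0.0f};
    GLfloat dir[3] = {0.0f, 1.0f, 0.0f};
};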
Do I need to call glEnableVertexAttribArray(1); and glDrawArrays(GL_POINTS, 0, 3); as well, and what else?
I would put all particles into one vertex array to avoid rebinding different vertex arrays for each particle. Then you would have to use glBindVertexArray(vaid); and glDrawArrays(GL_POINTS, 0, vertexCount); in your draw() call. Be careful with vertexCount: it's not the number of floats (as your question implies) but the number of vertices, which would be 1 in your example, or the number of particles in my suggested approach (if I'm correct in assuming that the 3 stands for "x, y, and z of my vertex").
And since you only have particles, glDrawElements(...); would probably already fit your needs.
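A minimal sketch of such a draw() call under the one-vertex-array approach above (pid and vaid follow the naming used later in this answer; particleCount is a hypothetical count):
void draw()
{
    glUseProgram(pid);       // the linked shader program
    glBindVertexArray(vaid); // restores the attribute state stored in the VAO
    glDrawArrays(GL_POINTS, 0, particleCount); // one vertex per particle
}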
Am I right to instantiate the class controlling the particles after instantiating OpenGL and before the main loop?
Your instantiation order is probably correct that way. You should definitely do all instantiations before entering the main loop in your case.
How do I manage that each particle draws itself while erasing the position where it was drawn before?
If I understand your last question correctly: simply by changing the elements in your buffer objects (glBufferData(...);). Since you clear the screen and swap buffers after each loop iteration, this will make the particles move. Just update their positions with an update(time) call, e.g. pos = pos + dir * time;, put the new positions into a buffer, and push that buffer with glBufferData(...) to the vertex array. Remember to bind the vertex array before pushing the buffer.
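A minimal sketch of that per-frame upload, assuming positions is a std::vector<GLfloat> holding x, y and z for every particle and vao/vbo were created during initialization:
glBindVertexArray(vao);             // bind the vertex array first
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER,
             positions.size() * sizeof(GLfloat),
             positions.data(),
             GL_DYNAMIC_DRAW);      // the data changes every frame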
Some additional things I'd like to point out.
glEnableVertexAttribArray(1); enables a vertex attribute in your shader program so that data can be passed to that attribute. You should create a shader program:
GLuint id = glCreateProgram();
// ... create and attach shaders here
// then bind attribute locations, e.g. positionMC
glBindAttribLocation(id, 0, "positionMC");
glLinkProgram(id);
And after initializing the vertex array with glGenVertexArrays(), you should enable on it all attributes your shader program needs. In this example positionMC would be at location 0, so you would call something like:
glUseProgram(pid);
glBindVertexArray(vaid);
glEnableVertexAttribArray(0); // location 0, as bound to positionMC above
glVertexAttribPointer(...);
This only has to be done once, since OpenGL stores this state in each particular vertex array object. By rebinding a vertex array you restore that state.
In the main loop, all you have to do now is call your update and draw methods, e.g.:
handleInputs();
update(deltaTime);
glClear(...);
draw();
swapBuffers();
I set the format with:
QGLFormat format = QGLFormat(QGL::DoubleBuffer | QGL::DepthBuffer);
setFormat(format);
in the constructor.
Then in initializeGL I turn depth testing on:
void VoxelEditor::initializeGL()
{
glClearDepth(2000.0); // Enables Clearing Of The Depth Buffer
glEnable(GL_DEPTH_TEST); // Enables Depth Testing
glDepthFunc(GL_LESS); // The Type Of Depth Test To Do
glShadeModel(GL_SMOOTH); // Enables Smooth Color Shading
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST); // Really Nice Perspective Calculations
}
In paintGL I clear the depth buffer.
void VoxelEditor::paintGL()
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
draw();
}
I remember it used to work with fewer vertices, so it might be that I'm using too many for the depth buffer to handle(?).
I have 32*32*32 voxels, about half of which are drawn most of the time, so roughly 98304 quads.
However, depth testing still does not work and shows the quads in the order they are drawn.
so it might be that I'm using too many for the depth buffer to handle(?)
The depth buffer is oblivious to vertices. All it sees are incoming fragments and it doesn't matter how many.
void VoxelEditor::initializeGL()
{
glClearDepth(2000.0); // Enables Clearing Of The Depth Buffer
This line does not enable clearing. It sets the value the depth buffer is cleared to, which must be in the range 0…1. The clearing depth is in normalized device coordinates, i.e. after the modelview and projection transforms and the homogeneous divide have been applied. The default value is 1.
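A minimal correction, assuming the default (far plane) is what you want:
glClearDepth(1.0); // clear value in NDC; must lie within [0, 1]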
glEnable(GL_DEPTH_TEST); // Enables Depth Testing
glDepthFunc(GL_LESS); // The Type Of Depth Test To Do
glShadeModel(GL_SMOOTH); // Enables Smooth Color Shading
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST); // Really Nice Perspective Calculations
No, that's not what it does. The perspective transformation always works the same way regardless of this hint. What it means is that texture coordinates may be interpolated in a different (perspective-correct) way to enhance quality.
}
I always recommend putting those calls in the drawing functions, because they don't initialize anything; they set drawing state. OpenGL is a state machine, and an important rule of state machines is that you either keep track of their state or put them into a known state whenever you're going to use them.
I fixed this by setting the depth-test state in draw():
glMatrixMode(GL_MODELVIEW); // drawing state, set right before it is used
glEnable(GL_DEPTH_TEST);    // enable depth testing each frame in draw()