I've got a very basic scene rendering with a vertex and color array (some code below). I see how to bind the vertices and colors to the vertex shader's attributes. Currently this vertex and color information lives in a local array variable in my render function, as you can see below, and then glDrawArrays(GL_TRIANGLES, 0, n) is called to draw them each frame.
I'm trying to picture the architecture of a larger moving scene where there are lots of models with lots of vertices that need to be loaded and unloaded.
The naïve way I can imagine extending this would be to place all the vertex/color data in one big array in main memory and then call glDrawArrays once per frame. That seems inefficient to me: on every frame the vertex and color information changes only in parts, so rearranging and reuploading an entire monolithic vertex array every frame seems wrong.
What do 3D games and so forth do about this? Do they place all the vertices in one big array in main memory each frame and then call glDrawArrays once? If not, what architecture and OpenGL calls do they generally use to communicate all the vertices of the scene to the GPU? Is it possible to load vertices into GPU memory and then reuse them for several frames? Is it possible to draw multiple vertex arrays from multiple places in main memory?
static const char *vertexShaderSource =
R"(
attribute highp vec4 posAttr;
attribute lowp vec4 colAttr;
varying lowp vec4 col;
uniform highp mat4 matrix;
void main()
{
col = colAttr;
gl_Position = matrix * posAttr;
}
)";
static const char *fragmentShaderSource =
R"(
varying lowp vec4 col;
void main()
{
gl_FragColor = col;
}
)";
void Window::render()
{
glViewport(0, 0, width(), height());
glClear(GL_COLOR_BUFFER_BIT);
m_program->bind();
constexpr float delta = 0.001;
if (forward)
eyepos += QVector3D{0,0,+delta};
if (backward)
eyepos += QVector3D{0,0,-delta};
if (left)
eyepos += QVector3D{-delta,0,0};
if (right)
eyepos += QVector3D{delta,0,0};
QMatrix4x4 matrix;
matrix.perspective(60, 4.0/3.0, 0.1, 10000.0);
matrix.lookAt(eyepos, eyepos+direction, {0, 1, 0});
matrix.rotate(timer.elapsed() / 100.0f, 0, 1, 0);
m_program->setUniformValue("matrix", matrix);
QVector3D vertices[] =
{
{0.0f, 0.0f, 0.0f},
{1.0f, 0.0f, 0.0f},
{1.0f, 1.0f, 0.0f},
};
QVector3D colors[] =
{
{1.0f, 0.0f, 0.0f},
{1.0f, 1.0f, 0.0f},
{1.0f, 0.0f, 1.0f},
};
m_program->setAttributeArray("posAttr", vertices);
m_program->setAttributeArray("colAttr", colors);
m_program->enableAttributeArray("posAttr");
m_program->enableAttributeArray("colAttr");
glDrawArrays(GL_TRIANGLES, 0, 3);
m_program->disableAttributeArray("posAttr");
m_program->disableAttributeArray("colAttr");
m_program->release();
++m_frame;
}
It depends on how you want to structure things.
If you have a detailed model that needs to be moved, rotated and otherwise transformed without changing its shape, then a clear way to do it is to load that model into e.g. a VBO (I'm not sure what your setAttributeArray does under the hood). That upload only has to happen once, before the first frame; subsequent frames can render the model with any transformation you want simply by setting the model-view matrix uniform, which is a much smaller chunk of data going over the bus.
Vertex shaders can and should be used to let the GPU help with, or take over entirely, the computation and application of these kinds of transformations.
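A minimal sketch of that split, using Qt's QOpenGLBuffer wrapper around a VBO (m_vbo, vertexCount, and the reuse of your existing m_program and vertices array are illustrative assumptions):

// once, e.g. in initializeGL(): upload the model's vertices to GPU memory
QOpenGLBuffer m_vbo(QOpenGLBuffer::VertexBuffer);   // a member variable in practice
m_vbo.create();
m_vbo.bind();
m_vbo.allocate(vertices, int(sizeof(vertices)));    // copies the data into the GPU-side buffer
m_vbo.release();

// every frame, in render(): reuse the data already on the GPU
m_program->bind();
m_program->setUniformValue("matrix", matrix);       // only the transform travels over the bus
m_vbo.bind();
m_program->setAttributeBuffer("posAttr", GL_FLOAT, 0, 3);   // read posAttr from the bound VBO
m_program->enableAttributeArray("posAttr");
glDrawArrays(GL_TRIANGLES, 0, vertexCount);
m_program->disableAttributeArray("posAttr");
m_vbo.release();
m_program->release();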
Related
I have been trying to batch render two different pictures. I have two different QOpenGLTexture objects that I want to draw in a single draw call with batch rendering, but I am struggling. Both texture objects have IDs, but only the last texture object's image is drawn. I believe my problem is with how I set up the uniforms or the fragment shader.
//..............Setting up uniform...............//
const GLuint vals[] = {m_texture1->textureId(), m_texture2->textureId()};
m_program->setUniformValueArray("u_TextureID", vals, 2);
//..............frag Shader.....................//
#version 330 core
out vec4 color;
in vec2 v_textCoord; // Texture coordinate
in float v_index; // (0, 1) Vertex for which image to draw.
// 0 would draw the image of the first texture object
uniform sampler2D u_Texture[2];
void main()
{
int index = int(v_index);
color = texture(u_Texture[index], v_textCoord);
};
I've tried experimenting with the index value in the frag shader but it only draws the last texture image or blacks out. I tried implementing it how you would with openGL but have had no luck.
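For what it's worth, a sampler uniform holds a texture unit index rather than a texture object ID, so a common setup looks roughly like this (a sketch; it assumes the uniform is named u_Texture as in the shader above and that both textures are QOpenGLTexture objects):

// bind each texture to its own texture unit
m_texture1->bind(0);                      // QOpenGLTexture::bind(uint unit)
m_texture2->bind(1);
// pass the unit indices (not textureId() values) to the sampler array
const GLint units[] = { 0, 1 };
m_program->setUniformValueArray("u_Texture", units, 2);

Note also that in GLSL 3.30 an array of samplers may only be indexed with a constant integral expression, so indexing it with a value derived from a vertex attribute is not guaranteed to work; that restriction is only relaxed (to dynamically uniform indexing) in GLSL 4.00.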
I'm trying to use glBlendFunc in a QOpenGLWidget (in paintGL), but objects do not mix (alpha works).
My code:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
glEnable(GL_CULL_FACE);
glEnable(GL_BLEND);
glBlendFunc(blenFunc, GL_ONE);
m_world.setToIdentity();
m_world.rotate((m_xRot / 16.0f), 1, 0, 0);
m_world.rotate(m_yRot / 16.0f, 0, 1, 0);
m_world.rotate(m_zRot / 16.0f, 0, 0, 1);
QOpenGLVertexArrayObject::Binder vaoBinder(&m_vao);
m_program->bind();
m_tex->bind();
fillYoffsetLightning();
const GLfloat scaleFactor = 0.05f;
m_world.scale(scaleFactor, scaleFactor, 0.0f);
m_world.translate(0.f, 0.0f, 0.0f);
const GLfloat fact = 1 / scaleFactor;
const uint8_t X = 0, Y = 1;
for(int i = 0; i < maxElem; ++i) {
const GLfloat offX = m_ELECT[i][X] * fact;
const GLfloat offY = m_ELECT[i][Y] * fact;
m_world.translate(offX, offY);
m_program->setUniformValue(m_projMatrixLoc, m_proj);
m_program->setUniformValue(m_mvMatrixLoc, m_camera * m_world);
QMatrix3x3 normalMatrix = m_world.normalMatrix();
m_program->setUniformValue(m_normalMatrixLoc, normalMatrix);
glDrawArrays(GL_TRIANGLE_FAN, 0, m_logo.vertexCount());
update();
m_world.translate(-offX, -offY);
}
m_program->release();
shaders are simple:
// vertex
"attribute highp vec4 color;\n"
"varying highp vec4 colorVertex;\n"
//......... main:
"colorVertex = color;\n"
// fragment
"varying highp vec4 colorVertex;\n"
//......... main:
"gl_FragColor = colorVertex;\n"
The colors: a pentagon is drawn with a gradient from white at the center to blue at the edges (the center color is (1, 1, 1), the edges are (0, 0, 0.5)).
(screenshot)
Why is this happening?
If you want to achieve a blending effect, then you have to disable the depth test:
glDisable(GL_DEPTH_TEST);
Note, the default depth test function is GL_LESS. If a fragment is drawn in the place of a previous fragment, it is discarded by the depth test, because this condition is not fulfilled.
If the depth test is disabled, then the fragments are "blended" by the blending function (glBlendFunc) and equation (glBlendEquation).
I recommend using the following blending function:
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
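Put together, a typical setup in paintGL might look like this (just a sketch):

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);   // result = src.a * src + (1 - src.a) * dst
glBlendEquation(GL_FUNC_ADD);                        // the default blend equation
glDisable(GL_DEPTH_TEST);                            // later fragments blend over earlier ones
// ... draw the translucent geometry ...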
In my case (Qt 5.15.2) I found that using a color call with no alpha component (e.g. glColor3f(1, 0, 0)) causes blending to be disabled for any subsequent rendering. To my surprise, I could not even recover it by re-issuing these commands:
glEnable(GL_BLEND); // wtf has no effect
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
Blending simply remained disabled until the next paint began. This did not happen with the original QGLWidget class. It only happens with QOpenGLWidget and only on Windows (Mac and Linux are fine).
The good-enough solution for me was to replace any non-alpha color calls with their alpha equivalents, at least where blending is needed later in the render. E.g.
glColor3f(1,0,0); // before
glColor4f(1,0,0,1); // after
Another issue that might come up is using QPainter along with direct rendering, because QPainter will trash your OpenGL state. See the mention of 'beginNativePainting' in the docs:
https://doc.qt.io/qt-5/qopenglwidget.html#painting-techniques
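A rough sketch of that pattern (the widget and the drawing calls are placeholders):

void MyWidget::paintGL()
{
    QPainter painter(this);

    painter.beginNativePainting();     // save the GL state QPainter relies on
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    // ... raw OpenGL drawing ...
    painter.endNativePainting();       // restore it before using QPainter again

    painter.drawText(10, 20, "overlay drawn with QPainter");
}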
EDIT: I'll add this here because my comment on Rabbid's answer was deleted for some reason - the depth test does NOT need to be disabled to use blending. Rabbid might be thinking of disabling depth buffer writes, which is sometimes done so that all translucent objects can be drawn without having to sort them from furthest to nearest:
Why we disable Z-write in blending
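A sketch of that approach: keep the depth test enabled so translucent fragments are still hidden by opaque geometry in front of them, but disable depth writes while drawing them:

// opaque pass: normal depth test and depth writes
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);
// ... draw opaque objects ...

// translucent pass: still tested against the opaque depth, but no depth writes
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDepthMask(GL_FALSE);
// ... draw translucent objects, ideally sorted back to front ...
glDepthMask(GL_TRUE);   // restore for the next frame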
Related to my other question, I'm trying to render a segmentation mask to enable object picking. But I am not able to achieve the desired result.
Option 1 did not work at all. I was not able to retrieve the content of color attachment 1, or check if it existed at all (I created the attachment using only native OpenGL calls).
Using this post, I was able to reproduce the green.png and red.png images by creating a custom frame buffer with a second color attachment which is then bound and drawn to (all in paintGL()).
Somehow I had to use that person's framebuffer creation code, because when I created the framebuffer myself there was always a warning saying "toImage called for missing color attachment", although I attached the color attachment and textures() called on the framebuffer returned two objects. I then tried to insert my rendering code after
GLfloat red[4] = { 1.0f, 0.0f, 0.0f, 1.0f };
f->glClearBufferfv(GL_COLOR, 0, red);
GLfloat green[4] = { 0.0f, 1.0f, 0.0f, 1.0f };
f->glClearBufferfv(GL_COLOR, 1, green);
but this still resulted in the red and green image. The code renders fine when using the normal default framebuffer, however. I adapted the shader to (short version for testing purposes):
void main() {
gl_FragData[0] = vec4(1.0, 1.0, 1.0, 1.0);
gl_FragData[1] = vec4(0.0, 0.0, 0.0, 1.0);
}
Since I was able to produce the red and green image, I'm assuming there must be a way to retrieve the frag data with this custom framebuffer. The solution I have right now is a complete (!) copy of the program with another dedicated fragment shader whose sole purpose is to render the segmentation, and all OpenGL draw calls are performed a second time. As you can guess, this is a somewhat ugly solution, although the scene is not that large and my computer handles it easily. Has anyone got an idea/link?
If you want to write to multiple render targets in a Fragment shader, then you have to declare multiple output variables:
#version 330
layout(location = 0) out vec4 fragData0;
layout(location = 1) out vec4 fragData1;
void main()
{
fragData0 = vec4(1.0, 1.0, 1.0, 1.0);
fragData1 = vec4(0.0, 0.0, 0.0, 1.0);
}
From GLSL version 1.10 (#version 110, OpenGL 2.0) to GLSL version 1.50 (#version 150, OpenGL 3.2), the same can be achieved by writing to the built-in fragment shader output variable gl_FragData.
void main()
{
gl_FragData[0] = vec4(1.0, 1.0, 1.0, 1.0);
gl_FragData[1] = vec4(0.0, 0.0, 0.0, 1.0);
}
See also Fragment Shader Outputs - Through The Ages
To use multiple render targets in Qt, a second color attachment has to be added to the framebuffer, and the list of color buffers has to be specified with glDrawBuffers:
QOpenGLShaderProgram *program;
QOpenGLFramebufferObject *fb;
int fb_width;
int fb_height;
fb = new QOpenGLFramebufferObject( fb_width, fb_height );
fb->addColorAttachment( fb_width, fb_height );
glViewport(0, 0, fb_width, fb_height);
fb->bind();
glClearColor(0, 0, 0, 1);
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
GLenum buffers[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers(2, buffers);
program->bind();
// ..... do the drawing
program->release();
fb->release();
The OpenGL texture objects that are attached to the framebuffer can be accessed with:
QVector<GLuint> fb_textures = fb->textures();
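The content of the second color attachment can then be read back as well, e.g. with the QOpenGLFramebufferObject::toImage overload that takes an attachment index (available since Qt 5.6, if I remember correctly):

QImage beauty       = fb->toImage(true, 0);   // color attachment 0: the normal rendering
QImage segmentation = fb->toImage(true, 1);   // color attachment 1: the segmentation mask
segmentation.save("segmentation.png");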
I am trying to draw a quad using a Vertex Buffer Object in OpenGL with Qt.
Here is my geometry:
numVertices = 4;
vertices = new float[3*numVertices];
int i = 0;
vertices[i++] = 0.0f; vertices[i++] = 0.0f; vertices[i++] = 0.0f; // (0,0,0)
vertices[i++] = 1.0f; vertices[i++] = 0.0f; vertices[i++] = 0.0f; // (1,0,0)
vertices[i++] = 1.0f; vertices[i++] = 1.0f; vertices[i++] = 0.0f; // (1,1,0)
vertices[i++] = 0.0f; vertices[i++] = 1.0f; vertices[i++] = 0.0f; // (0,1,0)
i = 0;
// split quad into two triangles:
numTriangles = 2;
indices = new unsigned int[numTriangles*3];
indices[i++] = 0; indices[i++] = 1; indices[i++] = 2;
indices[i++] = 0; indices[i++] = 2; indices[i++] = 3;
Next in initializeGL method:
QGLBuffer vertexBuffer;
vertexBuffer.create();
vertexBuffer.bind();
vertexBuffer.allocate(vertices, numVertices*sizeof(float));
QGLShaderProgram* shaderProgram_ = new QGLShaderProgram;
shaderProgram_->addShaderFromSourceFile(QGLShader::Vertex, "C:/src/light.vert.glsl");
shaderProgram_->addShaderFromSourceFile(QGLShader::Fragment, "C:/src/light.frag.glsl");
bool ok = shaderProgram_->link();
ok = shaderProgram_->bind();
I think all the VBO part does is copy the vertices to the GPU? (Why so many lines?)
The shader part worked fine with old style glBegin(GL_QUADS);
Next in my paintGL method:
shaderProgram_->setAttributeBuffer("vertex", GL_FLOAT, 0, 3, 0);
shaderProgram_->enableAttributeArray("vertex");
glDrawElements(GL_TRIANGLES, numTriangles, GL_UNSIGNED_INT, indices);
What are the first two lines doing? Maybe telling the shader that there is a vertex buffer named "vertex" of type GL_FLOAT?
However, I did not specify any name when creating the VBO! How does OpenGL know that this is "vertex"?
Anyway, I am not seeing anything!
Are there any steps I am missing?
My shaders are a simple pass-through:
# version 120
varying vec4 color;
void main() {
vec4 vertex = gl_Vertex;
// pass-through:
gl_Position = gl_ProjectionMatrix * gl_ModelViewMatrix * vertex;
color = gl_Color;
}
# version 120
varying vec4 color;
void main (void)
{
// pass-through:
gl_FragColor = color;
}
I guess you've lost a factor of 3 here:
vertexBuffer.allocate(vertices, numVertices*sizeof(float));
->
vertexBuffer.allocate(vertices, numVertices*sizeof(float)*3);
I think all the VBO part does is copy the vertices to the GPU? (Why so many lines?)
Yes. That's how OpenGL works. To store something in a VBO you have to create it, bind it, and copy the data into it.
What are the first two lines doing? Maybe telling the shader that there is a vertex buffer named "vertex" of type GL_FLOAT? However, I did not specify any name when creating the VBO! How does OpenGL know that this is "vertex"?
First line:
QGLShaderProgram::setAttributeBuffer(): "Sets an array of vertex values on the attribute called name in this shader program, starting at a specific offset in the currently bound vertex buffer." - from the manual. Again, that's how OpenGL works: it's a state machine. You bind a specific buffer to the GL_ARRAY_BUFFER binding point, then tell OpenGL that this is the buffer where the "vertex" attribute data is stored. "vertex" is the attribute name you have in your shader program. (Maybe you should change it to "gl_Vertex".) I guess Qt calls glGetAttribLocation() to find the location of your attribute and then calls glVertexAttribPointer().
Second line:
OpenGL must know that it should copy data from some buffer to the special place where the shader program can find it. This is done by enabling the attribute array for a specific location. I guess Qt calls glGetAttribLocation() and glEnableVertexAttribArray() here.
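Roughly, the raw OpenGL equivalent of those two Qt calls would be something like this (a sketch against the GL 2.x API; programId stands for your linked program object, and the vertex buffer is assumed to still be bound):

// setAttributeBuffer("vertex", GL_FLOAT, 0, 3, 0) boils down to:
GLint loc = glGetAttribLocation(programId, "vertex");    // look up the attribute's location
glVertexAttribPointer(loc, 3, GL_FLOAT, GL_FALSE, 0, 0); // 3 floats per vertex, tightly packed,
                                                          // offset 0 into the bound GL_ARRAY_BUFFER
// enableAttributeArray("vertex") boils down to:
glEnableVertexAttribArray(loc);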
BTW:
Do you specify any modelview and projection matrices in your code? I'm not sure whether they are set by default. Try removing these values from the shaders for testing purposes.
Added:
glDrawElements(GL_TRIANGLES, numTriangles, GL_UNSIGNED_INT, indices);
The second parameter is not the number of triangles but the number of indices that will be read from the indices array. It should be numTriangles * 3 here.
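So the call should read something like:

// 2 triangles * 3 indices each = 6 indices to fetch from the index array
glDrawElements(GL_TRIANGLES, numTriangles * 3, GL_UNSIGNED_INT, indices);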
I am writing a rendering engine using Qt and am running into problems with texturing my models.
I have a very simple shader to test texturing:
vertex shader:
attribute vec4 Vertex;
attribute vec2 texcoords;
uniform mat4 mvp;
varying vec2 outTexture;
void main() {
gl_Position = mvp * Vertex;
outTexture = texcoords;
}
and fragment shader:
uniform sampler2D tex;
varying vec2 outTexture;
void main() {
vec4 color = texture2D(tex, outTexture);
gl_FragColor = color;
}
I am passing my texture coordinates to the shaders correctly.
My problem is with binding a QImage and sending it to its texture uniform.
I am using the following code to bind the texture:
const QString& filename;
GLuint m_texture;
QImage image(filename);
image = image.convertToFormat(QImage::Format_ARGB32);
glGenTextures(1, &m_texture);
glBindTexture(GL_TEXTURE_2D, m_texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, image.width(), image.height(), 0, GL_BGRA, GL_UNSIGNED_BYTE, image.bits());
glGenerateMipmap(GL_TEXTURE_2D);
glEnable(GL_TEXTURE_2D);
The shader works and I can pass a uniform to the matrix and attributes to the vertex and texture coordinates, but when I try to send a uniform to the texture in the same way, as in:
effect->setUniformValue(effect->uniformLocation("tex", texture->m_texture));
the program crashes with an "access violation reading location" error, with glGetError() returning "invalid enumerant".
Interestingly, when I run the program without attempting to send the texture to the sampler, the texture actually appears on the model. This makes me think the way I'm binding it has something to do with legacy texture handling, and that the texture is being bound to a particular texture unit which is being picked up by the shader. This is not the effect I want, because I want the programmer to be able to state explicitly at draw time which texture should be passed to the uniform (just as any other uniform is set).
How can I pass the texture to its sampler, and what do I need to change when binding a texture?
Change it to
effect->setUniformValue(effect->uniformLocation("tex"), texture->m_texture);
or
effect->setUniformValue("tex", texture->m_texture);
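Note also that a sampler uniform ultimately holds a texture unit index, not a texture object ID, so if you want to state explicitly at draw time which texture feeds the sampler, the usual pattern is roughly (a sketch):

glActiveTexture(GL_TEXTURE0);                      // select texture unit 0
glBindTexture(GL_TEXTURE_2D, texture->m_texture);  // bind the texture object to that unit
effect->setUniformValue("tex", 0);                 // point the sampler at unit 0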
Try converting the QImage using:
image = QGLWidget::convertToGLFormat(image);
Another thought: if you are using ES 2, then GL_RGBA8 is not valid. I think GL_BGRA may be an optional extension, or not available in ES 2. Hope this helps.