I'm trying to use glBlendFunc in a QOpenGLWidget (in paintGL), but the objects do not blend (alpha does work).
My code:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
glEnable(GL_CULL_FACE);
glEnable(GL_BLEND);
glBlendFunc(blenFunc, GL_ONE);
m_world.setToIdentity();
m_world.rotate((m_xRot / 16.0f), 1, 0, 0);
m_world.rotate(m_yRot / 16.0f, 0, 1, 0);
m_world.rotate(m_zRot / 16.0f, 0, 0, 1);
QOpenGLVertexArrayObject::Binder vaoBinder(&m_vao);
m_program->bind();
m_tex->bind();
fillYoffsetLightning();
const GLfloat scaleFactor = 0.05f;
m_world.scale(scaleFactor, scaleFactor, 0.0f);
m_world.translate(0.f, 0.0f, 0.0f);
const GLfloat fact = 1 / scaleFactor;
const uint8_t X = 0, Y = 1;
for(int i = 0; i < maxElem; ++i) {
const GLfloat offX = m_ELECT[i][X] * fact;
const GLfloat offY = m_ELECT[i][Y] * fact;
m_world.translate(offX, offY);
m_program->setUniformValue(m_projMatrixLoc, m_proj);
m_program->setUniformValue(m_mvMatrixLoc, m_camera * m_world);
QMatrix3x3 normalMatrix = m_world.normalMatrix();
m_program->setUniformValue(m_normalMatrixLoc, normalMatrix);
glDrawArrays(GL_TRIANGLE_FAN, 0, m_logo.vertexCount());
update();
m_world.translate(-offX, -offY);
}
m_program->release();
The shaders are simple:
// vertex
"attribute highp vec4 color;\n"
"varying highp vec4 colorVertex;\n"
//......... main:
"colorVertex = color;\n"
// fragment
"varying highp vec4 colorVertex;\n"
//......... main:
"gl_FragColor = colorVertex;\n"
The color is a gradient: a pentagon is drawn going from white at the center (1, 1, 1) to blue at the edges (0, 0, 0.5).
screenshot
Why is this happening?
If you want to achieve a blending effect, then you have to disable the depth test:
glDisable(GL_DEPTH_TEST);
Note, the default depth test function is GL_LESS. If a fragment is drawn at the same place as a previous fragment, then it is discarded by the depth test, because this condition is not fulfilled.
If the depth test is disabled, then the fragments are "blended" by the blending function (glBlendFunc) and equation (glBlendEquation).
I recommend using the following blending function:
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
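For example, a minimal paintGL sketch along those lines (the widget class name is illustrative; m_program mirrors the code in the question):

void GLWidget::paintGL()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glDisable(GL_DEPTH_TEST);                          // let overlapping fragments reach the blend stage
    glEnable(GL_BLEND);                                // mix incoming fragments with the framebuffer
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // classic "source over" alpha blending

    m_program->bind();
    // ... set the uniforms and issue the glDrawArrays calls exactly as before ...
    m_program->release();
}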
In my case (Qt 5.15.2) I found that using a color call with no alpha component (e.g. glColor3f(1, 0, 0)) causes blending to be disabled for any subsequent rendering. To my surprise, I could not even recover it by re-issuing these commands:
glEnable(GL_BLEND); // surprisingly, has no effect
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
Blending simply remained disabled until the next paint began. This did not happen with the original QGLWidget class; it only happens with QOpenGLWidget, and only on Windows (Mac and Linux are fine).
The good-enough solution for me was to replace any non-alpha color calls with alpha equivalents, at least for cases where you need to use blending later in the render, e.g.:
glColor3f(1,0,0); // before
glColor4f(1,0,0,1); // after
Another issue that might come up is if you use QPainter along with direct rendering, because the QPainter will trash your OpenGL state. See the mention of 'beginNativePainting' in the docs:
https://doc.qt.io/qt-5/qopenglwidget.html#painting-techniques
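A minimal sketch of that pattern (widget and painter names are illustrative, not from the question):

void GLWidget::paintGL()
{
    QPainter painter(this);

    painter.beginNativePainting();   // QPainter saves its GL-related state; raw GL calls are safe now
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    // ... raw OpenGL draw calls ...
    painter.endNativePainting();     // restores the state QPainter needs

    painter.drawText(10, 20, "2D overlay drawn by QPainter");
}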
EDIT: I'll add this here because my comment on Rabbid's answer was deleted for some reason: the depth test does NOT need to be disabled to use blending. Rabbid might be thinking of disabling depth buffer writes, which is sometimes done to allow drawing all translucent objects without having to sort them from furthest to nearest:
Why we disable Z-write in blending
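A sketch of that depth-write approach, with hypothetical drawOpaqueObjects()/drawTranslucentObjects() helpers standing in for the actual draw calls:

// Opaque pass: normal depth test and depth writes
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);
drawOpaqueObjects();

// Translucent pass: still test against the opaque depth, but do not write depth,
// so translucent fragments do not occlude each other
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDepthMask(GL_FALSE);
drawTranslucentObjects();
glDepthMask(GL_TRUE);   // restore for the next frame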
Related
I'm trying to generate realistic stars for an open source game I'm working on. I'm generating the stars using principles covered here. I'm using the three.js library in a Chromium engine (NW.js). The problem I've found is that the star glow fades into black instead of into transparency.
Whilst it looks nice for a single star,
multiple stars have a serious problem:
My code is as follows:
Vertex shader
attribute vec3 glow;
varying vec3 vGlow;
void main() {
vGlow = glow;
vec4 mvPosition = modelViewMatrix * vec4(position, 1.0);
gl_PointSize = 100.0;
gl_Position = projectionMatrix * mvPosition;
}
Fragment shader
varying vec3 vGlow;
void main() {
float starLuminosity = 250.0;
float invRadius = 60.0;
float invGlowRadius = 2.5;
// Get position relative to center.
vec2 position = gl_PointCoord;
position.x -= 0.5;
position.y -= 0.5;
// Airy disk calculation.
float diskScale = length(position) * invRadius;
vec3 glow = vGlow / pow(diskScale, invGlowRadius);
glow *= starLuminosity;
gl_FragColor = vec4(glow, 1.0);
}
I've tried discarding pixels that are darker, but this does not solve the problem, it only hides it a tad:
if (gl_FragColor.r < 0.1 && gl_FragColor.g < 0.1 && gl_FragColor.b < 0.1) {
discard;
}
The actual effect I'm after is as follows,
but I have no idea how to achieve this.
Any advice will be appreciated.
You cannot achieve this effect in the fragment shader alone, because each star is rendered as a separate mesh or primitive; the overlapping results have to be combined by blending. You have to enable blending before rendering the geometry:
gl.enable(gl.BLEND);
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
Also make sure that the depth test is disabled.
Additionally, you must set the alpha channel in the fragment shader. For example, change
gl_FragColor = vec4(glow, 1.0);
to
gl_FragColor = vec4(glow, (glow.r + glow.g + glow.b) / 3.0 * 1.1 - 0.1);
Related to my other question, I'm trying to render a segmentation mask to enable object picking. But I am not able to achieve the desired result.
Option 1 did not work at all. I was not able to retrieve the content of color attachment 1, or check if it existed at all (I created the attachment using only native OpenGL calls).
Using this post, I was able to reproduce the green.png and red.png images by creating a custom frame buffer with a second color attachment which is then bound and drawn to (all in paintGL()).
Somehow I had to use that person's framebuffer creation code, because when I created the framebuffer myself there was always a warning saying "toImage called for missing color attachment", although I had attached the color attachment and textures() called on the framebuffer returned two objects. I then tried to insert my rendering code after
GLfloat red[4] = { 1.0f, 0.0f, 0.0f, 1.0f };
f->glClearBufferfv(GL_COLOR, 0, red);
GLfloat green[4] = { 0.0f, 1.0f, 0.0f, 1.0f };
f->glClearBufferfv(GL_COLOR, 1, green);
but this still resulted in the red and green image, even though the code renders fine when using the normal default framebuffer. I adapted the shader to (short version for testing purposes):
void main() {
gl_FragData[0] = vec4(1.0, 1.0, 1.0, 1.0);
gl_FragData[1] = vec4(0.0, 0.0, 0.0, 1.0);
}
Since I was able to produce the red and green image, I'm assuming there must be a way to retrieve the frag data with this custom framebuffer. The solution I have right now is a complete (!) copy of the program plus another dedicated fragment shader whose sole purpose is to render the segmentation, and performing all OpenGL draw calls a second time. As you can guess, this is a somewhat ugly solution, although the scenery is not that large and my computer handles it easily. Has anyone got an idea/link?
If you want to write to multiple render targets in a Fragment shader, then you have to declare multiple output variables:
#version 330
layout(location = 0) out vec4 fragData0;
layout(location = 1) out vec4 fragData1;
void main()
{
fragData0 = vec4(1.0, 1.0, 1.0, 1.0);
fragData1 = vec4(0.0, 0.0, 0.0, 1.0);
}
From GLSL version 1.1 (#version 110, OpenGL 2.0) to GLSL version 1.5 (#version 150, OpenGL 3.2), the same can be achieved by writing to the built-in fragment shader output variable gl_FragData.
void main()
{
gl_FragData[0] = vec4(1.0, 1.0, 1.0, 1.0);
gl_FragData[1] = vec4(0.0, 0.0, 0.0, 1.0);
}
See also Fragment Shader Outputs - Through The Ages
To use multiple render targets in Qt, a 2nd color attachment has to be added to the framebuffer, and the list of color buffers has to be specified with glDrawBuffers:
QOpenGLShaderProgram *program;
QOpenGLFramebufferObject *fb;
int fb_width;
int fb_height;
fb = new QOpenGLFramebufferObject( fb_width, fb_height );
fb->addColorAttachment( fb_width, fb_height );
glViewport(0, 0, fb_width, fb_height);
fb->bind();
glClearColor(0, 0, 0, 1);
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
GLenum buffers[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers(2, buffers);
program->bind();
// ..... do the drawing
program->release();
fb->release();
The OpenGL texture objects which are attached to the framebuffer can be accessed with:
QVector<GLuint> fb_textures = fb->textures();
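To read the second attachment back on the CPU side, for example to pick the object under the cursor, newer Qt 5 versions offer a toImage overload that takes the color attachment index (a sketch, assuming such a version is available):

// Read back color attachment 1 (the segmentation mask) as a QImage
QImage segmentation = fb->toImage(true, 1);
segmentation.save("segmentation.png");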
All figures outside of the z-axis range (-1; 1) get clipped. Here is some code:
void MainWindow::initializeGL()
{
glDepthRange(-2,2);
glEnable(GL_TEXTURE_2D);
glDisable(GL_COLOR_MATERIAL);
glEnable(GL_BLEND);
glEnable(GL_POLYGON_SMOOTH);
//glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glClearColor(1, 1, 0, 0);
glDisable(GL_DEPTH_TEST); // Depth testing disabled
glDepthFunc(GL_LESS); // The Type Of Depth Test To Do
glShadeModel(GL_SMOOTH); // Enables Smooth Color Shading
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
}
void MainWindow::paintGL(){
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
// glEnable(GL_TEXTURE_2D);
// glBindTexture(GL_TEXTURE_2D,texture);
// glTexSubImage2D(GL_TEXTURE_2D, 0, 0,0 , image.width(), image.height(), 0, 0, image.bits() );
//glMatrixMode(GL_MODELVIEW);
//glEnable(GL_DEPTH_TEST);
glTranslated(0.0, 0.0, 1.9);
qglColor(Qt::black);
glBegin(GL_TRIANGLES);
glVertex3d(-0.1,0.1,1);
glVertex3d(-0.1,-0.1,-1);
glVertex3d(0.1,-0.1,0);
glEnd();
}
Any idea why does it happen?
This is actually completely normal behavior.
When you use identity modelview and projection matrices, your coordinates are in clip-space. The default W value for a 3D vertex in OpenGL is 1.0 (vertices are always 4D), and clip-space -> NDC works by dividing each component of a vertex by its W component and then clipping anything with a coordinate outside the range [-1,1].
I think what is confusing you is the glDepthRange (...) call. That does not affect clipping. Depth range is part of the viewport transformation, which happens after clipping.
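If the goal is to keep geometry with larger z values visible, the clip volume has to be enlarged through the projection matrix instead, for example with an orthographic projection (a legacy-GL sketch in the style of the code in the question; the exact near/far values are illustrative):

void MainWindow::resizeGL(int w, int h)
{
    glViewport(0, 0, w, h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    // With near = -2 and far = 2, eye-space z values in [-2, 2] are mapped
    // into NDC and survive clipping, instead of only [-1, 1].
    glOrtho(-1.0, 1.0, -1.0, 1.0, -2.0, 2.0);
    glMatrixMode(GL_MODELVIEW);
}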
I've got a very basic scene rendering with a vertex and color array (some code below). I see how to bind the vertices and colors to the vertex shader attributes. Currently this vertex and color information is in a local array variable in my render function, as you can see below, and then glDrawArrays(GL_TRIANGLES, 0, n) is called to draw them each frame.
I'm trying to picture the architecture of a larger moving scene where there are lots of models with lots of verticies that need to be loaded and unloaded.
The naïve way I imagine to extend this would be to place all the vertex/color data in one big array in main memory and then call glDrawArrays once for each frame. This seems to be inefficient to me. On every frame the vertex and color information changes only in parts, so arranging and reloading an entire monolithic vertex array for every frame seems wrong.
What do 3D games and so forth do about this? Are they, for each frame, placing all the vertices in one big array in main memory and then calling glDrawArrays once? If not, what architecture and OpenGL calls do they generally use to communicate all the vertices of the scene to the GPU? Is it possible to load vertices into GPU memory and then reuse them for several frames? Is it possible to draw multiple vertex arrays from multiple places in main memory?
static const char *vertexShaderSource =
R"(
attribute highp vec4 posAttr;
attribute lowp vec4 colAttr;
varying lowp vec4 col;
uniform highp mat4 matrix;
void main()
{
col = colAttr;
gl_Position = matrix * posAttr;
}
)";
static const char *fragmentShaderSource =
R"(
varying lowp vec4 col;
void main()
{
gl_FragColor = col;
}
)";
void Window::render()
{
glViewport(0, 0, width(), height());
glClear(GL_COLOR_BUFFER_BIT);
m_program->bind();
constexpr float delta = 0.001;
if (forward)
eyepos += QVector3D{0,0,+delta};
if (backward)
eyepos += QVector3D{0,0,-delta};
if (left)
eyepos += QVector3D{-delta,0,0};
if (right)
eyepos += QVector3D{delta,0,0};
QMatrix4x4 matrix;
matrix.perspective(60, 4.0/3.0, 0.1, 10000.0);
matrix.lookAt(eyepos, eyepos+direction, {0, 1, 0});
matrix.rotate(timer.elapsed() / 100.0f, 0, 1, 0);
m_program->setUniformValue("matrix", matrix);
QVector3D vertices[] =
{
{0.0f, 0.0f, 0.0f},
{1.0f, 0.0f, 0.0f},
{1.0f, 1.0f, 0.0f},
};
QVector3D colors[] =
{
{1.0f, 0.0f, 0.0f},
{1.0f, 1.0f, 0.0f},
{1.0f, 0.0f, 1.0f},
};
m_program->setAttributeArray("posAttr", vertices);
m_program->setAttributeArray("colAttr", colors);
m_program->enableAttributeArray("posAttr");
m_program->enableAttributeArray("colAttr");
glDrawArrays(GL_TRIANGLES, 0, 3);
m_program->disableAttributeArray("posAttr");
m_program->disableAttributeArray("colAttr");
m_program->release();
++m_frame;
}
It depends on how you want to structure things.
If you have a detailed model that needs to be moved, rotated, and transformed without changing its shape, then a pretty clear way to do it is to load that model into e.g. a VBO (I'm not sure what your setAttributeArray does). This has to happen only once, before the first frame; subsequent frames can then render that model with any transformation you want by simply setting the model-view matrix uniform, which is a much smaller chunk of data going over the bus.
Vertex shaders can and should be used to let the GPU help with, or entirely take over, the computation and application of these kinds of transformations.
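A sketch of that idea with Qt's wrapper classes (m_vbo, vertices and vertexCount are illustrative names, not from the question): the vertex data is uploaded once, and each frame only the matrix uniform travels to the GPU.

// Once, e.g. in an init function: upload the static vertex data to a VBO
m_vbo = QOpenGLBuffer(QOpenGLBuffer::VertexBuffer);
m_vbo.create();
m_vbo.bind();
m_vbo.allocate(vertices, vertexCount * 3 * sizeof(float)); // stays in GPU memory
m_vbo.release();

// Every frame in render(): no vertex data crosses the bus, only the matrix
m_program->bind();
m_program->setUniformValue("matrix", matrix);
m_vbo.bind();
m_program->setAttributeBuffer("posAttr", GL_FLOAT, 0, 3);
m_program->enableAttributeArray("posAttr");
glDrawArrays(GL_TRIANGLES, 0, vertexCount);
m_program->disableAttributeArray("posAttr");
m_vbo.release();
m_program->release();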
I am writing a rendering engine using Qt and am running into problems with texturing my models.
I have a very simple shader to test texturing:
vertex shader:
attribute vec4 Vertex;
attribute vec2 texcoords;
uniform mat4 mvp;
varying vec2 outTexture;
void main() {
gl_Position = mvp * Vertex;
outTexture = texcoords;
}
and fragment shader:
uniform sampler2D tex;
varying vec2 outTexture;
void main() {
vec4 color = texture2D(tex, outTexture);
gl_FragColor = color;
}
I am passing my texture coordinates to the shaders correctly.
My problem is with binding a QImage and sending it to its texture uniform.
I am using the following code to bind the texture:
const QString& filename;
GLuint m_texture;
QImage image(filename);
image = image.convertToFormat(QImage::Format_ARGB32);
glGenTextures(1, &m_texture);
glBindTexture(GL_TEXTURE_2D, m_texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, image.width(), image.height(), 0, GL_BGRA, GL_UNSIGNED_BYTE, image.bits());
glGenerateMipmap(GL_TEXTURE_2D);
glEnable(GL_TEXTURE_2D);
The shader works and I can pass a uniform to the matrix and attributes to the vertex and texture coordinates, but when I try to send a uniform to the texture the same way as such:
effect->setUniformValue(effect->uniformLocation("tex", texture->m_texture));
the program crashes with an “access violation reading location” error, with glGetError() returning “invalid enumerant”.
Interestingly, when I try running the program without attempting to send the texture to the sampler, the texture actually appears on the model. That makes me think the way I'm binding it has something to do with legacy texture handling, and that the texture is being bound to a particular texture address which is being picked up by the shader. This is not the effect I want, because I want the programmer to be able to explicitly state at draw time which texture should be passed to the uniform (just as any other uniform is set).
How can I pass the texture to its sampler? What do I need to change when binding a texture?
Change it to
effect->setUniformValue(effect->uniformLocation("tex"), texture->m_texture);
or
effect->setUniformValue("tex", texture->m_texture);
Try converting the QImage using:
image = QGLWidget::convertToGLFormat(image);
Another thought: if you are using ES 2, then GL_RGBA8 is not a valid internal format. I think GL_BGRA may be an optional extension, or not available in ES 2 at all. Hope this helps.
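As a further option (not part of the original answers, just a sketch): QOpenGLTexture can take care of the format conversion, mipmaps and filter parameters for you, and a sampler uniform should be set to the texture unit the texture is bound to. Here "effect" stands for the shader program from the question:

// Create the texture once from the QImage (mirrored to match OpenGL's bottom-left origin)
QOpenGLTexture *tex = new QOpenGLTexture(QImage(filename).mirrored());
tex->setMinificationFilter(QOpenGLTexture::LinearMipMapLinear);
tex->setMagnificationFilter(QOpenGLTexture::Linear);
tex->setWrapMode(QOpenGLTexture::Repeat);

// At draw time: bind to texture unit 0 and point the sampler at that unit
tex->bind(0);
effect->setUniformValue("tex", 0);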