Wrong alpha blending when rendering on a QGLFramebufferObject - qt

I am writing an OpenGL-based vector graphics renderer for my application. It needs to render to a framebuffer object rather than directly to the screen. Since the application is written in Qt, I use a QGLFramebufferObject, which is a wrapper class for an OpenGL framebuffer object.
I created a minimal example that shows the wrong result I also get when rendering more complex content (for example, with a fragment shader that writes colors whose alpha value is below one). I simply render a red circle and a half-transparent green one onto a black-cleared screen, and then do the same on the FBO:
void MainWidget::initializeGL()
{
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glClearColor(0, 0, 0, 0);
}

void MainWidget::resizeGL(int w, int h)
{
    glViewport(0, 0, w, h);
}

void MainWidget::paintGL()
{
    // DRAW ON THE SCREEN
    {
        glClear(GL_COLOR_BUFFER_BIT);
        glPointSize(100);
        glEnable(GL_POINT_SMOOTH);
        glBegin(GL_POINTS);
        glColor4f(1, 0, 0, 1);
        glVertex2f(-.2, 0);
        glColor4f(0, 1, 0, .5);
        glVertex2f( .2, 0);
        glEnd();
    }

    QGLFramebufferObject fbo(width(), height());
    fbo.bind();

    // DRAW ON THE FBO USING THE SAME CODE AND THE SAME CONTEXT
    {
        glClear(GL_COLOR_BUFFER_BIT);
        glPointSize(100);
        glEnable(GL_POINT_SMOOTH);
        glBegin(GL_POINTS);
        glColor4f(1, 0, 0, 1);
        glVertex2f(-.2, 0);
        glColor4f(0, 1, 0, .5);
        glVertex2f( .2, 0);
        glEnd();
    }

    fbo.release();
    fbo.toImage().save("debug.png");
}
The result looks like this on the screen (scaled 400%):
The rendering to the QGLFramebufferObject looks like this (also scaled 400%):
Note that this image is not fully opaque, so here it is the same image with a checkerboard added behind it:
Even the area where the two circles overlap isn't fully opaque, and the anti-aliasing looks pretty ugly.
How does this happen? And how can I fix this?
I already tried:
Different blend functions.
Explicitly disabling the depth buffer, stencil buffer and multisampling on the QGLFramebufferObject (roughly as in the sketch below). I'm not sure whether the default QGLFramebufferObject format adds something I don't want.
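A minimal sketch of that attempt, assuming the format-based constructor (these are the attachment and sample settings QGLFramebufferObjectFormat exposes):
QGLFramebufferObjectFormat fmt;
fmt.setAttachment(QGLFramebufferObject::NoAttachment); // no depth or stencil buffer
fmt.setSamples(0);                                     // no multisampling
QGLFramebufferObject fbo(width(), height(), fmt);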

Try the following:
QGLFramebufferObjectFormat fmt;
fmt.setSamples(1); // or 4 or disable this line
fmt.setInternalTextureFormat(GL_RGBA8);
QGLFramebufferObject fbo(width(), height(), fmt);
This forces a specific pixel format and also disables rendering to a texture by using multisampling (otherwise Qt always renders to a texture). That might produce different results. You can also experiment with the format.
Also, what is your hardware? My maximum point size is only 64 pixels (GTX 260), and you are trying to render 100-pixel points; that might be an issue. Are any OpenGL errors generated? Does the same happen with smaller points?
You might also try hinting (if it's possible in Qt):
glHint(GL_POINT_SMOOTH_HINT, GL_NICEST);
But I wouldn't expect this to change anything.

Related

How to use Qt QColormap in code?

I want to achieve something like this.
I looked into the Qt QColormap but I didn't find enough information to code it. If someone knows how to do this, please share a code snippet.
It's more a question about color models than about Qt, really, but basically you are trying to make a full circle around the edge of the HSL color model while keeping the saturation constant.
To produce something like that in Qt you can use a gradient brush; since we want a continuous blend, I used QLinearGradient. If you look at the color wheel above you will notice that red is at 0 degrees, green is at 120 degrees and blue is at 240 degrees. QLinearGradient works with a range from 0 to 1, so these map to 0, 1/3 and 2/3 respectively. We also need to add a final stop at 1 which completes the gradient back to red.
I added a bit of alpha to tone the colors down, so you can experiment with that; the final code will look something like this:
class ColorScale : public QWidget {
    Q_OBJECT
public:
    using QWidget::QWidget;
protected:
    void paintEvent(QPaintEvent *event) override {
        QPainter painter(this);
        painter.setOpacity(0.9);
        painter.setRenderHint(QPainter::HighQualityAntialiasing);
        QLinearGradient gradient(0, 0, 0, height());
        QGradientStops stops;
        stops << QGradientStop(0, Qt::red);
        stops << QGradientStop(1.0/3, Qt::blue);
        stops << QGradientStop(2.0/3, Qt::green);
        stops << QGradientStop(1, Qt::red);
        gradient.setStops(stops);
        painter.fillRect(rect(), gradient);
    }
};
And it produces this:
You can add labels by calling QPainter::drawText.
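For example, a rough sketch of drawing labels at the stop positions inside paintEvent (the positions and label strings are illustrative, not part of the original answer):
// Illustrative only: draw a label next to each gradient stop.
painter.setPen(Qt::black);
painter.drawText(5, 15, "red");
painter.drawText(5, height() / 3, "blue");
painter.drawText(5, 2 * height() / 3, "green");
painter.drawText(5, height() - 5, "red");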

How do I add a border to a rounded QPixmap?

I'm trying to show a rounded avatar QPixmap with a white border around it. However, I have no clue how I could add the border... Is it even possible?
This is the function I have to draw the avatar image.
void AccountDropDownMenu::setAvatar(
    const QByteArray& bytes)
{
    QPixmap new_avatar;
    new_avatar.loadFromData(bytes);
    new_avatar = new_avatar.scaledToHeight(40, Qt::SmoothTransformation);

    QBitmap map(new_avatar.size());
    map.fill(Qt::color0);

    QPainter painter(&map);
    painter.setRenderHint(QPainter::Antialiasing);
    painter.setBrush(Qt::color1);
    painter.setPen(QPen(Qt::white, 10));
    painter.drawRoundedRect(
        avatar_label->x(),
        avatar_label->y(),
        new_avatar.width(),
        new_avatar.height(),
        45,
        45);
    new_avatar.setMask(map);

    avatar_label->setPixmap(new_avatar);
}
Update
Thanks to dtech I was able to get the desired output using the following updated function, although the border is a bit pixelated.
void AccountDropDownMenu::setAvatar(
    const QByteArray& bytes)
{
    QPixmap new_avatar;
    new_avatar.loadFromData(bytes);
    new_avatar = new_avatar.scaledToHeight(40, Qt::SmoothTransformation);

    QBitmap map(new_avatar.size());
    map.fill(Qt::color0);

    QPainter painter(&map);
    painter.setRenderHint(QPainter::Antialiasing);
    painter.setBrush(Qt::color1);
    painter.drawRoundedRect(
        QRectF(
            avatar_label->x(),
            avatar_label->y(),
            new_avatar.width(),
            new_avatar.height()),
        40,
        40);
    new_avatar.setMask(map);

    QPainter painter2(&new_avatar);
    painter2.setRenderHint(QPainter::Antialiasing);
    painter2.setPen(QPen(Qt::white, 1));
    painter2.drawRoundedRect(
        QRectF(
            avatar_label->x(),
            avatar_label->y(),
            new_avatar.width(),
            new_avatar.height()),
        40,
        40);

    avatar_label->setPixmap(new_avatar);
}
In Qt you draw fills with a brush, but outlines are drawn with a QPen.
I haven't used QPainter in a long time, but IIRC, the default pen is zero width, which would explain why you aren't getting anything - you are not setting a pen.
Also, you don't need "another" rounded rectangle; you can apply the fill and the outline to the same geometry, as demonstrated in this answer.
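A minimal illustration of that point (the colors, pen width and rectangle are placeholders, not taken from the question):
painter.setPen(QPen(Qt::white, 2));   // outline comes from the pen
painter.setBrush(Qt::gray);           // fill comes from the brush
painter.drawRoundedRect(QRectF(0, 0, 40, 40), 20, 20); // both applied to the same rect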
Update:
Your updated code sets a mask, and a mask only sets an alpha channel. That cuts away from what you already have; it could not possibly add anything. You have to paint on the pixmap. Simply use new_avatar as the paint device, i.e. QPainter painter(&new_avatar);, and get rid of the rest.
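A minimal sketch of that suggestion, assuming the same 40-pixel avatar and the avatar_label member from the question (the corner radius and pen width are illustrative):
#include <QPainter>
#include <QPainterPath>

void AccountDropDownMenu::setAvatar(
    const QByteArray& bytes)
{
    QPixmap source;
    source.loadFromData(bytes);
    source = source.scaledToHeight(40, Qt::SmoothTransformation);

    // Paint into a transparent pixmap instead of masking with a QBitmap.
    QPixmap rounded(source.size());
    rounded.fill(Qt::transparent);

    QPainter painter(&rounded);
    painter.setRenderHint(QPainter::Antialiasing);

    // Clip to a rounded rect, draw the avatar, then stroke the border on the
    // same geometry.
    QPainterPath path;
    path.addRoundedRect(rounded.rect(), 20, 20);
    painter.setClipPath(path);
    painter.drawPixmap(0, 0, source);
    painter.setClipping(false);
    painter.setPen(QPen(Qt::white, 2));
    painter.setBrush(Qt::NoBrush);
    painter.drawPath(path);

    avatar_label->setPixmap(rounded);
}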

QPainter mirror image not working

I have a program which functions similarly to Paint, and I have a change request:
Add a new function named mirror mode. In mirror mode, the canvas is divided into left and right halves. Everything drawn in one half should be mirrored to the opposite half.
I have added code such that the mirrored portion appears; however, the original image does not appear to be drawn by QPainter. Is there something simple I'm missing to get QPainter to show both the mirrored image and the drawn image? The relevant source code is as follows:
void ImageArea::paintEvent(QPaintEvent *event)
{
    QPainter *painter = new QPainter();
    painter->begin(this);
    QRect *rect = new QRect(event->rect());
    painter->setBrush(QBrush(QPixmap(":media/textures/transparent.jpg")));
    painter->drawRect(0, 0,
                      mImage->rect().right() - 1,
                      mImage->rect().bottom() - 1);
    painter->drawImage(event->rect(), *mImage, event->rect());
    painter->setPen(Qt::NoPen);
    painter->setBrush(QBrush(Qt::black));
    painter->drawRect(QRect(mImage->rect().right(),
                            mImage->rect().bottom(), 6, 6));
    painter->drawImage(event->rect(), *mImage, event->rect());
    painter->end();

    painter->begin(this);
    QImage tmp(mImage->mirrored(true, false));
    painter->drawImage(0, 0, tmp);
    painter->end();
}
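Not the asker's confirmed fix, but a minimal sketch of one way to draw the strokes once and mirror only the left half onto the right, assuming mImage holds the drawing and the left half is the source:
void ImageArea::paintEvent(QPaintEvent *event)
{
    QPainter painter(this); // stack-allocated painter, no leak

    // Draw the image the user has painted so far.
    painter.drawImage(event->rect(), *mImage, event->rect());

    // Copy the left half and draw it horizontally mirrored into the right half.
    const int halfWidth = mImage->width() / 2;
    QImage left = mImage->copy(0, 0, halfWidth, mImage->height());
    painter.drawImage(halfWidth, 0, left.mirrored(true, false));
}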

How do I combine a QImage and QPixmap?

I'm using PyQt, and I've loaded an image from disk into a QPixmap. I've also created a mask, using:
self.mask = QImage(self.image.width(), self.image.height(), QImage.Format_Mono)
self.mask.fill(0)
I'd like to combine the two for display, such that any pixels colored black in the mask are drawn in translucent red over the image when I render it.
I've created a custom widget, that renders the image in the paint event like so:
def paintEvent(self, event):
    p = QPainter(self)
    r = event.rect()
    p.drawPixmap(r, self.image, r)
This works fine. What I'm less clear on is how to take the data in the mask and paint a translucent red only over those pixels in the destination image.
I've tried turning the mask into a clipping region, like this:
mask = QPixmap.fromImage(self.mask.createMaskFromColor(self.mask.color(0)))
p.setClipRegion(QRegion(mask))
color = QColor(255, 0, 0, 128)
p.setPen(Qt.NoPen)
p.setBrush(QBrush(color))
p.drawRect(r)
... but it doesn't draw anything (and draws a translucent red box over the whole image if I don't call setClipRegion).
I also tried creating the mask as a QImage.Format_ARGB4444_Premultiplied image and using transparency. While this does work, and I can edit the mask in my program (and verify that some portions of the mask are transparent and some are opaque), the self.mask.createAlphaMask() method returns a solid white rectangle.
Do the "create mask" methods actually do anything?
Either change your mask's design or create a new QImage based on your mask, then draw that QImage on the destination. It is not only a method that works, it is also faster than drawing single pixels or something similar. I tried several ways and this was the fastest so far (on a QGLWidget).
The idea is that you encode the transparency, and also the non-marked pixels, in the QImage directly, like this:
QImage dest(<width>, <height>, QImage::Format_ARGB32);
dest.fill(qRgba(0, 0, 0, 0));
for (int y = 0; y < <height>; ++y) {
    QRgb *destrow = (QRgb*)dest.scanLine(y);
    for (int x = 0; x < <width>; ++x) {
        if (<should be marked>)
            destrow[x] = qRgba(255, 0, 0, 127);
    }
}
painter.drawImage(0, 0, dest);
For reference, take a look at the code here:
https://sourceforge.net/p/gerbil/svn/60/tree/trunk/gerbil-gui/bandview.cpp#l59

openGL texturing shows only red component of texture

I am trying to texture-map an image onto a single polygon. My image is being read correctly, but only the red plane of the image shows up in the texture.
I am doing this within a QGLWidget.
I have checked the image after it is read, and its components are being read correctly, i.e. I get valid values for the green and blue planes.
Here is the code:
QImageReader *theReader = new QImageReader();
theReader->setFileName(imageFileName);
QImage theImageRead = theReader->read();
if (theImageRead.isNull())
{
    validTile = NOT_VALID_IMAGE_FILE;
    return;
}
else
{
    int newW = 1;
    int newH = 1;
    while (newW < theImageRead.width())
    {
        newW *= 2;
    }
    while (newH < theImageRead.height())
    {
        newH *= 2;
    }
    theImageRead = theImageRead.scaled(newW, newH, Qt::IgnoreAspectRatio, Qt::SmoothTransformation);
    // values checked in theImageRead are OK here

    glGenTextures(1, &textureObject);
    theTextureImage = QGLWidget::convertToGLFormat(theImageRead);
    // values checked in theTextureImage are OK here

    glBindTexture(GL_TEXTURE_2D, textureObject);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, newW, newH, 0, GL_RGBA, GL_UNSIGNED_BYTE, theTextureImage.bits());
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glFlush();

    validTile = VALID_TEXTURE;
    return;
}
Then I draw like this:
{
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, textureTiles[tN]->getTextureObject());
    glBegin(GL_QUADS);
    glTexCoord2f(0.0, 0.0);
    glVertex2f(textureTiles[tN]->lowerLeft.x(), textureTiles[tN]->lowerLeft.y());
    glTexCoord2f(1.0, 0.0);
    glVertex2f(textureTiles[tN]->lowerRight.x(), textureTiles[tN]->lowerRight.y());
    glTexCoord2f(1.0, 1.0);
    glVertex2f(textureTiles[tN]->upperRight.x(), textureTiles[tN]->upperRight.y());
    glTexCoord2f(0.0, 1.0);
    glVertex2f(textureTiles[tN]->upperLeft.x(), textureTiles[tN]->upperLeft.y());
    glEnd();
    glDisable(GL_TEXTURE_2D);
}
Does anybody see anything that would cause my texture to be interpreted as if its values were (r, 0, 0, 1) instead of (r, g, b, a)?
Qt 4.7.1, Ubuntu 10.04, OpenGL 2.something-or-other.
Thanks in advance for any help.
I've had a similar problem. I found that I had to "reset" the GL color to white and fully opaque before drawing a textured quad, or the colors would get messed up. Like this:
...
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D,textureTiles[tN]->getTextureObject() );
glColor4f(1.0, 1.0, 1.0, 1.0); // reset gl color
glBegin(GL_QUADS);
...
This is a very common problem. First, set your MIN/MAG filters to something explicit, since the default minification filter uses mipmaps, and since you didn't provide mipmaps, the texture is incomplete.
Sampling an incomplete texture usually gives white. With the default texture environment, the vertex color is multiplied by the texture color. You probably have something like glColor(red) in effect, and red * white = red, so that's why you're seeing red.
To fix it, set the MIN/MAG filters to GL_LINEAR or GL_NEAREST.
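A sketch of the texture setup both answers point at, based on the variable names in the question (the filter values and the neutral color are illustrative):
glBindTexture(GL_TEXTURE_2D, textureObject);
// Set both filters so the texture is complete without mipmap levels.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, newW, newH, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, theTextureImage.bits());

// Neutral vertex color so the default GL_MODULATE environment leaves
// the texel colors unchanged.
glColor4f(1.0f, 1.0f, 1.0f, 1.0f);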
