Cairo masking - is there something I'm missing?

So I'm trying to set up a mask in Cairo, but can't get it to make any difference. Below I have a simple program based on the one here: http://snipplr.com/view/22584/cairo-hello-world-examble/.
I'm setting a completely transparent mask so nothing should be getting drawn, but it doesn't seem to have any effect - the text still gets drawn. My code is below. What am I missing?
Thanks!
#include <cairo.h>

int main(int argc, char* argv[])
{
    cairo_surface_t* surface;
    cairo_t* cr;

    surface = cairo_image_surface_create (CAIRO_FORMAT_ARGB32, 200, 40);
    cr = cairo_create (surface);

    //****
    // Here I create a pattern with an alpha of zero and set it to be cairo's mask.
    // According to http://www.cairographics.org/manual/cairo-context.html#cairo-mask
    // "Opaque areas of pattern are painted with the source, transparent areas are not painted."
    // Shouldn't this make it so nothing gets drawn?
    //****
    cairo_pattern_t* nothing = cairo_pattern_create_rgba (0, 0, 0, 0);
    cairo_mask (cr, nothing);

    cairo_text_extents_t te;
    cairo_set_source_rgb (cr, 0.0, 0.0, 0.0);
    cairo_select_font_face (cr, "Georgia",
                            CAIRO_FONT_SLANT_NORMAL, CAIRO_FONT_WEIGHT_BOLD);
    cairo_set_font_size (cr, 20.0);
    cairo_text_extents (cr, "hello cairo!", &te);
    cairo_move_to (cr, 20, 20);
    cairo_show_text (cr, "hello cairo!");
    cairo_fill (cr);

    // An image gets drawn that says "hello cairo!" in big letters
    cairo_surface_write_to_png (surface, "hello_cairo.png");

    cairo_pattern_destroy (nothing);
    cairo_destroy (cr);
    cairo_surface_destroy (surface);
    return 0;
}

OK, I figured it out. I was expecting cairo_mask() to behave like cairo_clip(), which establishes a clip region that restricts every element drawn afterwards.
cairo_mask() is actually very simple: "cairo_mask -- Paint current source fill pattern using alpha channel mask pattern." That's exactly what it does - it immediately fills the whole surface with the current source pattern, blended per pixel by the alpha of the mask. It has no effect on anything drawn afterwards.
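For anyone after the behaviour I originally expected, here is a minimal sketch (reusing the cr and nothing from the program above) of the two usual approaches: clip before drawing, or draw into a group and composite the whole group through a mask:
/* Option 1: clip - everything drawn afterwards is restricted to the path */
cairo_rectangle (cr, 0, 0, 100, 40);   /* an arbitrary example region */
cairo_clip (cr);
/* ... cairo_show_text() etc. now only appear inside the clipped area ... */

/* Option 2: draw into a group, then paint the group through a mask */
cairo_push_group (cr);
/* ... draw text and anything else here ... */
cairo_pop_group_to_source (cr);        /* the finished group becomes the source */
cairo_mask (cr, nothing);              /* alpha is 0 everywhere, so nothing shows */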

Related

Drawing QImage on a QPainter which has inverted y-axis

I have a scene with an inverted y-axis. Everything is correctly drawn except QImages.
I use drawImage() as:
QRectF aWorldRect = ...
QRectF anImageRect = QRectF(0, 0, theQImage.width(), theQImage.height());
thePainter->drawImage(aWorldRect, theQImage, anImageRect);
I get undefined graphics outside (above) the area where the image should be displayed. This is to be expected because the y-axis is inverted, so I thought something like this might fix the issue:
QRectF anImageRect = QRectF(0, 0, imgWidth, -imgHeight);
It has the same effect. If I do aWorldRect = aWorldRect.normalized() before calling drawImage(), I get the image in the correct rectangle, but mirrored, so I also did aQImage = aQImage.mirrored(). Now the image is correctly displayed in the correct rectangle.
I consider this a workaround that I would rather not keep. So can someone tell me what should be done to get the image displayed the right way?
Update
Here is a minimal sample of my problem that is ready to compile:
Update 2014-04-09 10:05 EET
Updated the sample code a little to make it really work using the workaround.
#include <QtGui>

const int WIDTH = 640;
const int HEIGHT = 480;

class View : public QGraphicsView
{
protected:
    void drawBackground(QPainter *p, const QRectF &rect)
    {
        QImage img = QImage("/usr/share/backgrounds/images/stone_bird.jpg"); // or any other
        /* The next three lines make everything display correctly but
           should be considered a workaround */
        /* I ignore the rect that is passed to the function on purpose */
        QRectF imageRect = QRectF(QPointF(0, 0), QPointF(img.width(), img.height()));
        QRectF theSceneRect = sceneRect().normalized();
        p->drawImage(theSceneRect, img.mirrored(), imageRect);
    }
};

int main(int argc, char *argv[])
{
    QApplication a(argc, argv);
    View w;
    /* I don't want to change the code below */
    w.setScene(new QGraphicsScene(QRectF(QPointF(0, HEIGHT), QPointF(WIDTH, 0))));
    w.scale(1, -1);
    w.scene()->addLine(0, HEIGHT, WIDTH, 0);
    w.showMaximized();
    return a.exec();
}
The approach of reversing the Y coordinate is right, but the implementation was faulty.
QRectF's documentation shows that this constructor takes (x, y, width, height); passing a negative height makes little sense. Instead, try the constructor that takes a topLeft and a bottomRight point:
QRectF anImageRect(QPointF(0.0f, -imgHeight), QPointF(imgWidth, 0.0f));
EDIT:
It seems that only primitive drawings such as lines and arcs come out right under the scale(1, -1) transform set on the view; drawImage still renders upside down because of that scale. The simple fix is to reset the painter's transform to identity (cancelling the (1, -1) scale) while drawing the image. Here's the updated code:
void drawBackground(QPainter *p, const QRectF &rect)
{
    QImage img = QImage("/usr/share/backgrounds/images/stone_bird.jpg");
    // back up the current transform (which has the earlier scale of (1, -1))
    const QTransform oldTransform = p->transform();
    // set the transform back to identity to make the Y axis go from top to bottom
    p->setTransform(QTransform());
    // draw
    QRectF theSceneRect = sceneRect().normalized();
    p->drawImage(theSceneRect, img);
    // revert to the earlier transform
    p->setTransform(oldTransform);
}
Updated on 2014-04-14 14:35 EET
I could finally solve the problem reliably by replacing the two lines
QRectF theSceneRect = sceneRect().normalized();
p->drawImage(theSceneRect, img.mirrored(), imageRect);
from my question with:
QRectF theSceneRect = sceneRect(); // Not normalized. It is no longer a workaround :)
qreal x = theSceneRect.x();
qreal y = theSceneRect.y();
qreal w = theSceneRect.width();
qreal h = theSceneRect.height();
qreal sx = imageRect.x();
qreal sy = imageRect.y();
qreal sw = imageRect.width();
qreal sh = imageRect.height();

p->save(); // paired with the restore() below
p->translate(x, y);
p->scale(w / sw, h / sh);
p->setBackgroundMode(Qt::TransparentMode);
p->setRenderHint(QPainter::Antialiasing,
                 p->renderHints() & QPainter::SmoothPixmapTransform);
QBrush brush(img);
p->setBrush(brush);
p->setPen(Qt::NoPen);
p->setBrushOrigin(QPointF(-sx, -sy));
p->drawRect(QRectF(0, 0, sw, sh));
p->restore();
This is inspired by the implementation of QPainter::drawImage(), which is not reliable in such cases because of the many if statements that special-case rectangles with negative width or height.
It would be better to put the solution in a separate function, but I kept it this way to stay closer to the code in my question (see the sketch below).
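For what it's worth, here is a minimal sketch of what that separate function could look like, lifted directly from the snippet above (the name drawImageIntoRect is mine, not Qt's):
static void drawImageIntoRect(QPainter *p, const QRectF &target,
                              const QImage &img, const QRectF &source)
{
    // Map the source rectangle of the image onto the target rectangle by hand,
    // using a texture brush instead of QPainter::drawImage().
    p->save();
    p->translate(target.x(), target.y());
    p->scale(target.width() / source.width(), target.height() / source.height());
    p->setPen(Qt::NoPen);
    p->setBrush(QBrush(img));
    p->setBrushOrigin(QPointF(-source.x(), -source.y()));
    p->drawRect(QRectF(0, 0, source.width(), source.height()));
    p->restore();
}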

Can I make QPainter fonts operate in the same units as everything else?

I started with this
void draw_text (QPainter &p, const QString &text, QRectF target)
{
    float scale = calculate_font_scale (p, text, target); // about 0.0005
    QFont f = p.font ();
    float old_size = f.pointSizeF ();
    f.setPointSizeF (old_size * scale);
    p.setFont (f);
    // this prints the new font size correctly
    qWarning ("old: %f, new: %f", old_size, p.font ().pointSizeF ());
    // but that doesn't seem to affect this at all
    p.drawText (position, text);
}
The QPainter's font size has been correctly updated, as the qWarning line indicates, but the text draws much, much too big. I think this is because the QPainter coordinate system has been zoomed in quite a lot, and it seems setPointSizeF only works with sizes of at least 1. By eye the font appears to be about one "unit" high, so I'll buy that explanation, although it's stupid.
I experimented with using setPixelSize instead, and although p.fontMetrics().boundingRect(text) yields a sane-looking answer, it is given in pixel units. One requirement of the above function is that the bounding rect of the text is horizontally and vertically centred with respect to the target argument, which is in coordinates of a vastly different scale, so the arithmetic is no longer valid and the text is drawn miles off-screen.
I want to be able to transform the coordinate system arbitrarily, and if, at that point, one "unit" is a thousand pixels high and I'm drawing text in a 0.03x0.03-unit box, then I want the font to be 30 pixels high, obviously. But I need all my geometry to be calculated in these general units all the time, and I need fontMetrics::boundingRect to be in the same general units.
Is there any way out of this or do I have to dick around with pixel calculations to appease the font API?
You simply have to undo whatever "crazy" scaling there was on the painter.
// Save the state
p.save();
// Make the origin of the new coordinate system the center of `target`
p.translate(target.center());
// Scale down so that `target` spans a "reasonable" number of units
qreal dim = 256.0;
qreal sf = dim / qMin(target.height(), target.width());
p.scale(1.0 / sf, 1.0 / sf);
// Draw the text with a point size comfortably above 1
QFont f = p.font();
f.setPointSizeF(48.0);
p.setFont(f);
p.drawText(QRectF(-dim / 2, -dim / 2, dim, dim), Qt::AlignCenter | Qt::WordWrap, text);
// Restore the state
p.restore();

OpenGL + Qt: render to texture and display it back

After some trouble I've managed to correctly render to a texture inside a framebuffer object in a Qt 4.8 application: I can open an OpenGL context with a QGLWidget, render to an FBO, and use that FBO as a texture.
Now I need to get the rendered texture into a QPixmap and show it in some other widget in the GUI. But.. nothing is shown.
Here are some pieces of the code:
// generate texture, FBO, RBO in initializeGL
glGenTextures(1, &textureId);
glBindTexture(GL_TEXTURE_2D, textureId);

glGenFramebuffers(1, &fboId);
glBindFramebuffer(GL_FRAMEBUFFER, fboId);

glGenRenderbuffers(1, &rboId);
glBindRenderbuffer(GL_RENDERBUFFER, rboId);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, TEXTURE_WIDTH, TEXTURE_HEIGHT);
glBindRenderbuffer(GL_RENDERBUFFER, 0);

// now in paintGL
glBindFramebuffer(GL_FRAMEBUFFER, fboId);
// .... render into texture code ....
if (showTextureInWidget == false) {
    showTextureInWidget = true;

    char *pixels;
    pixels = new char[TEXTURE_WIDTH * TEXTURE_HEIGHT * 4];
    glReadPixels(0, 0, TEXTURE_WIDTH, TEXTURE_HEIGHT, GL_RGB, GL_UNSIGNED_BYTE, pixels);

    QPixmap qp = QPixmap(pixels);
    QLabel *l = new QLabel();
    // /* TEST */ l->setText(QString::fromStdString("dudee"));
    l->setPixmap(qp);
    QWidget *d = new QWidget;
    l->setParent(d);
    d->show();
}
glBindFramebuffer(GL_FRAMEBUFFER, 0); // unbind

// now draw the scene with the rendered texture
// now draw the scene with the rendered texture
I see the widget open but.. there is nothing inside it. If I uncomment the test line.. I see the "dudee" string, so I know the QLabel is there, but.. no image from the QPixmap.
I know that the original data are unsigned char and I'm using char, and I've tried some different color parameters (GL_RGBA, GL_RGB, etc.), but I don't think that's the point.. the point is that I don't see anything..
Any advice? If I have to post more code I will do it!
Edit:
I haven't posted all the code, but the point I'd like to make clear is that the texture is correctly rendered as a texture on a cube; I'm just not able to get it back from the GPU to the CPU.
Edit 2:
Thanks to peppe's answer I found out the problem: I needed a Qt object whose constructor accepts raw pixel data. Here is the complete snippet:
uchar *pixels;
pixels = new uchar[TEXTURE_WIDTH * TEXTURE_HEIGHT * 4];
for (int i = 0; i < (TEXTURE_WIDTH * TEXTURE_HEIGHT * 4); i++) {
    pixels[i] = 0;
}

glBindFramebuffer(GL_FRAMEBUFFER, fboId);
glReadPixels(0, 0, TEXTURE_WIDTH, TEXTURE_HEIGHT, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glBindFramebuffer(GL_FRAMEBUFFER, 0);

qi = QImage(pixels, TEXTURE_WIDTH, TEXTURE_HEIGHT, QImage::Format_ARGB32);
qi = qi.rgbSwapped();

QLabel *l = new QLabel();
l->setPixmap(QPixmap::fromImage(qi));
QWidget *d = new QWidget;
l->setParent(d);
d->show();
Given that that's not all of your code and -- as you say -- the texture is correctly filled, there's a little mistake going on here:
glReadPixels(0, 0, TEXTURE_WIDTH, TEXTURE_HEIGHT, GL_RGB, GL_UNSIGNED_BYTE, pixels);
QPixmap qp = QPixmap(pixels);
The QPixmap(const char *) ctor wants an XPM image, not raw pixels. You need to use one of the QImage ctors to create a valid QImage from the raw data. (You can also pass ownership of the buffer to the QImage, which solves the fact that you're currently leaking pixels...)
Once you do that, you'll figure out that:
the image is flipped vertically, as OpenGL has the origin in the bottom-left corner, growing upwards/rightwards, while Qt assumes the origin in the top-left corner, growing downwards/rightwards;
the channels might be swapped -- i.e. OpenGL is returning data with the "wrong" endianness. I don't remember whether glPixelStorei(GL_PACK_SWAP_BYTES) or GL_UNSIGNED_INT_8_8_8_8 as the type helps here; if not, you may need to resort to a CPU-side loop to fix your pixel data :)
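A minimal sketch of that CPU-side fix-up, assuming pixels was filled with glReadPixels(..., GL_RGBA, GL_UNSIGNED_BYTE, pixels) as in the updated snippet above, and l is the QLabel from the question (both mirrored() and rgbSwapped() return deep copies, so the raw buffer can be freed afterwards):
QImage img(pixels, TEXTURE_WIDTH, TEXTURE_HEIGHT,
           QImage::Format_ARGB32);    // wraps the buffer, no copy yet
img = img.mirrored().rgbSwapped();    // flip vertically, then swap R/B channels
delete[] pixels;                      // safe now: img holds its own deep copy
l->setPixmap(QPixmap::fromImage(img));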

scale surface without blurring it

I am trying to scale an image derived from a file to any (sensible) scale.
The problem is that cairo somehow auto-blurs it. How can I fix/remove that? The aim is to see the individual pixels.
Thanks for any reply.
Edit: Some code, triggered by the "draw" event; the parent is a GtkDrawingArea.
static gboolean
cb_event_draw (GtkWidget *obj, cairo_t *cr, gpointer data)
{
    guint width, height;
    width = gtk_widget_get_allocated_width (obj);
    height = gtk_widget_get_allocated_height (obj);
    _priv = ...; // some struct

    // cairo_save (cr);
    cairo_set_antialias (cr, CAIRO_ANTIALIAS_NONE);
    cairo_scale (cr, _priv->zoom, _priv->zoom);
    cairo_set_source_surface (cr, _priv->image, 0., 0.);
    cairo_set_antialias (cr, CAIRO_ANTIALIAS_NONE);
    cairo_pattern_set_filter (cr, CAIRO_FILTER_FAST); // no matter whether this is here or not
    // it actually does matter; it works with this:
    // cairo_pattern_set_filter (cairo_get_source (cr), CAIRO_FILTER_FAST);
    cairo_paint (cr);

    // print some markers at defined locations
    return FALSE;
}
I suspect that you need:
cairo_pattern_set_filter(cairo_get_source(cr), CAIRO_FILTER_FAST);
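In other words, the filter has to be set on the source pattern, and only after cairo_set_source_surface() has installed that pattern. A minimal sketch of the relevant part of the draw handler with that ordering (CAIRO_FILTER_NEAREST is the other common choice when you want hard, unblurred pixels):
cairo_scale (cr, _priv->zoom, _priv->zoom);
cairo_set_source_surface (cr, _priv->image, 0., 0.);
/* the source pattern exists now, so the filter applies to the image */
cairo_pattern_set_filter (cairo_get_source (cr), CAIRO_FILTER_NEAREST);
cairo_paint (cr);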

Aero: How to draw solid (opaque) colors on glass?

Using GDI+ to draw various colors:
brush = new SolidBrush(color);
graphics.FillRectangle(brush, x, y, width, height);
You'll notice that no opaque color shows properly on glass:
How do I draw solid colors on glass?
You'll also notice that a fully opaque color is handled differently depending on what color it is:
opaque black: fully transparent
opaque color: partially transparent
opaque white: fully opaque
Can anyone point me to the documentation on the desktop compositor that explains how different colors are handled?
Update 3
You'll also notice that FillRectangle behaves differently from FillEllipse:
FillEllipse with an opaque color draws an opaque color
FillRectangle with an opaque color draws partially (or fully) transparently
An explanation for this nonsensical behavior, please.
Update 4
Alwayslearning suggested I change the compositing mode. From MSDN:
CompositingMode Enumeration
The CompositingMode enumeration specifies how rendered colors are combined with background colors. This enumeration is used by the Graphics::GetCompositingMode and Graphics::SetCompositingMode methods of the Graphics class.
CompositingModeSourceOver
Specifies that when a color is rendered, it is blended with the background color. The blend is determined by the alpha component of the color being rendered.
CompositingModeSourceCopy
Specifies that when a color is rendered, it overwrites the background color. This mode cannot be used along with TextRenderingHintClearTypeGridFit.
From the description of CompositingModeSourceCopy, it sounds like it's not the option I want. From the limitations it imposes, it sounds like it is the option I want. And with composition or transparency disabled, it isn't the option I want, since it performs a source copy rather than a source blend:
Fortunately it's not an evil I have to contemplate, because it doesn't solve my actual issue. After constructing my graphics object, I tried changing the compositing mode:
graphics = new Graphics(hDC);
graphics.SetCompositingMode(CompositingModeSourceCopy); //CompositingModeSourceCopy = 1
The change has no effect on the output:
Notes
Win32 native
not .NET (i.e. native)
not Winforms (i.e. native)
GDI+ (i.e. native)
See also
Aero: How to draw ClearType text on glass?
Windows Aero: What color to paint to make “glass” appear?
Vista/7: How to get glass color?
Seems to work OK for me. With the lack of a full code example I'm assuming you've got your compositing mode wrong.
public void RenderGdiPlus()
{
    List<string> colors = new List<string>(new string[] { "000000", "ff0000", "00ff00", "0000ff", "ffffff" });
    List<string> alphas = new List<string>(new string[] { "00", "01", "40", "80", "c0", "fe", "ff" });

    Bitmap bmp = new Bitmap(200, 300, System.Drawing.Imaging.PixelFormat.Format32bppArgb);
    Graphics graphics = Graphics.FromImage(bmp);
    graphics.CompositingQuality = System.Drawing.Drawing2D.CompositingQuality.HighQuality;
    graphics.PixelOffsetMode = System.Drawing.Drawing2D.PixelOffsetMode.None;
    graphics.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.None;

    graphics.CompositingMode = System.Drawing.Drawing2D.CompositingMode.SourceCopy;
    graphics.CompositingQuality = System.Drawing.Drawing2D.CompositingQuality.HighQuality;
    SolidBrush backBrush = new SolidBrush(Color.FromArgb(254, 131, 208, 129));
    graphics.FillRectangle(backBrush, 0, 0, 300, 300);

    graphics.CompositingMode = System.Drawing.Drawing2D.CompositingMode.SourceOver;
    Pen pen = new Pen(Color.Gray);
    for (int row = 0; row < alphas.Count; row++)
    {
        string alpha = alphas[row];
        for (int column = 0; column < colors.Count; column++)
        {
            string color = "#" + alpha + colors[column];
            SolidBrush brush = new SolidBrush(ColorTranslator.FromHtml(color));
            graphics.DrawRectangle(pen, 40 * column, 40 * row, 32, 32);
            graphics.FillRectangle(brush, 1 + 40 * column, 1 + 40 * row, 31, 31);
        }
    }

    Graphics gr2 = Graphics.FromHwnd(this.Handle);
    gr2.CompositingMode = System.Drawing.Drawing2D.CompositingMode.SourceCopy;
    gr2.CompositingQuality = System.Drawing.Drawing2D.CompositingQuality.HighQuality;
    gr2.PixelOffsetMode = System.Drawing.Drawing2D.PixelOffsetMode.None;
    gr2.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.None;
    gr2.DrawImage(bmp, 0, 0);
}
I had a similar issue, but it involved drawing onto a layered window, rather than on Aero's glass. I haven't got any code with which I can test whether this solves your problem, but I figured it's worth a shot, since the symptoms of your problem are the same as mine.
As you have noticed, there seem to be some quirks with FillRectangle, apparent from the differences between its behaviour and FillEllipse's.
Here are two workarounds that I came up with, each of which solves my issue:
Call FillRectangle twice
SolidBrush b(Color(254, 255, 0, 0));
gfx.FillRectangle(&b, Rect(0, 0, width, height));
gfx.FillRectangle(&b, Rect(0, 0, width, height));
Since the same area is filled twice, the two passes blend to RGB(255, 0, 0) regardless of the glass colour, which results in a 100% opaque shape. I don't much like this method, as it requires every rectangle to be drawn twice.
Use FillPolygon instead
Just as with FillEllipse, FillPolygon doesn't seem to have the colour/opacity issue, unless you call it like so:
SolidBrush b(Color(255, 255, 0, 0));
Point points[4];
points[0] = Point(0, 0);
points[1] = Point(width, 0);
points[2] = Point(width, height);
points[3] = Point(0, height);
gfx.FillPolygon(&b, points, 4); // don't copy and paste - this won't work
For me, the above code resulted in a 100% transparent shape. I am guessing that this is either due to some form of optimisation that routes the call to FillRectangle instead, or - most likely - some problem with FillPolygon, which is called by FillRectangle. Regardless, if you add an extra Point to the array, you can get around it:
SolidBrush b(Color(255, 255, 0, 0));
Point points[5];
points[0] = Point(0, 0);
points[1] = Point(0, 0); //<-
points[2] = Point(width, 0);
points[3] = Point(width, height);
points[4] = Point(0, height);
gfx.FillPolygon(&b, points, 5);
The above code indeed draws a 100% opaque shape for me. I hope this also resolves your issue.
Another day, another solution by me.
Draw everything you want to appear on glass into a bitmap.
Then, clear the form background with black color.
Immediately after this, draw the bitmap on your form.
However (as with any other solution not using DrawThemeTextEx):
Text rendering will not work correctly, because it always takes the back color of your form as an antialiasing/ClearType hint. Use DrawThemeTextEx instead, which also supports drawing text with a glow effect behind it.
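For reference, a minimal sketch of the DrawThemeTextEx route; it assumes you already have an HTHEME (e.g. from OpenThemeData(hwnd, L"TEXTSTYLE")) and a 32-bpp memory DC covering the glass area, and the helper name and the concrete part/state IDs are just examples:
#include <windows.h>
#include <uxtheme.h>
#include <vssym32.h>
#pragma comment(lib, "uxtheme.lib")

void DrawTextOnGlass(HTHEME theme, HDC memDC, RECT rc, LPCWSTR text)
{
    DTTOPTS opts = { sizeof(DTTOPTS) };
    opts.dwFlags   = DTT_COMPOSITED | DTT_TEXTCOLOR | DTT_GLOWSIZE;
    opts.crText    = RGB(0, 0, 0); // stays fully opaque thanks to DTT_COMPOSITED
    opts.iGlowSize = 10;           // glow behind the text keeps it readable on glass
    DrawThemeTextEx(theme, memDC, TEXT_BODYTEXT, 0, text, -1,
                    DT_SINGLELINE | DT_CENTER | DT_VCENTER, &rc, &opts);
}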
I met the same issue with GDI.
GDI leaves the alpha channel at zero, so the simplest solution is to fix up the alpha channel afterwards, like this code does:
void fix_alpha_channel()
{
    std::vector<COLORREF> pixels(cx * cy);

    BITMAPINFOHEADER bmpInfo = {0};
    bmpInfo.biSize = sizeof(bmpInfo);
    bmpInfo.biWidth = cx;
    bmpInfo.biHeight = -int(cy);
    bmpInfo.biPlanes = 1;
    bmpInfo.biBitCount = 32;
    bmpInfo.biCompression = BI_RGB;

    GetDIBits(memDc, hBmp, 0, cy, &pixels[0], (LPBITMAPINFO)&bmpInfo, DIB_RGB_COLORS);

    std::for_each(pixels.begin(), pixels.end(), [](COLORREF &pixel) {
        if (pixel != 0)              // black pixels stay transparent
            pixel |= 0xFF000000;     // set alpha channel to 100%
    });

    SetDIBits(memDc, hBmp, 0, cy, &pixels[0], (LPBITMAPINFO)&bmpInfo, DIB_RGB_COLORS);
}
I've found another way around it. Use LinearGradientBrush with both colors the same:
LinearGradientBrush brush(Point(0,0), Point(0,0), Color(255,231,45,56), Color(255,231,45,56));
g.FillRectangle(&brush, 25, 25, 30, 30);
This is perhaps slower than SolidBrush, but works fine.
Do you want a stupid solution? Here you get a stupid solution. At least it's just one line of code, and it causes only a small but ignorable side effect.
Assumption
When drawing solid, axis-aligned rectangles, GDI+ tends to speed things up by drawing them with a faster method than it uses for other shapes. This technique is called bitblitting. That is actually pretty clever, since it is the fastest way to draw rectangles onto a surface. However, the rectangles to be drawn must fulfill the rule that they are axis-aligned.
This clever optimization was done before there was DWM, Aero, glass and all the new fancy stuff.
Internally, bitblitting just copies the RGBA color data of pixels from one memory area to another (so to speak, from your drawing onto your window). Sadly, the RGB format it writes is incompatible with glass areas, resulting in the weird transparency effects you observed.
Solution
So here comes a twist.
GDI+ can respect a transformation matrix, with which every drawing can be scaled, skewed, rotated or whatever. If we apply such a matrix, the rule that rectangles are axis-aligned is no longer guaranteed. So GDI+ will stop bitblitting them and draw them in a fashion similar to the ellipses.
But we also don't want to skew, scale or rotate our drawing. We simply apply the smallest transformation possible: We create a transformation matrix which moves every drawing down one pixel:
// If you don't get that matrix instance, ignore it, it's just boring math
e.Graphics.Transform = new Matrix(1f, 0.001f, 0f, 1f, 0f, 0f);
Now, bitblitting is off, rectangles are solid, violets are blue. If only there were an easier way to control that, especially one that doesn't move the drawings!
That said, if you want to draw on the first pixel row, use -1 as the Y coordinate.
You can decide if this really is a solution for you, or just ignore it.
