(Qt) Render SVG with custom color

I want to render an SVG icon into a QPixmap while choosing the drawing color.
This is my code using PyQt:
from PyQt6.QtCore import Qt
from PyQt6.QtGui import QColor, QPainter, QPen, QPixmap
from PyQt6.QtSvg import QSvgRenderer

def svg_to_pixmap(svg_filename: str, width: int, height: int, color: QColor) -> QPixmap:
    renderer = QSvgRenderer(svg_filename)
    pixmap = QPixmap(width, height)
    pixmap.fill(Qt.GlobalColor.transparent)
    painter = QPainter(pixmap)
    painter.setPen(QPen(color))
    renderer.render(painter)
    painter.end()
    return pixmap
And test code to display the pixmap:
from PyQt6.QtWidgets import QApplication, QLabel

app = QApplication([])
pixmap = svg_to_pixmap("test.svg", 512, 512, Qt.GlobalColor.red)
label = QLabel()
label.setStyleSheet("background-color: yellow;")
label.setPixmap(pixmap)
label.show()
app.exec()
The issue is that painter.setPen does not have the expected effect (the drawing remains black). The background is transparent as expected, and the label's background color shows through behind it.
An example SVG file to test with is here.
My configuration: Ubuntu 22.10, X11, PyQt6 6.3.1

The answer is not simple. But as long as you are completely sure that you only want to use the shape (or mask) of the original SVG, it is feasible.
The reason setting the pen doesn't change the result is that SVG is a vector image format: it usually states explicitly the colors of everything it draws, so setting the painter pen is almost useless, unless the SVG content is so simple that it doesn't specify any color, which is clearly not the case here: icons normally define the colors of their shapes, and for good reasons.
The problem comes when trying to understand compositing and the possible alternatives that QPainter provides.
To be honest, even after putting effort into understanding the results and associating them with the QPainter.CompositionMode enums, I still need to test and check in order to find the proper mode.
What you probably need is the CompositionMode_SourceIn, which is cryptically explained in the documentation:
The output is the source, where the alpha is reduced by that of the destination.
As far as I can understand, the source is what is going to be painted, while the destination is what was previously painted (think of the source as the brush you paint with, and the destination as the canvas you paint on).
With that in mind, we need to use the original SVG as the destination and fill the whole pixmap with the color we want. Since only the alpha of the destination will be used (the visible part of the SVG), we will in practice be doing a sort of "stencil".
def svg_to_pixmap(svg_filename: str, width: int, height: int, color: QColor) -> QPixmap:
    renderer = QSvgRenderer(svg_filename)
    pixmap = QPixmap(width, height)
    pixmap.fill(Qt.GlobalColor.transparent)
    painter = QPainter(pixmap)
    renderer.render(painter)  # this is the destination, and only its alpha is used!
    painter.setCompositionMode(
        painter.CompositionMode.CompositionMode_SourceIn)
    painter.fillRect(pixmap.rect(), color)
    painter.end()
    return pixmap
This seems to give the wanted result:

Related

Add a QTextureMaterial to a custom mesh

I have a custom mesh (created in Blender) that I insert into Qt3D using the following code:
QMesh *mesh = new QMesh(rootEntity);
mesh->setSource(QUrl::fromLocalFile(baseUrl+"mesh.obj"));
This works fine; I can add it to an entity with a material and everything.
Then I create a custom material using a texture loaded from a .png. I do this using the following code:
Qt3DRender::QTextureLoader *loader = new Qt3DRender::QTextureLoader(rootEntity);
Qt3DExtras::QTextureMaterial *material = new Qt3DExtras::QTextureMaterial(rootEntity);
loader->setSource(QUrl::fromLocalFile(baseUrl+"pattern.jpg"));
material->setTexture(loader);
This also works fine. When I add this material to a built-in Qt mesh (e.g. QPlaneMesh or QSphereMesh) it shows perfectly on the surface as one would expect.
However (and here comes the problem), if I add it to the QMesh specified above, the mesh just gets one homogeneous color, which seems to be the average of the colors in the pattern. Here you can see what I mean: both objects have the same material. The top one is the externally imported mesh, while the bottom one is a QPlaneMesh.
Can someone explain to me why that is the case? And is there a way to successfully add textures to custom meshes?
Note: I have tried this with 2D and 3D meshes and it is the same outcome.
Note 2: I have also tried it with different images and it still just gets one homogeneous average color.
UPDATE: I tried (following the suggestion in the answer) to add a texture attribute to the geometry of my imported mesh like the following:
Qt3DCore::QEntity *entity = new Qt3DCore::QEntity(rootEntity);
QMesh *mesh = new QMesh(entity);
mesh->setSource(QUrl::fromLocalFile(baseUrl+"mesh.obj"));
const int stride = (3 + 2 + 3 + 4) * sizeof(float);
QSize resolution = QSize(2,2);
const int nVerts = resolution.width() * resolution.height();
QAttribute *texCoordAttr = new QAttribute(mesh->geometry());
Qt3DRender::QBuffer *vertexBuffer = new Qt3DRender::QBuffer(mesh->geometry());
texCoordAttr->setName(QAttribute::defaultTextureCoordinate1AttributeName());
texCoordAttr->setVertexBaseType(QAttribute::Float);
texCoordAttr->setVertexSize(2);
texCoordAttr->setAttributeType(QAttribute::VertexAttribute);
texCoordAttr->setBuffer(vertexBuffer);
texCoordAttr->setByteStride(stride);
texCoordAttr->setByteOffset(3*sizeof(float));
texCoordAttr->setCount(nVerts);
vertexBuffer->setDataGenerator(QSharedPointer<PlaneVertexBufferFunctor>::create(1.0f,1.0f,resolution, false)); //these input values (width, height, resolution, mirrored) are probably the cause of the problem
mesh->geometry()->addAttribute(texCoordAttr); //it crashes here
entity->addComponent(mesh);
entity->addComponent(transform);
entity->addComponent(material);
I created the functor for setDataGenerator like in the QPlaneMesh code. Now I suspect the segmentation fault is caused by a size mismatch. So how can I get the correct width and height of an external mesh from its QGeometry? And what else might be wrong here?
It looks like the mesh is missing the texture coordinates. When you open the file with a text editor, do you see the key vt somewhere? Those are the texture coordinates. You can read about the format here.
If you still want to use the obj file that you have, you have to add texture coordinates if it doesn't have any. It's probably best to open the file in Blender and use its texture mapper, at least for more complex meshes. Guessing which vertex needs which texture coordinate is not really feasible.
The texture coordinates work as follows:
If you have an image of, say, 500 by 400 pixels, the texture coordinate (0.7, 0.3) maps to (500 * 0.7, 400 * 0.3) = (350, 120), meaning that the vertex which has that texture coordinate will receive the color value of the pixel at (350, 120). Values inside a triangle get interpolated.
If your obj file comes along with a mtl file then it probably already has texture coordinates. If you want to load this mtl file use the QSceneLoader and add it to its parent QEntity to display everything.
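As a quick sanity check before going into Blender, the presence of texture coordinates can also be verified from a script. A minimal Python sketch (assuming a plain-text mesh.obj, and reusing the 500 by 400 example above):

def has_texture_coordinates(obj_path: str) -> bool:
    # .obj texture coordinates are the lines starting with "vt "
    with open(obj_path) as f:
        return any(line.startswith("vt ") for line in f)

def uv_to_pixel(u: float, v: float, tex_width: int, tex_height: int) -> tuple:
    # (0.7, 0.3) on a 500x400 texture maps to (350, 120), as described above
    return (int(u * tex_width), int(v * tex_height))

print(has_texture_coordinates("mesh.obj"))
print(uv_to_pixel(0.7, 0.3, 500, 400))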

How to get the current widget's cursor size in pixels

How to get the current mouse cursor size measured in pixels? I tried mywidget.cursor().pixmap().size() but it returns (0,0) for the standard arrow cursor.
(I need this to show a special tooltip label which would appear just below the cursor and follow it. I cannot use the standard QToolTip for certain reasons, delays etc. I already have a nice, working solution, but if I display the label exactly at the cursor position, the cursor is painted over it, hiding some text on the label. Of course I could move it down by some 'magic' number like 32 pixels, but that would give me a bad feeling in my stomach.)
You can't do this with the standard cursors. The QCursor methods only work with custom bitmaps or pixmaps. So you will either have to use your own cursors, or estimate the size.
A quick web-search suggests that the standard cursors can vary in size and there is no fixed maximum (although that probably depends on the platform). For example, on X11, the size range usually includes 16, 24, 32, 48, and 64, but other sizes may be possible (even as large as 512). The default is normally 32.
If you need accuracy, it would seem that using custom cursors is the only way to solve this problem.
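For the estimate route, a rough PyQt sketch might look like the following; falling back to 32 pixels and reading the XCURSOR_SIZE environment variable on X11 are assumptions about a typical setup, not something Qt reports:

import os
from PyQt6.QtCore import QSize
from PyQt6.QtWidgets import QWidget

def estimated_cursor_size(widget: QWidget) -> QSize:
    pixmap = widget.cursor().pixmap()
    if not pixmap.isNull():
        return pixmap.size()  # custom bitmap/pixmap cursors report a real size
    # standard cursors: fall back to the X11 hint, else assume the common default
    side = int(os.environ.get("XCURSOR_SIZE", 32))
    return QSize(side, side)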
You could use the code that is used in QTipLabel::placeTip, which offsets tooltips based on cursor size:
const int screenIndex = /*figure out what screen you are on*/;
const QScreen *screen = QGuiApplication::screens().value(screenIndex, QGuiApplication::primaryScreen());
if (const QPlatformScreen *platformScreen = screen ? screen->handle() : nullptr) {
    QPlatformCursor *cursor = platformScreen->cursor();
    const QSize nativeSize = cursor ? cursor->size() : QSize(16, 16);
    const QSize cursorSize = QHighDpi::fromNativePixels(nativeSize, platformScreen);
}
To do this you do need at least one private header:
#include <qpa/qplatformscreen.h>
#include <qpa/qplatformcursor.h>
#include <QtGui/private/qhighdpiscaling_p.h>
If it doesn't have to be portable, you can look at the size() implementation of the QPlatformCursor subclass for the platform you're targeting (e.g. QWindowsCursor::size()) and use that code.

How to make Qt QGraphicsView scale not affect the stipple pattern?

I draw a few rectangles inside a QGraphicsView; I use a custom stipple pattern for these by creating a QBrush with my QPixmap. This gets displayed at the default zoom level as expected.
When I call view->scale(), the rectangles show up bigger or smaller, as I expected. However, Qt has also scaled the individual bits of the stipple pattern, which is not expected; I expected it to redraw the larger or smaller rectangle with the brush.
E.g.
If I used a stipple pattern with a one-pixel dot and a one-pixel space, then after zooming in I want to see a larger rectangle but with the same stipple pattern and the same pixel gaps. Is this achievable somehow? Thanks.
I ran into the same problem while developing an EDA tool companion in Qt.
After some trying, what I did (and seems to work for me) is to create a custom graphics item. On the paint method, I do:
QBrush newBrush = brush_with_pattern;
newBrush.setTransform(QTransform(painter->worldTransform().inverted()));
painter->setBrush(newBrush);
That applies the inverse of the painter's transformation to the brush (so the pattern does not scale).
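For reference, a PyQt sketch of the same idea (the answer's code above is C++; the item class, its rectangle and the pattern pixmap here are just placeholders):

from PyQt6.QtCore import QRectF
from PyQt6.QtGui import QBrush, QPainter, QPixmap
from PyQt6.QtWidgets import QGraphicsItem

class StippledRectItem(QGraphicsItem):
    # Rectangle whose stipple pattern keeps its pixel size under view scaling.
    def __init__(self, rect: QRectF, pattern: QPixmap):
        super().__init__()
        self._rect = rect
        self._brush = QBrush(pattern)

    def boundingRect(self) -> QRectF:
        return self._rect

    def paint(self, painter: QPainter, option, widget=None) -> None:
        brush = QBrush(self._brush)
        # PyQt returns (QTransform, invertible) here, unlike the C++ API
        inverse, invertible = painter.worldTransform().inverted()
        if invertible:
            brush.setTransform(inverse)  # cancel the view/item scaling on the brush
        painter.fillRect(self._rect, brush)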
I think that setDashOffset only affects the border of the shapes (not the fill).
You may use QPen::setDashOffset:
http://harmattan-dev.nokia.com/docs/library/html/qt4/qpen.html#setDashOffset
You'll need to set the offset based on the scene's zoom/scale level. You can grab a pointer to the scene in your item by calling scene(); don't forget to check for NULL though, since it will be NULL when the item is not added to a scene (although you shouldn't, in theory, get a paint() when not in a scene).
The other option is to use:
http://doc.qt.digia.com/qt/qpainter.html#scale
to undo the view's scaling on your painter.
In case anyone is still looking at this, a related question here regarding scaling of standard fill patterns instead of pixmap fill patterns may help. Basically, it may not be possible to modify the scaling of standard fill patterns (a few workaround ideas are listed there), but working with alpha values instead gives the desired effect if you are looking for varying colors, especially gray levels, and is much less convoluted.

QBrush texture without tiling

Is there an easy way to get rid of tiling when using a QBrush with a texture?
QImage* texture = CreateQImage();      // create texture
QBrush* brush = new QBrush(*texture);  // create texture brush
QPainter* painter = CreateQPainter();  // create painter
painter->fillRect(0, 0, 500, 500, *brush);
Suppose we have a QImage texture with a size of 20x20 pixels. The code above will tile this texture all across the rectangle being filled. Is there an easy way to draw only a single instance of this texture? The QBrush usage is crucial.
Theoretically, I could reimplement every fill and draw method of QPainter that takes a QBrush as input and use the QPainter.drawImage() method instead, but I think there must be a simpler way.
Thanks, Tony.
I don't think there is (see Qt::BrushStyle; the only style with a texture tiles it), and it wouldn't really make sense IMO. If you just want one image, use the drawImage functions as you've stated.
(One of the problems with not tiling is: what do you fill the rest of the rectangle with? Nothing? Some default background color? Some other QBrush attribute?)
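If a single QPainter call is acceptable after all, a minimal PyQt sketch of the drawImage route could look like this (the question's code is C++; the file name, the white background and the sizes are placeholders):

from PyQt6.QtCore import QRect
from PyQt6.QtGui import QColor, QImage, QPainter

target = QImage(500, 500, QImage.Format.Format_ARGB32)
target.fill(QColor("white"))          # whatever should back the single texture
texture = QImage("texture20x20.png")  # placeholder for the 20x20 texture

painter = QPainter(target)
# draw the texture exactly once instead of tiling it with a brush
painter.drawImage(QRect(0, 0, texture.width(), texture.height()), texture)
painter.end()
target.save("filled.png")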

Is there a way to make drawText() update a QPicture's bounding rect?

Drawing on a QPicture should update its bounding rect, like this:
>>> picture = QPicture()
>>> painter = QPainter(picture)
>>> picture.boundingRect()
QRect(0,0,0,0)
>>> painter.drawRect(20,20,50,50)
>>> picture.boundingRect()
QRect(20,20,50,50)
But if I draw text on it, the bounding rect isn't updated:
>>> picture = QPicture()
>>> painter = QPainter(picture)
>>> picture.boundingRect()
QRect(0,0,0,0)
>>> painter.drawText(10,10, "Hello, World!")
>>> picture.boundingRect()
QRect(0,0,0,0)
Obviously, it doesn't update the bounding rect.
Is there a way to make it respect drawn text, or do I have to do it manually? (Not too hard, but I hope that Qt can assist me here.)
Take a look at these overload methods, where you must specify the bounding rectangle after the text parameter (which is apparently different from the rectangle in the first argument's position):
Draws the given text within the provided rectangle according to the specified flags. The boundingRect (if not null) is set to what the bounding rectangle should be in order to enclose the whole text.
QPainter.drawText (1), QPainter.drawText (2)
Update:
It appears that if you want to generate a bounding rectangle for the drawText() method in advance, you just call the boundingRect() method on QPainter, which does the following:
Returns the bounding rectangle of the text as it will appear when drawn inside the given rectangle with the specified flags using the currently set font(); i.e. the function tells you where the drawText() function will draw when given the same arguments. If the text does not fit within the given rectangle using the specified flags, the function returns the required rectangle.
QPainter.boundingRect
I linked to the boundingRect() overload with QRectF output, but the information applies to the other versions as well.
So basically, pass the result of QPainter.boundingRect() into the boundingRect parameter of the QPainter.drawText() method (the second QRect argument).
Update 2:
I APOLOGIZE PROFUSELY for being so damn dense. I forgot that drawText works differently in PyQt than in Qt. The bounding rectangle is RETURNED by the drawText function (not passed in as in C++ Qt), and in addition, you have to specify alignment flags before a bounding rectangle is given back to you. (I even included the p.end() as per Aaron Digulla's comment):
pic = Qt.QPicture()
p = QtGui.QPainter(pic)
brect = p.drawText(10,10,200,200, QtCore.Qt.AlignCenter, "blah")
p.end()
print brect
print pic.boundingRect()
Here is the output:
PyQt4.QtCore.QRect(100, 103, 20, 14)
PyQt4.QtCore.QRect(0, 0, 0, 0)
So it appears you will have to set the bounding rectangle yourself, though at least it is returned to you by the drawText() method when you pass in flags.
It does not seem like ideal behaviour that you have to set the bounding rectangle yourself. I hope someone else has the answer you're looking for, but I suspect you may want to report this as a bug.
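For completeness: QPicture does have a setBoundingRect() method, so the rectangle returned by drawText() can be handed back manually (a short sketch continuing the names from the snippet above):

pic.setBoundingRect(brect)
print pic.boundingRect()  # now reports the rectangle returned by drawText()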
Painting doesn't change the size of anything in Qt. The main reason is this:
A component has to paint itself
The paint triggers a resize
The resize triggers painting -> endless loop
So the resize has to happen during the layout phase. After that, the bounds should not change.
To solve your problem, use QFontMetrics to figure out how big your text is going to be during or close to the construction of your picture, and then resize it accordingly.
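A rough PyQt sketch of that suggestion; the font, draw position and text are arbitrary examples:

from PyQt6.QtGui import QFont, QFontMetrics, QGuiApplication, QPainter, QPicture

app = QGuiApplication([])  # fonts and font metrics need a QGuiApplication

font = QFont("Sans Serif", 10)
metrics = QFontMetrics(font)
# boundingRect() is relative to the baseline origin; shift it to the draw point
text_rect = metrics.boundingRect("Hello, World!").translated(10, 10)

picture = QPicture()
painter = QPainter(picture)
painter.setFont(font)
painter.drawText(10, 10, "Hello, World!")  # baseline at (10, 10)
painter.end()

picture.setBoundingRect(text_rect)  # give the picture the measured rectangle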
[EDIT] Hm ... try to call end() before requesting the bounding rect. If that works, you've found a bug (can't see a reason why the bounding rect should not exist as you add elements...)
