I am rendering a QPixmap inside a QThread. The code that paints is inside a function. If I declare the painter inside the drawChart function, everything seems OK, but if I declare the painter inside the run function, the image is wrong in the sense that at the edge between a black and a white area, the pixels at the interface are blended to give grey. Does anyone know why this is so? Could it be because of the nature of the run function itself?
// This is OK
void RenderThread::run()
{
    QImage image(resultSize, QImage::Format_RGB32);
    drawChart(&image);
    emit renderedImage(image, scaleFactor);
}

void RenderThread::drawChart(QImage *image)
{
    QPainter painter(image);
    painter.doStuff();
    ...
}
// This gives an image that seems to have artifacts
void RenderThread::run()
{
    QImage image(resultSize, QImage::Format_RGB32);
    QPainter painter(&image);
    drawChart(&painter);
    emit renderedImage(image, scaleFactor);
}

void RenderThread::drawChart(QPainter *painter)
{
    painter->doStuff();
    ...
}
// bad
[screenshot: grey blending along the black/white edge]
// good
[screenshot: sharp black/white edge]
From C++ GUI Programming with Qt 4 by Jasmin Blanchette and Mark Summerfield:
One important thing to understand is that the center of a pixel lies on “half-pixel” coordinates. For example, the top-left pixel covers the area between points (0, 0) and (1, 1), and its center is located at (0.5, 0.5). If we ask QPainter to draw a pixel at, say, (100, 100), it will approximate the result by shifting the coordinate by +0.5 in both directions, resulting in the pixel centered at (100.5, 100.5) being drawn.
This distinction may seem rather academic at first, but it has important consequences in practice. First, the shifting by +0.5 only occurs if antialiasing is disabled (the default); if antialiasing is enabled and we try to draw a pixel at (100, 100) in black, QPainter will actually color the four pixels (99.5, 99.5), (99.5, 100.5), (100.5, 99.5), and (100.5, 100.5) light gray, to give the impression of a pixel lying exactly at the meeting point of the four pixels. If this effect is undesirable, we can avoid it by specifying half-pixel coordinates, for example, (100.5, 100.5).
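In other words, the grey pixels at the black/white interface are the classic symptom of antialiased drawing at integer coordinates. A minimal sketch of the two workarounds the quote suggests, assuming a QImage target like the one created in run() (the coordinates and sizes are illustrative):

#include <QImage>
#include <QPainter>
#include <QPointF>

QImage renderEdge(const QSize &size)
{
    QImage image(size, QImage::Format_RGB32);
    image.fill(Qt::white);

    QPainter painter(&image);

    // Option 1: leave antialiasing off (the default), so coordinates are
    // snapped to whole pixels and the black/white edge stays sharp.
    painter.setRenderHint(QPainter::Antialiasing, false);
    painter.fillRect(QRect(0, 0, size.width() / 2, size.height()), Qt::black);

    // Option 2: if antialiasing is needed elsewhere, draw thin lines at
    // half-pixel coordinates so they land on pixel centers instead of
    // being spread (greyed) over two columns of pixels.
    painter.setRenderHint(QPainter::Antialiasing, true);
    painter.setPen(Qt::black);
    painter.drawLine(QPointF(100.5, 0.5), QPointF(100.5, size.height() - 0.5));

    return image;
}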
My LineItem, which inherits from QGraphicsLineItem, can change its pen width.
I have created a boundingRect that uses QGraphicsLineItem::boundingRect() adjusted by pads calculated from the pen width and the arrows. It works.
void LineItem::calculateStuff() // called on any change, including pen width
{
    qreal padLeft, padRight, padT;
    padLeft = 0.5 * m_pen.width(); // if no arrows
    padT = padLeft;
    padRight = padLeft;
    m_boundingRect = QGraphicsLineItem::boundingRect().adjusted(-padLeft, -padT, padRight, padT);
    update();
}

QRectF LineItem::boundingRect() const
{
    return m_boundingRect;
}

QPainterPath LineItem::shape() const
{
    QPainterPath p;
    p.addRect(m_boundingRect);
    return p;
}
There is only one artifact that I get:
if I increase the pen width and then decrease it, I get traces:
these of course disappear as soon as I move the mouse or perform any other action (I had a hard time getting the screenshots).
As pretty as they are (seriously, I consider them a "feature" :-) ), I am trying to eliminate them. I tried remembering the previous bounding rectangle and updating the item with that previous rectangle (I thought that was what the argument to update() was for), but it didn't work.
QRectF oldRect = selectedItem->boundingRect();
item->setItemPenWidth(p);
selectedItem->update(oldRect);
selectedItem->update();
My viewport has
setViewportUpdateMode(BoundingRectViewportUpdate);
If I change to
setViewportUpdateMode(FullViewportUpdate);
I don't get artifacts - but I think this will impact performance which is a major constraint.
How can I fix these artifacts, which only occur in that specific situation (decreasing the pen width, i.e. shrinking the line's bounding rect), without impacting performance?
Simple fix... I had to add
prepareGeometryChange();
in my calculateStuff() function.
I had not seen any difference from this before; it is the first time a change to my boundingRect did not update seamlessly.
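For context, a minimal sketch of where the call goes, based on the calculateStuff() shown above (arrow handling omitted):

void LineItem::calculateStuff() // called on any change, including pen width
{
    // Tell the scene the bounding rect is about to change, so the region
    // covered by the old (larger) rect is invalidated before it shrinks.
    prepareGeometryChange();

    const qreal pad = 0.5 * m_pen.width(); // if no arrows
    m_boundingRect = QGraphicsLineItem::boundingRect().adjusted(-pad, -pad, pad, pad);
    update();
}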
I have a QGraphicsScene with rather small point markers. I would like to enlarge the area of these markers to make dragging easier. The marker is a cross, +/- 2 pixels from the origin. I have reimplemented
bool contains(const QPointF &point) const // QGraphicsItem override
{
    return QRectF(-10, -10, 20, 20).contains(point);
}
and
void hoverEnterEvent(QGraphicsSceneHoverEvent *event)
{
    setPen(QPen(Qt::red));
    update();
}
but the marker only turns red when it is directly hit by the cursor (and even that is a bit picky). How can I enlarge the "hover area"?
As stated in the short comment:
usually those things are handled via the boundingRect or shape functions, so try overriding those. Take a look at the Qt documentation for QGraphicsItem::shape (http://doc.qt.io/qt-4.8/qgraphicsitem.html#shape):
Returns the shape of this item as a QPainterPath in local coordinates. The shape is used for many things, including collision detection, hit tests, and for the QGraphicsScene::items() functions.
The default implementation calls boundingRect() to return a simple rectangular shape, but subclasses can reimplement this function to return a more accurate shape for non-rectangular items. For example, a round item may choose to return an elliptic shape for better collision detection. For example:

QPainterPath RoundItem::shape() const
{
    QPainterPath path;
    path.addEllipse(boundingRect());
    return path;
}

The outline of a shape can vary depending on the width and style of the pen used when drawing. If you want to include this outline in the item's shape, you can create a shape from the stroke using QPainterPathStroker.
This function is called by the default implementations of contains() and collidesWithPath().
So what basically happens is that every function that wants to access the "zone" associated with an item calls shape() and then does, for example, a containment or collision test with the resulting QPainterPath.
Thus, if you have small items, you should enlarge the shape zone.
Let's for instance consider a line as your target; then your shape() implementation could look like the following:
QPainterPath Segment::shape() const
{
    QLineF temp(qLineF(scaled(Plotable::cScaleFactor)));
    QPolygonF poly;
    temp.translate(0, pen.widthF() / 2.0);
    poly.push_back(temp.p1());
    poly.push_back(temp.p2());
    temp.translate(0, -pen.widthF());
    poly.push_back(temp.p2());
    poly.push_back(temp.p1());
    QPainterPath path;
    path.addPolygon(poly);
    return path;
}
pen is a member of the segment, and I use its width to enlarge the shape zone, but you can use anything else that relates well to the actual dimensions of your object.
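Applied to the cross marker from the question, a minimal sketch could return the enlarged rectangle from both boundingRect() and shape(); the MarkerItem class name is illustrative, and note that hover events must be enabled explicitly on the item:

#include <QGraphicsItem>
#include <QPainter>
#include <QPainterPath>
#include <QStyleOptionGraphicsItem>

// Hypothetical marker item; the 20x20 zone matches the rectangle the
// question tried to return from contains().
class MarkerItem : public QGraphicsItem
{
public:
    MarkerItem(QGraphicsItem *parent = 0) : QGraphicsItem(parent)
    {
        setAcceptHoverEvents(true); // required for hoverEnterEvent() to fire
    }

    QRectF boundingRect() const
    {
        return QRectF(-10, -10, 20, 20); // larger than the drawn cross
    }

    QPainterPath shape() const
    {
        QPainterPath path;
        path.addRect(boundingRect()); // hover and hit tests use this zone
        return path;
    }

    void paint(QPainter *painter, const QStyleOptionGraphicsItem *, QWidget *)
    {
        // The visible cross stays +/- 2 pixels; only the hit zone is bigger.
        painter->drawLine(QPointF(-2, 0), QPointF(2, 0));
        painter->drawLine(QPointF(0, -2), QPointF(0, 2));
    }
};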
I need to do something similar to QPainter::drawImage, but drawing a triangular part of the given picture (into a triangular region of my widget) instead of working with rectangles.
Any idea how I could do that, besides painfully redrawing every pixel?
Thanks for your insights!
If it is feasible for you to use a QPixmap instead of a QImage, you can set a bitmap mask for the QPixmap which defines which of the pixels are shown and which are transparent:
myPixmap->setMask(myTriangleMask);
painter->drawPixmap(0, 0, *myPixmap);
Here is another solution based on QImage:
MaskWidget::MaskWidget(QWidget *parent) : QWidget(parent)
{
    img = QImage("Sample.jpg"); // The image to paint
    mask = QImage("Mask.png");  // An indexed 2-bit colormap image
    QPainter imgPainter(&img);
    imgPainter.drawImage(0, 0, mask); // Paint the mask onto the image
}

void MaskWidget::paintEvent(QPaintEvent *event)
{
    QPainter painter(this);
    painter.drawImage(10, 10, img);
}
Mask.png is an image file with the same size as Sample.jpg. It contains an alpha channel to support transparency. You can create this file easily with The GIMP, for example: I added an alpha channel, made the areas I want painted transparent, and made all other areas white. To reduce the file size, I finally converted it to an indexed 2-bit image.
You could even create the mask image programmatically with Qt if you need the triangle to be computed from various parameters.
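Following the QPixmap approach from the first suggestion, a minimal sketch of building such a triangular mask in code; the function name and the triangle coordinates are illustrative:

#include <QBitmap>
#include <QBrush>
#include <QPainter>
#include <QPixmap>
#include <QPolygonF>

// Returns a copy of 'source' showing only the given triangular region;
// everything outside the triangle becomes transparent.
QPixmap triangularCutout(const QPixmap &source, const QPolygonF &triangle)
{
    // In a QBitmap, Qt::color0 marks transparent pixels and Qt::color1
    // marks visible ones.
    QBitmap mask(source.size());
    mask.fill(Qt::color0);

    QPainter maskPainter(&mask);
    maskPainter.setPen(Qt::NoPen);
    maskPainter.setBrush(QBrush(Qt::color1));
    maskPainter.drawPolygon(triangle);
    maskPainter.end();

    QPixmap result = source;
    result.setMask(mask);
    return result;
}

It could then be drawn in a paintEvent with something like painter.drawPixmap(10, 10, triangularCutout(pixmap, triangle));.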
I am trying to rotate a Sprite in three dimensions around its center point, and I am struggling to understand some of the behavior of matrix3D.
I've overridden the rotationX, rotationY, and rotationZ setters of the Sprite as follows:
override public function set rotationX (_rotationX:Number) : void {
    this.transform.matrix3D.prependTranslation(this.width/2.0, this.height/2.0, 0);
    this.transform.matrix3D.prependRotation(-this.rotationX, Vector3D.X_AXIS);
    this.transform.matrix3D.prependRotation(_rotationX, Vector3D.X_AXIS);
    this.transform.matrix3D.prependTranslation(-(this.width/2.0), -(this.height/2.0), 0);
}

override public function set rotationY (_rotationY:Number) : void {
    this.transform.matrix3D.prependTranslation(this.width/2.0, this.height/2.0, 0);
    this.transform.matrix3D.prependRotation(-this.rotationY, Vector3D.Y_AXIS);
    this.transform.matrix3D.prependRotation(_rotationY, Vector3D.Y_AXIS);
    this.transform.matrix3D.prependTranslation(-(this.width/2.0), -(this.height/2.0), 0);
}

override public function set rotationZ (_rotationZ:Number) : void {
    this.transform.matrix3D.prependTranslation(this.width/2.0, this.height/2.0, 0);
    this.transform.matrix3D.prependRotation(-this.rotationZ, Vector3D.Z_AXIS);
    this.transform.matrix3D.prependRotation(_rotationZ, Vector3D.Z_AXIS);
    this.transform.matrix3D.prependTranslation(-(this.width/2.0), -(this.height/2.0), 0);
}
I am using prependTranslation to correct the centerpoint of the rotation, and the first prependRotation to cancel out any previously-applied rotation.
Testing it out, rotationX works exactly as expected, and the Sprite rotates around its horizontal axis.
rotationY and rotationZ also appear to work fine. However, there is one problem: whenever rotationY or rotationZ is set, all of the other rotation values change as well. This is not a problem with rotationX: if I set rotationX, nothing else changes. But if I set rotationY or rotationZ, all the rotation values change, which is a problem for my app (which is trying to save and restore these values).
I think I am just lacking some understanding about what is going on with matrix3D. How can I implement this so there is no interdependence between the values?
Another easy solution is to add the object to a container sprite, center it within that container, and apply the 3D transformations to the containing sprite.
I know nothing about AS3 etc. But just looking at your code, I wonder why you translate on the z-axis using what I understand to be x and y values (width and height). Shouldn't the z-axis be translated using something like "depth"?
This is very simple; you can try the following code:
var matrix3d:Matrix3D = s.transform.matrix3D;
matrix3d.appendRotation( -1, Vector3D.Z_AXIS , new Vector3D( 390, 360, 0 ) );
where s is your sprite and the third parameter, a Vector3D, indicates your sprite's center position.
The code above rotates the sprite s by a further -1 degree.
My application is using Qt.
I have a class which inherits QGraphicsPixmapItem.
When applying transformations to these items (for instance, rotations), the origin of the item (the pivot point) is always the top-left corner.
I'd like to change this origin so that, for instance, setting the position of the item would actually set the center of the pixmap.
Or, if I'm applying a rotation, the rotation's origin would be the center of the pixmap.
I haven't found a way to do this straight out of the box with Qt, so I thought of reimplementing itemChange() like this:
QVariant JGraphicsPixmapItem::itemChange(GraphicsItemChange Change, const QVariant& rValue)
{
    switch (Change)
    {
    case QGraphicsItem::ItemPositionHasChanged:
        // Emulate a pivot point in the center of the image
        this->translate(this->boundingRect().width() / 2,
                        this->boundingRect().height() / 2);
        break;
    case QGraphicsItem::ItemTransformHasChanged:
        break;
    }
    return QGraphicsItem::itemChange(Change, rValue);
}
I thought this would work, as Qt's documentation mentions that the position of an item and its transformation matrix are two different concepts.
But it is not working.
Any ideas?
You're overthinking it. QGraphicsPixmapItem already has this functionality built in; see the setOffset method.
So to set the item's origin at its centre, just call setOffset(-0.5 * QPointF(pixmap().width(), pixmap().height())); every time you set the pixmap.
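A minimal sketch of how that could look, assuming a small QGraphicsPixmapItem subclass (the class and method names are illustrative):

#include <QGraphicsPixmapItem>

class CenteredPixmapItem : public QGraphicsPixmapItem
{
public:
    explicit CenteredPixmapItem(const QPixmap &pixmap, QGraphicsItem *parent = 0)
        : QGraphicsPixmapItem(parent)
    {
        setCenteredPixmap(pixmap);
    }

    void setCenteredPixmap(const QPixmap &pixmap)
    {
        setPixmap(pixmap);
        // Shift the drawn pixmap so that the item's local (0, 0), which
        // setPos() and rotations refer to, sits at the pixmap's centre.
        setOffset(-0.5 * QPointF(pixmap.width(), pixmap.height()));
    }
};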
The Qt documentation about rotating:
void QGraphicsItem::rotate ( qreal angle )
Rotates the current item transformation angle degrees clockwise around its origin. To rotate around an arbitrary point (x, y), you need to combine translation and rotation with setTransform().
Example:
// Rotate an item 45 degrees around (0, 0).
item->rotate(45);
// Rotate an item 45 degrees around (x, y).
item->setTransform(QTransform().translate(x, y).rotate(45).translate(-x, -y));
You need to create a rotate function that translates the object so its pivot sits at (0, 0), does the rotation, and then moves the object back to its original location.
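Putting that and the documentation example together, a minimal sketch of such a helper, assuming the pivot should be the centre of the item's bounding rect (the function name is illustrative):

#include <QGraphicsItem>
#include <QTransform>

// Rotates 'item' by 'angle' degrees around the centre of its bounding rect
// by translating the pivot to the origin, rotating, and translating back.
void rotateAroundCenter(QGraphicsItem *item, qreal angle)
{
    const QPointF c = item->boundingRect().center();
    item->setTransform(QTransform().translate(c.x(), c.y())
                                   .rotate(angle)
                                   .translate(-c.x(), -c.y()));
}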