QGraphicsItem: emulating an item origin which is not the top left corner

My application is using Qt.
I have a class which is inheriting QGraphicsPixmapItem.
When applying transformations on these items (for instance, rotations), the origin of the item (or the pivot point) is always the top left corner.
I'd like to change this origin, so that, for instance, setting the position of the item would actually set the position of the center of the pixmap.
Or, if I'm applying a rotation, the rotation's origin would be the center of the pixmap.
I haven't found a way to do it straight out of the box with Qt, so I thought of reimplementing itemChange() like this:
QVariant JGraphicsPixmapItem::itemChange(GraphicsItemChange Change, const QVariant& rValue)
{
    switch (Change)
    {
    case QGraphicsItem::ItemPositionHasChanged:
        // Emulate a pivot point in the center of the image
        this->translate(this->boundingRect().width() / 2,
                        this->boundingRect().height() / 2);
        break;
    case QGraphicsItem::ItemTransformHasChanged:
        break;
    }
    return QGraphicsItem::itemChange(Change, rValue);
}
I thought this would work, as Qt's doc mentions that the position of an item and its transform matrix are two different concepts.
But it is not working.
Any ideas?

You're overthinking it. QGraphicsPixmapItem already has this functionality built in. See the setOffset method.
So to set the item origin at its centre, just do setOffset(-0.5 * QPointF(pixmap().width(), pixmap().height())); every time you set the pixmap.
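As a rough sketch of how that can be wired up (the subclass name and helper below are illustrative, not from the question or the answer), the offset can be recomputed every time a new pixmap is assigned:

// Hypothetical subclass that keeps its local origin at the pixmap's center.
class CenteredPixmapItem : public QGraphicsPixmapItem
{
public:
    explicit CenteredPixmapItem(const QPixmap& pm, QGraphicsItem* parent = nullptr)
        : QGraphicsPixmapItem(parent)
    {
        setCenteredPixmap(pm);
    }

    // Use this instead of setPixmap() so the offset always matches the pixmap size.
    void setCenteredPixmap(const QPixmap& pm)
    {
        setPixmap(pm);
        setOffset(-0.5 * QPointF(pm.width(), pm.height()));
    }
};

With the offset in place, the item's local (0, 0) coincides with the pixmap's center, so setPos() positions the center and rotations pivot around it.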

The Qt documentation about rotating:

void QGraphicsItem::rotate(qreal angle)

Rotates the current item transformation angle degrees clockwise around its origin. To translate around an arbitrary point (x, y), you need to combine translation and rotation with setTransform().

Example:

// Rotate an item 45 degrees around (0, 0).
item->rotate(45);

// Rotate an item 45 degrees around (x, y).
item->setTransform(QTransform().translate(x, y).rotate(45).translate(-x, -y));

You need to create a rotate function that translates the object to the parent's (0, 0) corner, does the rotation, and then moves the object back to its original location.
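A minimal sketch of that translate-rotate-translate idea, written as a free helper function (the function name is illustrative):

// Rotate an item around the center of its bounding rect by composing
// translate -> rotate -> translate-back into the item's transform.
void rotateAroundCenter(QGraphicsItem* item, qreal angleDegrees)
{
    const QPointF c = item->boundingRect().center();
    QTransform t = item->transform();
    t.translate(c.x(), c.y());
    t.rotate(angleDegrees);
    t.translate(-c.x(), -c.y());
    item->setTransform(t);
}

This mirrors the setTransform() example from the Qt documentation quoted above, just with (x, y) taken from the item's bounding rect center.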

Related

Why is lookAt not looking at specified vector?

I have this three.js scene: http://codepen.io/giorgiomartini/pen/ZWLWgX
The scene contains 5 things:
Camera - Not Visible
origen (3D vector) - At 0,0,0.
objOne - Green
objParent - Red
CenterOfscene - Blue
objOne is a child of objParent, and objOne looks at origen, which is a 3D vector at (0, 0, 0).
But instead of looking at (0, 0, 0), where the origen vector is, objOne looks at objParent. Why?
Got any ideas?
What I want is for objOne to look at (0, 0, 0), which is the origen vector.
Any ideas why this is misbehaving? Thanks.
THREE.SceneUtils.detach( objOne, objParent, scene );
THREE.SceneUtils.attach( objOne, scene, objParent );

var origen = new THREE.Vector3( 0, 0, 0 );

var render = function () {
  objOne.lookAt(origen);
  requestAnimationFrame( render );

  xOffset += 0.01;
  yOffset += 0.011;
  zOffset += 0.012;
  xOffsetParent += 0.0011;
  yOffsetParent += 0.0013;
  zOffsetParent += 0.0012;

  camXPos = centeredNoise(-1, 1, xOffset);
  camYPos = centeredNoise(-1, 1, yOffset);
  camZPos = centeredNoise(-1, 1, zOffset);
  objOne.position.x = camXPos * 4;
  objOne.position.y = camYPos * 4;
  objOne.position.z = camZPos * 4;

  camParentXPos = centeredNoise(-1, 1, xOffsetParent);
  camParentYPos = centeredNoise(-1, 1, yOffsetParent);
  camParentZPos = centeredNoise(-1, 1, zOffsetParent);
  objParent.position.x = camParentXPos * 10;
  objParent.position.y = camParentYPos * 10;
  objParent.position.z = camParentZPos * 10;

  renderer.render(scene, camera);
};
render();
Object3D.lookAt() does not support objects with rotated and/or translated parent(s).
Your work-around is to (1) add the child as a child of the scene, instead, and (2) replace the child object with a dummy Object3D, which, as a child of the parent object, will move with the parent.
Then, in your render loop,
child.position.setFromMatrixPosition( dummy.matrixWorld );
child.lookAt( origin );
three.js r.75
Here is the corrected codepen:
http://codepen.io/anon/pen/oxGpPQ?editors=0010
Now the green disk rides around its parent (the red sphere) all while looking at the blue disk (or the 'origen' vector).
Uncomment lines 163 and 164 to make the camera be at the green disk's location and have the camera also look at the blue disk ('origen' vector) while it orbits its parent red sphere.
How I accomplished this is:
1. make parent Red Mesh
2. make dummyChild Object3D (this is an invisible math object)
3. make child Green Mesh
4. make origen centerOfScene Blue Mesh
5. attach parent, child, and centerOfScene mesh to Scene (not dummyChild though)
6. attach dummyChild to parent like so: parent.add(dummyChild);
In render function:
1. Move parent around with noise function (which offsets dummyChild)
2. Move dummyChild with another noise function (it revolves around its parent's position; the center of dummyChild's world is the red parent's position)
3. Stick the green child mesh wherever the invisible dummyChild is. But since dummyChild is offset by the red parent, we need to get its world coordinates relative to the Scene, not its coordinates in the red parent's world, so we use
child.position.setFromMatrixPosition(dummyChild.matrixWorld);
Notice it's matrixWorld and not matrix: matrix holds the local transform, while matrixWorld holds the coordinates relative to the Scene (world) coordinate system.
4. Use lookAt to make the green child disk 'lookAt' the blue centerOfScene Mesh which is at the origen vector or the center of the Scene.
Hope this helps! :)

Shift `QGraphicsTextItem` position relative to the center of the text?

I have a number of classes that inherit from QGraphicsItem and that get arranged in a certain way. For simplicity of calculations, I made the scenes and items centered on (0, 0) (with boundingRect() having +/- coordinates).
The QGraphicsTextItem subclass defies me: its pos() is relative to the top-left point.
I have tried a number of things to shift it so it is centered on the text's center (for example, the solution suggested here - the referenced code actually cuts my text and only shows the bottom-left quarter).
I imagined that the solution should be something simple, like
void TextItem::paint(QPainter* painter, const QStyleOptionGraphicsItem* option, QWidget* widget)
{
    painter->translate(-boundingRect().width() / 2.0, -boundingRect().height() / 2.0);
    QGraphicsTextItem::paint(painter, option, widget);
}
the above "sort of" works - but as I increase the item scale -> increase the font, the displayed item is cut off...
I tried to set the pos() - but the problem is, I still need to track the actual position on the scene, so I cannot just replace it.
A slightly unpleasant side effect - centering the QGraphicsView on the element does not work either.
How can I make my QGraphicsTextItem position itself relative to the center of the text?
Edit: one of the experiments of changing the boundingRect():
QRectF TextItem::boundingRect() const
{
    QRectF rect = QGraphicsTextItem::boundingRect();
    rect.translate(QPointF(-rect.width() / 2.0, -rect.height() / 2.0));
    return rect;
}
I had to shift the initial position, and also shift on resize to trigger a new position - I was unable to do it in paint() because, as I suspected from the start, any repaint would continuously recalculate the position.
Only the initial position needs to be adjusted - but as the font size (or style) changes, the bounding rectangle also changes, so the position must be recalculated based on the previous position.
In the constructor,
setPos(- boundingRect().width()/2, - boundingRect().height()/2);
in the function that modifies item (font) size,
void TextItem::setSize(int s)
{
    QRectF oldRect = boundingRect();
    QFont f;
    f.setPointSize(s);
    setFont(f);
    if (m_scale != s)
    {
        m_scale = s;
        qreal x = pos().x() - boundingRect().width() / 2.0 + oldRect.width() / 2.0;
        qreal y = pos().y() - boundingRect().height() / 2.0 + oldRect.height() / 2.0;
        setPos(QPointF(x, y));
    }
}

How to enlarge the hover area of a QGraphicsItem

I have a QGraphicsScene with rather small point markers. I would like to enlarge the area of these markers to make dragging easier. The marker is a cross, +/- 2 pixels from the origin. I have reimplemented
bool contains(const QPointF& point) const
{
    // intended: treat a 20x20 rectangle around the item origin as "inside"
    return QRectF(-10, -10, 20, 20).contains(point);
}
and
void hoverEnterEvent(QGraphicsSceneHoverEvent* event)
{
    setPen(QPen(Qt::red));
    update();
}
but the marker only turns red when it is directly hit by the cursor (and even that is a bit picky). How can I enlarge the "hover area"?
As stated in the short comment:
Usually those things are handled via the bounding rect or the shape() function; try overriding those. Take a look at the Qt help for QGraphicsItem under shape() (http://doc.qt.io/qt-4.8/qgraphicsitem.html#shape):
Returns the shape of this item as a QPainterPath in local coordinates. The shape is used for many things, including collision detection, hit tests, and for the QGraphicsScene::items() functions.
The default implementation calls boundingRect() to return a simple rectangular shape, but subclasses can reimplement this function to return a more accurate shape for non-rectangular items. For example, a round item may choose to return an elliptic shape for better collision detection. For example:

QPainterPath RoundItem::shape() const
{
    QPainterPath path;
    path.addEllipse(boundingRect());
    return path;
}

The outline of a shape can vary depending on the width and style of the pen used when drawing. If you want to include this outline in the item's shape, you can create a shape from the stroke using QPainterPathStroker.
This function is called by the default implementations of contains() and collidesWithPath().
So what basically happens is that all functions that want to access the "zone" associated with an item call shape() and then do, e.g., containment or collision detection with the resulting painter path.
Thus, if you have small items, you should enlarge the shape zone.
Let's, for instance, consider a line as your target; then your shape() implementation could look like the following:
QPainterPath Segment::shape() const
{
    QLineF temp(qLineF(scaled(Plotable::cScaleFactor)));
    QPolygonF poly;
    temp.translate(0, pen.widthF() / 2.0);
    poly.push_back(temp.p1());
    poly.push_back(temp.p2());
    temp.translate(0, -pen.widthF());
    poly.push_back(temp.p2());
    poly.push_back(temp.p1());
    QPainterPath path;
    path.addPolygon(poly);
    return path;
}
pen is a member of Segment, and I use its width to enlarge the shape zone. But you can use anything else that relates well to the actual dimensions of your object.
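Applied to the cross markers from the question, a minimal shape() could simply return the enlarged rectangle (the class name here is hypothetical):

// Hypothetical marker item: the cross is only +/- 2 pixels,
// but the hit/hover zone is a 20x20 rectangle around the origin.
QPainterPath MarkerItem::shape() const
{
    QPainterPath path;
    path.addRect(QRectF(-10, -10, 20, 20));
    return path;
}

Keep in mind that boundingRect() must cover this shape, and that setAcceptHoverEvents(true) has to be enabled for hover events to be delivered at all.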

QPainter declared inside a run function creates artifact

I am rendering a QPixmap inside a QThread. The painting code is inside a function. If I declare the painter inside the drawChart function, everything seems OK, but if I declare the painter inside the run function, the image is wrong in the sense that at the edge of a black and white area, the pixels at the interface are blended into a grey. Does anyone know why this is so? Could it be because of the nature of the run function itself?
// This is OK
void RenderThread::run()
{
    QImage image(resultSize, QImage::Format_RGB32);
    drawChart(&image);
    emit renderedImage(image, scaleFactor);
}

void drawChart(QImage* image)
{
    QPainter painter(image);
    painter.doStuff();
    ...
}

// This gives an image that seems to have artifacts
void RenderThread::run()
{
    QImage image(resultSize, QImage::Format_RGB32);
    QPainter painter(&image);
    drawChart(painter);
    emit renderedImage(image, scaleFactor);
}

void drawChart(QPainter& painter)
{
    painter.doStuff();
    ...
}
[Screenshots in the original post: "bad" shows the grey pixels along the black/white edge, "good" shows the expected result.]
From C++ GUI Programming with Qt 4 by Jasmin Blanchette and Mark Summerfield:
One important thing to understand is that the center of a pixel lies on "half-pixel" coordinates. For example, the top-left pixel covers the area between points (0, 0) and (1, 1), and its center is located at (0.5, 0.5). If we ask QPainter to draw a pixel at, say, (100, 100), it will approximate the result by shifting the coordinate by +0.5 in both directions, resulting in the pixel centered at (100.5, 100.5) being drawn.
This distinction may seem rather academic at first, but it has important consequences in practice. First, the shifting by +0.5 only occurs if antialiasing is disabled (the default); if antialiasing is enabled and we try to draw a pixel at (100, 100) in black, QPainter will actually color the four pixels (99.5, 99.5), (99.5, 100.5), (100.5, 99.5), and (100.5, 100.5) light gray, to give the impression of a pixel lying exactly at the meeting point of the four pixels. If this effect is undesirable, we can avoid it by specifying half-pixel coordinates, for example, (100.5, 100.5).
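To see the effect the quote describes in isolation, here is a small standalone sketch (it is a demo of the half-pixel behaviour, not the question's chart code; the output file name is arbitrary):

#include <QGuiApplication>
#include <QImage>
#include <QPainter>

int main(int argc, char** argv)
{
    QGuiApplication app(argc, argv);

    QImage image(200, 200, QImage::Format_RGB32);
    image.fill(Qt::white);

    QPainter painter(&image);
    painter.setPen(Qt::black);

    // Antialiasing off (the default): integer coordinates are shifted by +0.5
    // and land in the middle of a pixel, so the line is a crisp black column.
    painter.drawLine(50, 0, 50, 199);

    // Antialiasing on: the same integer coordinates sit on a pixel boundary,
    // so the line is spread over two columns and appears grey.
    painter.setRenderHint(QPainter::Antialiasing, true);
    painter.drawLine(150, 0, 150, 199);

    // Half-pixel coordinates keep the line crisp even with antialiasing on.
    painter.drawLine(QPointF(100.5, 0.0), QPointF(100.5, 199.0));

    painter.end();
    image.save("half_pixel_demo.png");
    return 0;
}

If this matches what is happening with the chart, the grey edge pixels would come from antialiased drawing landing on pixel boundaries, as the quote explains.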

how to do 3D rotation around center in AS3 using matrix3D?

I am trying to rotate a Sprite in three dimensions around its centerpoint, and I am struggling to understand some of the behavior of matrix3D.
I've overridden the rotationX, rotationY, and rotationZ setters of the Sprite as follows:
override public function set rotationX (_rotationX:Number) : void {
    this.transform.matrix3D.prependTranslation(this.width/2.0, this.height/2.0, 0);
    this.transform.matrix3D.prependRotation(-this.rotationX, Vector3D.X_AXIS);
    this.transform.matrix3D.prependRotation(_rotationX, Vector3D.X_AXIS);
    this.transform.matrix3D.prependTranslation(-(this.width/2.0), -(this.height/2.0), 0);
}

override public function set rotationY (_rotationY:Number) : void {
    this.transform.matrix3D.prependTranslation(this.width/2.0, this.height/2.0, 0);
    this.transform.matrix3D.prependRotation(-this.rotationY, Vector3D.Y_AXIS);
    this.transform.matrix3D.prependRotation(_rotationY, Vector3D.Y_AXIS);
    this.transform.matrix3D.prependTranslation(-(this.width/2.0), -(this.height/2.0), 0);
}

override public function set rotationZ (_rotationZ:Number) : void {
    this.transform.matrix3D.prependTranslation(this.width/2.0, this.height/2.0, 0);
    this.transform.matrix3D.prependRotation(-this.rotationZ, Vector3D.Z_AXIS);
    this.transform.matrix3D.prependRotation(_rotationZ, Vector3D.Z_AXIS);
    this.transform.matrix3D.prependTranslation(-(this.width/2.0), -(this.height/2.0), 0);
}
I am using prependTranslation to correct the centerpoint of the rotation, and the first prependRotation to cancel out any previously-applied rotation.
Testing it out, rotationX works exactly as expected, and the Sprite rotates around its horizontal axis.
rotationY and rotationZ also appear to work fine. However, there is one problem: whenever rotationY or rotationZ is set, all of the other rotation values change as well. This is not a problem with rotationX -- if I set rotationX, nothing else changes. But if I set rotationY or rotationZ, all the rotation values change, which is a problem for my app (which is trying to save and restore values).
I think I am just lacking some understanding about what is going on with matrix3D. How can I implement this so there is no interdependence between the values?
Another easy solution is to add the object to a container sprite, center it within that container, and do the 3D transformations on the containing sprite.
I know nothing about AS3 etc. But just looking at your code, I wonder why you translate on the z-axis using what I understand to be x and y values (width and height). Shouldn't the z-axis be translated using something like "depth"?
This is very simple; you can try using the following code:
var matrix3d:Matrix3D = s.transform.matrix3D;
matrix3d.appendRotation( -1, Vector3D.Z_AXIS , new Vector3D( 390, 360, 0 ) );
where s is your sprite, and the third parameter (a Vector3D) indicates your sprite's center position.
The above code will rotate the sprite s by another -1 degree.
