I'd like to implement an application which allows the user to select a few QGraphicsItems and then rotate them as a group. I know that I could add all the items to one QGraphicsItemGroup, but I need to keep the Z-value of each item. Is that possible?
I also have a second question.
I'm trying to rotate a QGraphicsItem around some point other than (0,0), let's say (200,150). After that operation I want to rotate the item once more, this time around (0,0). I'm using the code below:
QPointF point(200,150); // point is (200,150) the first time and is later changed to (0,0) - how doesn't matter...
qreal x = point.x();
qreal y = point.y();
item->setTransform(item->transform()*(QTransform().translate(x,y).rotate(angle).translate(-x,-y)));
I noticed that after the second rotation the item is not rotated around (0,0) but around some other point (I don't know which). I also noticed that if I change the order of the operations, everything works fine.
What am I doing wrong?
Regarding your first problem: why should the Z-values be a problem when putting the items into a QGraphicsItemGroup?
Alternatively, you could iterate over the selected items and apply the transformation to each one, as sketched below.
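A minimal sketch of that idea, assuming scene is your QGraphicsScene and angle holds the rotation in degrees; every selected item is rotated around the center of the selection's bounding rectangle and keeps its own Z-value because nothing is reparented:
QRectF bounds;
const QList<QGraphicsItem*> selected = scene->selectedItems();
for (QGraphicsItem *item : selected)
    bounds |= item->sceneBoundingRect();           // common pivot: center of the selection

const QPointF pivot = bounds.center();
for (QGraphicsItem *item : selected) {
    const QPointF p = item->mapFromScene(pivot);   // pivot in item coordinates
    QTransform t;
    t.translate(p.x(), p.y());
    t.rotate(angle);
    t.translate(-p.x(), -p.y());
    item->setTransform(t, true);                   // combine with the existing transform
}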
I guess this snippet will solve your 2nd problem:
QGraphicsView view;
QGraphicsScene scene;
QPointF itemPosToRotate(-35,-35);
QPointF pivotPoint(25,25);
QGraphicsEllipseItem *pivotCircle = scene.addEllipse(-2.5,-2.5,5,5);
pivotCircle->setPos(pivotPoint);
QGraphicsRectItem *rect = scene.addRect(-5,-5,10,10);
rect->setPos(itemPosToRotate);
// draw some coordinate frame lines
scene.addLine(-100,0,100,0);
scene.addLine(0,100,0,-100);
// do half-circle rotation
for(int j=0;j<=5;j++)
    for(int i=1;i<=20;i++) {
        rect = scene.addRect(-5,-5,10,10);
        rect->setPos(itemPosToRotate);
        QPointF itemCenter = rect->pos();
        QPointF pivot = pivotCircle->pos() - itemCenter;
        // your local rotation
        rect->setRotation(45);
        // your rotation around the pivot
        rect->setTransform(QTransform().translate(pivot.x(), pivot.y()).rotate(180.0 * (qreal)i/20.0).translate(-pivot.x(),-pivot.y()), true);
    }
view.setScene(&scene);
view.setTransform(view.transform().scale(2,2));
view.show();
EDIT:
In case you meant to rotate around the global coordinate frame origin, change the rotations to:
rect->setTransform(QTransform().translate(-itemCenter.x(), -itemCenter.y()).rotate(360.0 * (qreal)j/5.0).translate(itemCenter.x(),itemCenter.y()) );
rect->setTransform(QTransform().translate(pivot.x(), pivot.y()).rotate(180.0 * (qreal)i/20.0).translate(-pivot.x(),-pivot.y()),true);
I have a QGraphicsView in my Qt application on which the user can draw curves. Curves consist of QGraphicsEllipseItems and QGraphicsPathItems, which connect the adjacent ellipses.
I want to get a list of QPoints covered by a given curve. I tried creating a local QPainterPath for this, representing the whole curve, and iterating over all the points of its bounding rectangle to see which ones belong to the curve. The code looks like this:
QPainterPath curvePath = edges[index]->at(0)->path();
qreal left, right, bottom, top;
for(int i=1;i<edges[index]->size();i++)
{
    curvePath.connectPath(edges[index]->at(i)->path());
}
QRectF curveRect = curvePath.boundingRect();
left = curveRect.left();
right = curveRect.right();
top = curveRect.top();
bottom = curveRect.bottom();
for(qreal i = left; i < right; i++)
    for(qreal j = top; j < bottom; j++)
    {
        QPointF pointToCheck(i, j);
        if(curvePath.contains(pointToCheck))
            list.append(pointToCheck);
    }
where edges is a QList of QLists of QGraphicsPathItems. The results are fine (the point of doing this is to increase the precision of the calculation), but it really slows down my application, since these calculations are performed quite often.
Is there a more efficient way to implement this?
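One way to cut down on the per-point contains() calls is to rasterize the combined path once into an image and read back the filled pixels instead of testing every point of the bounding rectangle. A rough sketch, reusing curvePath and list from the code above:
QRect bounds = curvePath.boundingRect().toAlignedRect();
QImage mask(bounds.size(), QImage::Format_ARGB32_Premultiplied);
mask.fill(0);                                   // start with an empty (black) image

QPainter painter(&mask);
painter.translate(-bounds.topLeft());           // draw the path in image coordinates
painter.fillPath(curvePath, Qt::white);
painter.end();

for (int y = 0; y < mask.height(); ++y)
    for (int x = 0; x < mask.width(); ++x)
        if (qGray(mask.pixel(x, y)) > 0)        // pixel covered by the curve
            list.append(QPointF(bounds.left() + x, bounds.top() + y));
This visits each pixel once with a cheap lookup instead of a full QPainterPath::contains() test, at the price of the usual rasterization rounding along the curve's edge.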
I'm using the QWT library for my widget; there are some curves on the canvas, like this:
void Plot::addCurve1( double x, double y, const char *CurveName,
const char *CurveColor,const char *CurveType )
{
...
*points1 << QPointF(x, y);
curve1->setSamples( *points1 );
curve1->attach( this );
...
}
So all my curves share the same coordinate system. I'm trying to build a navigation interface, so I can enter a step into a TextEdit (for example) and move by this step, or jump to the start/end of my defined curve.
I've found a method in the QwtPlotPanner class that gives me this ability:
double QWT_widget::move_XLeft()
{
    // getting the step from the TextEdit
    QString xValStr = _XNavDiscrepancies->toPlainText();
    double xVal = xValStr.toDouble();
    // moveCanvas(int dx, int dy) - the method of QwtPlotPanner
    plot->panner->moveCanvas(xVal, 0);
    x_storage = x_storage - xVal;
    return x_storage;
}
So it works OK, but the displacement is in pixels, and I need to tie it to my defined curve and its coordinate system.
The Qwt User's Guide says:
Adjust the enabled axes according to dx/dy
Parameters
dx Pixel offset in x direction
dy Pixel offset in y direction
And this is the only information I've found. How can I convert a pixel step into a step in my coordinate system? I need to go to the end of my curve, so should I take the last QPointF(x,y) of my curve and convert it to a pixel step? Or maybe I'm using the wrong class/method?
Thank you very much :)
Thanks to @Pavel Gridin (https://ru.stackoverflow.com/a/876184/251026):
"For conversion from pixels to coordinates and back there are two methods: QwtPlot::transform and QwtPlot::invTransform"
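A minimal sketch of how that could be applied to move_XLeft() above, assuming the curve is plotted against the xBottom axis (plot and panner are the members from the question):
double xVal = _XNavDiscrepancies->toPlainText().toDouble();   // step in curve coordinates
// convert the step to pixels: transform() maps an axis value to a paint-device position
double px0 = plot->transform(QwtPlot::xBottom, 0.0);
double px1 = plot->transform(QwtPlot::xBottom, xVal);
int dxPixels = qRound(px1 - px0);                              // flip the sign if it pans the wrong way
plot->panner->moveCanvas(dxPixels, 0);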
I have a QGraphicsScene that contains a hierarchy of QGraphicsItems.
Method item.scenePos() returns the scene coordinates of the item.
I'm looking for something like setScenePos() in order to change the positions of the items by giving them scene coordinates.
How can I achieve this?
As the docs say:
QPointF QGraphicsItem::scenePos() const
Returns the item's position in scene coordinates. This is equivalent to calling mapToScene(0, 0).
To get what you want, you can do the inverse:
# setScenePos(QPointF point)
point = QPointF(xx, yy)
# setPos() expects parent coordinates, so map the scene point into the
# parent's coordinate system (for top-level items, parent and scene coordinates coincide)
parent = item.parentItem()
item.setPos(parent.mapFromScene(point) if parent else point)
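The same idea as a small C++ helper (a sketch; setScenePos is just a hypothetical name): setPos() works in parent coordinates, so a scene point has to be mapped through the parent, and for top-level items parent and scene coordinates are the same.
void setScenePos(QGraphicsItem *item, const QPointF &scenePoint)
{
    QGraphicsItem *parent = item->parentItem();
    // map the scene point into the parent's coordinate system before setPos()
    item->setPos(parent ? parent->mapFromScene(scenePoint) : scenePoint);
}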
I have a QGraphicsScene of large dimensions for displaying database content.
Part of the database consists of pictures that I place in the QGraphicsScene using QGraphicsPixmapItem::setPos(), and this works fine with thousands of pictures.
In front of these pictures, I place QCheckBoxes that are ultimately accessible through QGraphicsProxyWidgets. But QGraphicsProxyWidget::setPos(qreal x, qreal y) seems to cast the provided coordinates to signed short in the QGraphicsScene.
However, QGraphicsProxyWidget::pos() correctly returns the original coordinates, even above 2^16.
Here is the code:
QCheckBox* checkbox = new QCheckBox("", this);
QWidget* dummyWidget = new QWidget; //used for having a transparent background
dummyWidget->setStyleSheet("background-color:transparent;"
"outline-color:transparent;"
"font-size: 8pt;");
QHBoxLayout* dummyLayout = new QHBoxLayout(dummyWidget);
dummyLayout->addWidget(checkbox);
QGraphicsProxyWidget* proxyWidget = scene.addWidget(dummyWidget);
proxyWidget->setPos(0, 120*i);
When 120*i is between 32769 and 65536, the QCheckBoxes don't show. For higher values, the QCheckBoxes are shown as if y = value - 65536.
I have tried many things without success, like
- proxyWidget->moveBy
- dummyWidget->move
- dummyWidget->setFixedSize(0, 240*i); checkbox->move(0, 120*i);
Any solution?
PS: The toolchain/cross-toolchain I depend on embeds Qt 4.8.1 for the desktop side.
I have no way to change that, so upgrading to Qt 5.x is not an option.
You can use the following trick:
void setNewPos(QGraphicsItem *item, QPointF pos)
{
    // start from the identity transform, then translate to the target position
    item->resetTransform();
    QTransform trans = item->transform();
    item->setTransform(trans.translate(pos.x(), pos.y()));
}
Now you can call this function:
QPushButton *btn = new QPushButton("Hello, people!");
QGraphicsProxyWidget *wdgItem = scene->addWidget(btn);
setNewPos(wdgItem, view->mapToScene(0,0)); // the scene position can now have any coordinates
I am writing a volume rendering program that constantly adjusts some plane geometry so that it always faces the camera. The plane geometry rotates whenever the camera rotates, in order to appear stationary relative to everything else in the scene. (I use the camera's viewing direction as a normal vector for these plane geometries.)
Currently I am manually storing a custom rotation vector ('rotations') and applying its effects as follows in the render function:
gl2.glRotated(rotations.y, 1.0, 0.0, 0.0);
gl2.glRotated(rotations.x, 0.0, 1.0, 0.0);
Then later on I get the viewing direction by rotating the initial view direction (0,0,-1) around the x and y axes with the values from rotations. This is done in the following manner; the final viewing direction is stored in 'view':
public Vec3f getViewingAngle(){
    //first rotate the viewing POINT
    //then find the vector from there to the center
    Vec3f view=new Vec3f(0,0,-1);
    float newZ=0;
    float ratio=(float) (Math.PI/180);
    float vA=(float) (-1f*rotations.y*(ratio));
    float hA=(float) (-1f*rotations.x)*ratio;
    //rotate about the x axis first
    float newY=(float) (view.y*Math.cos(vA)-view.z*Math.sin(vA));
    newZ=(float) (view.y*Math.sin(vA)+view.z*Math.cos(vA));
    view=new Vec3f(view.x,newY,newZ);
    //rotate about Y axis
    float newX=(float) (view.z*Math.sin(hA)+view.x*Math.cos(hA));
    newZ=(float) (view.z*Math.cos(hA)-view.x*Math.sin(hA));
    view=new Vec3f(newX,view.y,newZ);
    view=new Vec3f(view.x*-1f,view.y*-1f,view.z*-1f);
    //return the finalized normal viewing direction
    view=Vec3f.normalized(view);
    return view;
}
Now I am moving this program to a larger project in which the camera rotation is handled by a third-party graphics library. I have no rotations vector. Is there some way I can get my view direction vector from:
GLfloat matrix[16];
glGetFloatv (GL_MODELVIEW_MATRIX, matrix);
I am looking at this for reference: http://3dengine.org/Modelview_matrix, but I still don't understand how to come up with the view direction. Can someone explain whether it is possible and how it works?
You'll want to look at this picture: http://db-in.com/images/local_vectors.jpg
The Direction-of-Flight (DOF) is the 3rd row.
GLfloat matrix[16];
glGetFloatv( GL_MODELVIEW_MATRIX, matrix );
float DOF[3];
DOF[0] = matrix[ 2 ]; // x
DOF[1] = matrix[ 6 ]; // y
DOF[2] = matrix[ 10 ]; // z
Reference:
http://blog.db-in.com/cameras-on-opengl-es-2-x/
Instead of trying to follow the modelview matrix to adjust your volume rasterizer's fragment impostor, you should just adjust the modelview matrix to your needs. OpenGL is not a scene graph; it's a drawing system, and you can, and should, change things however they suit you best.
Of course, if you must embed the volume rasterization into a larger scene, it may be necessary to extract certain information from the modelview matrix. The upper-left 3×3 submatrix contains the composite rotation of model and view. Its 3rd column is the world Z vector expressed in view coordinates.
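As a sketch of the "adjust the modelview matrix to your needs" idea for a camera-facing impostor plane (assuming the matrix holds no scaling you need to keep), you can wipe the rotation block before drawing:
GLfloat m[16];
glGetFloatv(GL_MODELVIEW_MATRIX, m);

// replace the upper-left 3x3 (the composite rotation) with identity,
// keeping the translation column intact; OpenGL matrices are column-major
for (int col = 0; col < 3; ++col)
    for (int row = 0; row < 3; ++row)
        m[col * 4 + row] = (col == row) ? 1.0f : 0.0f;

glPushMatrix();
glLoadMatrixf(m);
// ... draw the camera-facing plane here ...
glPopMatrix();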