Qt Widget click 2D pos to 3D world coordinate

My Qt widget shows a 3D world, and I have the world's Qt3DRender::QCamera.
How can I use the QCamera to convert a mouse position in the widget to a 3D world coordinate?
I tried point * viewMatrix4x4().transposed().inverted(), but it is wrong.

You cannot simply convert a 2D click to a 3D coordinate, only to a 3D direction vector. You can only obtain a 3D coordinate if there is an object underneath the mouse.
In that case, you can use QObjectPicker to obtain the coordinate.
The steps to do so are:
Create the object picker in your scene hierarchy (it's best to create it as a child of the object you want to pick, but I'm not sure if that's necessary)
Attach the object picker as a component to the object you want to pick
Connect the object picker's signal that gets fired when you click the object to a function of yours that gives you the coordinates
You can also check out the manual Qt3D test on GitHub. It's in QML but you should be able to translate it to C++ (if that's what you're programming in).

I know this question is old, but I had the same problem and wanted to map Qt3D window coordinates to 3D space. I hunted around in the Qt3D source code and came up with the following. It should work as long as you are using the Qt3DWindow from Qt3DExtras and no additional viewports.
QVector3D mouseEventToSpace(const QMouseEvent *mouseEvent,
                            const Qt3DRender::QCamera *camera,
                            const QSurface *surface)
{
    // Flip y: Qt window coordinates have the origin at the top left,
    // OpenGL at the bottom left.
    const QPointF glCorrectSurfacePosition{
        static_cast<float>(mouseEvent->pos().x()),
        surface->size().height() - static_cast<float>(mouseEvent->pos().y())};
    const QMatrix4x4 viewMatrix{camera->viewMatrix()};
    const QMatrix4x4 projectionMatrix{camera->lens()->projectionMatrix()};
    const int areaWidth = surface->size().width();
    const int areaHeight = surface->size().height();
    const auto relativeViewport = QRectF(0.0f, 0.0f, 1.0f, 1.0f);
    const auto viewport = QRect(
        relativeViewport.x() * areaWidth,
        (1.0 - relativeViewport.y() - relativeViewport.height()) * areaHeight,
        relativeViewport.width() * areaWidth,
        relativeViewport.height() * areaHeight);
    // Unproject a point on the near plane (window z = 0) back into world space.
    const auto nearPos = QVector3D{static_cast<float>(glCorrectSurfacePosition.x()),
                                   static_cast<float>(glCorrectSurfacePosition.y()),
                                   0.0f}
                             .unproject(viewMatrix, projectionMatrix, viewport);
    return nearPos;
}

Related

QWT moving canvas

I'm using the QWT library for my widget; there are some curves on the canvas, like this:
void Plot::addCurve1(double x, double y, const char *CurveName,
                     const char *CurveColor, const char *CurveType)
{
    ...
    *points1 << QPointF(x, y);
    curve1->setSamples(*points1);
    curve1->attach(this);
    ...
}
So all my curves share the same coordinate system. I'm trying to build a navigation interface, so I can enter a step into a TextEdit (for example) and move by that step, or jump to the start/end of a defined curve.
I've found a method in the QwtPlotPanner class that gives me that opportunity:
double QWT_widget::move_XLeft()
{
    // get the step from the TextEdit
    QString xValStr = _XNavDiscrepancies->toPlainText();
    double xVal = xValStr.toDouble();
    // moveCanvas(int dx, int dy) is the method of QwtPlotPanner
    plot->panner->moveCanvas(xVal, 0);
    x_storage = x_storage - xVal;
    return x_storage;
}
It works, but the displacement is in pixels, and I need to tie it to my curve's coordinate system.
The Qwt User's Guide says:
Adjust the enabled axes according to dx/dy
Parameters
dx Pixel offset in x direction
dy Pixel offset in y direction
And this is the only information I've found. How can I convert a pixel step into a step in my coordinate system? I need to go to the end of my curve, so should I take the last QPointF(x, y) of my curve and convert it into a pixel step? Or am I using the wrong class/method?
Thank you very much :)
Thanks to @Pavel Gridin (https://ru.stackoverflow.com/a/876184/251026):
"For conversion from pixels to coordinates and back there are two
methods: QwtPlot::transform and QwtPlot::invTransform"

In Qt drawPoint method does not plot anything if negative valued parameters are supplies

In Qt Creator, the drawPoint() method does not plot a point if negative coordinates are passed.
The following is code for Bresenham's circle algorithm, but it is not working in Qt Creator: it only plots the circle in one quadrant.
Bresenham::Bresenham(QWidget *parent) : QWidget(parent)
{
}

void Bresenham::paintEvent(QPaintEvent *e)
{
    Q_UNUSED(e);
    QPainter qp(this);
    drawPixel(&qp);
}

void Bresenham::drawPixel(QPainter *qp)
{
    QPen pen(Qt::red, 2, Qt::SolidLine);
    qp->setPen(pen);
    int x = 0, y, d, r = 100;
    y = r;
    d = 3 - 2 * r;
    do {
        qp->drawPoint(x, y);
        qp->drawPoint(y, x);
        qp->drawPoint(y, -x);
        qp->drawPoint(x, -y);
        qp->drawPoint(-x, -y);
        qp->drawPoint(-y, -x);
        qp->drawPoint(-x, y);
        qp->drawPoint(-y, x);
        if (d < 0) {
            d = d + 4 * x + 6;
        } else {
            d = d + (4 * x - 4 * y) + 10;
            y = y - 1;
        }
        x = x + 1;
    } while (x < y);
}
You need to translate the Qt coordinate system to the classic Cartesian one. Choose a new center QPoint orig and replace all
qp->drawPoint(x,y);
with
qp->drawPoint(orig + QPoint(x,y));
The Qt coordinate system has its origin at (0,0) and its y-axis pointing downwards. For instance, a segment from A(2,7) to B(6,1) looks like this:
Notice how there is only the positive-x, positive-y quadrant; for simplicity, assume that no negative coordinates exist.
Note: for performance reasons it is better to compute all the points first and then draw them with a single call to
QPainter::drawPoints(const QPoint *points, int pointCount);

How can I get view direction from the OpenGL ModelView Matrix?

I am writing a volume rendering program that constantly adjusts some plane geometry so it always faces the camera. The plane geometry rotates whenever the camera rotates, so that it appears stationary relative to everything else in the scene. (I use the camera's viewing direction as a normal vector for these plane geometries.)
Currently I manually store a custom rotation vector ('rotations') and apply its effect in the render function as follows:
gl2.glRotated(rotations.y, 1.0, 0.0, 0.0);
gl2.glRotated(rotations.x, 0.0, 1.0, 0.0);
Later I derive the viewing direction by rotating the initial view direction (0,0,-1) around the x and y axes with the values from 'rotations'. This is done in the following manner; the final viewing direction is stored in 'view':
public Vec3f getViewingAngle() {
    // first rotate the viewing POINT,
    // then find the vector from there to the center
    Vec3f view = new Vec3f(0, 0, -1);
    float newZ = 0;
    float ratio = (float) (Math.PI / 180);
    float vA = (float) (-1f * rotations.y * ratio);
    float hA = (float) (-1f * rotations.x) * ratio;
    // rotate about the x axis first
    float newY = (float) (view.y * Math.cos(vA) - view.z * Math.sin(vA));
    newZ = (float) (view.y * Math.sin(vA) + view.z * Math.cos(vA));
    view = new Vec3f(view.x, newY, newZ);
    // rotate about the y axis
    float newX = (float) (view.z * Math.sin(hA) + view.x * Math.cos(hA));
    newZ = (float) (view.z * Math.cos(hA) - view.x * Math.sin(hA));
    view = new Vec3f(newX, view.y, newZ);
    view = new Vec3f(view.x * -1f, view.y * -1f, view.z * -1f);
    // return the normalized viewing direction
    view = Vec3f.normalized(view);
    return view;
}
Now I am moving this program into a larger project in which the camera rotation is handled by a third-party graphics library, so I have no 'rotations' vector. Is there some way I can get my view direction vector from:
GLfloat matrix[16];
glGetFloatv (GL_MODELVIEW_MATRIX, matrix);
I am looking at this for reference: http://3dengine.org/Modelview_matrix, but I still don't get how to come up with the view direction. Can someone explain whether it is possible and how it works?
You'll want to look at this picture: http://db-in.com/images/local_vectors.jpg
The Direction of Flight (DOF) is the 3rd row:
GLfloat matrix[16];
glGetFloatv(GL_MODELVIEW_MATRIX, matrix);

float DOF[3];
DOF[0] = matrix[2];  // x
DOF[1] = matrix[6];  // y
DOF[2] = matrix[10]; // z
Reference:
http://blog.db-in.com/cameras-on-opengl-es-2-x/
Instead of trying to follow the modelview matrix to adjust your volume rasterizer's fragment impostor, you should just adjust the modelview matrix to your needs. OpenGL is not a scene graph; it's a drawing system, and you can, and should, change things however suits you best.
Of course, if you must embed the volume rasterization into a larger scene, it may be necessary to extract certain information from the modelview matrix. The upper-left 3×3 submatrix contains the composite rotation of model and view. The 3rd column contains the view-rotated Z vector.

QGraphicsItem's - selection & rotation

I'd like to implement an application that allows the user to select a few QGraphicsItems and then rotate them as a group. I know I could add all the items to one QGraphicsItemGroup, but I need to keep the Z-value of each item. Is that possible?
I also have a second question.
I'm trying to rotate a QGraphicsItem around some point different from (0,0), let's say (200,150). After that operation I want to rotate the item once more, but now around (0,0). I'm using the code below:
// point is (200,150) the first time and is then changed to (0,0)
QPointF point(200, 150);
qreal x = point.rx();
qreal y = point.ry();
item->setTransform(item->transform() *
                   (QTransform().translate(x, y).rotate(angle).translate(-x, -y)));
I noticed that after the second rotation the item is not rotated around (0,0) but around some other point (I don't know which). I also noticed that if I swap the order of the operations everything works.
What am I doing wrong?
Regarding your first problem: why should the z-values be a problem when putting the items into a QGraphicsItemGroup?
Alternatively, you could iterate over the selected items and apply the transformation to each one.
I guess this snippet will solve your second problem:
QGraphicsView view;
QGraphicsScene scene;

QPointF itemPosToRotate(-35, -35);
QPointF pivotPoint(25, 25);

QGraphicsEllipseItem *pivotCircle = scene.addEllipse(-2.5, -2.5, 5, 5);
pivotCircle->setPos(pivotPoint);

QGraphicsRectItem *rect = scene.addRect(-5, -5, 10, 10);
rect->setPos(itemPosToRotate);

// draw some coordinate frame lines
scene.addLine(-100, 0, 100, 0);
scene.addLine(0, 100, 0, -100);

// do a half-circle rotation
for (int j = 0; j <= 5; j++)
    for (int i = 1; i <= 20; i++) {
        rect = scene.addRect(-5, -5, 10, 10);
        rect->setPos(itemPosToRotate);

        QPointF itemCenter = rect->pos();
        QPointF pivot = pivotCircle->pos() - itemCenter;

        // your local rotation
        rect->setRotation(45);
        // your rotation around the pivot
        rect->setTransform(QTransform().translate(pivot.x(), pivot.y())
                               .rotate(180.0 * (qreal)i / 20.0)
                               .translate(-pivot.x(), -pivot.y()),
                           true);
    }

view.setScene(&scene);
view.setTransform(view.transform().scale(2, 2));
view.show();
EDIT:
In case you meant to rotate around the global coordinate frame origin, change the rotations to:
rect->setTransform(QTransform().translate(-itemCenter.x(), -itemCenter.y())
                       .rotate(360.0 * (qreal)j / 5.0)
                       .translate(itemCenter.x(), itemCenter.y()));
rect->setTransform(QTransform().translate(pivot.x(), pivot.y())
                       .rotate(180.0 * (qreal)i / 20.0)
                       .translate(-pivot.x(), -pivot.y()),
                   true);

Cannot see IplImage in QGraphicsView

I'm trying to display a 3D scene (OpenGL/OpenCV) in a QGraphicsView object in Qt. The scene has five planes: top, bottom, right, left, and front. I'm taking images from my webcam and mapping them onto the front plane. I have successfully displayed four of the five planes; the front plane is missing.
I followed this tutorial to load the OpenGL scene: http://doc.trolltech.com/qq/qq26-openglcanvas.html
However, I don't know how to prepare the IplImage for display in the Qt object. Do you have any suggestions?
This is something I salvaged from a blog post. It will give you a QImage that you can display with Qt; you should tailor it to fit your needs.
QImage qImg;
CvCapture *cvCapture;

constructor()
{
    // set up the capture device
    cvCapture = cvCreateCapture(0);
}

getQImageFromIplImage()
{
    // grab a frame from the capture device (the frame is owned by the
    // capture and must not be released by us)
    IplImage *frame = cvQueryFrame(cvCapture);
    // create an IplImage with 8-bit color depth
    IplImage *iplImg = cvCreateImage(cvSize(frame->width, frame->height), IPL_DEPTH_8U, 3);
    // convert the pixel data from OpenCV's default BGR format to Qt's RGB format
    cvCvtColor(frame, iplImg, CV_BGR2RGB);
    // wrap the newly converted RGB pixel data in a QImage
    qImg = QImage((uchar *)iplImg->imageData, iplImg->width, iplImg->height, QImage::Format_RGB888);
}
For the full code, check out:
http://www.morethantechnical.com/2009/03/05/qt-opencv-combined-for-face-detecting-qwidgets/
