QTransform quadToQuad always returning false - qt

I'm trying to use quadToQuad to transform coordinates so I can draw a pixel on a resized image and transfer those coordinates back to the original image.
As a test, I made two rectangles, but the quadToQuad function always returns false.
Does anyone know why it's failing?
QRectF origRect(QPointF(0, 0), QPointF(800, 800));
QRectF resizeRect(QPointF(0, 0), QPointF(400, 400));
QTransform mappingTransform;
qDebug() << (QTransform::quadToQuad(origRect, resizeRect, mappingTransform));
QPointF point = mappingTransform.map(QPointF(x, y));

Offhand, I'd say the problem is that quadToQuad takes QPolygonF arguments, not QRectF, and it requires each polygon to contain exactly four points. Note that the QPolygonF(const QRectF &) constructor produces a closed polygon with five points (the top-left vertex is repeated at the end), so simply converting the rectangles is not enough; build the four-corner quads explicitly, for example:
QPolygonF orig = rectToQuad(origRect);
QPolygonF resize = rectToQuad(resizeRect);
qDebug() << QTransform::quadToQuad(orig, resize, mappingTransform);
using the rectToQuad() helper shown below.
Dug into some of my source code, this is how I compute one transform:
QRectF tileBox(QPointF(w, s+D), QPointF(w+D, s));
QPolygonF q0 = rectToQuad(tileBox);
QPolygonF q1;
q1 << QPointF(0,0)
<< QPointF(1201,0)
<< QPointF(1201,1201)
<< QPointF(0, 1201);
QTransform tileTransform;
bool ok = QTransform::quadToQuad(q0, q1, tileTransform);
I coded rectToQuad like this:
QPolygonF MyClass::rectToQuad(QRectF rect)
{
    QPolygonF quad;
    quad << rect.topLeft()
         << rect.topRight()
         << rect.bottomRight()
         << rect.bottomLeft();
    return quad;
}
So I must have had an issue similar to yours. You may wish to try something similar.
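For the original 800×800 to 400×400 case, a minimal sketch of the same idea (untested, and the variable names are mine) could look like this:
QRectF origRect(QPointF(0, 0), QPointF(800, 800));
QRectF resizeRect(QPointF(0, 0), QPointF(400, 400));
// Build four-point quads; a QPolygonF constructed directly from a QRectF
// is closed (five points) and is rejected by quadToQuad.
QPolygonF origQuad;
origQuad << origRect.topLeft() << origRect.topRight()
         << origRect.bottomRight() << origRect.bottomLeft();
QPolygonF resizeQuad;
resizeQuad << resizeRect.topLeft() << resizeRect.topRight()
           << resizeRect.bottomRight() << resizeRect.bottomLeft();
QTransform mappingTransform;
if (QTransform::quadToQuad(origQuad, resizeQuad, mappingTransform)) {
    // Map a point from the original image to the resized one...
    QPointF resized = mappingTransform.map(QPointF(200, 200));    // -> (100, 100)
    // ...and back again with the inverse transform.
    QPointF original = mappingTransform.inverted().map(resized);  // -> (200, 200)
}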

Related

How to get camera intrinsics and extrinsics in openni2?

I have a PrimeSense Carmine 1.08 and a Carmine 1.09. I need the intrinsic parameters for the RGB and the IR camera and the extrinsics between the two. I use PCL with OpenNI2 support, so I need to know the sensor parameters used by OpenNI2/PCL.
Is there a way to find the intrinsics and extrinsics using OpenNI2/PCL? libfreenect2 has an option to get the IR and color camera intrinsics, but are these the same parameters that OpenNI uses? Are all these parameters extracted from the sensor at runtime?
I tried to get them via PCL, but I get NaN for the focal length and the principal point:
#include <pcl/io/openni2_grabber.h>
#include <iostream>

int main (int argc, char** argv)
{
  std::string device_id ("");
  pcl::io::OpenNI2Grabber::Mode depth_mode =
      pcl::io::OpenNI2Grabber::OpenNI_Default_Mode;
  pcl::io::OpenNI2Grabber::Mode image_mode =
      pcl::io::OpenNI2Grabber::OpenNI_Default_Mode;

  pcl::io::OpenNI2Grabber grabber (device_id, depth_mode, image_mode);
  grabber.start ();

  double fx, fy, px, py;
  grabber.getDepthCameraIntrinsics (fx, fy, px, py);
  std::cout << "fx=" << fx << std::endl;
  std::cout << "fy=" << fy << std::endl;
  std::cout << "px=" << px << std::endl;
  std::cout << "py=" << py << std::endl;
  return 0;
}
A similar question has been asked here: https://stackoverflow.com/questions/41110791/openni-intrinsic-and-extrinsic-calibration. However, it hasn't received any answers.
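Not an answer from this thread, but one commonly used workaround (untested here) is to compute approximate intrinsics from the field of view that OpenNI2 reports for a stream, assuming square pixels and a principal point at the image center:
#include <OpenNI.h>
#include <cmath>
#include <iostream>

int main()
{
    openni::OpenNI::initialize();

    openni::Device device;
    device.open(openni::ANY_DEVICE);

    openni::VideoStream depth;
    depth.create(device, openni::SENSOR_DEPTH);
    depth.start();

    const openni::VideoMode mode = depth.getVideoMode();
    const int resX = mode.getResolutionX();
    const int resY = mode.getResolutionY();

    // fx = resX / (2 * tan(hFov / 2)); an approximation, not a factory calibration.
    const float hFov = depth.getHorizontalFieldOfView();
    const float vFov = depth.getVerticalFieldOfView();
    const double fx = resX / (2.0 * std::tan(hFov / 2.0));
    const double fy = resY / (2.0 * std::tan(vFov / 2.0));

    std::cout << "fx=" << fx << " fy=" << fy
              << " cx=" << resX / 2.0 << " cy=" << resY / 2.0 << std::endl;

    depth.stop();
    depth.destroy();
    device.close();
    openni::OpenNI::shutdown();
    return 0;
}
This only gives the pinhole intrinsics per stream; the depth-to-color extrinsics are not exposed this way.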

Clipper library: offsetting a line polygon results in a union

I want to draw a line polygon and offset it with the Clipper library, but the result comes out unioned (filled) instead.
Suppadech, I suggest you pass two closed paths to the ClipperOffset object, where the second path is oriented in the opposite direction to the first.
#include "clipper.hpp"

using namespace ClipperLib;

int main()
{
    Paths subj(2);
    Paths solution;

    // The second path is the first one reversed, so the pair describes an outline.
    subj[0] << IntPoint(10,10) << IntPoint(100,10) << IntPoint(100,100) << IntPoint(10,100);
    subj[1] << IntPoint(10,10) << IntPoint(10,100) << IntPoint(100,100) << IntPoint(100,10);

    ClipperOffset co;
    co.AddPaths(subj, jtSquare, etClosedPolygon);
    co.Execute(solution, 5.0);
    return 0;
}

Drawing QImage on a QPainter which has inverted y-axis

I have a scene with an inverted y-axis. Everything is correctly drawn except QImages.
I use drawImage() as:
QRectF aWorldRect = ...
QRectF anImageRect = QRectF(0, 0, theQImage.width(), theQImage.height())
thePainter->drawImage(aWorldRect, theQImage, anImageRect);
I get undefined graphics outside (above) the area where the image should be displayed. This is normal because the y-axis is inverted. So I expected something like this might fix the issue:
QRectF anImageRect = QRectF(0, 0, imgWidth, -imgHeight)
It has the same effect. If I do aWorldRect = aWorldRect.normalized() before calling drawImage(), I get the image in the correct rectangle but mirrored, so I added aQImage = aQImage.mirrored(). Now the image is displayed correctly in the correct rectangle.
I consider this a workaround that I would rather not keep. So, can someone tell me what should be done to display the image the right way?
Update
Here is a minimal sample of my problem that is ready to compile:
Update 2014-04-09 10:05 EET
Updated the sample code a little to make it really work using the workaround.
#include <QtGui>

const int WIDTH = 640;
const int HEIGHT = 480;

class View : public QGraphicsView
{
protected:
    void drawBackground(QPainter *p, const QRectF &rect)
    {
        QImage img = QImage("/usr/share/backgrounds/images/stone_bird.jpg"); // or any other
        /* The next three lines make everything display correctly but
           should be considered a workaround */
        /* I ignore the rect that is passed to the function on purpose */
        QRectF imageRect = QRectF(QPointF(0, 0), QPointF(img.width(), img.height()));
        QRectF theSceneRect = sceneRect().normalized();
        p->drawImage(theSceneRect, img.mirrored(), imageRect);
    }
};

int main(int argc, char *argv[])
{
    QApplication a(argc, argv);
    View w;
    /* I don't want to change the code below */
    w.setScene(new QGraphicsScene(QRectF(QPointF(0, HEIGHT), QPointF(WIDTH, 0))));
    w.scale(1, -1);
    w.scene()->addLine(0, HEIGHT, WIDTH, 0);
    w.showMaximized();
    return a.exec();
}
The approach of reversing the Y coordinate value is right, but the implementation was faulty.
QRectF's documentation shows that this constructor takes (x, y, width, height), and giving a negative height makes little sense. Instead, try the other constructor, which takes the topLeft and bottomRight points:
QRectF anImageRect(QPointF(0.0f, -imgHeight), QPointF(imageWidth, 0.0f));
EDIT:
It seems that drawings such as lines and arcs come out as intended under the scale(1, -1) transform you set on the view, while drawImage continues to render the image upside down because of that scale. The simple fix is to temporarily reset the painter's transform to identity while drawing the image. Here's the updated code:
void drawBackground(QPainter *p, const QRectF &rect)
{
    QImage img = QImage("/usr/share/backgrounds/images/stone_bird.jpg");

    // backup the current transform set (which has the earlier scale of (1, -1))
    const QTransform oldTransform = p->transform();

    // set the transform back to identity to make the Y axis go from top to bottom
    p->setTransform(QTransform());

    // draw
    QRectF theSceneRect = sceneRect().normalized();
    p->drawImage(theSceneRect, img);

    // revert back to the earlier transform
    p->setTransform(oldTransform);
}
Updated on 2014-04-14 14:35 EET
I could finally solve the problem reliably by replacing the two lines
QRectF theSceneRect = sceneRect().normalized();
p->drawImage(theSceneRect, img.mirrored(), imageRect);
of my question with the following:
p->save(); // so the restore() at the end has a matching save()
QRectF theSceneRect = sceneRect(); // Not normalized. It is no longer a workaround :)
qreal x = theSceneRect.x();
qreal y = theSceneRect.y();
qreal w = theSceneRect.width();
qreal h = theSceneRect.height();
qreal sx = imageRect.x();
qreal sy = imageRect.y();
qreal sw = imageRect.width();
qreal sh = imageRect.height();
p->translate(x, y);
p->scale(w / sw, h / sh);
p->setBackgroundMode(Qt::TransparentMode);
p->setRenderHint(QPainter::Antialiasing, p->renderHints() &
QPainter::SmoothPixmapTransform);
QBrush brush(img);
p->setBrush(brush);
p->setPen(Qt::NoPen);
p->setBrushOrigin(QPointF(-sx, -sy));
p->drawRect(QRectF(0, 0, sw, sh));
p->restore();
This is inspired by the implementation of QPainter::drawImage(), which is not reliable in such cases because of the many if statements that handle rectangles with negative width or height values.
It would be better to put this solution in a separate function, but I kept it this way to stay closer to the code in my question.

Mouse click event in Qt 3.0.3

I am trying to create an automatic mouse click event at a particular coordinate.
This source code moves the mouse pointer to the coordinate, but it does not click.
Please help me solve this problem or suggest another way to automate a mouse click event.
Note: I am using Qt 3.0.3.
void mMouseClickFunction()
{
    QWidget *d = QApplication::desktop()->screen();
    int w = d->width(); // returns desktop width
    int h = d->height();
    printf("w=%d\nh=%d\n", w, h);

    int x, y;
    printf("Enter the points...\n");
    scanf("%d%d", &x, &y);

    QApplication::desktop()->cursor().setPos(x, y);
    QPoint pt(x, y);
    std::cout << pt.x() << " " << pt.y() << std::endl;

    QMouseEvent *e = new QMouseEvent(QEvent::MouseButtonPress, pt, Qt::LeftButton, 0);
    QApplication::sendEvent(d, e);
    std::cout << "in contentsMousePressEvent " << e->x() << " " << e->y() << std::endl;

    QMouseEvent *p = new QMouseEvent(QEvent::MouseButtonRelease, pt, Qt::LeftButton, 0);
    QApplication::sendEvent(d, p);
    std::cout << "in contentsMouseReleaseEvent " << p->x() << " " << p->y() << std::endl;
}
QApplication::sendEvent sends an event internally within the QApplication, not a system-wide event. You probably need a platform-specific call to send an event like that. Here is the function for Windows:
http://msdn.microsoft.com/en-us/library/windows/desktop/ms646310(v=vs.85).aspx
But even with that kind of call, you will be limited to certain windows unless UIAccess is set to true and your program is a signed application installed in the right location on the hard drive.
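For illustration only, a minimal sketch of that Windows call (SendInput, not Qt code, and untested here) could look like this:
#include <windows.h>

// Click the left mouse button at absolute screen position (x, y).
void clickAt(int x, int y)
{
    // SendInput expects absolute coordinates normalized to the 0..65535 range.
    const int screenW = GetSystemMetrics(SM_CXSCREEN);
    const int screenH = GetSystemMetrics(SM_CYSCREEN);

    INPUT inputs[3] = {};

    inputs[0].type = INPUT_MOUSE;
    inputs[0].mi.dx = (x * 65535) / screenW;
    inputs[0].mi.dy = (y * 65535) / screenH;
    inputs[0].mi.dwFlags = MOUSEEVENTF_MOVE | MOUSEEVENTF_ABSOLUTE;

    inputs[1].type = INPUT_MOUSE;
    inputs[1].mi.dwFlags = MOUSEEVENTF_LEFTDOWN;

    inputs[2].type = INPUT_MOUSE;
    inputs[2].mi.dwFlags = MOUSEEVENTF_LEFTUP;

    SendInput(3, inputs, sizeof(INPUT));
}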
EDIT: Here is a page that has some examples for sending input in Linux:
http://www.linuxquestions.org/questions/programming-9/simulating-a-mouse-click-594576/
Hope that helps.

Does Qt have a way to find bounding box of an image?

Given a .png image with a transparent background, I want to find the bounding box of the non-transparent data. Using nested for loops with QImage::pixel() is painfully slow. Is there a built-in method of doing this in Qt?
There is one option that involves using a QGraphicsPixmapItem and querying for the bounding box of the opaque area (QGraphicsPixmapItem::opaqueArea().boundingRect()). Not sure if it is the best way, but it works :) It might be worth digging into Qt's source code to see what is at the heart of it.
The following code will print out the width and height of the image followed by the width and height of the opaque portions of the image:
QPixmap p("image.png");
QGraphicsPixmapItem *item = new QGraphicsPixmapItem(p);
std::cout << item->boundingRect().width() << "," << item->boundingRect().height() << std::endl;
std::cout << item->opaqueArea().boundingRect().width() << "," << item->opaqueArea().boundingRect().height() << std::endl;
If pixel() is too slow for you, consider more efficient row-wise data addressing, given a QImage p:
// l, r, t, b track the left, right, top and bottom bounds of the opaque pixels
int l = p.width(), r = 0, t = p.height(), b = 0;
for (int y = 0; y < p.height(); ++y) {
    QRgb *row = (QRgb*)p.scanLine(y);
    bool rowFilled = false;
    for (int x = 0; x < p.width(); ++x) {
        if (qAlpha(row[x])) {
            rowFilled = true;
            r = std::max(r, x);
            if (l > x) {
                l = x;
                x = r; // shortcut to only search for a new right bound from here
            }
        }
    }
    if (rowFilled) {
        t = std::min(t, y);
        b = y;
    }
}
I doubt it will get any faster than this.
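After the loop, the bounds can be turned into a rectangle; this small usage sketch also guards against a fully transparent image:
QRect bounds; // stays null if no opaque pixel was found
if (b >= t)
    bounds = QRect(QPoint(l, t), QPoint(r, b));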
The easiest and also relatively fast solution is to do as follows:
QRegion(QBitmap::fromImage(image.createMaskFromColor(0x00000000))).boundingRect()
If you have a QPixmap rather than QImage, then you can use:
QRegion(pixmap.createMaskFromColor(Qt::transparent)).boundingRect()
QPixmap::createMaskFromColor internally will convert the pixmap to an image and do the same as above. An even shorter solution for QPixmap is:
QRegion(pixmap.mask()).boundingRect()
In this case, a QPixmap without alpha channel will result in an empty region, so you may need to check for that explicitly. Incidentally, this is also what QGraphicsPixmapItem::opaqueArea mentioned by #Arnold Spence is based on.
You may also want to try QImage::createAlphaMask, though the cutoff point will not be at 0 alpha but rather somewhere at half opacity.
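That variant would look roughly like this (untested sketch, following the same pattern as above):
QRegion(QBitmap::fromImage(image.createAlphaMask())).boundingRect()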
