To set the coordinates of an object with QApplication and the QGraphicsItem library, setPos() is usually used, for example setPos(mapToParent(yvelocity, -xvelocity)). The problem with this command is that it determines the "speed" of the object's movement, not its X/Y coordinates!
So with which command can I give X and Y inputs so that the object moves to those coordinates?
Thanks in advance.
In my comment I gave the recommendation:
Something like setPos(pos() + mapToParent(yvelocity, -xvelocity)). Although pos() returns a QPointF, you may apply operator+; there is a suitable overload.
On second thought, I stumbled over mapToParent(), which looks wrong to me.
According to the OP, the velocity describes a direction, while mapToParent() translates a local position into a position in the parent's coordinate system.
QPointF QGraphicsItem::mapToParent(const QPointF &point) const
Maps the point point, which is in this item's coordinate system, to its parent's coordinate system, and returns the mapped coordinate. If the item has no parent, point will be mapped to the scene's coordinate system.
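To make the difference tangible, here is a minimal, hypothetical sketch (the item names are mine, not from the question). It shows that mapToParent() converts a point between coordinate systems rather than expressing a movement; feeding a velocity into it only happens to work while the item is untransformed and has no parent:
#include <QtWidgets>

int main(int argc, char **argv)
{
    QApplication app(argc, argv);

    // a hypothetical parent item with a child positioned at (30, 40) inside it
    QGraphicsRectItem parentItem(0, 0, 100, 100);
    QGraphicsRectItem childItem(0, 0, 10, 10, &parentItem);
    childItem.setPos(30, 40);

    // mapToParent() maps the child's local point (5, 5) into the parent's
    // coordinate system; it does not move the child item at all
    qDebug() << childItem.mapToParent(QPointF(5, 5)); // QPointF(35, 45)
    return 0;
}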
So, the better recommendation might be:
setPos(pos() + QPointF(yvelocity,-xvelocity));
I made an MCVE testQGraphicsItemSetPos.cc to demonstrate this:
// Qt header:
#include <QtWidgets>
// std header:
#include <chrono> // for std::chrono_literals
#include <cmath>  // for std::sin and std::cos

// main application
int main(int argc, char **argv)
{
  qDebug() << "Qt Version:" << QT_VERSION_STR;
  QApplication app(argc, argv);
  // setup data
  QGraphicsScene qGScene;
  QImage qImg("smiley.png");
  QGraphicsPixmapItem qGItemImg(QPixmap::fromImage(qImg));
  qGScene.addItem(&qGItemImg);
  const qreal v = 2.0;
  qreal xVel = 0.0, yVel = 0.0;
  // setup GUI
  QWidget qWinMain;
  qWinMain.resize(320, 200);
  qWinMain.setWindowTitle("QGraphicsView - Move Item");
  QVBoxLayout qVBox;
  QGraphicsView qGView;
  qGView.setScene(&qGScene);
  qVBox.addWidget(&qGView, 1);
  qWinMain.setLayout(&qVBox);
  qWinMain.show();
  // timer for periodic update
  using namespace std::chrono_literals;
  QTime qTime(0, 0);
  QTimer qTimerAnim;
  qTimerAnim.setInterval(50ms);
  // install signal handlers
  QObject::connect(&qTimerAnim, &QTimer::timeout,
    [&]() {
      // change velocities xVel and yVel alternating in range [-v, v]
      const qreal t = 0.001 * qTime.elapsed(); // t in seconds
      xVel = v * std::sin(t); yVel = v * std::cos(t);
      // apply current velocity to move item
      qGItemImg.setPos(qGItemImg.pos() + QPointF(xVel, yVel));
    });
  // runtime loop
  qTimerAnim.start();
  return app.exec();
}
and a CMakeLists.txt:
cmake_minimum_required(VERSION 3.10.0)
project(QGraphicsItemSetPos)
set_property(GLOBAL PROPERTY USE_FOLDERS ON)
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
set(CMAKE_CXX_EXTENSIONS OFF)
find_package(Qt5Widgets CONFIG REQUIRED)
include_directories("${CMAKE_SOURCE_DIR}")
add_executable(testQGraphicsItemSetPos testQGraphicsItemSetPos.cc)
target_link_libraries(testQGraphicsItemSetPos Qt5::Widgets)
with which I built and ran the sample in VS2017.
I have a big image of 7589x5537 pixels that I put in a scene as a QGraphicsPixmapItem.
If I scale the QGraphicsView to 14.2318 and rotate it -35 degrees, the rendering of the pixmap starts behaving weirdly: it tears or disappears completely.
This also happens at other rotations and scales, but only at large scale factors of more than 14.
I've read about X11 limitations but I'm on Windows.
I'm on Qt 5.5
I've tested changing the content of the image to a bucket fill or a tree pattern; the behaviour is exactly the same. The image is indexed, but with an RGB image I have the same issue.
Does anybody have a clue why this happens and how to fix it? Is the problem reproducible?
The issue seems to be related to the maximum value of unsigned int, independent of the image dimensions as long as the view is not rotated. Creating an untilted image of 1 million by 200 pixels, one can zoom up to 4384x. On my computer the size of unsigned int is 4 bytes, which can hold values up to roughly 4000 million.
I presume Qt doesn't crop the upscaled image to the view before scaling it, or something similar. It is weird, though, that it tears the image instead of crashing by exhausting resources, failing to allocate contiguous memory, or something else.
These are only suspicions, since at the moment I don't know how QGraphicsView implements scaling.
#include <QtWidgets>

int main(int argc, char *argv[]) {
    QApplication a(argc, argv);
    unsigned int w = 7589;
    unsigned int h = 5537;
    QImage image(w, h, QImage::Format_ARGB32);
    for (unsigned int j = 0; j < h; j++)
    {
        for (unsigned int i = 0; i < w; i++)
        {
            QRgb rgb = qRgb(i % 255, j % 255, (i + j) % 255);
            image.setPixel(i, j, rgb);
        }
    }
    QPixmap imagepm = QPixmap::fromImage(image);
    QGraphicsPixmapItem *item = new QGraphicsPixmapItem(imagepm);
    item->setTransformationMode(Qt::FastTransformation);
    QGraphicsScene *scene = new QGraphicsScene;
    scene->addItem(item);
    QGraphicsView *view = new QGraphicsView(scene);
    view->rotate(-35);
    view->scale(14.2318, 14.2318);
    view->show();
    return a.exec();
}
The fix requires cutting the image up into tiles, grouping them under a single parent item, and then proceeding as you did before. The tiles would be an implementation detail that you don't need to worry about.
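A minimal sketch of that idea follows (the helper name and the tile size are my own choices, not part of the original answer): the big pixmap is cut into moderate tiles, each tile becomes a QGraphicsPixmapItem parented to one QGraphicsItemGroup, and the group is added to the scene as the single item you work with.
QGraphicsItemGroup *addTiledPixmap(QGraphicsScene *scene, const QPixmap &pixmap,
                                   int tileSize = 1024)
{
    auto *group = new QGraphicsItemGroup;
    for (int y = 0; y < pixmap.height(); y += tileSize) {
        for (int x = 0; x < pixmap.width(); x += tileSize) {
            // clamp the tile size at the right and bottom edges
            const int w = qMin(tileSize, pixmap.width() - x);
            const int h = qMin(tileSize, pixmap.height() - y);
            auto *tileItem = new QGraphicsPixmapItem(pixmap.copy(x, y, w, h), group);
            tileItem->setTransformationMode(Qt::FastTransformation);
            tileItem->setPos(x, y); // place the tile at its offset within the group
        }
    }
    scene->addItem(group);
    return group;
}
In the reproduction code above, scene->addItem(item) would then be replaced by addTiledPixmap(scene, imagepm) and the single untiled item dropped; the rotation and scaling on the view stay exactly as before.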
Wave diagram in Qt. There are some functional issues in the logic. I drew a wave diagram using QPainter. The wave diagram is drawn perfectly, but one extra straight line is drawn as well. Can anyone help me solve this problem?
//dialog.cpp
#include "dialog.h"
#include "ui_dialog.h"
#include <QtMath>
#include <QDebug>
#include <QDialog>
#include <QtGui>
#include <QtCore>

Dialog::Dialog(QWidget *parent) :
    QDialog(parent),
    ui(new Ui::Dialog)
{
    ui->setupUi(this);
}

Dialog::~Dialog()
{
    delete ui;
}

void Dialog::paintEvent(QPaintEvent *e)
{
    e->accept();
    float scale = 40;
    //boolean negativeX = true;
    float width = 500;
    float height = 108;
    QPainter painter(this);
    QPen linepen(Qt::red);
    linepen.setWidth(2);
    QPoint p1;
    QPoint p2;
    painter.setPen(linepen);
    float xx, yy, dx = 4, x0 = width / 2, y0 = height / 2;
    //float iMax = (width - x0) / dx;
    float iMax = 63;
    //float iMin = negativeX ? -x0 / dx : 0;
    float iMin = -63;
    for (int i = iMin; i <= iMax; i++) {
        float x = x0 + xx;
        float y = y0 - yy;
        p1.setX(x);
        p1.setY(y);
        xx = dx * i;
        float xscl = xx / scale;
        yy = scale * qCos(3 * xscl);
        x = x0 + xx;
        y = y0 - yy;
        p2.setX(x);
        p2.setY(y);
        painter.drawLine(p1, p2);
    }
}
// dialog.h
protected:
void paintEvent(QPaintEvent *e);
// main.cpp
QApplication a(argc, argv);
Dialog w;
w.show();
I'm not sure (a) if this is what's causing your problem, since you have undefined behaviour (meaning anything could happen), but the first time through the loop you use xx and yy before they've been given a value:
float xx, yy, ... // will be arbitrary value.
float iMax = 63;
float iMin = -63;
for (int i = iMin; i <= iMax; i++) {
    float x = x0 + xx; // shouldn't be using them here.
    float y = y0 - yy;
    p1.setX(x);
    p1.setY(y);
    xx = dx * i; // not set until here.
    float xscl = xx / scale;
    yy = scale * qCos(3 * xscl);
This may be a simple case of initialising them to zero before use, since that's likely to be the best offset for the initial point.
(a) It's certainly feasible that this is the case since the initial value of xx and yy may be such that it sets the initial point to the rightmost end of that straight line.
That means the initial line draw, far from being a small line forming part of the cosine wave, will be from that point to the leftmost point of that wave. You can check this by simply putting a break as the final statement within the for loop and seeing that the straight line is the only thing that appears.
You could also check it by actually debugging the code, either putting a breakpoint at the calculation of x and y and seeing what the xx and yy values are, or printing them out before each time you use them. Debugging skills are an invaluable tool in your arsenal.
If that is the case, setting xx and yy to zero before the loop starts should definitely fix your problem.
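Going one step further, here is a sketch of a variant of that fix (my own suggestion, using the variables from the question's paintEvent()): instead of zero, seed xx and yy with the values for the first index, so that even the very first segment lies on the cosine curve rather than starting from an arbitrary point.
float dx = 4, x0 = width / 2, y0 = height / 2;
float iMax = 63;
float iMin = -63;
// seed xx and yy with the values belonging to the first index
float xx = dx * iMin;
float yy = scale * qCos(3 * (xx / scale));
for (int i = iMin; i <= iMax; i++) {
    float x = x0 + xx;   // previous point, now well defined
    float y = y0 - yy;
    p1.setX(x);
    p1.setY(y);
    xx = dx * i;         // current point
    float xscl = xx / scale;
    yy = scale * qCos(3 * xscl);
    p2.setX(x0 + xx);
    p2.setY(y0 - yy);
    painter.drawLine(p1, p2);
}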
I have a problem with QPainter on QImage in Qt 5.4.
The image has Format_ARGB32. I want to set a given RGBA value on pixels in the image using a QPainter draw function and later read the value back using QImage::pixel.
Yet the value painted and the value read back are different. What am I doing wrong?
Sample code:
QImage image(100, 100, QImage::Format_ARGB32);
uint value = 0x44fa112b; //some value..
QPainter painter(&image);
painter.setCompositionMode(QPainter::CompositionMode_Source);
QColor color(qRed(value), qGreen(value), qBlue(value), qAlpha(value));
QBrush brush(color);
painter.setBrush(brush);
painter.drawRect(0,0,image.width(), image.height());
uint value1 = image.pixel(50,50);
// value1 IS NOT EQUAL TO value. Why??
This works fine in Qt 5.7. Perhaps earlier Qt versions need the painter.end() call.
#include <QtGui>

int main(int argc, char **argv) {
    QGuiApplication app{argc, argv};
    QImage image{100, 100, QImage::Format_ARGB32};
    auto const set = 0x44fa112b;
    QPainter painter(&image);
    painter.setCompositionMode(QPainter::CompositionMode_Source);
    painter.setBrush({{qRed(set), qGreen(set), qBlue(set), qAlpha(set)}});
    painter.drawRect(image.rect());
    if (false) painter.end(); // << try with true here
    auto readback = image.pixel(50, 50);
    qDebug() << hex << set << readback;
    Q_ASSERT(readback == set);
}
Problem solved!
It works properly when I tried with Qt 5.8.
Looks like a bug in Qt 5.4.
Thanks to all :)
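For anyone stuck on an older Qt version, a minimal defensive sketch (my own suggestion, not taken from the answer above) is to scope the painter so that it is destroyed, and therefore end()ed, before any pixel is read back:
QImage image(100, 100, QImage::Format_ARGB32);
const uint value = 0x44fa112b;
{
    // the painter's destructor calls end() and flushes all pending
    // drawing into the image before we read pixels back
    QPainter painter(&image);
    painter.setCompositionMode(QPainter::CompositionMode_Source);
    painter.fillRect(image.rect(), QColor::fromRgba(value));
}
const uint value1 = image.pixel(50, 50); // now equal to value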
I have a scene with an inverted y-axis. Everything is correctly drawn except QImages.
I use drawImage() as:
QRectF aWorldRect = ...
QRectF anImageRect = QRectF(0, 0, theQImage.width(), theQImage.height());
thePainter->drawImage(aWorldRect, theQImage, anImageRect);
I get undefined graphics outside of (above) where the image should be displayed. This is normal because the y-axis is inverted. So I expected something like this might fix the issue:
QRectF anImageRect = QRectF(0, 0, imgWidth, -imgHeight);
It has the same effect. If I do aWorldRect = aWorldRect.normalized() before calling drawImage(), I get the image in the correct rectangle but mirrored, so I did aQImage = aQImage.mirrored(). Now the image is correctly displayed in the correct rectangle.
I consider this a workaround that I don't want to keep. So, can someone tell me what should be done to display the image the right way?
Update
Here is a minimal sample of my problem that is ready to compile:
Update 2014-04-09 10:05 EET
Updated the sample code a little bit to make it really work using the workaround.
#include <QtGui>

const int WIDTH = 640;
const int HEIGHT = 480;

class View : public QGraphicsView
{
protected:
    void drawBackground(QPainter *p, const QRectF &rect)
    {
        QImage img = QImage("/usr/share/backgrounds/images/stone_bird.jpg"); // or any other
        /* The next three lines make everything display correctly but
           should be considered a workaround */
        /* I ignore the rect that is passed to the function on purpose */
        QRectF imageRect = QRectF(QPointF(0, 0), QPointF(img.width(), img.height()));
        QRectF theSceneRect = sceneRect().normalized();
        p->drawImage(theSceneRect, img.mirrored(), imageRect);
    }
};

int main(int argc, char *argv[])
{
    QApplication a(argc, argv);
    View w;
    /* I don't want to change the code below */
    w.setScene(new QGraphicsScene(QRectF(QPointF(0, HEIGHT), QPointF(WIDTH, 0))));
    w.scale(1, -1);
    w.scene()->addLine(0, HEIGHT, WIDTH, 0);
    w.showMaximized();
    return a.exec();
}
The approach of reversing the Y coordinate value is right but the implementation was faulty.
QRectF's documentation shows that it takes (x, y, width, height). Giving a negative height makes little sense. Instead, try the other constructor, which takes topLeft and bottomRight:
QRectF anImageRect(QPointF(0.0f, -imgHeight), QPointF(imageWidth, 0.0f));
EDIT:
It seems that only drawings like lines, arcs, etc. are affected by the scale(1, -1) transform you set on the view; drawImage() continues to render upside down because of that scale. The simple fix is to temporarily reset the painter's transform to identity while drawing the image. Here's the updated code:
void drawBackground(QPainter *p, const QRectF &rect)
{
    QImage img = QImage("/usr/share/backgrounds/images/stone_bird.jpg");
    // back up the current transform (which has the earlier scale of (1, -1))
    const QTransform oldTransform = p->transform();
    // set the transform back to identity to make the Y axis go from top to bottom
    p->setTransform(QTransform());
    // draw
    QRectF theSceneRect = sceneRect().normalized();
    p->drawImage(theSceneRect, img);
    // revert to the earlier transform
    p->setTransform(oldTransform);
}
Updated on 2014-04-14 14:35 EET
I could finally solve the problem reliably by replacing the two lines
QRectF theSceneRect = sceneRect().normalized();
p->drawImage(theSceneRect, img.mirrored(), imageRect);
of my question with:
QRectF theSceneRect = sceneRect(); // not normalized; it is no longer a workaround :)
qreal x = theSceneRect.x();
qreal y = theSceneRect.y();
qreal w = theSceneRect.width();
qreal h = theSceneRect.height();
qreal sx = imageRect.x();
qreal sy = imageRect.y();
qreal sw = imageRect.width();
qreal sh = imageRect.height();
p->save(); // balance the restore() below
p->translate(x, y);
p->scale(w / sw, h / sh);
p->setBackgroundMode(Qt::TransparentMode);
p->setRenderHint(QPainter::Antialiasing, p->renderHints() & QPainter::SmoothPixmapTransform);
QBrush brush(img);
p->setBrush(brush);
p->setPen(Qt::NoPen);
p->setBrushOrigin(QPointF(-sx, -sy));
p->drawRect(QRectF(0, 0, sw, sh));
p->restore();
This is inspired by the implementation of QPainter::drawImage(), which is not reliable in such cases because of the many if statements handling rectangles with negative width or height values.
It would have been better to put the solution into a separate function, but I kept it this way to stay closer to the code in my question.
I have an RGB888 format QImage defined as follows:
myQrgb = QImage(img_in, width, height, QImage::Format_RGB888);
I wish to alter specific pixel values, so I followed the example here, like so:
QRgb value = qRgb(0, 0, 0);
myQrgb.setPixel(i, j, value);
This, however, always produces a segmentation fault regardless of the values of i and j (e.g. i = j = 2).
I am guessing it is because I am incorrectly using QRgb to manipulate pixels in a QImage::Format_RGB888. What should I do instead?
I think the problem may be more related to the img_in data with which you are initializing the image. Are you sure that data is valid?
The following example successfully paints a white square with a black square in the corner.
#include <QtGui>

int main(int argc, char **argv) {
    QApplication app(argc, argv);
    QImage img(100, 100, QImage::Format_RGB888);
    img.fill(QColor(Qt::white).rgb());
    for (int x = 0; x < 10; ++x) {
        for (int y = 0; y < 10; ++y) {
            img.setPixel(x, y, qRgb(0, 0, 0));
        }
    }
    QLabel l;
    l.setPixmap(QPixmap::fromImage(img));
    l.show();
    return app.exec();
}
There are a few things you need to confirm:
For the QImage constructor you're using, make sure img_in remains valid throughout the life span of the QImage object. By the way, the QImage destructor will not delete your data (img_in).
If the pixel position you're setting is not a valid coordinate, setPixel()'s behavior is undefined.
I suspect the first case: img_in is probably vanishing from under the QImage. You may want to try creating a QImage with another constructor, like QImage(10, 10, QImage::Format_RGB888), and play with setPixel().
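If img_in really is the culprit, a minimal defensive sketch (the stride of width * 3 is an assumption; pass the real bytes-per-line if your rows are padded) is to let the QImage take its own deep copy of the data, so it no longer depends on the lifetime of img_in:
// wrap the external buffer, then detach from it with a deep copy
QImage wrapped(img_in, width, height, width * 3, QImage::Format_RGB888);
QImage myImage = wrapped.copy();

// manipulating pixels is now safe even if img_in is freed later
myImage.setPixel(2, 2, qRgb(0, 0, 0));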