I have a big image of 7589x5537 that I put in a scene as a QGraphicsPixmapItem.
If I scale the QGraphicsView to 14.2318 and rotate it -35 degrees, the rendering of the pixmap starts behaving weirdly: tearing or disappearing completely.
This happens at other rotations and scales too, but only at large scale factors of more than about 14.
I've read about X11 limitations but I'm on Windows.
I'm on Qt 5.5
I've tested changing the content of the image to a bucket fill of a tree pattern; the behaviour is exactly the same. The image is indexed, but with an RGB image I have the same issue.
Does anybody have a clue why this happens and how to fix it? Is the problem reproducible?
The issue seems to be related to the maximum value of unsigned int, and it is dimension-independent when the view is not rotated. Creating an untilted image of 1 million by 200 pixels, one can zoom up to 4384x. On my computer the size of unsigned int is 4 bytes, which can hold values up to roughly 4 billion.
I presume Qt doesn't crop the upscaled image to the view before scaling it, or something similar. It is weird, though, that it tears the image instead of crashing by exhausting resources, failing to allocate contiguous memory, or something else.
These are only suspicions, since at the moment I don't know how QGraphicsView implements scaling.
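For what it's worth, a back-of-the-envelope check is consistent with that suspicion if the limit is on the scaled width in device pixels (this is my own arithmetic, not anything confirmed against the Qt sources):
1,000,000 px * 4384 ≈ 4.38 * 10^9
UINT_MAX = 2^32 - 1 ≈ 4.29 * 10^9
so the untilted 1-million-pixel-wide image hits the 32-bit boundary right around the zoom level where the artifacts start. Here is a minimal example that reproduces the problem: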
#include <QtWidgets>

int main(int argc, char *argv[])
{
    QApplication a(argc, argv);

    unsigned int w = 7589;
    unsigned int h = 5537;
    QImage image(w, h, QImage::Format_ARGB32);
    for (unsigned int j = 0; j < h; j++) {
        for (unsigned int i = 0; i < w; i++) {
            QRgb rgb = qRgb(i % 255, j % 255, (i + j) % 255);
            image.setPixel(i, j, rgb);
        }
    }

    QPixmap imagepm = QPixmap::fromImage(image);
    QGraphicsPixmapItem *item = new QGraphicsPixmapItem(imagepm);
    item->setTransformationMode(Qt::FastTransformation);

    QGraphicsScene *scene = new QGraphicsScene;
    scene->addItem(item);

    QGraphicsView *view = new QGraphicsView(scene);
    view->rotate(-35);
    view->scale(14.2318, 14.2318);
    view->show();

    return a.exec();
}
The fix requires cutting the image up into tiles, grouping them under a single parent item, and then proceeding as you did before. The tiles would be an implementation detail that you don't need to worry about.
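A minimal sketch of that idea, reusing the variables from the example above; the helper name makeTiledItem and the 2048-pixel tile size are my own choices, not something prescribed by Qt:
QGraphicsItemGroup *makeTiledItem(const QPixmap &source, int tileSize = 2048)
{
    // Group the tiles under one parent so they move and transform as a single item.
    QGraphicsItemGroup *group = new QGraphicsItemGroup;
    for (int y = 0; y < source.height(); y += tileSize) {
        for (int x = 0; x < source.width(); x += tileSize) {
            const int w = qMin(tileSize, source.width() - x);
            const int h = qMin(tileSize, source.height() - y);
            QGraphicsPixmapItem *tile = new QGraphicsPixmapItem(source.copy(x, y, w, h), group);
            tile->setOffset(x, y); // keep each tile at its original position in the big image
        }
    }
    return group;
}
You would then replace scene->addItem(item) with scene->addItem(makeTiledItem(imagepm)) and keep rotating and scaling the view exactly as before; the view only has to transform the tiles that actually intersect the viewport.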
Related
I have a problem with QPainter on QImage in Qt 5.4.
The image has Format_ARGB32. I want to set a given RGBA value on pixels in the image using a QPainter draw function and later read the value back using QImage::pixel.
Yet the value painted and the value read back are different. What am I doing wrong?
Sample code:
QImage image(100, 100, QImage::Format_ARGB32);
uint value = 0x44fa112b; //some value..
QPainter painter(&image);
painter.setCompositionMode(QPainter::CompositionMode_Source);
QColor color(qRed(value), qGreen(value), qBlue(value), qAlpha(value));
QBrush brush(color);
painter.setBrush(brush);
painter.drawRect(0,0,image.width(), image.height());
uint value1 = image.pixel(50,50);
// value1 IS NOT EQUAL TO value. Why??
This works fine in Qt 5.7. Perhaps earlier Qt versions need the painter.end() call.
#include <QtGui>

int main(int argc, char **argv) {
    QGuiApplication app{argc, argv};
    QImage image{100, 100, QImage::Format_ARGB32};
    auto const set = 0x44fa112b;
    QPainter painter(&image);
    painter.setCompositionMode(QPainter::CompositionMode_Source);
    painter.setBrush({{qRed(set), qGreen(set), qBlue(set), qAlpha(set)}});
    painter.drawRect(image.rect());
    if (false) painter.end(); //<< try with true here
    auto readback = image.pixel(50, 50);
    qDebug() << hex << set << readback;
    Q_ASSERT(readback == set);
}
Problem solved!!
It works properly when I tried with Qt 5.8.
Looks like a bug in Qt 5.4.
Thanks to all :)
I want to display sample6 of the OptiX SDK in a QGLWidget.
I've read the topic in the Nvidia OptiX forum, but I'm not getting anywhere because, unfortunately, I have no idea how I should override the paintGL() method.
At first I simply tried to read the output buffer of sample6 and save it in a QImage:
QImage img(m_width, m_height, QImage::Format_RGB32);
QColor color;
int idx;
void *data = m_outputBuffer->map();
typedef struct { float r; float g; float b; float a; } rgb;
rgb *rgb_data = (rgb *)data;
for (unsigned int i = 0; i < m_width * m_height; ++i) {
    std::cout << rgb_data[i].r << "," << rgb_data[i].g << "," << rgb_data[i].b << std::endl;
    float red   = rgb_data[i].r; if (red   > 1.0) red   = 1.0;
    float green = rgb_data[i].g; if (green > 1.0) green = 1.0;
    float blue  = rgb_data[i].b; if (blue  > 1.0) blue  = 1.0;
    float alpha = rgb_data[i].a; if (alpha > 1.0) alpha = 1.0;
    color.setRgbF(red, green, blue, alpha);
    idx = floor((float)i / m_height);
    img.setPixel(i - (idx * m_height), idx, color.rgb());
}
m_outputBuffer->unmap();
img.save("optixSampleSix.png", "PNG");
and the method mentioned by RoboMod in the Nvidia OptiX forum, but in both cases I get a black picture or nonsense. Nevertheless, if I use the functions provided by sutil to save the output in a .ppm file, everything seems right.
So my question is how to get from the OptiX output buffer to the rendered OpenGL scene properly.
What about constructing QImage directly?
uchar *data = (uchar *)m_outputBuffer->map();
QImage img(data, m_width, m_height, QImage::Format_ARGB32);
// or maybe Format_RGBA8888 would work for you.. you have to check the docs
img.save("optixSampleSix.png", "PNG"); // use (or copy()) the image before unmapping: QImage does not take ownership of data
m_outputBuffer->unmap();
I have a scene with an inverted y-axis. Everything is correctly drawn except QImages.
I use drawImage() as:
QRectF aWorldRect = ...
QRectF anImageRect = QRectF(0, 0, theQImage.width(), theQImage.height())
thePainter->drawImage(aWorldRect, theQImage, anImageRect);
I get undefined graphics outside of (above) where the image should be displayed. This is normal because the y-axis is inverted, so I expected something like the following might fix the issue:
QRectF anImageRect = QRectF(0, 0, imgWidth, -imgHeight)
It has exactly the same effect. If I do aWorldRect = aWorldRect.normalized() before calling drawImage(), I get the image in the correct rectangle but mirrored, so I also did aQImage = aQImage.mirrored(). Now the image is displayed correctly in the correct rectangle.
I consider this a workaround which I'd rather not keep. So, can someone tell me what should be done to get the image displayed the right way?
Update
Here is a minimal sample of my problem, ready to compile:
Update 2014-04-09 10:05 EET
Updated the sample code a little bit to make it really work using the workaround.
#include <QtGui>

const int WIDTH = 640;
const int HEIGHT = 480;

class View : public QGraphicsView
{
protected:
    void drawBackground(QPainter *p, const QRectF &rect)
    {
        QImage img = QImage("/usr/share/backgrounds/images/stone_bird.jpg"); // or any other
        /* The next three lines make everything display correctly but
           should be considered a workaround */
        /* I ignore the rect that is passed to the function on purpose */
        QRectF imageRect = QRectF(QPointF(0, 0), QPointF(img.width(), img.height()));
        QRectF theSceneRect = sceneRect().normalized();
        p->drawImage(theSceneRect, img.mirrored(), imageRect);
    }
};

int main(int argc, char *argv[])
{
    QApplication a(argc, argv);
    View w;
    /* I don't want to change the code below */
    w.setScene(new QGraphicsScene(QRectF(QPointF(0, HEIGHT), QPointF(WIDTH, 0))));
    w.scale(1, -1);
    w.scene()->addLine(0, HEIGHT, WIDTH, 0);
    w.showMaximized();
    return a.exec();
}
The approach of reversing the Y coordinate value is right but the implementation was faulty.
QRectF's documentation shows that it takes (x, y, width, height). Giving height as negative makes little sense. Instead try the other constructor which takes topLeft and bottomRight.
QRectF anImageRect(QPointF(0.0f, -imgHeight), QPointF(imgWidth, 0.0f));
EDIT:
It seems that drawings like lines, arcs, etc. come out as intended under the scale(1, -1) transform you set on the view, while drawImage ends up rendering upside down because of that scale. The simple fix is to reset the painter's transform to identity (i.e. scale (1, 1)) while drawing the image. Here's the updated code:
void drawBackground(QPainter *p, const QRectF &rect)
{
    QImage img = QImage("/usr/share/backgrounds/images/stone_bird.jpg");

    // backup the current transform set (which has the earlier scale of (1, -1))
    const QTransform oldTransform = p->transform();
    // set the transform back to identity to make the Y axis go from top to bottom
    p->setTransform(QTransform());
    // draw
    QRectF theSceneRect = sceneRect().normalized();
    p->drawImage(theSceneRect, img);
    // revert back to the earlier transform
    p->setTransform(oldTransform);
}
Updated on 2014-04-14 14:35 EET
I could finally solve the problem reliably by replacing the two lines
QRectF theSceneRect = sceneRect().normalized();
p->drawImage(theSceneRect, img.mirrored(), imageRect);
of my question to
QRectF theSceneRect = sceneRect(); // Not normalized. It is no longer a workaround :)
qreal x = theSceneRect.x();
qreal y = theSceneRect.y();
qreal w = theSceneRect.width();
qreal h = theSceneRect.height();
qreal sx = imageRect.x();
qreal sy = imageRect.y();
qreal sw = imageRect.width();
qreal sh = imageRect.height();
p->save(); // needed so the restore() at the end has a matching save
p->translate(x, y);
p->scale(w / sw, h / sh);
p->setBackgroundMode(Qt::TransparentMode);
p->setRenderHint(QPainter::Antialiasing, p->renderHints() & QPainter::SmoothPixmapTransform);
QBrush brush(img);
p->setBrush(brush);
p->setPen(Qt::NoPen);
p->setBrushOrigin(QPointF(-sx, -sy));
p->drawRect(QRectF(0, 0, sw, sh));
p->restore();
This is inspired by the implementation of QPainter::drawImage(), which is not reliable in such cases due to the many if statements handling rectangles with negative width or height values.
It would be better to put this solution in a separate function, but I kept it this way to stay closer to the code in my question.
I am trying to draw a 10-millisecond grid in a QGraphicsScene in Qt. I am not very familiar with Qt; it's the first time I've used it, and only because the application needs to be portable between Windows and Linux.
I actually don't have a problem drawing the grid; it's just the performance when the grid gets big. The grid has to be able to change size to fit the sceneRect if/when new data is loaded into the program to be displayed.
This is how I do it at the moment. I hate it, but it's the only way I can think of doing it...
void Plotter::drawGrid() {
    unsigned int i;
    QGraphicsLineItem *line;
    QGraphicsTextItem *text;
    QString label;
    unsigned int width = scene->sceneRect().width();
    unsigned int height = scene->sceneRect().height();

    removeGrid();

    for (i = 150; i < width; i += 10) {
        line = new QGraphicsLineItem(i, 0, i, scene->sceneRect().height(), 0, scene);
        line->setPen(QPen(QColor(0xdd, 0xdd, 0xdd)));
        line->setZValue(0);

        // QString::number is portable, unlike the MSVC-only _itoa_s
        label = QString::number(i - 150) + " ms";
        text = new QGraphicsTextItem(label, 0, scene);
        text->setDefaultTextColor(Qt::white);
        text->setX(i);
        text->setY(height - 10);
        text->setZValue(2);
        text->setScale(0.2);

        // pointers to items stored in list for removal later.
        gridList.append(line);
        gridList.append(text);
    }

    for (i = 0; i < height; i += 10) {
        line = new QGraphicsLineItem(150, i, width, i, 0, scene);
        line->setPen(QPen(QColor(0xdd, 0xdd, 0xdd)));
        line->setZValue(0);
        gridList.append(line);
    }
}
When scene->sceneRect().width() gets too big, however, the application becomes very sluggish. I have tried using a QGLWidget, but the improvements in speed are marginal at best.
I ended up using a 10x10 pixel pixmap and drawing it as the backgroundBrush of my QGraphicsView, as suggested in the link in the comment under my initial question.
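Roughly, that approach looks like the sketch below; the cell size, the colors, and the view variable name are just placeholders for whatever matches the actual plot:
// Draw one grid cell into a small pixmap; the brush tiles it across the
// whole exposed background, so no per-line QGraphicsLineItem objects are needed.
QPixmap cell(10, 10);
cell.fill(Qt::black);
QPainter cellPainter(&cell);
cellPainter.setPen(QColor(0xdd, 0xdd, 0xdd));
cellPainter.drawLine(0, 0, 9, 0); // top edge of the cell
cellPainter.drawLine(0, 0, 0, 9); // left edge of the cell
cellPainter.end();

view->setBackgroundBrush(QBrush(cell));
Since the brush is repainted only for the exposed area, the cost no longer grows with the width of the scene.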
I have an RGB888 format QImage defined as follows:
myQrgb = QImage(img_in, width, height, QImage::Format_RGB888);
I wish to alter specific pixel values, so I followed the example here, like so:
QRgb value = qRgb(0, 0, 0);
myQrgb.setPixel(i, j, value);
This, however, always produces a segmentation fault regardless of the values of i and j (e.g. i = j = 2).
I am guessing it is because I am incorrectly using QRgb to manipulate pixels in a QImage::Format_RGB888. What should I do instead?
I think the problem may be more related to the img_in data with which you are initializing the image. Are you sure that data is valid?
The following example successfully paints a white square with a black square in the corner.
#include <QtGui>

int main(int argc, char **argv) {
    QApplication app(argc, argv);

    QImage img(100, 100, QImage::Format_RGB888);
    img.fill(QColor(Qt::white).rgb());
    for (int x = 0; x < 10; ++x) {
        for (int y = 0; y < 10; ++y) {
            img.setPixel(x, y, qRgb(0, 0, 0));
        }
    }

    QLabel l;
    l.setPixmap(QPixmap::fromImage(img));
    l.show();

    return app.exec();
}
There are a few things you need to confirm:
Given the QImage constructor you're using, make sure img_in remains valid throughout the life span of the QImage object. By the way, the QImage destructor will not delete your data (img_in).
If the pixel position you're setting is not a valid coordinate, setPixel()'s behavior is undefined.
I suspect the first case: img_in is probably vanishing out from under the QImage. You may want to try creating a QImage with another constructor, like QImage(10, 10, QImage::Format_RGB888), and play with setPixel().
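If the buffer's lifetime is hard to guarantee, one hedge is to pass the stride explicitly and let QImage take its own copy right away. This assumes the rows of img_in are tightly packed (3 bytes per pixel, no padding), which is an assumption about your data, not something stated in the question:
// After copy() the image owns its pixels, so img_in may go away
// and setPixel(i, j, ...) is safe for any 0 <= i < width, 0 <= j < height.
QImage myQrgb = QImage(img_in, width, height, width * 3, QImage::Format_RGB888).copy();
myQrgb.setPixel(i, j, qRgb(0, 0, 0));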