My project requirement is that I can create an image with a width of up to 36000 pixels (the height is much smaller).
(The image is rendered from a QGraphicsScene).
I ran into a limitation: QPainter limits its device size for raster painting:
void QRasterPaintEnginePrivate::systemStateChanged()
{
    deviceRectUnclipped = QRect(0, 0,
                                qMin(QT_RASTER_COORD_LIMIT, device->width()),
                                qMin(QT_RASTER_COORD_LIMIT, device->height()));
    ....
}

// This limitation comes from qgrayraster.c. Any higher and
// rasterization of shapes will produce incorrect results.
const int QT_RASTER_COORD_LIMIT = 32767;
(My attempt to troubleshoot ... Rendering a large QGraphicsScene on a QImage clips it off)
So... I thought: can I create two images and then join them, one after the other?
if (wOutput > 32767)
{
    QImage image1 = QImage(32767, hOutput, QImage::Format_Mono);
    image1.fill(QColor(Qt::white).rgb());
    QRectF source(0, 0, 32767, hOutput);
    QRectF target(0, 0, 32767, hOutput);
    QPainter painter;
    painter.begin(&image1);
    outputScene->render(&painter, target, source);
    painter.end();

    QImage image2 = QImage(wOutput - 32767, hOutput, QImage::Format_Mono);
    image2.fill(QColor(Qt::white).rgb());
    source = QRectF(32767, 0, wOutput - 32767, hOutput);
    target = QRectF(0, 0, wOutput - 32767, hOutput);
    painter.begin(&image2);
    outputScene->render(&painter, target, source);
    painter.end();

    // now create a combination, appending image2 to the right of image1
    QImage image = QImage(wOutput, hOutput, QImage::Format_Mono);
    painter.begin(&image);
    painter.drawImage(0, 0, image1);
    painter.drawImage(32767, 0, image2);
    painter.end();
}
else
{
    // just create the image
}
Looks very logical... but the output does not show image2. Obviously... I am using the same raster painting, with the same limitation!
What other way is there to append one image to the end of another? (Note: my "large" dimension is the width, so I don't even think I can use scanLine to copy pixels faster.)
You can use QImage::scanLine to get the pixel data and copy it.
However, QImage::Format_Mono makes it a little more complex, because you have to consider the alignment of the pixel data (with QImage::Format_Mono you have 1 bit per pixel, so 8 pixels per byte).
I suggest generating the first image with a width divisible by 8 (e.g. 32760), so you can copy each row of the second image without shifting the bits.
Also, the color table should be the same in the two source images.
You can do something like this:
int w1 = 32760;
QImage image1 = QImage(w1, hOutput, QImage::Format_Mono);
// grab the first image....
// ....

int w2 = wOutput - w1;
QImage image2 = QImage(w2, hOutput, QImage::Format_Mono);
// grab the second image....
// ....

int bytesPerLine1 = w1 / 8;       // exact, since w1 is divisible by 8
int bytesPerLine2 = (w2 + 7) / 8; // round up to whole bytes

QImage image = QImage(wOutput, hOutput, QImage::Format_Mono);
image.setColorTable(image1.colorTable());

for (int i = 0; i < hOutput; ++i)
{
    uchar* dstSL = image.scanLine(i);
    const uchar* src1SL = image1.constScanLine(i);
    memcpy(dstSL, src1SL, bytesPerLine1);

    const uchar* src2SL = image2.constScanLine(i);
    memcpy(&dstSL[bytesPerLine1], src2SL, bytesPerLine2);
}
I also suggest reading the QImage documentation: Pixel Manipulation and QImage::Format.
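If wOutput can span more than two tiles, the same idea generalizes to a loop. Below is a minimal sketch along those lines (the helper name renderWideScene is mine; it assumes a white mono background and a tile width of 32760 so every tile boundary stays byte-aligned):

#include <QGraphicsScene>
#include <QImage>
#include <QPainter>
#include <QVector>
#include <cstring>

// Render a scene wider than QT_RASTER_COORD_LIMIT by rendering
// byte-aligned vertical tiles and stitching them with memcpy.
QImage renderWideScene(QGraphicsScene *scene, int wOutput, int hOutput)
{
    const int tileW = 32760; // below 32767 and divisible by 8
    QVector<QRgb> mono;
    mono << qRgb(0, 0, 0) << qRgb(255, 255, 255); // index 0 = black, 1 = white

    QImage result(wOutput, hOutput, QImage::Format_Mono);
    result.setColorTable(mono);

    for (int x = 0; x < wOutput; x += tileW) {
        const int w = qMin(tileW, wOutput - x);
        QImage tile(w, hOutput, QImage::Format_Mono);
        tile.setColorTable(mono);
        tile.fill(1); // white

        QPainter painter(&tile);
        scene->render(&painter,
                      QRectF(0, 0, w, hOutput),  // target rect in the tile
                      QRectF(x, 0, w, hOutput)); // source rect in the scene
        painter.end();

        // x is a multiple of 32760, so the destination offset is whole bytes
        const int dstOffset = x / 8;
        const int rowBytes = (w + 7) / 8;
        for (int y = 0; y < hOutput; ++y)
            memcpy(result.scanLine(y) + dstOffset, tile.constScanLine(y), rowBytes);
    }
    return result;
}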
I want to display an image received as a short[] of pixels from a server.
The server (C++) writes the image as an unsigned short[] of pixels (12-bit depth).
My Java application gets the image via a CORBA call to this server.
Since Java does not have an unsigned short type, the pixels are stored as a (signed) short[].
This is the code I'm using to obtain a BufferedImage from the array:
private WritableImage loadImage(short[] pixels, int width, int height) {
    int[] intPixels = new int[pixels.length];
    for (int i = 0; i < pixels.length; i++) {
        intPixels[i] = (int) pixels[i];
    }
    BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
    WritableRaster raster = (WritableRaster) image.getData();
    raster.setPixels(0, 0, width, height, intPixels);
    return SwingFXUtils.toFXImage(image, null);
}
And later:
WritableImage orgImage = loadImage(image.data, image.size_x, image.size_y);

// load it into the widget
Platform.runLater(() -> {
    imgViewer.setImage(orgImage);
});
I've checked that width = 1280 and height = 1024 and that the pixels array is 1280x1024, which matches the raster width and height.
However I'm getting an array out of bounds error in the line:
raster.setPixels(0, 0, width, height, intPixels);
I have tried ALL the ImageTypes, and all of them produce the same error except:
TYPE_USHORT_GRAY: which I thought would be the one, but shows an all-black image
TYPE_BYTE_GRAY: which shows the image in negative (!) and with a lot of grain (?)
TYPE_BYTE_INDEXED: which is like the above but colorized in a funny way
I have also tried shifting bits when converting from short to int, without any difference:
intPixels[i] = (int) pixels[i] & 0xffff;
So... I'm quite frustrated after days of looking for a solution on the internet. Any help is very welcome.
Edit: The following is an example of the images received, converted to JPG on the server side. Not sure if it is useful, since I think it was made with pixel rescaling (sqrt):
Well, finally I solved it.
Probably not the best solution, but it works and could help someone out there...
Since the image is grayscale with 12-bit depth, I used a BufferedImage of type TYPE_BYTE_GRAY, but I had to downsample it to 8 bits, scaling the array of pixels from 0-4095 to 0-255.
I had an issue establishing the upper and lower limits of the scale. I tested with the average of the n highest/lowest values, which worked reasonably well, until someone sent me a link to a Java program translating the zscale algorithm (used in the DS9 tool, for example) for getting the limits of the range of greyscale values to be displayed:
find it here
From there I modified the previous code and it worked like a charm:
// https://github.com/Caltech-IPAC/firefly/blob/dev/src/firefly/java/edu/caltech/ipac/visualize/plot/Zscale.java
Zscale.ZscaleRetval retval = Zscale.cdl_zscale(pixels, width, height,
        bitsVal, contrastVal, opt_sizeVal, len_stdlineVal, blankValueVal);
double Z1 = retval.getZ1();
double Z2 = retval.getZ2();
try {
    int[] ints = new int[pixels.length];
    for (int i = 0; i < pixels.length; i++) {
        if (pixels[i] < Z1) {
            pixels[i] = (short) Z1;
        } else if (pixels[i] > Z2) {
            pixels[i] = (short) Z2;
        }
        // linear rescale from [Z1, Z2] to [0, 255]
        ints[i] = (int) ((pixels[i] - Z1) * 255 / (Z2 - Z1));
    }
    BufferedImage bImg
            = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY);
    bImg.getRaster().setPixels(0, 0, width, height, ints);
    return SwingFXUtils.toFXImage(bImg, null);
} catch (Exception ex) {
    System.out.println(ex.getMessage());
}
return null;
I have a scene with an inverted y-axis. Everything is drawn correctly except QImages.
I use drawImage() as follows:
QRectF aWorldRect = ...
QRectF anImageRect = QRectF(0, 0, theQImage.width(), theQImage.height());
thePainter->drawImage(aWorldRect, theQImage, anImageRect);
I get undefined graphics outside (above) where the image should be displayed. This is normal, because the y-axis is inverted. So I expected something like this to fix the issue:
QRectF anImageRect = QRectF(0, 0, imgWidth, -imgHeight);
It has the same effect. If I do aWorldRect = aWorldRect.normalized() before calling drawImage(), I get the image in the correct rectangle but mirrored, so I did aQImage = aQImage.mirrored(). Now the image is correctly displayed in the correct rectangle.
I consider this a workaround that I would rather not keep. So, can someone tell me what should be done to get the image displayed the right way?
Update
Here is a minimal sample of my problem, ready to compile:
Update 2014-04-09 10:05 EET
Updated the sample code a little to make it really work using the workaround.
#include <QtGui>

const int WIDTH = 640;
const int HEIGHT = 480;

class View : public QGraphicsView
{
protected:
    void drawBackground(QPainter *p, const QRectF &rect)
    {
        QImage img = QImage("/usr/share/backgrounds/images/stone_bird.jpg"); // or any other
        /* The next three lines make everything display correctly but
           should be considered a workaround */
        /* I ignore the rect that is passed to the function on purpose */
        QRectF imageRect = QRectF(QPointF(0, 0), QPointF(img.width(), img.height()));
        QRectF theSceneRect = sceneRect().normalized();
        p->drawImage(theSceneRect, img.mirrored(), imageRect);
    }
};

int main(int argc, char *argv[])
{
    QApplication a(argc, argv);
    View w;
    /* I don't want to change the code below */
    w.setScene(new QGraphicsScene(QRectF(QPointF(0, HEIGHT), QPointF(WIDTH, 0))));
    w.scale(1, -1);
    w.scene()->addLine(0, HEIGHT, WIDTH, 0);
    w.showMaximized();
    return a.exec();
}
The approach of reversing the Y coordinate is right, but the implementation was faulty.
QRectF's documentation shows that it takes (x, y, width, height); giving a negative height makes little sense. Instead, try the other constructor, which takes topLeft and bottomRight:
QRectF anImageRect(QPointF(0.0f, -imgHeight), QPointF(imgWidth, 0.0f));
EDIT:
It seems that only drawings such as lines, arcs, etc. are affected by the scale(1, -1) transform you set on the view; drawImage continues to render upside down because of that scale. The simple fix is to reset the painter's transform to identity while drawing the image. Here's the updated code:
void drawBackground(QPainter *p, const QRectF &rect)
{
    QImage img = QImage("/usr/share/backgrounds/images/stone_bird.jpg");
    // back up the current transform (which has the earlier scale of (1, -1))
    const QTransform oldTransform = p->transform();
    // set the transform back to identity to make the Y axis go from top to bottom
    p->setTransform(QTransform());
    // draw
    QRectF theSceneRect = sceneRect().normalized();
    p->drawImage(theSceneRect, img);
    // revert to the earlier transform
    p->setTransform(oldTransform);
}
Updated on 2014-04-14 14:35 EET
I could finally solve the problem reliably by replacing the two lines
QRectF theSceneRect = sceneRect().normalized();
p->drawImage(theSceneRect, img.mirrored(), imageRect);
of my question with:
QRectF theSceneRect = sceneRect(); // Not normalized. It is no longer a workaround :)
qreal x = theSceneRect.x();
qreal y = theSceneRect.y();
qreal w = theSceneRect.width();
qreal h = theSceneRect.height();
qreal sx = imageRect.x();
qreal sy = imageRect.y();
qreal sw = imageRect.width();
qreal sh = imageRect.height();

p->save(); // matches the restore() below
p->translate(x, y);
p->scale(w / sw, h / sh);
p->setBackgroundMode(Qt::TransparentMode);
p->setRenderHint(QPainter::Antialiasing, p->renderHints() & QPainter::SmoothPixmapTransform);
QBrush brush(img);
p->setBrush(brush);
p->setPen(Qt::NoPen);
p->setBrushOrigin(QPointF(-sx, -sy));
p->drawRect(QRectF(0, 0, sw, sh));
p->restore();
This is inspired by the implementation of QPainter::drawImage(), which is not reliable in such cases because of the many if statements handling rectangles with negative width or height values.
It would be better if I put the solution in a separate function, but I kept it this way to stay closer to the code in my question.
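For reference, here is a rough sketch of what that refactoring might look like (the helper name drawImageUnnormalized is mine; it saves and restores the painter state itself, under the same assumptions as the code above):

// Draw an image into a possibly non-normalized target rectangle by
// painting a textured rectangle, as in the update above.
void drawImageUnnormalized(QPainter *p, const QRectF &target,
                           const QImage &img, const QRectF &source)
{
    p->save();
    p->translate(target.x(), target.y());
    p->scale(target.width() / source.width(), target.height() / source.height());
    p->setBackgroundMode(Qt::TransparentMode);
    p->setBrush(QBrush(img));
    p->setPen(Qt::NoPen);
    p->setBrushOrigin(QPointF(-source.x(), -source.y()));
    p->drawRect(QRectF(0, 0, source.width(), source.height()));
    p->restore();
}

With that, drawBackground reduces to drawImageUnnormalized(p, sceneRect(), img, imageRect);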
After some trouble I've managed to render to texture correctly inside a framebuffer object (FBO) in a Qt 4.8 application: I can open an OpenGL context with a QGLWidget, render to an FBO, and use that as a texture.
Now I need to display the rendered texture in a QPixmap and show it in some other widget in the GUI. But... nothing is shown.
Here are some pieces of code:
// generate texture, FBO, RBO in the initializeGL
glGenTextures(1, &textureId);
glBindTexture(GL_TEXTURE_2D, textureId);
glGenFramebuffers(1, &fboId);
glBindFramebuffer(GL_FRAMEBUFFER, fboId);
glGenRenderbuffers(1, &rboId);
glBindRenderbuffer(GL_RENDERBUFFER, rboId);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, TEXTURE_WIDTH, TEXTURE_HEIGHT);
glBindRenderbuffer(GL_RENDERBUFFER, 0);
// now in paintGL
glBindFramebuffer(GL_FRAMEBUFFER, fboId);
// .... render into texture code ....
if (showTextureInWidget == false) {
    showTextureInWidget = true;

    char *pixels = new char[TEXTURE_WIDTH * TEXTURE_HEIGHT * 4];
    glReadPixels(0, 0, TEXTURE_WIDTH, TEXTURE_HEIGHT, GL_RGB, GL_UNSIGNED_BYTE, pixels);

    QPixmap qp = QPixmap(pixels);
    QLabel *l = new QLabel();
    // /* TEST */ l->setText(QString::fromStdString("dudee"));
    l->setPixmap(qp);

    QWidget *d = new QWidget;
    l->setParent(d);
    d->show();
}
glBindFramebuffer(GL_FRAMEBUFFER, 0); // unbind
// now draw the scene with the rendered texture
I see the widget open but... there is nothing inside it. If I uncomment the test line I see the "dudee" string, so I know there is a QLabel but... no image from the QPixmap.
I know the original data are unsigned char and I'm using char, and I've tried some different color parameters (GL_RGBA, GL_RGB, etc.), but I don't think that's the point... the point is that I don't see anything...
Any advice? If I have to post more code I will do it!
Edit:
I haven't posted all the code, but the point I'd like to make clear is that the texture is correctly rendered as a texture inside a cube. I'm just not able to copy it back from the GPU to the CPU.
Edit 2:
Thanks to peppe's answer I found the problem: I needed a Qt object that accepts raw pixel data as a constructor argument. Here is the complete snippet:
uchar *pixels = new uchar[TEXTURE_WIDTH * TEXTURE_HEIGHT * 4];
for (int i = 0; i < (TEXTURE_WIDTH * TEXTURE_HEIGHT * 4); i++) {
    pixels[i] = 0;
}

glBindFramebuffer(GL_FRAMEBUFFER, fboId);
glReadPixels(0, 0, TEXTURE_WIDTH, TEXTURE_HEIGHT, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glBindFramebuffer(GL_FRAMEBUFFER, 0);

qi = QImage(pixels, TEXTURE_WIDTH, TEXTURE_HEIGHT, QImage::Format_ARGB32);
qi = qi.rgbSwapped();

QLabel *l = new QLabel();
l->setPixmap(QPixmap::fromImage(qi));
QWidget *d = new QWidget;
l->setParent(d);
d->show();
Given that that's not all of your code and -- as you say -- the texture is correctly filled, there's a little mistake going on here:
glReadPixels(0, 0, TEXTURE_WIDTH, TEXTURE_HEIGHT, GL_RGB, GL_UNSIGNED_BYTE, pixels);
QPixmap qp = QPixmap(pixels);
The QPixmap(const char *) ctor wants an XPM image, not raw pixels. You need to use one of the QImage ctors to create a valid QImage. (You can also pass ownership to the QImage, solving the fact that you're currently leaking pixels...)
Once you do that, you'll figure out that:
the image is flipped vertically, as OpenGL has the origin in the bottom-left corner, growing upwards/rightwards, while Qt assumes the origin in the top-left corner, growing downwards/rightwards;
the channels might be swapped -- i.e. OpenGL is returning data with the wrong endianness. I don't remember whether using glPixelStorei(GL_PACK_SWAP_BYTES) or GL_UNSIGNED_INT_8_8_8_8 as the type helps in this case; failing that, you need to resort to a CPU-side loop to fix your pixel data :)
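Putting the two fixes together, a minimal read-back helper might look like the following sketch (the name grabFBO is mine; it assumes an RGBA color attachment of the given size and a current GL context):

// Read back an RGBA framebuffer into a QImage, fixing the vertical
// flip and the channel order on the CPU side.
QImage grabFBO(GLuint fboId, int width, int height)
{
    QImage img(width, height, QImage::Format_ARGB32);

    glBindFramebuffer(GL_FRAMEBUFFER, fboId);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, img.bits());
    glBindFramebuffer(GL_FRAMEBUFFER, 0);

    // OpenGL's origin is bottom-left while Qt's is top-left, and on a
    // little-endian machine the raw R,G,B,A byte order lands in
    // Format_ARGB32 with red and blue swapped.
    return img.mirrored().rgbSwapped();
}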
I am trying to draw a 10-millisecond grid in a QGraphicsScene in Qt. I am not very familiar with Qt; it's the first time I have used it, and only because the application needs to be portable between Windows and Linux.
I actually don't have a problem drawing the grid, just with the performance when the grid gets big. The grid has to be able to change size to fit the sceneRect if/when new data is loaded into the program to be displayed.
This is how I do it at the moment. I hate this, but it's the only way I can think of doing it...
void Plotter::drawGrid() {
    unsigned int i;
    QGraphicsLineItem *line;
    QGraphicsTextItem *text;
    QString label;
    unsigned int width = scene->sceneRect().width();
    unsigned int height = scene->sceneRect().height();

    removeGrid();

    for (i = 150; i < width; i += 10) {
        line = new QGraphicsLineItem(i, 0, i, scene->sceneRect().height(), 0, scene);
        line->setPen(QPen(QColor(0xdd, 0xdd, 0xdd)));
        line->setZValue(0);

        // QString::number() is portable, unlike the MSVC-only _itoa_s()
        label = QString::number(i - 150) + " ms";
        text = new QGraphicsTextItem(label, 0, scene);
        text->setDefaultTextColor(Qt::white);
        text->setX(i);
        text->setY(height - 10);
        text->setZValue(2);
        text->setScale(0.2);

        // pointers to items stored in a list for removal later
        gridList.append(line);
        gridList.append(text);
    }

    for (i = 0; i < height; i += 10) {
        line = new QGraphicsLineItem(150, i, width, i, 0, scene);
        line->setPen(QPen(QColor(0xdd, 0xdd, 0xdd)));
        line->setZValue(0);
        gridList.append(line);
    }
}
When scene->sceneRect().width() gets too big, however, the application becomes very sluggish. I have tried using a QGLWidget, but the improvements in speed are marginal at best.
I ended up using a 10x10-pixel square pixmap and drawing it as the backgroundBrush on my QGraphicsView, as suggested in the link in the comment under my initial question.
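For completeness, a minimal sketch of that approach (assuming the grid color from the question; view is the QGraphicsView, and the cache-mode line is an optional extra):

// Tile a 10x10 pixmap as the view's background brush instead of
// creating thousands of QGraphicsLineItems.
QPixmap gridTile(10, 10);
gridTile.fill(Qt::black); // or whatever the plot background color is
QPainter p(&gridTile);
p.setPen(QColor(0xdd, 0xdd, 0xdd));
p.drawLine(0, 0, 9, 0); // top edge of the cell
p.drawLine(0, 0, 0, 9); // left edge of the cell
p.end();

view->setBackgroundBrush(QBrush(gridTile));
view->setCacheMode(QGraphicsView::CacheBackground);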
I have a char* data, where every char represents the red/green/blue/alpha value of a pixel.
So the first four values are the red, green, blue, and alpha values of the first pixel; the next four are the R, G, B, A values of the pixel to its right; and so on.
It represents a picture (with previously known width and height).
Now I want to take this array and display it in a Qt window. How do I do it?
I know I should somehow use QPixmap and/or QImage, but I cannot find anything helpful in the documentation.
QImage is designed for access to the various pixels (among other things), so you could do something like this:
#include <cassert>

QImage DataToQImage( int width, int height, int length, char *data )
{
    QImage image( width, height, QImage::Format_ARGB32 );
    assert( length % 4 == 0 && length / 4 == width * height );
    for ( int i = 0; i < length / 4; ++i )
    {
        int index = i * 4;
        // the layout described in the question is R, G, B, A; cast to
        // uchar so values above 127 don't become negative ints
        QRgb argb = qRgba( (uchar)data[index],       //red
                           (uchar)data[index + 1],   //green
                           (uchar)data[index + 2],   //blue
                           (uchar)data[index + 3] ); //alpha
        // setPixel() takes x and y coordinates, not a flat index
        image.setPixel( i % width, i / width, argb );
    }
    return image;
}
Based on coming across another constructor, you might also be able to do this:
QImage DataToQImage( int width, int height, int length, const uchar *data )
{
    int bytes_per_line = width * 4;
    // Note: Format_ARGB32 expects pixels as 0xAARRGGBB values; on a
    // little-endian machine, raw R,G,B,A bytes match Format_RGBA8888
    // (Qt 5.2+) instead, or you can call rgbSwapped() on the result.
    QImage image( data, width, height, bytes_per_line,
                  QImage::Format_ARGB32 );
    // data is required to be valid throughout the lifetime of the image so
    // constructed, and QImages use shared data to make copying quick. I
    // don't know how those two features interact, so here I chose to force a
    // copy of the image. It could be that the shared data would make a copy
    // also, but due to the shared data, we don't really lose anything by
    // forcing it.
    return image.copy();
}