QImage definition - Qt

Given this constructor:
QImage::QImage(uchar *data, int width, int height, int bytesPerLine, Format format)
would I use it like this?
QImage image = new QImage (buffer, 600, 400, jpg)
Also, I don't understand what bytesPerLine means. Is it how many KB the photo occupies?
Thanks

If you do not want to use the bytesPerLine parameter, there is a
QImage::QImage ( uchar * data, int width, int height, Format format )
constructor.
However, Format is not what you might think: the format parameter is an enum value that determines the bit depth, pixel layout and so on. Passing jpg or "jpg" there won't work. Check the QImage::Format enum for a list of possible values.
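For example, a minimal sketch of that constructor (the 600x400 size, the RGB888 format and the buffer are placeholders, not taken from the question):
uchar *buffer = new uchar[600 * 400 * 3];              // raw RGB888 data, placeholder contents
QImage image(buffer, 600, 400, QImage::Format_RGB888);
// This constructor does not copy the data: 'buffer' must stay valid for as
// long as 'image' (and any shallow copies of it) exist.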

I will try to answer as best I can, considering that your question is very unclear to me.
From the Qt documentation:
bytesPerLine specifies the number of bytes per line (stride)
Also consider that the format argument, which you specified as jpg, must be one of the enum values listed here.
Best regards

That's how you would use this constructor:
int imageWidth = 800;
int imageHeight = 600;
int bytesPerPixel = 4; // 4 for RGBA, 3 for RGB
QImage::Format format = QImage::Format_ARGB32; // this is the pixel format - check the QImage::Format enum type for more options
QImage image(yourData, imageWidth, imageHeight, imageWidth * bytesPerPixel, format);
You don't specify the image format (png, jpeg, etc.) but the pixel format (RGB, RGBA, etc.)

Related

using glDrawPixels to render bitmap raw data

I receive raw image data from a server. The server uses the MS Dib() function, which returns the data in BGR format. What I want to do is read this raw data and use glDrawPixels to draw it on Linux.
I was advised to use the MS GetClrTabAddress function (or something similar) to get the RGB values for each index of the 800 by 600 image sent to me.
I do not know how to get these values using the indices. Could anyone give some tips?
void func(QByteArray bytes)
{
window_width = 800;
window_height = 600;
size = window_width * window_height;
pixels = new float[size*3];
memcpy(pixels, bytes, bytes.size());
}
void GlWidget::paintGL()
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glDrawPixels(window_width,window_height,GL_RGB,GL_FLOAT,pixels);
}
You can use GL_BGR in glDrawPixels, which will do the conversion for you and will probably be faster since AFAIK the GPU will do the work.
QByteArray sounds like you should be using unsigned bytes/chars instead of floats, which means GL_UNSIGNED_BYTE.
I'd assert(size*3*sizeof(float) == bytes.size());.
In this case, make sure to set glPixelStorei(GL_UNPACK_ALIGNMENT, 1) if your width doesn't align to the default 4-byte boundary. With GL_BGR every pixel is 3 bytes, and by default each row of your pixels is assumed to be padded to the next 4-byte boundary.
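Putting those suggestions together, a rough sketch (reusing the variable names from the question; the byte buffer is assumed to hold window_width * window_height * 3 bytes of BGR data):
// In func(): keep the raw bytes instead of converting to float
unsigned char *pixels = new unsigned char[window_width * window_height * 3];
memcpy(pixels, bytes.constData(), bytes.size());

// In paintGL():
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   // rows are packed, no 4-byte padding
glDrawPixels(window_width, window_height, GL_BGR, GL_UNSIGNED_BYTE, pixels);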
[EDIT]
OK, it looks like the image uses a palette. This means every value in the QByteArray maps to an RGB value in another array. I'm not 100% sure where the palette is, and maybe it can be computed implicitly, but you mentioned GetClrTabAddress, which sounds promising.
The code will then look something like this
for(int i = 0; i < size; ++i)
{
unsigned char index = btmp[i]; // one palette index per pixel
//and something like..
memcpy(bytes + i * 3, GetClrTabAddress() + index * 3, 3);
//or
bytes[i*3+0] = someOtherPaletteData[index].red;
bytes[i*3+1] = someOtherPaletteData[index].green;
bytes[i*3+2] = someOtherPaletteData[index].blue;
}

How to create a grayscale QImage (QImage::Format_Indexed8) without copying memory

I'm trying to create a QImage that wraps an existing image buffer created by OpenCV.
I was considering using the following constructor to do this:
QImage::QImage ( const uchar * data, int width, int height,
int bytesPerLine, Format format )
so, my code is like
QImage qimage((const uchar*)iplImage->imageData,
iplImage->width, iplImage->height,
iplImage->widthStep,
QImage::Format_Indexed8); // image buffer not copied!
qimage.setColorTable(grayScaleColorTable); // color table's item count 256 for grayscale.
// now new image buffer is allocated here.
OK, no memory copy is actually done when this ctor is called.
But here comes my problem: QImage::setColorTable() is a non-const member function in which QImage allocates a new image buffer for the copy via its internal detach() function.
I found that Qt3 supported this kind of problem, where the ctor could accept a color table as an argument, but I haven't found any such support in Qt4 or later.
How can I create a grayscale QImage for an existing image buffer?
Thanks in advance
[EDITED]
Thanks to Stephen Chu, I realized that the following constructors create read/write QImage objects:
QImage ( uchar * data, int width, int height, Format format )
QImage ( uchar * data, int width, int height, int bytesPerLine, Format format )
With these, even if QImage::setColorTable() is called right after instantiation, no new buffer is allocated. On the other hand, the following constructors, which receive a const data buffer, create read-only QImage objects, for which a new buffer is allocated and deep-copied from the original buffer as soon as any non-const member function like QImage::setColorTable() is called (which I do not want):
QImage ( const uchar * data, int width, int height, Format format )
QImage ( const uchar * data, int width, int height, int bytesPerLine, Format format )
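Putting the edit above into a sketch (iplImage and the 256-entry gray color table are as described in the question; not tested code):
QVector<QRgb> grayScaleColorTable(256);
for (int i = 0; i < 256; ++i)
    grayScaleColorTable[i] = qRgb(i, i, i);            // 256 shades of gray

QImage qimage(reinterpret_cast<uchar*>(iplImage->imageData),  // non-const pointer
              iplImage->width, iplImage->height,
              iplImage->widthStep,
              QImage::Format_Indexed8);                // still no copy
qimage.setColorTable(grayScaleColorTable);             // no detach, no new buffer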

Efficient conversion of AVFrame to QImage

I need to extract frames from a video in my Qt based application. Using ffmpeg libraries I am able to fetch frames as AVFrames which I need to convert to QImage to use in other parts of my application. This conversion needs to be efficient. So far it seems sws_scale() is the right function to use but I am not sure what source and destination pixel formats are to be specified.
I came up with the following 2-step process that first converts a decoded AVFrame to another AVFrame in the RGB colorspace, and then to a QImage. It works and is reasonably fast.
src_frame = get_decoded_frame();
AVFrame *pFrameRGB = avcodec_alloc_frame(); // intermediate pframe
if(pFrameRGB==NULL) {
;// Handle error
}
int numBytes= avpicture_get_size(PIX_FMT_RGB24,
is->video_st->codec->width, is->video_st->codec->height);
uint8_t *buffer = (uint8_t*)malloc(numBytes);
avpicture_fill((AVPicture*)pFrameRGB, buffer, PIX_FMT_RGB24,
is->video_st->codec->width, is->video_st->codec->height);
int dst_fmt = PIX_FMT_RGB24;
int dst_w = is->video_st->codec->width;
int dst_h = is->video_st->codec->height;
// TODO: cache following conversion context for speedup,
// and recalculate only on dimension changes
SwsContext *img_convert_ctx_temp;
img_convert_ctx_temp = sws_getContext(
is->video_st->codec->width, is->video_st->codec->height,
is->video_st->codec->pix_fmt,
dst_w, dst_h, (PixelFormat)dst_fmt,
SWS_BICUBIC, NULL, NULL, NULL);
QImage *myImage = new QImage(dst_w, dst_h, QImage::Format_RGB32);
sws_scale(img_convert_ctx_temp,
src_frame->data, src_frame->linesize, 0, is->video_st->codec->height,
pFrameRGB->data,
pFrameRGB->linesize);
uint8_t *src = (uint8_t *)(pFrameRGB->data[0]);
for (int y = 0; y < dst_h; y++)
{
QRgb *scanLine = (QRgb *) myImage->scanLine(y);
for (int x = 0; x < dst_w; x=x+1)
{
scanLine[x] = qRgb(src[3*x], src[3*x+1], src[3*x+2]);
}
src += pFrameRGB->linesize[0];
}
If you find a more efficient approach, let me know in the comments
I know it's too late, but maybe someone will find it useful. From here I got the clue for doing the same conversion, which looks a bit shorter.
So I created QImage which is reused for every decoded frame:
QImage img( width, height, QImage::Format_RGB888 );
Created frameRGB:
frameRGB = av_frame_alloc();
//Allocate memory for the pixels of a picture and setup the AVPicture fields for it.
avpicture_alloc( ( AVPicture *) frameRGB, AV_PIX_FMT_RGB24, width, height);
After the first frame is decoded I create the SwsContext conversion context this way (it will be used for all subsequent frames):
mImgConvertCtx = sws_getContext( codecContext->width, codecContext->height, codecContext->pix_fmt, width, height, AV_PIX_FMT_RGB24, SWS_BICUBIC, NULL, NULL, NULL);
And finally for every decoded frame conversion is performed:
if( 1 == framesFinished && nullptr != mImgConvertCtx )
{
//conversion from frame to frameRGB
sws_scale(mImgConvertCtx, frame->data, frame->linesize, 0, codecContext->height, frameRGB->data, frameRGB->linesize);
//setting QImage from frameRGB
for( int y = 0; y < height; ++y )
memcpy( img.scanLine(y), frameRGB->data[0] + y * frameRGB->linesize[0], width * 3 );
}
See the link for the specifics.
A simpler approach, I think:
void takeSnapshot(AVCodecContext* dec_ctx, AVFrame* frame)
{
SwsContext* img_convert_ctx;
img_convert_ctx = sws_getContext(dec_ctx->width,
dec_ctx->height,
dec_ctx->pix_fmt,
dec_ctx->width,
dec_ctx->height,
AV_PIX_FMT_RGB24,
SWS_BICUBIC, NULL, NULL, NULL);
AVFrame* frameRGB = av_frame_alloc();
avpicture_alloc((AVPicture*)frameRGB,
AV_PIX_FMT_RGB24,
dec_ctx->width,
dec_ctx->height);
sws_scale(img_convert_ctx,
frame->data,
frame->linesize, 0,
dec_ctx->height,
frameRGB->data,
frameRGB->linesize);
QImage image(frameRGB->data[0],
dec_ctx->width,
dec_ctx->height,
frameRGB->linesize[0],
QImage::Format_RGB888);
image.save("capture.png");
}
Today I tested passing image->bits() directly to sws_scale, and it finally works, so there's no need to copy into an intermediate buffer. For example:
/* 1. Get frame and QImage to show */
struct my_frame *frame = get_frame(source);
QImage *myImage = new QImage(dst_w, dst_h, QImage::Format_RGBA8888);
/* 2. Convert and write into image buffer */
uint8_t *dst[] = {myImage->bits()};
int linesizes[4];
av_image_fill_linesizes(linesizes, AV_PIX_FMT_RGBA, frame->width);
sws_scale(myswscontext, frame->data, (const int*)frame->linesize,
0, frame->height, dst, linesizes);
I just discovered that scanLine is just seeking through the buffer. All you need is to use AV_PIX_FMT_RGB32 for the AVFrame and QImage::Format_RGB32 for the QImage.
Then, after decoding, just do a memcpy:
memcpy(img.scanLine(0), pFrameRGB->data[0], pFrameRGB->linesize[0] * pFrameRGB->height);
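Note that this single memcpy assumes img.bytesPerLine() equals pFrameRGB->linesize[0]; if the strides differ because of row padding, a row-by-row copy is the safer sketch:
for (int y = 0; y < img.height(); ++y)
    memcpy(img.scanLine(y),
           pFrameRGB->data[0] + y * pFrameRGB->linesize[0],
           img.width() * 4);                           // 4 bytes per Format_RGB32 pixel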
I had problems with the other proposed solutions for the following reasons:
They did not mention freeing the AVFrame, the SwsContext or the allocated buffers, which caused massive memory leaks (I had thousands of frames to handle). These problems could not all be solved easily, because a QImage constructed on top of external data relies on that data and does not copy it: free the buffer directly and the QImage points at freed memory and breaks. This can be worked around with QImage's cleanupFunction to free the buffer once the image is no longer needed, but it does not address the other problems.
In some cases the suggestion of passing QImage::bits() directly to sws_scale would not work, because QImage scanlines are at least 32-bit aligned. For certain dimensions the stride does not match what sws_scale expects, and each output line ends up shifted a little.
A third problem is that they used deprecated AVPicture calls.
I described the problem in another question, Converting an AVFrame to QImage with conversion of pixel format, and in the end found a solution using a temporary buffer that can be copied into the QImage and then safely freed.
So see my answer there for a fully working, efficient implementation with no deprecated function calls: https://stackoverflow.com/a/68212609/7360943
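As a rough sketch of that temporary-buffer idea (not the exact code from the linked answer; swsCtx, frame, width and height are assumed to already exist):
uint8_t *rgbData[4];
int rgbLinesize[4];
av_image_alloc(rgbData, rgbLinesize, width, height, AV_PIX_FMT_RGB24, 1);

sws_scale(swsCtx, frame->data, frame->linesize, 0, height, rgbData, rgbLinesize);

// QImage(data, ...) does not copy, so take a deep copy before freeing the buffer.
QImage image = QImage(rgbData[0], width, height, rgbLinesize[0],
                      QImage::Format_RGB888).copy();

av_freep(&rgbData[0]);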

how to print a uint16 monochrome image in Qt?

I'm trying to print an image from a DICOM file. I pass the raw data to a convertToFormat_RGB888 function. As far as I know, Qt can't handle 16-bit monochrome images.
Here's the original image (converted to jpg here):
http://imageshack.us/photo/my-images/839/16bitc.jpg/
bool convertToFormat_RGB888(gdcm::Image const & gimage, char *buffer, QImage* &imageQt)
Inside this function, I end up in this branch...
...
else if (gimage.GetPixelFormat() == gdcm::PixelFormat::UINT16)
{
short *buffer16 = (short*)buffer;
unsigned char *ubuffer = new unsigned char[dimX*dimY*3];
unsigned char *pubuffer = ubuffer;
for (unsigned int i = 0; i < dimX*dimY; i++)
{
*pubuffer++ = *buffer16;
*pubuffer++ = *buffer16;
*pubuffer++ = *buffer16;
buffer16++;
}
imageQt = new QImage(ubuffer, dimX, dimY, QImage::Format_RGB888);
...
This code is a slight adaptation of the example here:
gdcm.sourceforge.net/2.0/html/ConvertToQImage_8cxx-example.html
With the original one I got an execution error. Using mine, at least I get an image, but it's not the same.
Here is the new image (converted to jpg here):
http://imageshack.us/photo/my-images/204/8bitz.jpg/
What am I doing wrong?
Thanks.
Try to get the pixel values from the buffer manually and pass them to QImage::setPixel. It may be simpler.
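A minimal sketch of that suggestion, reusing buffer16, dimX and dimY from the question (slow, but easy to follow):
QImage img(dimX, dimY, QImage::Format_RGB32);
for (unsigned int y = 0; y < dimY; y++)
    for (unsigned int x = 0; x < dimX; x++)
    {
        // take the upper 8 bits of each 16-bit sample as the gray level
        int v = (unsigned short)buffer16[y * dimX + x] >> 8;
        img.setPixel(x, y, qRgb(v, v, v));
    }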
You are assigning 16-bit integer to 8-bit variables here:
*pubuffer++ = *buffer16;
The result is undefined, and most compilers just move the lower 8 bits to the destination. You want the upper 8 bits:
*pubuffer++ = (*buffer16) >> 8;
The other issue is endianness. Depending on the endianness of the source data, you may need to call one of the QtEndian functions first.
Lastly, you don't really need any of the 32- or 24-bit Qt image formats. Use the 8-bit QImage::Format_Indexed8 and set the color table to grays.
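For instance, a sketch of that approach (ubuffer8 is a hypothetical dimX * dimY buffer holding the upper 8 bits of each pixel):
QImage *imageQt = new QImage(ubuffer8, dimX, dimY, dimX, QImage::Format_Indexed8);
QVector<QRgb> grayTable(256);
for (int i = 0; i < 256; ++i)
    grayTable[i] = qRgb(i, i, i);                      // identity gray ramp
imageQt->setColorTable(grayTable);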

What is meant by bytesPerLine in QImage?

What is meant by bytesPerLine in
QImage::QImage ( uchar * data, int width, int height, int bytesPerLine, Format format )
In the documentation it is described as: bytesPerLine specifies the number of bytes per line (stride).
I am not clear on its usage. Are width and bytesPerLine the same? Could anyone please explain?
bytesPerLine means the number of bytes occupied by the image pixels in a given row.
To illustrate this, consider the following code snippet...
int imageWidth = 800;
int imageHeight = 600;
int bytesPerPixel = 4; // 4 for RGBA, 3 for RGB
QImage::Format format = QImage::Format_ARGB32; // this is the pixel format - check the QImage::Format enum type for more options
QImage image(yourData, imageWidth, imageHeight, imageWidth * bytesPerPixel, format);
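One thing worth adding: bytesPerLine is not always width * bytesPerPixel, because many image sources (DIBs, OpenCV's widthStep, etc.) pad each row to a 4-byte or larger boundary. A small illustrative calculation:
int width = 10;                                        // pixels per row
int bytesPerPixel = 3;                                 // RGB888
int unpaddedRow = width * bytesPerPixel;               // 30 bytes of pixel data
int bytesPerLine = ((unpaddedRow + 3) / 4) * 4;        // 32 bytes if rows are 4-byte aligned
// Pass the padded value as bytesPerLine so QImage skips the padding at the
// end of each row.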
