Using glDrawPixels to render bitmap raw data - Qt

I receive raw image data from a server. The server uses the MS Dib() function, which returns the data in BGR format. What I want to do is read this raw data and use glDrawPixels to draw it on Linux.
I was advised that the GetClrTabAddress function on MS (or something similar) should be used to get the RGB values for each index of the 800 by 600 image sent to me.
I do not know how to get these values using the indices. Could anyone give some tips?
void func(const QByteArray &bytes)
{
    window_width = 800;
    window_height = 600;
    size = window_width * window_height;
    pixels = new float[size * 3];
    memcpy(pixels, bytes.constData(), bytes.size());
}
void GlWidget::paintGL()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glDrawPixels(window_width, window_height, GL_RGB, GL_FLOAT, pixels);
}

You can use GL_BGR in glDrawPixels, which will do the conversion for you and will probably be faster, since AFAIK the GPU will do the work.
QByteArray sounds like you should be using unsigned bytes/chars instead of floats, which means GL_UNSIGNED_BYTE.
I'd assert(size*3*sizeof(float) == bytes.size());.
In this case make sure to set glPixelStorei(GL_UNPACK_ALIGNMENT, 1) if your row size doesn't align to the default 4-byte boundary. With GL_BGR every pixel is 3 bytes, and by default each row of your pixels is assumed to be padded to the next 4-byte boundary.
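Put together, a minimal sketch of that path might look like this (this assumes pixels now points to a buffer of unsigned bytes holding the raw BGR data from the server, window_width * window_height * 3 bytes, rather than the float buffer in the question):
void GlWidget::paintGL()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // Rows are tightly packed; alignment of 1 is always safe for that.
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    // Let the driver handle the BGR -> RGB swizzle.
    glDrawPixels(window_width, window_height, GL_BGR, GL_UNSIGNED_BYTE, pixels);
}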
[EDIT]
OK, it looks like the image uses a palette. This means every value in the QByteArray maps to an RGB value in another array. I'm not 100% sure where the palette is and maybe it can be computed implicitly, but you mentioned GetClrTabAddress, which sounds promising.
The code will then look something like this:
for(int i = 0; i < size; ++i)
{
    unsigned char index = btmp[i];
    // and something like..
    memcpy(bytes + i * 3, GetClrTabAddress() + index * 3, 3);
    // or
    bytes[i*3+0] = someOtherPaletteData[index].red;
    bytes[i*3+1] = someOtherPaletteData[index].green;
    bytes[i*3+2] = someOtherPaletteData[index].blue;
}

Related

Failed conversion of a QImage image to a CV image

I am new to both Qt and OpenCV. What I am doing is converting a QImage image to an OpenCV Mat image, and then displaying both of them. Here is my code for this conversion:
i = new QImage("lena.png");
QImage lena = i->scaled(labW,labH,Qt::IgnoreAspectRatio);
//Original
QImage lenaRGB = lena.convertToFormat(QImage::Format_RGB888);
ui->imgWindow->setPixmap(QPixmap::fromImage(lena,Qt::AutoColor));
//method 1
Mat lena_cv, out;
QImage lena2 = lenaRGB.rgbSwapped();
QImage swapped = lena2;
swapped = swapped.rgbSwapped();
lena_cv = Mat(swapped.width(),swapped.height(),CV_8UC3, swapped.bits(),swapped.bytesPerLine()).clone();
namedWindow("CV Image");
imshow("CV Image", lena_cv);
//method 2
Mat out2,out3;
out2.create(Size(lena2.width(),lena2.height()),CV_8UC3);
int width = lena2.width();
int height = lena2.height();
memcpy(out2.data, lena2.bits(), sizeof(char)*width*height*3);
cvtColor(out2,out3,CV_RGB2GRAY);
namedWindow("CV Image2");
imshow("CV Image2",out3);
Neither of the above conversions yields the desired image, as shown below:
It should also be noted that the conversion cannot be done directly without using rgbSwapped, i.e.:
lena_cv = Mat(lenaRGB.width(),lenaRGB.height(),CV_8UC3, lenaRGB.bits(),lenaRGB.bytesPerLine());
because the resulting image lena_cv cannot be displayed. If an additional step is added to convert lena_cv to BGR format using cvtColor before displaying the image, the following exception is raised:
Exception at 0x7ffdff394008, code: 0xe06d7363: C++ exception, flags=0x1
(execution cannot be continued) (first chance) at c:\opencv-3.2.0
\sources\modules\core\src\opencl\runtime\opencl_core.cpp:278
This indicates that the subsequent conversion to BGR fails. I am not sure whether an RGB to BGR conversion (of the QImage) is necessary when converting a QImage to a CV image.
Can anyone help identify the issue with the above code? Thanks :)
The "skew" in the third image is almost certainly a result of assuming that each scan line occupies exactly width*3 bytes. There's typically a "stride" (or "step") factor with each row in many image formats, such that the number of bytes per row falls on some 4-byte or 16-byte boundary. Fortunately, QImage has a helper method called bytesPerLine that tells you how long each source row is.
So instead of this:
memcpy(out2.data, lena2.bits(), sizeof(char)*width*height*3);
Do this:
unsigned char* src = lena2.bits();
unsigned char* dst = out2.data;
int stride = lena2.bytesPerLine();
for (int row = 0; row < height; row++)
{
    // copy a single row, accounting for stride bytes
    memcpy(dst + width*3*row, src + row*stride, width*3);
}
All of this assumes it's the QImage that has the stride bytes and not the target Mat image you are transferring the bits to. If I have this backwards, then adjust the code to account for the step member of Mat. (I don't see you using this, so I'm willing to bet the above code is what you need.)
The "blue" image is most likely just the RGB color bytes needing to be swapped for every pixel. Not sure why you are calling rgbSwapped unless that was the effect you were going for. Oh wait, you're probably referring to that noise effect at the bottom of the image. I'm willing to bet you need to think about "stride" bytes there as well.
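For reference, a minimal sketch of the whole conversion, assuming the RGB888 QImage lenaRGB from the question: the Mat constructor that takes rows, cols, and a step argument, followed by clone(), sidesteps both the stride and the lifetime problems, and cvtColor then produces the BGR order OpenCV expects.
// Wrap the QImage data without copying, telling OpenCV the real row stride,
// then clone() so the Mat owns its own memory.
cv::Mat wrapped(lenaRGB.height(), lenaRGB.width(), CV_8UC3,
                const_cast<uchar*>(lenaRGB.constBits()),
                lenaRGB.bytesPerLine());
cv::Mat bgr;
cv::cvtColor(wrapped.clone(), bgr, cv::COLOR_RGB2BGR); // OpenCV displays BGR
cv::imshow("CV Image", bgr);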

Raw data to QImage

I'm new to graphics programming (pixels, images, etc..)
I'm trying to convert raw data to a QImage and display it on a QLabel. The problem is that the raw data can be any data (it's not actually image raw data, it's a binary file).
The reason for this is to understand deeply how pixels and things like that work. I know I'll get a random image with weird results, but it will work.
I'm doing something like this, but I think I'm doing it wrong!
QImage *img = new QImage(640, 480, QImage::Format_RGB16); //640x480 picture
//here I'm trying to fill the newly created QImage with random pixels and display it
for(int i = 0; i < 640; i++)
{
    for(int u = 0; u < 480; u++)
    {
        img->setPixel(i, u, rawData[i]);
    }
}
ui->label->setPixmap(QPixmap::fromImage(*img));
Am I doing it correctly? By the way, can you point me to where I should learn these things? Thank you!
Overall it's correct. QImage is a class that lets you manipulate its data directly, but you should use the correct pixel format.
A bit more efficient example:
QImage* img = new QImage(640, 480, QImage::Format_RGB16);
for (int y = 0; y < img->height(); y++)
{
    memcpy(img->scanLine(y), rawData[y], img->bytesPerLine());
}
Here rawData is a two-dimensional array with one row per scan line.
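If the raw data is a single flat buffer rather than a two-dimensional array, the same per-line copy might look like the sketch below (flatData is a hypothetical buffer name, not from the original answer):
// flatData is assumed to hold at least img->height() * img->bytesPerLine() bytes.
QByteArray flatData(img->height() * img->bytesPerLine(), 0);
for (int y = 0; y < img->height(); y++)
{
    // Advance by bytesPerLine(), not by width, to respect scan-line padding.
    memcpy(img->scanLine(y),
           flatData.constData() + y * img->bytesPerLine(),
           img->bytesPerLine());
}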
This is how I saved a raw BGRA frame to the disk:
QImage image((const unsigned char*)pixels, width, height, QImage::Format_RGB32);
image.save("out.jpg");
Syntactically, your code appears to be correct.
Reading the class signature, you may want to call setPixel in the following manner:
img->setPixel(i, u, QRgb(0xFFRRGGBB));
Where 0xFFRRGGBB is an ARGB color quadruplet, unless, of course, you want monochrome 8-bit support.
Additionally, declaring a naked pointer is dangerous. The following code is equivalent:
QImage image(640, 480, QImage::Format_something);
QPixmap::fromImage(image);
And will deallocate appropriately upon function completion.
The Qt Examples directory is a great place to search for functionality. Also, peruse the class documentation, because the pages are littered with examples.
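Putting the two suggestions together (a QRgb value per pixel, no naked pointer), a minimal sketch could look like this; the random fill is only for illustration:
QImage image(640, 480, QImage::Format_RGB32);
for (int x = 0; x < image.width(); x++)
{
    for (int y = 0; y < image.height(); y++)
    {
        // qRgb builds the 0xFFRRGGBB quadruplet that setPixel expects.
        image.setPixel(x, y, qRgb(qrand() % 256, qrand() % 256, qrand() % 256));
    }
}
ui->label->setPixmap(QPixmap::fromImage(image));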

Qt - Get audio amplitude from QBytearray

I'm trying to create a program, using Qt (C++), which can record audio from my microphone using QAudioInput and QIODevice. I did some research and came across an example located on this page. This example does what I need.
Now, I am trying to create an audio waveform of the recorded sound. I want to extract the audio amplitudes and save them in a QList. To do that I use the following code:
//Check the number of samples in the input buffer
qint64 len = m_audioInput->bytesReady();
//Limit sample size
if(len > 4096)
    len = 4096;
//Read sound samples from input device to buffer
qint64 l = m_input->read(m_buffer.data(), len);
if(l > 0)
{
    //Assign sound samples to short array
    short* resultingData = (short*)m_buffer.data();
    for ( i=0; i < len; i++ )
    {
        btlist.append( resultingData[ i ]);
    }
}
m_audioInput is a QAudioInput | m_buffer is a QByteArray | m_input is a QIODevice | btlist is a QList
I use the following QAudioFormat:
m_format.setFrequency(44100); //set frequency to 44100
m_format.setSampleRate(44100); //set sample rate to 44100
m_format.setChannels(1); //set channels to mono
m_format.setSampleSize(16); //set sample sze to 16 bit
m_format.setSampleType(QAudioFormat::SignedInt ); //signed integer sample
m_format.setByteOrder(QAudioFormat::LittleEndian); //Byte order
m_format.setCodec("audio/pcm"); //set codec as simple audio/pcm
When I print my QList, using qWarning() << btlist.at(int), I get some positive and negative numbers which represent my audio amplitudes. I used Microsoft Excel to plot the data and compare it with the actual sound waveform.
(EDIT BASED ON THE OP COMMENT)
I am drawing the waveform using QPainter in Qt like this
for(int i = 1; i < btlist.size(); i++){
    double x1 = (i-(i/1.25))-0.2;
    double y1 = btlist.at(i-1);
    double x2 = i-(i/1.25);
    double y2 = btlist.at(i);
    painter.drawLine(x1, y1, x2, y2);
}
The problem is that I also get lots of zeros (0) in my QList between the amplitude data, which, when drawn as part of the waveform, show up as a straight line. This is not normal and corrupts my waveform.
My question is: why is that happening? What do these zeros (0) represent? Am I doing something wrong? Also, is there a better way to extract audio amplitudes from a QByteArray?
Thank you.
The drawLine method you are using takes integer values, which means most of the time both of your x coordinates will be the same. Simplifying your formula, the x value at a given i is (i/5.0). By itself that is not an issue, because the lines will simply be superposed, and it is a perfectly valid way of drawing (just make sure that's what you want to do).
The zeros you see can be perfectly valid. They represent silence.
The real issue is that the range of your 16-bit PCM values is [-32768, 32767]. I doubt that the paint device you are using covers this range. You need to normalize your y-axis. Moreover, it seems that the Qt coordinate system doesn't have negative values (edit: never mind the negatives, the documentation says logical coordinates are converted).
For instance, convert your PCM values using:
((btlist.at(i) / MAX_AMPLITUDE + 1.0) / 2) * paintDevice.height();
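Applied to the drawing loop from the question, that normalization might look like the following sketch. MAX_AMPLITUDE of 32768.0 is an assumption based on the 16-bit signed format, and height() assumes the loop runs inside the widget's paint handler:
const double MAX_AMPLITUDE = 32768.0;
for(int i = 1; i < btlist.size(); i++){
    double x1 = (i - 1) / 5.0;
    double x2 = i / 5.0;
    // Map [-MAX_AMPLITUDE, MAX_AMPLITUDE] onto [0, height()] so the
    // waveform fits inside the widget.
    double y1 = ((btlist.at(i-1) / MAX_AMPLITUDE + 1.0) / 2) * height();
    double y2 = ((btlist.at(i)   / MAX_AMPLITUDE + 1.0) / 2) * height();
    painter.drawLine(x1, y1, x2, y2);
}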
Edit:
By the way, you are not using l, which is the real amount of data you read. If it is less than len, you will read invalid values at the end of your buffer and possibly get garbage, zeros, or a crash.
Also, your buffer is a byte buffer and you iterate over it with a short pointer, so whether you use l or len, the maximum index needs to be divided by two. This is probably the cause of the long line of zeros in your picture.
for ( i=0; i < l/2; i++ )
{
    btlist.append( resultingData[ i ]);
}

Efficient conversion of AVFrame to QImage

I need to extract frames from a video in my Qt-based application. Using the ffmpeg libraries I am able to fetch frames as AVFrames, which I need to convert to QImage to use in other parts of my application. This conversion needs to be efficient. So far it seems sws_scale() is the right function to use, but I am not sure what source and destination pixel formats should be specified.
I came up with the following 2-step process that first converts a decoded AVFrame to another AVFrame in RGB colorspace and then to a QImage. It works and is reasonably fast.
src_frame = get_decoded_frame();
AVFrame *pFrameRGB = avcodec_alloc_frame(); // intermediate pframe
if(pFrameRGB==NULL) {
;// Handle error
}
int numBytes= avpicture_get_size(PIX_FMT_RGB24,
is->video_st->codec->width, is->video_st->codec->height);
uint8_t *buffer = (uint8_t*)malloc(numBytes);
avpicture_fill((AVPicture*)pFrameRGB, buffer, PIX_FMT_RGB24,
is->video_st->codec->width, is->video_st->codec->height);
int dst_fmt = PIX_FMT_RGB24;
int dst_w = is->video_st->codec->width;
int dst_h = is->video_st->codec->height;
// TODO: cache following conversion context for speedup,
// and recalculate only on dimension changes
SwsContext *img_convert_ctx_temp;
img_convert_ctx_temp = sws_getContext(
is->video_st->codec->width, is->video_st->codec->height,
is->video_st->codec->pix_fmt,
dst_w, dst_h, (PixelFormat)dst_fmt,
SWS_BICUBIC, NULL, NULL, NULL);
QImage *myImage = new QImage(dst_w, dst_h, QImage::Format_RGB32);
sws_scale(img_convert_ctx_temp,
src_frame->data, src_frame->linesize, 0, is->video_st->codec->height,
pFrameRGB->data,
pFrameRGB->linesize);
uint8_t *src = (uint8_t *)(pFrameRGB->data[0]);
for (int y = 0; y < dst_h; y++)
{
    QRgb *scanLine = (QRgb *) myImage->scanLine(y);
    for (int x = 0; x < dst_w; x++)
    {
        scanLine[x] = qRgb(src[3*x], src[3*x+1], src[3*x+2]);
    }
    src += pFrameRGB->linesize[0];
}
If you find a more efficient approach, let me know in the comments.
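As for the TODO about caching the conversion context, sws_getCachedContext can handle that for you: it reuses the previous context when the parameters are unchanged and reallocates it otherwise. A rough sketch against the same variables as above (the pointer must start out as NULL so the first call allocates it):
// img_convert_ctx_temp must be initialized to NULL before the first frame.
img_convert_ctx_temp = sws_getCachedContext(img_convert_ctx_temp,
    is->video_st->codec->width, is->video_st->codec->height,
    is->video_st->codec->pix_fmt,
    dst_w, dst_h, (PixelFormat)dst_fmt,
    SWS_BICUBIC, NULL, NULL, NULL);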
I know it's too late, but maybe someone will find it useful. From here I got the clue for doing the same conversion, which looks a bit shorter.
So I created QImage which is reused for every decoded frame:
QImage img( width, height, QImage::Format_RGB888 );
Created frameRGB:
frameRGB = av_frame_alloc();
//Allocate memory for the pixels of a picture and setup the AVPicture fields for it.
avpicture_alloc( ( AVPicture *) frameRGB, AV_PIX_FMT_RGB24, width, height);
After the first frame is decoded, I create the SwsContext conversion context this way (it will be reused for all subsequent frames):
mImgConvertCtx = sws_getContext( codecContext->width, codecContext->height, codecContext->pix_fmt, width, height, AV_PIX_FMT_RGB24, SWS_BICUBIC, NULL, NULL, NULL);
And finally for every decoded frame conversion is performed:
if( 1 == framesFinished && nullptr != mImgConvertCtx )
{
    //convert frame to frameRGB
    sws_scale(mImgConvertCtx, frame->data, frame->linesize, 0, codecContext->height, frameRGB->data, frameRGB->linesize);
    //set QImage from frameRGB
    for( int y = 0; y < height; ++y )
        memcpy( img.scanLine(y), frameRGB->data[0] + y * frameRGB->linesize[0], width * 3 );
}
See the link for the specifics.
A simpler approach, I think:
void takeSnapshot(AVCodecContext* dec_ctx, AVFrame* frame)
{
SwsContext* img_convert_ctx;
img_convert_ctx = sws_getContext(dec_ctx->width,
dec_ctx->height,
dec_ctx->pix_fmt,
dec_ctx->width,
dec_ctx->height,
AV_PIX_FMT_RGB24,
SWS_BICUBIC, NULL, NULL, NULL);
AVFrame* frameRGB = av_frame_alloc();
avpicture_alloc((AVPicture*)frameRGB,
AV_PIX_FMT_RGB24,
dec_ctx->width,
dec_ctx->height);
sws_scale(img_convert_ctx,
frame->data,
frame->linesize, 0,
dec_ctx->height,
frameRGB->data,
frameRGB->linesize);
QImage image(frameRGB->data[0],
dec_ctx->width,
dec_ctx->height,
frameRGB->linesize[0],
QImage::Format_RGB888);
image.save("capture.png");
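    // Hypothetical cleanup, not part of the original snippet: free the scaler
    // context and the RGB frame so repeated snapshots don't leak memory.
    // The QImage has already been saved, so its borrowed data is no longer needed.
    sws_freeContext(img_convert_ctx);
    avpicture_free((AVPicture*)frameRGB);
    av_frame_free(&frameRGB);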
}
Today I tested passing image->bits() directly to swscale, and it works, so there is no need to copy into an intermediate buffer. For example:
/* 1. Get frame and QImage to show */
struct my_frame *frame = get_frame(source);
QImage *myImage = new QImage(dst_w, dst_h, QImage::Format_RGBA8888);
/* 2. Convert and write into image buffer */
uint8_t *dst[] = {myImage->bits()};
int linesizes[4];
av_image_fill_linesizes(linesizes, AV_PIX_FMT_RGBA, frame->width);
sws_scale(myswscontext, frame->data, (const int*)frame->linesize,
0, frame->height, dst, linesizes);
I just discovered that scanLine is just seeking through the buffer. All you need is to use AV_PIX_FMT_RGB32 for the AVFrame and QImage::Format_RGB32 for the QImage.
Then after decoding just do a memcpy:
memcpy(img.scanLine(0), pFrameRGB->data[0], pFrameRGB->linesize[0] * img.height());
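Note that this single memcpy is only safe when pFrameRGB->linesize[0] matches the QImage's bytesPerLine(); if either side pads its rows, a per-line copy avoids shearing the image. A sketch, assuming the same img and pFrameRGB as above:
for (int y = 0; y < img.height(); ++y)
{
    // Copy one row at a time so differing strides can't shear the image;
    // each RGB32 row holds width * 4 bytes of pixel data.
    memcpy(img.scanLine(y),
           pFrameRGB->data[0] + y * pFrameRGB->linesize[0],
           img.width() * 4);
}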
I had problems with the other proposed solutions:
They did not mention freeing the AVFrame, the SwsContext, or the allocated buffers, which caused massive memory leaks (I had thousands of frames to handle). These problems couldn't all be solved easily, because QImage relies on the underlying data and does not copy it: if you free the buffer directly, the QImage points to freed data and breaks. This could be solved by using QImage's cleanupFunction to free the buffer once the image is no longer needed, but given the other problems it wasn't a good approach anyway.
In some cases one of the suggestions, passing QImage::bits directly to sws_scale, would not work, as QImage scan lines are aligned to a minimum of 32 bits. Therefore for certain dimensions the stride would not match what sws_scale expects, and each line of the output would be shifted a little bit.
A third problem is that they used the deprecated AVPicture API.
I listed the problem in another question, Converting an AVFrame to QImage with conversion of pixel format, and in the end found a solution using a temporary buffer which could be copied into the QImage and then safely freed.
So see my answer there for a fully working, efficient implementation with no deprecated function calls: https://stackoverflow.com/a/68212609/7360943
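For completeness, a minimal sketch of the cleanupFunction idea mentioned above (buffer and dimension names are assumptions, not from the answers): the QImage frees its backing store itself when it is destroyed, so no manual free is needed afterwards.
// av_free is used as the cleanup callback because the buffer came from av_malloc.
uint8_t *rgbBuffer = (uint8_t *)av_malloc(dst_w * dst_h * 3);
// ... fill rgbBuffer via sws_scale into an AVFrame wrapping it ...
QImage ownedImage(rgbBuffer, dst_w, dst_h, dst_w * 3, QImage::Format_RGB888,
                  [](void *buf) { av_free(buf); }, rgbBuffer);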

How to print a uint16 monochrome image in Qt?

I'm trying to print an image from a DICOM file. I pass the raw data to a convertToFormat_RGB888 function. As far as I know, Qt can't handle monochrome 16-bit images.
Here's the original image (converted to jpg here):
http://imageshack.us/photo/my-images/839/16bitc.jpg/
bool convertToFormat_RGB888(gdcm::Image const & gimage, char *buffer, QImage* &imageQt)
Inside this function, execution reaches this branch...
...
else if (gimage.GetPixelFormat() == gdcm::PixelFormat::UINT16)
{
    short *buffer16 = (short*)buffer;
    unsigned char *ubuffer = new unsigned char[dimX*dimY*3];
    unsigned char *pubuffer = ubuffer;
    for (unsigned int i = 0; i < dimX*dimY; i++)
    {
        *pubuffer++ = *buffer16;
        *pubuffer++ = *buffer16;
        *pubuffer++ = *buffer16;
        buffer16++;
    }
    imageQt = new QImage(ubuffer, dimX, dimY, QImage::Format_RGB888);
...
This code is a slight adaptation from here:
gdcm.sourceforge.net/2.0/html/ConvertToQImage_8cxx-example.html
With the original one I got an execution error. With mine at least I get an image, but it's not the same.
Here is the new image (converted to jpg here):
http://imageshack.us/photo/my-images/204/8bitz.jpg/
What am I doing wrong?
Thanks.
Try to get the pixel values from the buffer manually and pass them to QImage::setPixel. It can be simpler.
You are assigning a 16-bit integer to 8-bit variables here:
*pubuffer++ = *buffer16;
The extra bits are simply discarded: the lower 8 bits end up in the destination, but you want the upper 8 bits:
*pubuffer++ = (*buffer16) >> 8;
The other issue is endianness. Depending on the endianness of the source data, you may need to call one of the QtEndian functions.
Lastly, you don't really need to use any of the 32- or 24-bit Qt image formats. Use 8-bit QImage::Format_Indexed8 and set the color table to grays.
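A minimal sketch of that last suggestion, reusing the buffer/dimX/dimY/imageQt names from the question (endianness handling omitted):
unsigned short *buffer16 = (unsigned short*)buffer;
unsigned char *ubuffer = new unsigned char[dimX*dimY];
for (unsigned int i = 0; i < dimX*dimY; i++)
    ubuffer[i] = buffer16[i] >> 8;   // keep the most significant byte

// Pass dimX as bytesPerLine so scan lines aren't assumed to be 32-bit aligned.
imageQt = new QImage(ubuffer, dimX, dimY, dimX, QImage::Format_Indexed8);

// One gray entry per possible 8-bit value.
QVector<QRgb> grays(256);
for (int g = 0; g < 256; ++g)
    grays[g] = qRgb(g, g, g);
imageQt->setColorTable(grays);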
