I've found a very similar topic, how to convert an opencv cv::Mat to qimage, but it does not solve my problem.
I have a function converting cv::Mat to QImage:
QImage cvMatToQImg(cv::Mat& mat)
{
    cv::Mat rgb;
    if(mat.channels() == 1)
    {
        cv::cvtColor(mat, rgb, CV_GRAY2BGR);
        cv::cvtColor(rgb, rgb, CV_BGR2BGRA);
        QImage temp = QImage((unsigned char*)(rgb.data), rgb.cols,
                             rgb.rows, QImage::Format_ARGB32);
        QImage returnImage = temp.copy();
        return returnImage;
    }
    // ... other channel counts elided
}
It works for me, but I want to make it more efficient.
First: why does replacing the two cvtColor calls with
cv::cvtColor(mat, rgb, CV_GRAY2BGRA)
make
QImage returnImage = temp.copy();
fail with a segfault?
Second: how can I eliminate the copy of the QImage? When I simply return temp, I get a segfault.
Are there any other optimizations to be made here? This function is used very often, so I want to make it as fast as possible.
Your solution to the problem is not efficient; in particular, it is less efficient than the code I posted in the other question you link to.
Your problem is that you have to convert from grayscale to color (or RGBA). As soon as you need this conversion, a copy of the data is naturally required.
My solution does the conversion between grayscale and color, and between cv::Mat and QImage, at the same time. That's why it is the most efficient you can get.
In your solution, you first convert, and then build the QImage around the OpenCV data directly to spare a second copy. But the data you point to is temporary: as soon as you leave the function, the cv::Mat frees its associated memory, which is why the data is no longer valid inside the QImage either. You could manually increase the reference counter of the cv::Mat beforehand, but that opens the door to a memory leak afterwards.
In the end, you attempt a dirty solution to a problem better solved in a clean fashion.
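For illustration, a minimal sketch of what such a combined conversion can look like (this is a sketch of the idea, not the exact code from the linked answer; it assumes a CV_8UC1 input):
QImage grayMatToQImage(const cv::Mat &mat)  // assumes mat is CV_8UC1
{
    QImage img(mat.cols, mat.rows, QImage::Format_RGB32);
    for (int y = 0; y < mat.rows; ++y)
    {
        const uchar *src = mat.ptr<uchar>(y);
        QRgb *dst = reinterpret_cast<QRgb *>(img.scanLine(y));
        for (int x = 0; x < mat.cols; ++x)
            dst[x] = qRgb(src[x], src[x], src[x]);  // replicate gray into R, G, B
    }
    return img;  // QImage is implicitly shared, so returning it is cheap
}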
It may be easiest to roll your own solution. Below is the current OpenCV implementation for going from gray to RGBA format:
template<typename _Tp>
struct Gray2RGB
{
typedef _Tp channel_type;
Gray2RGB(int _dstcn) : dstcn(_dstcn) {}
void operator()(const _Tp* src, _Tp* dst, int n) const
{
if( dstcn == 3 )
for( int i = 0; i < n; i++, dst += 3 )
{
dst[0] = dst[1] = dst[2] = src[i];
}
else
{
_Tp alpha = ColorChannel<_Tp>::max();
for( int i = 0; i < n; i++, dst += 4 )
{
dst[0] = dst[1] = dst[2] = src[i];
dst[3] = alpha;
}
}
}
int dstcn;
};
Here is where the actual cvtColor call occurs:
case CV_GRAY2BGR: case CV_GRAY2BGRA:
if( dcn <= 0 ) dcn = 3;
CV_Assert( scn == 1 && (dcn == 3 || dcn == 4));
_dst.create(sz, CV_MAKETYPE(depth, dcn));
dst = _dst.getMat();
if( depth == CV_8U )
CvtColorLoop(src, dst, Gray2RGB<uchar>(dcn));
This code is contained in the color.cpp file in the imgproc library.
As you can see, since you are not setting the dstCn parameter in your cvtColor calls, it defaults to dcn = 3. To go straight from gray to BGRA, set dstCn to 4. Since OpenCV's default color order is BGR, you'll still need to swap the color channels for it to look right (assuming you get your image data from an OpenCV function). So, it may be worth implementing your own converter, possibly following the above example, or using ypnos's answer here.
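For illustration, a minimal sketch of that suggestion applied to the question's function (assuming a CV_8UC1 input; the key point is passing dstCn = 4, since with the default dstCn the GRAY2BGRA path above still produces a 3-channel image, which would make QImage read past the buffer):
cv::Mat rgba;
cv::cvtColor(mat, rgba, CV_GRAY2BGRA, 4);   // dstCn = 4: genuinely 4 channels
QImage temp(rgba.data, rgba.cols, rgba.rows,
            static_cast<int>(rgba.step),    // pass the row stride explicitly
            QImage::Format_ARGB32);
QImage result = temp.copy();                // deep copy before rgba is destroyed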
Also, have a look at my other answer involving how to integrate OpenCV with Qt.
The problem is that neither cv::Mat nor QImage data is necessarily contiguous.
New data rows in OpenCV start on a 32-bit boundary (I'm not sure about QImage; I think it's system dependent), so you can't copy a single memory block unless your rows happen to be exact multiples of 4 bytes.
See How to output this 24 bit image in Qt
I am new to both OpenCV and Qt. What I am doing is converting a QImage to an OpenCV Mat and then displaying both of them. Here is my code for this conversion:
i = new QImage("lena.png");
QImage lena = i->scaled(labW,labH,Qt::IgnoreAspectRatio);
//Original
QImage lenaRGB = lena.convertToFormat(QImage::Format_RGB888);
ui->imgWindow->setPixmap(QPixmap::fromImage(lena,Qt::AutoColor));
//method 1
Mat lena_cv, out;
QImage lena2 = lenaRGB.rgbSwapped();
QImage swapped = lena2;
swapped = swapped.rgbSwapped();
lena_cv = Mat(swapped.width(),swapped.height(),CV_8UC3, swapped.bits(),swapped.bytesPerLine()).clone();
namedWindow("CV Image");
imshow("CV Image", lena_cv);
//method 2
Mat out2,out3;
out2.create(Size(lena2.width(),lena2.height()),CV_8UC3);
int width = lena2.width();
int height = lena2.height();
memcpy(out2.data, lena2.bits(), sizeof(char)*width*height*3);
cvtColor(out2,out3,CV_RGB2GRAY);
namedWindow("CV Image2");
imshow("CV Image2",out3);
Neither of the two conversions above yields the desired image, as shown below:
It is also worth noting that the conversion cannot proceed without using rgbSwapped, i.e.:
lena_cv = Mat(lenaRGB.width(),lenaRGB.height(),CV_8UC3, lenaRGB.bits(),lenaRGB.bytesPerLine());
because the resulting image lena_cv cannot be displayed. And if an additional step converts lena_cv to BGR format with cvtColor before display, it throws:
Exception at 0x7ffdff394008, code: 0xe06d7363: C++ exception, flags=0x1
(execution cannot be continued) (first chance) at c:\opencv-3.2.0
\sources\modules\core\src\opencl\runtime\opencl_core.cpp:278
This indicates the subsequent conversion to BGR fails. I am not sure whether an RGB-to-BGR conversion (of the QImage) is necessary when converting a QImage to a cv image.
Can anyone help identify the issue with the above code? Thanks :)
The "skew" in the third image is almost likely a result of assuming that each scan line occupies exactly width*3 bytes. There's typically a "stride" (or "steps") factor with each row in many image formats image such that the number of bytes per row is on some 4-byte or 16-byte boundary. Fortunately, QImage has a helper method called bytesPerLine that tells you how long each source row is.
So instead of this:
memcpy(out2.data, lena2.bits(), sizeof(char)*width*height*3);
Do this:
unsigned char* src = lena2.bits();
unsigned char* dst = out2.data;
int stride = lena2.bytesPerLine();
for (int row = 0; row < height; row++)
{
memcpy(dst + width*3*row, src+row*stride, width*3); // copy a single row, accounting for stride bytes
}
All of this assumes it's the QImage that has the stride bytes and not the target Mat you are transferring the bits to. If I have this backwards, adjust the code to account for the step member of Mat. (I don't see you using it, so I'm willing to bet the above code is what you need.)
The "blue" image is mostly likely just the RGB color bytes needing to be swapped for every pixel. Not sure why you are calling rgbSwapped unless that was the effect you were going for. Oh wait, you're probably referring to that noise effect at the bottom of the image. I'm willing to bet you need to think about "stride" bytes as well here too.
I'm new to graphics programming (pixels, images, etc.).
I'm trying to convert raw data to a QImage and display it on a QLabel. The problem is that the raw data can be any data (it's not actually image data; it's a binary file).
The reason for this is to understand deeply how pixels and things like that work. I know I'll get a random image with weird results, but it will work.
I'm doing something like this, but I think I'm doing it wrong:
QImage *img = new QImage(640, 480, QImage::Format_RGB16); //640,480 size picture.
//here I'm trying to fill newly created QImage with random pixels and display it.
for(int i = 0; i < 640; i++)
{
for(int u = 0; u < 480; u++)
{
img->setPixel(i, u, rawData[i]);
}
}
ui->label->setPixmap(QPixmap::fromImage(*img));
am I doing it correctly? By the way, can you point me where should I learn these things? Thank you!
Overall it's correct. QImage is a class that lets you manipulate its data directly, but you should use the correct pixel format.
A bit more efficient example:
QImage* img = new QImage(640, 480, QImage::Format_RGB16);
for (int y = 0; y < img->height(); y++)
{
memcpy(img->scanLine(y), rawData[y], img->bytesPerLine());
}
Where rawData is a two-dimensional array.
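For example, a self-contained sketch of the above, where rawData is a hypothetical stand-in filled with random noise (each row holds at least bytesPerLine() bytes):
QImage img(640, 480, QImage::Format_RGB16);
std::vector<std::vector<uchar>> rawData(
    img.height(), std::vector<uchar>(img.bytesPerLine()));
for (auto &row : rawData)
    for (auto &b : row)
        b = static_cast<uchar>(std::rand() % 256);  // arbitrary bytes -> weird image

for (int y = 0; y < img.height(); y++)
    memcpy(img.scanLine(y), rawData[y].data(), img.bytesPerLine());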
This is how I saved a raw BGRA frame to the disk:
QImage image((const unsigned char*)pixels, width, height, QImage::Format_RGB32);
image.save("out.jpg");
Syntactically, your code appears to be correct.
Reading the class signature, you may want to call setPixel in the following manner:
img->setPixel(i, u, QRgb(0xFFRRGGBB));
Where 0xFFRRGGBB is a color quadruplet (FF alpha, then red, green and blue bytes), unless, of course, you want monochrome 8-bit support.
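For example, treating each raw byte as a gray level (the indexing is whatever your loops provide):
uchar v = rawData[i];               // one source byte
img->setPixel(i, u, qRgb(v, v, v)); // replicate it into R, G and B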
Additionally, declaring a naked pointer is dangerous. The following code is equivalent:
QImage image(640, 480, QImage::Format_something);
QPixmap::fromImage(image);
And will deallocate appropriately upon function completion.
The Qt examples directory is a great place to search for functionality. Also, peruse the class documentation, because it's littered with examples.
I need to extract frames from a video in my Qt-based application. Using the ffmpeg libraries I am able to fetch frames as AVFrames, which I need to convert to QImage for use in other parts of my application. This conversion needs to be efficient. So far it seems sws_scale() is the right function to use, but I am not sure what source and destination pixel formats should be specified.
I came up with the following two-step process: first convert the decoded AVFrame to another AVFrame in the RGB colorspace, then convert that to QImage. It works and is reasonably fast.
src_frame = get_decoded_frame();
AVFrame *pFrameRGB = avcodec_alloc_frame(); // intermediate pframe
if(pFrameRGB==NULL) {
;// Handle error
}
int numBytes= avpicture_get_size(PIX_FMT_RGB24,
is->video_st->codec->width, is->video_st->codec->height);
uint8_t *buffer = (uint8_t*)malloc(numBytes);
avpicture_fill((AVPicture*)pFrameRGB, buffer, PIX_FMT_RGB24,
is->video_st->codec->width, is->video_st->codec->height);
int dst_fmt = PIX_FMT_RGB24;
int dst_w = is->video_st->codec->width;
int dst_h = is->video_st->codec->height;
// TODO: cache following conversion context for speedup,
// and recalculate only on dimension changes
SwsContext *img_convert_ctx_temp;
img_convert_ctx_temp = sws_getContext(
is->video_st->codec->width, is->video_st->codec->height,
is->video_st->codec->pix_fmt,
dst_w, dst_h, (PixelFormat)dst_fmt,
SWS_BICUBIC, NULL, NULL, NULL);
QImage *myImage = new QImage(dst_w, dst_h, QImage::Format_RGB32);
sws_scale(img_convert_ctx_temp,
src_frame->data, src_frame->linesize, 0, is->video_st->codec->height,
pFrameRGB->data,
pFrameRGB->linesize);
uint8_t *src = (uint8_t *)(pFrameRGB->data[0]);
for (int y = 0; y < dst_h; y++)
{
QRgb *scanLine = (QRgb *) myImage->scanLine(y);
for (int x = 0; x < dst_w; x=x+1)
{
scanLine[x] = qRgb(src[3*x], src[3*x+1], src[3*x+2]);
}
src += pFrameRGB->linesize[0];
}
If you find a more efficient approach, let me know in the comments
I know it's too late, but maybe someone will find this useful. From here I got the clue for doing the same conversion, which looks a bit shorter.
So I created QImage which is reused for every decoded frame:
QImage img( width, height, QImage::Format_RGB888 );
Created frameRGB:
frameRGB = av_frame_alloc();
//Allocate memory for the pixels of a picture and setup the AVPicture fields for it.
avpicture_alloc( ( AVPicture *) frameRGB, AV_PIX_FMT_RGB24, width, height);
After the first frame is decoded, I create the conversion context SwsContext this way (it will be used for all subsequent frames):
imgConvertCtx = sws_getContext( codecContext->width, codecContext->height, codecContext->pix_fmt, width, height, AV_PIX_FMT_RGB24, SWS_BICUBIC, NULL, NULL, NULL);
And finally for every decoded frame conversion is performed:
if( 1 == framesFinished && nullptr != imgConvertCtx )
{
//conversion frame to frameRGB
sws_scale(imgConvertCtx, frame->data, frame->linesize, 0, codecContext->height, frameRGB->data, frameRGB->linesize);
//setting QImage from frameRGB
for( int y = 0; y < height; ++y )
memcpy( img.scanLine(y), frameRGB->data[0] + y * frameRGB->linesize[0], width * 3 );
}
See the link for the specifics.
A simpler approach, I think:
void takeSnapshot(AVCodecContext* dec_ctx, AVFrame* frame)
{
SwsContext* img_convert_ctx;
img_convert_ctx = sws_getContext(dec_ctx->width,
dec_ctx->height,
dec_ctx->pix_fmt,
dec_ctx->width,
dec_ctx->height,
AV_PIX_FMT_RGB24,
SWS_BICUBIC, NULL, NULL, NULL);
AVFrame* frameRGB = av_frame_alloc();
avpicture_alloc((AVPicture*)frameRGB,
AV_PIX_FMT_RGB24,
dec_ctx->width,
dec_ctx->height);
sws_scale(img_convert_ctx,
frame->data,
frame->linesize, 0,
dec_ctx->height,
frameRGB->data,
frameRGB->linesize);
QImage image(frameRGB->data[0],
dec_ctx->width,
dec_ctx->height,
frameRGB->linesize[0],
QImage::Format_RGB888);
image.save("capture.png");
}
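One caveat worth adding (a later answer below makes the same point): the snippet never frees what it allocates, so calling it per frame leaks. A sketch of the teardown that belongs at the end of takeSnapshot, after image.save:
avpicture_free((AVPicture*)frameRGB); // frees the buffer avpicture_alloc made
av_frame_free(&frameRGB);             // frees the AVFrame struct itself
sws_freeContext(img_convert_ctx);     // frees the swscale context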
Today I tested passing image->bits() directly to sws_scale, and it finally works, so there is no need to copy to memory. For example:
/* 1. Get frame and QImage to show */
struct my_frame *frame = get_frame(source);
QImage *myImage = new QImage(dst_w, dst_h, QImage::Format_RGBA8888);
/* 2. Convert and write into image buffer */
uint8_t *dst[] = {myImage->bits()};
int linesizes[4];
av_image_fill_linesizes(linesizes, AV_PIX_FMT_RGBA, frame->width);
sws_scale(myswscontext, frame->data, (const int*)frame->linesize,
0, frame->height, dst, linesizes);
I just discovered that scanLine just seeks through the buffer: all you need is to use AV_PIX_FMT_RGB32 for the AVFrame and QImage::Format_RGB32 for the QImage.
Then, after decoding, just do a memcpy:
memcpy(img.scanLine(0), pFrameRGB->data[0], pFrameRGB->linesize[0] * pFrameRGB->height);
I had problems with the other proposed solutions:
They did not mention freeing the AVFrame, the SwsContext, or the allocated buffers, which caused massive memory leaks (I had thousands of frames to handle). These problems couldn't all be solved easily, because QImage relies on the underlying data and does not copy it: freeing the buffer directly leaves the QImage pointing at freed data and breaks it. This could be solved with QImage's cleanupFunction to free the buffer once the image is no longer needed, but with the other problems it wasn't a good fit anyway.
In some cases one of the suggestions, passing QImage.bits directly to sws_scale, would not work, because QImage scan lines are at minimum 32-bit aligned. For certain dimensions the alignment doesn't match the width sws_scale expects, and the output has each line shifted a little bit.
A third problem is that they used deprecated AVPicture elements.
I described the problem in another question, Converting an AVFrame to QImage with conversion of pixel format, and in the end found a solution using a temporary buffer, which could be copied into the QImage and then safely freed.
So see my answer there for a fully working, efficient implementation with no deprecated function calls: https://stackoverflow.com/a/68212609/7360943
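In rough outline, the linked solution does this (a sketch only; w, h, swsCtx and frame stand in for your real dimensions, scaler context and decoded frame -- see the link for the full implementation):
uint8_t *dst_data[4];
int dst_linesize[4];
av_image_alloc(dst_data, dst_linesize, w, h, AV_PIX_FMT_RGB24, 1);

sws_scale(swsCtx, frame->data, frame->linesize, 0, frame->height,
          dst_data, dst_linesize);

QImage img(w, h, QImage::Format_RGB888);
for (int y = 0; y < h; y++)           // copy per line: QImage rows are aligned
    memcpy(img.scanLine(y), dst_data[0] + y * dst_linesize[0], w * 3);

av_freep(&dst_data[0]);               // safe: img owns its pixels now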
I'm trying to print an image from a DICOM file. I pass the raw data to a convertToFormat_RGB888 function. As far as I know, Qt can't handle monochrome 16-bit images.
Here's the original image (converted to jpg here):
http://imageshack.us/photo/my-images/839/16bitc.jpg/
bool convertToFormat_RGB888(gdcm::Image const & gimage, char *buffer, QImage* &imageQt)
Inside this function, I end up in this branch...
...
else if (gimage.GetPixelFormat() == gdcm::PixelFormat::UINT16)
{
short *buffer16 = (short*)buffer;
unsigned char *ubuffer = new unsigned char[dimX*dimY*3];
unsigned char *pubuffer = ubuffer;
for (unsigned int i = 0; i < dimX*dimY; i++)
{
*pubuffer++ = *buffer16;
*pubuffer++ = *buffer16;
*pubuffer++ = *buffer16;
buffer16++;
}
imageQt = new QImage(ubuffer, dimX, dimY, QImage::Format_RGB888);
...
This code is a slight adaptation of this example:
gdcm.sourceforge.net/2.0/html/ConvertToQImage_8cxx-example.html
But with the original one I got an execution error. With mine I at least get an image, but it's not the same.
Here is the new image (converted to jpg here):
http://imageshack.us/photo/my-images/204/8bitz.jpg/
What am I doing wrong?
Thanks.
Try getting the pixel values from the buffer manually and passing them to QImage::setPixel. It may be simpler.
You are assigning a 16-bit integer to an 8-bit variable here:
*pubuffer++ = *buffer16;
This truncates the value, keeping only the lower 8 bits. You want the upper 8 bits:
*pubuffer++ = (*buffer16) >> 8;
The other issue is endianness. Depending on the endianness of the source data, you may need to call one of the QtEndian functions.
Lastly, you don't really need any of the 32-bit or 24-bit Qt image formats. Use 8-bit QImage::Format_Indexed8 and set the color table to grays.
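A minimal sketch of that suggestion, reusing the question's dimX, dimY and buffer (and assuming the upper byte of each 16-bit sample is meaningful; real DICOM data often needs window/level scaling instead):
QVector<QRgb> grays(256);
for (int i = 0; i < 256; i++)
    grays[i] = qRgb(i, i, i);               // 256-entry gray palette

QImage img(dimX, dimY, QImage::Format_Indexed8);
img.setColorTable(grays);

const quint16 *src = reinterpret_cast<const quint16*>(buffer);
for (unsigned int y = 0; y < dimY; y++)
{
    uchar *line = img.scanLine(y);
    for (unsigned int x = 0; x < dimX; x++)
        line[x] = src[y * dimX + x] >> 8;   // keep the upper 8 bits
}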
I am using GDI+ Graphics to draw a 4000x3000 image to the screen, but it is really slow: it takes about 300 ms. I wish it took less than 10 ms.
Bitmap *bitmap = Bitmap::FromFile("XXXX",...);
//--------------------------------------------
// this part takes about 300ms, terrible!
int width = bitmap->GetWidth();
int height = bitmap->GetHeight();
DrawImage(bitmap,0,0,width,height);
//------------------------------------------
I cannot use CachedBitmap, because I want to edit the bitmap later.
How can I improve it? Or is any thing wrong?
This native GDI function also draws the image to the screen, and it takes just 1 ms:
SetStretchBltMode(hDC, COLORONCOLOR);
StretchDIBits(hDC, rcDest.left, rcDest.top,
rcDest.right-rcDest.left, rcDest.bottom-rcDest.top,
0, 0, width, height,
dib, dibinfo, DIB_RGB_COLORS, SRCCOPY); // dib is the BYTE* pixel data, dibinfo the BITMAPINFO*
//--------------------------------------------------------------
If I want to use StretchDIBits, I need to pass a BITMAPINFO, but how can I get a BITMAPINFO from a GDI+ Bitmap object? I did an experiment with the FreeImage library: I called StretchDIBits using a FreeImagePlus object and it draws really fast. But now I need to draw a Bitmap, and to run some algorithms on the Bitmap's bits array. How can I get the BITMAPINFO from a Bitmap object? It's really annoying.
If you're using GDI+, the TextureBrush class is what you need for rendering images fast. I've written a couple of 2d games with it, getting around 30 FPS or so.
I've never written .NET code in C++, so here's a C#-ish example:
Bitmap bmp = new Bitmap(...);
TextureBrush myBrush = new TextureBrush(bmp);
private void Paint(object sender, PaintEventArgs e)
{
//Don't draw the bitmap directly.
//Only draw TextureBrush inside the Paint event.
e.Graphics.FillRectangle(myBrush, ...)
}
You have a screen of 4000 x 3000 resolution? Wow!
If not, you should draw only the visible part of the image; it would be much faster...
[EDIT after first comment] My remark is indeed a bit stupid: I suppose DrawImage will mask/skip unneeded pixels anyway.
After your edit (showing StretchDIBits), I guess a possible source of the speed difference might be that StretchDIBits is hardware accelerated ("If the driver cannot support the JPEG or PNG file image" is a hint...) while DrawImage might be (I have no proof of this!) coded in C, relying on CPU power instead of the GPU's.
If I recall correctly, DIB images are fast (despite being "device independent"). See High Speed Win32 Animation: "use CreateDIBSection to do high speed animation". OK, it compares DIB with GDI in an old Windows version (1996!), but I think it still holds true.
[EDIT] Maybe the Bitmap::GetHBITMAP function can help you use StretchDIBits (not tested...).
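To expand on that idea, here is an untested sketch of one way to get the BITMAPINFO the question asks about: lock the GDI+ bitmap's pixels and describe them by hand (destW and destH are whatever destination rectangle you are drawing into):
Gdiplus::BitmapData bd;
Gdiplus::Rect rect(0, 0, bitmap->GetWidth(), bitmap->GetHeight());
bitmap->LockBits(&rect, Gdiplus::ImageLockModeRead,
                 PixelFormat32bppRGB, &bd);

BITMAPINFO bmi = {};
bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
bmi.bmiHeader.biWidth       = bd.Width;
bmi.bmiHeader.biHeight      = -static_cast<LONG>(bd.Height); // top-down DIB
bmi.bmiHeader.biPlanes      = 1;
bmi.bmiHeader.biBitCount    = 32;
bmi.bmiHeader.biCompression = BI_RGB;

SetStretchBltMode(hDC, COLORONCOLOR);
StretchDIBits(hDC, 0, 0, destW, destH,
              0, 0, bd.Width, bd.Height,
              bd.Scan0, &bmi, DIB_RGB_COLORS, SRCCOPY);

bitmap->UnlockBits(&bd);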
Just a thought; instead of retrieving the width and height of the image before drawing, why not cache these values when you load the image?
Explore the impact of explicitly setting the interpolation mode to NearestNeighbor (in your example it looks like interpolation is not actually needed, but 300 ms is the kind of cost high-quality interpolation incurs even when none is needed, so it's worth a try).
Another thing to explore is changing the colour depth of the bitmap.
Unfortunately, when I had a similar problem, I found that GDI+ is known to be much slower than GDI and not generally hardware accelerated; and now that Microsoft has moved on to WPF, they will not come back to improve GDI+!
All the graphics card manufacturers have moved on to 3D performance and don't seem interested in 2D acceleration, and there's no clear source of information on which functions are or can be hardware accelerated. Very frustrating, because having written an app in .NET using GDI+, I am not happy to switch to a completely different technology to reach reasonable speeds.
I don't think it will make much of a difference, but since you don't actually need to resize the image, try the overload of DrawImage that doesn't (attempt to) resize:
DrawImage(bitmap,0,0);
Like I said, I doubt it will make any difference, because I'm sure DrawImage checks the width and height of the bitmap and, if no resizing is needed, just calls this overload. (I would hope it doesn't bother going through all 12 million pixels performing no actual work.)
Update: my ponderings were wrong, as I have since found out, but a comment reminded me of my old answer: you want to specify the destination size, even though it matches the source size:
DrawImage(bitmap, 0, 0, bitmap.GetWidth(), bitmap.GetHeight());
The reason is dpi differences between the bitmap and the destination. GDI+ will perform scaling to get the image to come out at the right "size" (i.e., in inches).
What I've learned on my own since last October is that you really want to draw a "cached" version of your bitmap. There is a CachedBitmap class in GDI+. There are some tricks to using it, but in the end I have a working bit of (Delphi) code that does it.
The caveat is that the CachedBitmap can become invalid, meaning it can't be used to draw. This happens if the user changes resolution or color depth (e.g. Remote Desktop). In that case DrawCachedBitmap will fail, and you have to re-create the CachedBitmap:
class procedure TGDIPlusHelper.DrawCachedBitmap(image: TGPImage;
var cachedBitmap: TGPCachedBitmap;
Graphics: TGPGraphics; x, y: Integer; width, height: Integer);
var
b: TGPBitmap;
begin
if (image = nil) then
begin
//i've chosen to not throw exceptions during paint code - it gets very nasty
Exit;
end;
if (graphics = nil) then
begin
//i've chosen to not throw exceptions during paint code - it gets very nasty
Exit;
end;
//Check if we have to invalidate the cached image because of size mismatch
//i.e. if the user has "zoomed" the UI
if (CachedBitmap <> nil) then
begin
if (CachedBitmap.BitmapWidth <> width) or (CachedBitmap.BitmapHeight <> height) then
FreeAndNil(CachedBitmap); //nil'ing it will force it to be re-created down below
end;
//Check if we need to create the "cached" version of the bitmap
if CachedBitmap = nil then
begin
b := TGDIPlusHelper.ResizeImage(image, width, height);
try
CachedBitmap := TGPCachedBitmap.Create(b, graphics);
finally
b.Free;
end;
end;
if (graphics.DrawCachedBitmap(cachedBitmap, x, y) <> Ok) then
begin
//The calls to DrawCachedBitmap failed
//The API is telling us we have to recreate the cached bitmap
FreeAndNil(cachedBitmap);
b := TGDIPlusHelper.ResizeImage(image, width, height);
try
CachedBitmap := TGPCachedBitmap.Create(b, graphics);
finally
b.Free;
end;
graphics.DrawCachedBitmap(cachedBitmap, x, y);
end;
end;
The cachedBitmap is passed in by reference. On the first call to DrawCachedBitmap the cached version will be created. You then pass it in on subsequent calls, e.g.:
Image imgPrintInvoice = Image.FromFile("printer.png");
CachedBitmap imgPrintInvoiceCached = null;
...
int glyphSize = 16 * (GetCurrentDpi() / 96);
DrawCachedBitmap(imgPrintInvoice , ref imgPrintInvoiceCached , graphics,
0, 0, glyphSize, glyphSize);
I use the routine to draw glyphs on buttons, taking the current DPI into account. The same could have been used by the Internet Explorer team to draw images when the user is running at high DPI (IE is very slow at drawing zoomed images because it uses GDI+).
/*
First, sorry for my English; some identifiers are in Polish, but the code is simple to understand.
I had the same problem and I found the best solution. Here it is.
Don't use Graphics graphics(hdc); graphics.DrawImage(gpBitmap, 0, 0); it is slow.
Use GetHBITMAP(Gdiplus::Color(), &g_hBitmap) to get an HBITMAP and draw it using my function ShowBitmapStretch().
You can resize it and it is much faster! Artur Czekalski / Poland
*/
//--------Global-----------
Bitmap *g_pGDIBitmap; //for loading picture
int gRozXOkna, gRozYOkna; //size of working window
int gRozXObrazu, gRozYObrazu; //Size of picture X,Y
HBITMAP g_hBitmap = NULL; //for displaying on window
//------------------------------------------------------------------------------
int ShowBitmapStretch(HDC hdc, HBITMAP hBmp, int RozX, int RozY, int RozXSkal, int RozYSkal, int PozX, int PozY)
{
if (hBmp == NULL) return -1;
HDC hdc_mem = CreateCompatibleDC(hdc); //create a memory device context
if (NULL == hdc_mem) return -2;
//The bitmap must be selected into hdc_mem, i.e., placed in our memory context
if (DeleteObject(SelectObject(hdc_mem, hBmp)) == NULL) return -3;
SetStretchBltMode(hdc, COLORONCOLOR); //important! for smoothness
if (StretchBlt(hdc, PozX, PozY, RozXSkal, RozYSkal, hdc_mem, 0, 0, RozX, RozY, SRCCOPY) == 0) return -4;
if (DeleteDC(hdc_mem) == 0) return -5;
return 0; //OK
}
//---------------------------------------------------------------------------
void ClearBitmaps(void)
{
if (g_hBitmap) { DeleteObject(g_hBitmap); g_hBitmap = NULL; }
if (g_pGDIBitmap) { delete g_pGDIBitmap; g_pGDIBitmap = NULL; }
}
//---------------------------------------------------------------------------
void MyOpenFile(HWND hWnd, const WCHAR *szFileName)
{
ClearBitmaps(); //Important!
g_pGDIBitmap = new Bitmap(szFileName); //load a picture from file
if (g_pGDIBitmap == 0) return;
//---Checking if picture was loaded
gRozXObrazu = g_pGDIBitmap->GetWidth();
gRozYObrazu = g_pGDIBitmap->GetHeight();
if (gRozXObrazu == 0 || gRozYObrazu == 0) return;
//---Create the bitmap for display; DO IT ONCE HERE!
g_pGDIBitmap->GetHBITMAP(Gdiplus::Color(), &g_hBitmap); //creates a GDI bitmap from this Bitmap object
if (g_hBitmap == 0) return;
//---We need to force the window to redraw itself
InvalidateRect(hWnd, NULL, TRUE);
UpdateWindow(hWnd);
}
//---------------------------------------------------------------------------
void MyOnPaint(HDC hdc, HWND hWnd) //in case WM_PAINT; DO IT MANY TIMES
{
if (g_hBitmap)
{
double SkalaX = 1.0, SkalaY = 1.0; //scale factors
if (gRozXObrazu > gRozXOkna || gRozYObrazu > gRozYOkna || //picture too big, so shrink it
    (gRozXObrazu < gRozXOkna && gRozYObrazu < gRozYOkna)) //picture too small, so it can be enlarged
{
    SkalaX = (double)gRozXOkna / (double)gRozXObrazu; //e.g. 0.7 for shrinking
    SkalaY = (double)gRozYOkna / (double)gRozYObrazu; //e.g. 1.7 for enlarging
    if (SkalaY < SkalaX) SkalaX = SkalaY; //ALWAYS pick the stronger scaling, i.e., the smaller value, and keep it in SkalaX
}
if (ShowBitmapStretch(hdc, g_hBitmap, gRozXObrazu, gRozYObrazu, (int)(gRozXObrazu*SkalaX), (int)(gRozYObrazu*SkalaX), 0, 0) < 0) return;
    }
}
Try using a copy of the Bitmap loaded from the file. For some files, FromFile returns a "slow" image, but its copy will draw faster:
Bitmap *bitmap = Bitmap::FromFile("XXXX",...);
Bitmap *bitmap2 = new Bitmap(bitmap); // make copy
DrawImage(bitmap2,0,0,width,height);
I did some research and wasn't able to find a way to render images with GDI/GDI+ that is faster than
Graphics.DrawImage/DrawImageUnscaled
and at the same time as simple as it,
until I discovered
ImageList.Draw(GFX, Point, Index)
and yeah, it really is that fast and simple.