Shader to render QR code - qt

What is the best way (in the sense of performance and memory consumption) to represent a QR code graphically in a Qt Quick application?
I think the QR code bitmap could be rendered as a square matrix of black and white cells using a shader, which should be the performance-optimal solution.
Currently all I can come up with is a GridView with a bunch of Rectangles, which is a waste of memory to store and of CPU/GPU time to render.
What might the shader look like?
Say, given a QBitArray of n*n size.

The shader itself would be trivial: basically you divide the fragment's x and y position by the cell size and floor the results to get the row and column, compute the 1D index from those (row * size + column), and look up the QR data array at that index; if it contains a 0, the fragment color is white, if it contains a 1, the color is black.
However, QML shaders currently don't provide facilities to pass regular 1D arrays.
You would have to convert the array to a bitmap image and pass that to the shader as a texture, which means you will also have to implement an image provider in order to get a QImage to work with QML, because, amazingly, it still doesn't by default.
I wouldn't bother about performance too much, that's premature optimization, which is bad in 99% of the cases. Even a trivial, 100% QML solution is sufficiently fast:
ApplicationWindow {
    id: main
    visible: true
    width: 640
    height: 480
    color: "darkgray"

    property var qrdata: []

    MouseArea {
        anchors.fill: parent
        onClicked: {
            qrdata = []
            for (var i = 0; i < (100 * 100); ++i) qrdata.push(Math.round(Math.random()))
            code.requestPaint()
        }
    }

    Canvas {
        id: code
        width: 300
        height: 300
        onPaint: {
            console.time("p")
            var c = getContext("2d")
            c.fillStyle = Qt.rgba(1, 1, 1, 1);
            c.fillRect(0, 0, width, height)
            c.fillStyle = Qt.rgba(0, 0, 0, 1);
            var l = qrdata.length
            var step = Math.sqrt(l)
            var size = width / step
            for (var i = 0; i < l; ++i) {
                if (qrdata[i]) {
                    var rw = Math.floor(i / step), cl = i % step
                    c.fillRect(cl * size, rw * size, size, size)
                }
            }
            console.timeEnd("p")
        }
    }
}
On my system, drawing a 100 x 100 QR code takes about 2 milliseconds. IMO that's sufficiently good, and it is not really worth it to invest time into making a more complex low-level solution.
However, what I would personally do is implement an image provider, convert the qr code data into an image, then scale that image as large as I want with smooth: false which will avoid blurring and preserve a crisp result. That is by far the most direct, efficient and straightforward solution.
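A minimal sketch of that image-provider idea, assuming the QR modules arrive as a QBitArray of n*n bits; the provider id "qr" and the class name are made up for illustration:

// Sketch only: expose n*n QR bits to QML as an image that can be scaled with smooth: false.
#include <QQuickImageProvider>
#include <QBitArray>

class QrImageProvider : public QQuickImageProvider
{
public:
    QrImageProvider(const QBitArray &bits, int n)
        : QQuickImageProvider(QQuickImageProvider::Image), m_bits(bits), m_n(n) {}

    QImage requestImage(const QString &, QSize *size, const QSize &) override
    {
        QImage img(m_n, m_n, QImage::Format_RGB32);
        img.fill(Qt::white);
        for (int i = 0; i < m_bits.size(); ++i)
            if (m_bits.testBit(i))
                img.setPixel(i % m_n, i / m_n, qRgb(0, 0, 0)); // column = i % n, row = i / n
        if (size)
            *size = img.size();
        return img;
    }

private:
    QBitArray m_bits;
    int m_n;
};

// C++:  engine.addImageProvider("qr", new QrImageProvider(bits, n));
// QML:  Image { source: "image://qr/code"; width: 300; height: 300; smooth: false }

With smooth: false the scaled-up modules keep hard edges instead of being blurred by bilinear filtering.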

If you've got just one QR code in the application then save your time and do a GridView.
Other options are:
C++ custom QQuickItem: generate and load a texture (Qt SceneGraph API)
C++ custom QQuickFramebufferObject: generate and load a texture (mostly pure OpenGL API)
C++ custom QQuickPaintedItem(QPainter 2D API)
QML-JS Canvas/Context2D (HTML 2D API)
QML-JS Canvas3D/Context3D: generate and load a texture (WebGL API) - like all the C++ options above, but in the JS version of OpenGL
C++ custom QQuickImageProvider: generate and load a texture (ImageProvider and OpenGL API) while passing the whole QR data as an image name to your custom QQuickImageProvider (maybe a bit too clever)
Using vertex buffers/uniform buffers instead of textures may work, but it would need unusual shader code. A QR code fits more naturally as a texture, I think.
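As one concrete illustration of the scene-graph option above (a sketch only; it assumes the QR bits are already available as a QImage named qrImage, which this snippet does not show being filled):

// Sketch of the QQuickItem / scene graph route: upload the QR image as a texture once
// and let the GPU scale it.
#include <QQuickItem>
#include <QQuickWindow>
#include <QSGSimpleTextureNode>

class QrItem : public QQuickItem
{
public:
    QrItem() { setFlag(ItemHasContents, true); }

protected:
    QSGNode *updatePaintNode(QSGNode *oldNode, UpdatePaintNodeData *) override
    {
        QSGSimpleTextureNode *node = static_cast<QSGSimpleTextureNode *>(oldNode);
        if (!node) {
            node = new QSGSimpleTextureNode;
            QSGTexture *texture = window()->createTextureFromImage(qrImage);
            texture->setFiltering(QSGTexture::Nearest); // keep the modules crisp when scaled
            node->setOwnsTexture(true);
            node->setTexture(texture);
        }
        node->setRect(boundingRect());
        return node;
    }

private:
    QImage qrImage; // assumed to be filled with the QR modules elsewhere
};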

Related

Qt, Is there a more efficient way to crop out part of a QImage?

I am making a simple editor where the user can click on points of an image and crop out a shape. My implementation is terribly inefficient, and as I'm new to Qt, I have trouble deciphering all the functions in Qt's docs.
QPolygonF polygon(points);
std::map<std::string, int> map = pointsHandler.getOutsideVals();
for (int i = map["Left"]; i < map["Right"]; i++) {
    for (int j = map["Top"]; j < map["Bottom"]; j++) {
        for (int n = 0; n < points.size(); n++) {
            if (polygon.containsPoint(QPointF(i, j), Qt::OddEvenFill)) {
                image.setPixelColor(QPoint(i - xOffset, j - yOffset), Qt::transparent);
            }
        }
    }
}
painter.drawImage(xOffset, yOffset, image);
Currently how I'm doing it is looping through the rectangle given by the outermost points of the polygon. If a point is in the polygon, I change the pixel value to transparent. The polygon is made from the user's clicked points, of which I store the outermost values in a map. When I crop out large portions it takes far too long, and I was looking for some advice to make this more efficient. Thank you.
EDIT
I am now using setClipPath as mentioned by G.M. and have no performance issues; however, the way I found to get the job done now seems like a waste of memory. Using setClipPath(...), the best workaround I found was to make multiple Qt class objects on the stack. It works great, it just seems like I'm working around too much stuff. Here's the updated code.
QPolygon clipPolygon = QPolygonF(points).toPolygon();
QRegion clippedRegion(clipPolygon, Qt::OddEvenFill);
QRect translatedImageRect = image.rect().translated(QPoint(xOffset, yOffset));
QRegion unClippedRegion = QRegion(translatedImageRect).subtracted(clippedRegion);
painter.save();
painter.setClipRegion(unClippedRegion, Qt::ReplaceClip);
painter.drawImage(xOffset, yOffset, image);
painter.restore();
It works great, just feel like I'm wasting memory.
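For comparison, the same clipping can also be expressed with a single QPainterPath instead of the QRegion objects; this is only a sketch built from the names in the question (points, image, xOffset, yOffset), not code from the original post:

// Sketch only: clip to "everything except the clicked polygon" with a QPainterPath.
QPainterPath outside;
outside.addRect(QRectF(image.rect().translated(xOffset, yOffset)));

QPainterPath shape;
shape.addPolygon(QPolygonF(points));
shape.closeSubpath();

painter.save();
painter.setClipPath(outside.subtracted(shape)); // everything but the polygon stays paintable
painter.drawImage(xOffset, yOffset, image);
painter.restore();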
You can use QPainter to make a rectangle of your image transparent.
QImage image("/home/tim/Bilder/Example.png");
QPainter painter(&image);
painter.setCompositionMode(QPainter::CompositionMode_Source);
painter.fillRect(0, 0, 10, 10, Qt::transparent);
painter.end();
image.save("/home/tim/Bilder/changed.png", "PNG");

Performance Issue while using QGLWidget with Qt5

I'm trying to develop an application which will be used for the visualization of 3D objects and their simulations. In it I have to draw 'n' objects (a triangle, rectangle, or some other non-convex polygon) with individual color shades. For this I'm using QGLWidget in Qt5 (OS: Windows 7/8/10).
Structure used for populating the object information:
typedef struct {
    QList<float> r, g, b;
    QList<double> x, y, z;
} objectData;
The number of objects and their corresponding coordinate values will be read from a file.
paintGL function:
void paintGL() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(25, GLWidget::width()/(float)GLWidget::height(), 0.1, 100);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0,0,5, 0,0,0, 0,1,0);
    glRotatef(140, 0.0, 0.0, 1.0);
    glRotatef(95, 0.0, 1.0, 0.0);
    glRotatef(50, 1.0, 0.0, 0.0);
    glTranslated(-1.0, 0.0, -0.6);
    drawObjects(objData, 1000);
}
Drawing of Objects Function:
void drawObjects(objectData objData, int objCnt) {
    glPushMatrix();
    glBegin(GL_POLYGON);
    for (int i = 0; i < objCnt; i++) {
        glColor3f(objData.r[i], objData.g[i], objData.b[i]);
        glVertex3d(objData.x[i], objData.y[i], objData.z[i]);
    }
    glEnd();
    glFlush();
    glPopMatrix();
}
Issue:
Now, when the number of objects to be drawn exceeds a certain maximum value (for example say n = 5000), the application speed gradually decreases. I'm unable to use QThread since it already inherits QGLWidget.
Please suggest how to improve the performance of the application when the object count is high. I don't know where I'm making a mistake.
Screenshot of that sample:
[Sample image showing the objects in mesh view]
You are using the fixed pipeline instead of the programmable one, where you tell each stage of the rendering process what should be done, and nothing more. There are other noticeable differences that I encourage you to research (search for "modern OpenGL", which will lead you to OpenGL 3.3 and above).
The old fixed pipeline is terribly inefficient here, because the CPU has to talk to the graphics card for every piece of geometry while rendering. By contrast, the modern programmable pipeline allows you to push the model data into VRAM once, from where it is accessed directly during rendering (very fast memory accesses).
You also get rid of the generic ways of "doing stuff", which are mechanically slower than customized ones.
Also, I encourage you to use QOpenGLWidget instead of the former QGLWidget class. As mentioned in http://doc.qt.io/qt-5/qglwidget.html, this class is obsolete.
Modern OpenGL quick start:
http://www.opengl-tutorial.org/
So, you are not doing anything "wrong". You are just not using the current technology. Have fun!
You are using OpenGL's immediate mode, which is very slow for large numbers of vertices and should almost never be used. Use retained mode instead. See this answer for more detail: https://stackoverflow.com/a/6734071
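To make "retained mode" concrete, here is a rough sketch (not from the answer above) of uploading the per-vertex data into a vertex buffer object once and then drawing from it each frame; vertexCount, vertices and colors are placeholders for the OP's data, and the buffer functions need OpenGL 1.5+ or an extension loader on Windows:

// One-time setup: copy vertex/color data into GPU memory (VBOs).
GLuint vbo[2];
glGenBuffers(2, vbo);

glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
glBufferData(GL_ARRAY_BUFFER, vertexCount * 3 * sizeof(GLfloat), vertices, GL_STATIC_DRAW);

glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);
glBufferData(GL_ARRAY_BUFFER, vertexCount * 3 * sizeof(GLfloat), colors, GL_STATIC_DRAW);

// Per frame: no per-vertex CPU work, just point the fixed-function arrays at the VBOs.
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
glVertexPointer(3, GL_FLOAT, 0, nullptr);   // nullptr = offset 0 into the bound buffer
glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);
glColorPointer(3, GL_FLOAT, 0, nullptr);
glDrawArrays(GL_TRIANGLES, 0, vertexCount);
glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, 0);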
Thank you @dave and @Zedka9. It works fine for me now that I've switched away from immediate mode in OpenGL. I have modified the drawObjects function like this:
Drawing of Objects Function:
After organizing and copying the vertices and colors into these buffers:
GLfloat vertices[1024*1024], colors[1024*1024];
int vertArrayCnt; // number of vertices
void drawObjects(void) {
    glEnableClientState(GL_COLOR_ARRAY);
    glEnableClientState(GL_VERTEX_ARRAY);
    glColorPointer(3, GL_FLOAT, 0, colors);
    glVertexPointer(3, GL_FLOAT, 0, vertices);
    glPushMatrix();
    glDrawArrays(GL_TRIANGLES, 0, vertArrayCnt);
    glPopMatrix();
    glDisableClientState(GL_VERTEX_ARRAY); // disable vertex arrays
    glDisableClientState(GL_COLOR_ARRAY);
}

Usage of Map and Translate Functions in Processing

New to Processing, working on understanding this code:
import com.onformative.leap.LeapMotionP5;
import java.util.*;

LeapMotionP5 leap;
LinkedList<Integer> values;

public void setup() {
    size(800, 300);
    frameRate(120); // Specifies the number of frames to be displayed every second
    leap = new LeapMotionP5(this);
    values = new LinkedList<Integer>();
    stroke(255);
}

int lastY = 0;

public void draw() {
    translate(0, 180); // (x, y, z)
    background(0);
    if (values.size() >= width) {
        values.removeFirst();
    }
    values.add((int) leap.getVelocity(leap.getHand(0)).y);
    System.out.println((int) leap.getVelocity(leap.getHand(0)).y);
    int counter = 0;
    for (Integer val : values) {
        val = (int) map(val, 0, 1500, 0, height);
        line(counter, val, counter - 1, lastY);
        point(counter, val);
        lastY = val;
        counter++;
    }
    line(0, map(1300, 0, 1500, 0, height), width, map(1300, 0, 1500, 0, height)); // (x1, y1, x2, y2)
}
It basically draws a graph of movement detected on the y axis using the Leap Motion sensor. Output looks like this:
I eventually need to do something similar to this that would detect amplitude instead of velocity, simultaneously on all 3 axes instead of just the y.
The use of map and translate is what's really confusing me. I've read the definitions of these functions on the Processing website, so I know what they are and the syntax, but what I don't understand is the why (which is arguably the most important part).
I am asking if someone can provide simple examples that explain the WHY behind using these two functions. For instance: given a program that needs to do A, B, and C, with data foo, y, and x, you would use map or translate because of A, B, and C.
I think programming guides often overlook this, but to me it is very important for truly understanding a function.
Bonus points for explaining:
for (Integer val : values) and LinkedList<Integer> values; (can't find any documentation on the Processing website for these)
Thanks!
First, we'll do the easiest one. LinkedList is a data structure similar to ArrayList, which you may be more familiar with. If not, then it's just a list of values (of the type between the angle braces, in this case integer) that you can insert and remove from. It's a bit complicated on the inside, but if it doesn't appear in the Processing documentation, it's a safe bet that it's built into Java itself (java documentation).
This line:
for (Integer val : values)
is called a "for-each" or "foreach" loop, which has plenty of very good explanation on the internet, but I'll give a brief explanation here. If you have some list (perhaps a LinkedList, perhaps an ArrayList, whatever) and want to do something with all the elements, you might do something like this:
for(int i = 0; i < values.size(); i++){
println(values.get(i)); //or whatever
println(values.get(i) * 2);
println(pow(values.get(i),3) - 2*pow(values.get(i),2) + values.get(i));
}
If you're doing a lot of manipulation with each element, it quickly gets tedious to write out values.get(i) each time. The solution would be to capture values.get(i) into some variable at the start of the loop and use that everywhere instead. However, this is not 100% elegant, so java has a built-in way to do this, which is the for-each loop. The code
for (Integer val : values){
//use val
}
is equivalent to
for(int i = 0; i < values.size(); i++){
int val = values.get(i);
//use val
}
Hopefully that makes sense.
map() takes a number in one linear system and maps it onto another linear system. Imagine if I were an evil professor and wanted to give students random grades from 0 to 100. I have a function that returns a random decimal between 0 and 1, so I can now do map(rand(),0,1,0,100); and it will convert the number for me! In this example, you could also just multiply by 100 and get the same result, but it is usually not so trivial. In this case, you have a sensor reading between 0 and 1500, but if you just plotted that value directly, sometimes it would go off the screen! So you have to scale it to an appropriate scale, which is what that does. 1500 is the max that the reading can be, and presumably we want the maximum graphing height to be at the edge of the screen.
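If it helps to see it as code, map() boils down to one line of linear interpolation; this is a hypothetical C++ re-implementation of that formula (not Processing's actual source):

// mapValue(value, inMin, inMax, outMin, outMax):
// scale value's position within [inMin, inMax] to the same relative position within [outMin, outMax].
float mapValue(float value, float inMin, float inMax, float outMin, float outMax)
{
    return outMin + (value - inMin) * (outMax - outMin) / (inMax - inMin);
}
// e.g. mapValue(750, 0, 1500, 0, 300) == 150: halfway through the input range maps to halfway through the output range.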
I'm not familiar with your setup, but it looks like the readings can be negative, which means that they might get graphed off the screen, too. The better solution would be to map the readings from -1500,1500 to 0,height, but it looks like they chose to do it a different way. Whenever you call a drawing function in Processing (e.g. point(x,y)), it draws the pixels at (x,y) offset from (0,0). Sometimes you don't want to draw relative to (0,0), so the translate() function allows you to change what things are drawn relative to. In this case, translating allows you to plot some point (x,0) somewhere in the middle of the screen, rather than on the edge.
Hope that helps!

Raw data to QImage

I'm new to graphics programming (pixels, images, etc..)
I'm trying to convert raw data to a QImage and display it on a QLabel. The problem is that the raw data can be any data (it's not actually image raw data, it's a binary file).
The reason for this is to understand deeply how pixels and things like that work. I know I'll get a random image with weird results, but it will work.
I'm doing something like this, but I think I'm doing it wrong!
QImage *img = new QImage(640, 480, QImage::Format_RGB16); // 640x480 size picture
// here I'm trying to fill the newly created QImage with random pixels and display it
for (int i = 0; i < 640; i++)
{
    for (int u = 0; u < 480; u++)
    {
        img->setPixel(i, u, rawData[i]);
    }
}
ui->label->setPixmap(QPixmap::fromImage(*img));
Am I doing it correctly? By the way, can you point me to where I should learn these things? Thank you!
Overall it's correct. QImage is a class that allows you to manipulate its own data directly, but you should use the correct pixel format.
A bit more efficient example:
QImage* img = new QImage(640, 480, QImage::Format_RGB16);
for (int y = 0; y < img->height(); y++)
{
    memcpy(img->scanLine(y), rawData[y], img->bytesPerLine());
}
Where rawData is a two-dimensional array.
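A variant of the same idea as a sketch, assuming rawData is instead one flat buffer of arbitrary binary data holding at least height * bytesPerLine bytes, and tying it back to the QLabel from the question:

// Fill a 640x480 RGB32 image row by row from a flat byte buffer and show it.
QImage img(640, 480, QImage::Format_RGB32);
const uchar *src = reinterpret_cast<const uchar *>(rawData);
for (int y = 0; y < img.height(); y++)
    memcpy(img.scanLine(y), src + y * img.bytesPerLine(), img.bytesPerLine());
ui->label->setPixmap(QPixmap::fromImage(img)); // a stack-allocated QImage avoids the naked pointer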
This is how I saved a raw BGRA frame to the disk:
QImage image((const unsigned char*)pixels, width, height, QImage::Format_RGB32);
image.save("out.jpg");
Syntactically, your code appears to be correct.
Reading the class signature, you may want to call setPixel in the following manner:
img->setPixel(i, u, QRgb(0xAARRGGBB));
where 0xAARRGGBB is a color quadruplet, unless, of course, you want monochrome 8-bit support.
Additionally, declaring a naked pointer is dangerous. The following code is equivalent:
QImage image(640, 480, QImage::Format_something);
QPixmap::fromImage(image);
And will deallocate appropriately upon function completion.
The Qt Examples directory is a great place to search for functionality. Also, peruse the class documentation, because it is littered with examples.

GDI+ Graphics::DrawImage is really slow

I am using a GDI+ Graphics object to draw a 4000*3000 image to the screen, but it is really slow: it takes about 300 ms. I wish it took less than 10 ms.
Bitmap *bitmap = Bitmap::FromFile("XXXX",...);
//--------------------------------------------
// this part takes about 300ms, terrible!
int width = bitmap->GetWidth();
int height = bitmap->GetHeight();
DrawImage(bitmap,0,0,width,height);
//------------------------------------------
I cannot use CachedBitmap, because I want to edit the bitmap later.
How can I improve it? Or is any thing wrong?
This native GDI function also draws the image to the screen, and it just takes 1 ms:
SetStretchBltMode(hDC, COLORONCOLOR);
StretchDIBits(hDC, rcDest.left, rcDest.top,
              rcDest.right - rcDest.left, rcDest.bottom - rcDest.top,
              0, 0, width, height,
              dib /* BYTE* pixel bits */, dibinfo, DIB_RGB_COLORS, SRCCOPY);
//--------------------------------------------------------------
If I want to use StretchDIBits, I need to pass a BITMAPINFO, but how can I get a BITMAPINFO from a GDI+ Bitmap object? I did an experiment with the FreeImage lib: I called StretchDIBits using a FreeImagePlus object and it draws really fast. But now I need to draw a Bitmap and run some algorithm on the Bitmap's bits array, so how can I get a BITMAPINFO if I have a Bitmap object? It's really annoying -___________-|
If you're using GDI+, the TextureBrush class is what you need for rendering images fast. I've written a couple of 2d games with it, getting around 30 FPS or so.
I've never written .NET code in C++, so here's a C#-ish example:
Bitmap bmp = new Bitmap(...);
TextureBrush myBrush = new TextureBrush(bmp);

private void Paint(object sender, PaintEventArgs e)
{
    // Don't draw the bitmap directly.
    // Only draw the TextureBrush inside the Paint event.
    e.Graphics.FillRectangle(myBrush, ...);
}
You have a screen of 4000 x 3000 resolution? Wow!
If not, you should draw only the visible part of the image, it would be much faster...
[EDIT after first comment] My remark is indeed a bit stupid, I suppose DrawImage will mask/skip unneeded pixels.
After your edit (showing StretchDIBits), I guess a possible source of speed difference might come from the fact that StretchDIBits is hardware accelerated ("If the driver cannot support the JPEG or PNG file image" is a hint...) while DrawImage might be (I have no proof for that!) coded in C, relying on CPU power instead of GPU's one...
If I recall correctly, DIB images are fast (despite being "device independent"). See High Speed Win32 Animation: "use CreateDIBSection to do high speed animation". OK, it applies to DIB vs. GDI, in old Windows version (1996!) but I think it is still true.
[EDIT] Maybe Bitmap::GetHBITMAP function might help you to use StretchDIBits (not tested...).
Just a thought; instead of retrieving the width and height of the image before drawing, why not cache these values when you load the image?
Explore the impact of explicitly setting the interpolation mode to NearestNeighbor (in your example, it looks like interpolation is not actually needed! But 300 ms is the kind of cost high-quality interpolation incurs when no interpolation is needed, so it's worth a try).
Another thing to explore is changing the colour depth of the bitmap.
Unfortunately, when I had a similar problem, I found that GDI+ is known to be much slower than GDI and not generally hardware accelerated, and now that Microsoft has moved on to WPF they will not come back to improve GDI+!
All the graphics card manufacturers have moved on to 3D performance and don't seem interested in 2D acceleration, and there's no clear source of information on which functions are or can be hardware accelerated. Very frustrating, because having written an app in .NET using GDI+, I am not happy to change to a completely different technology to speed it up to reasonable levels.
i don't think it'll make much of a difference, but since you don't actually need to resize the image, try using the overload of DrawImage that doesn't (attempt to) resize:
DrawImage(bitmap,0,0);
Like i said, i doubt it will make any difference, because i'm sure that DrawImage checks the Width and Height of the bitmap, and if there's no resizing needed, just calls this overload. (i would hope it doesn't bother going through all 12 million pixels performing no actual work).
Update: My ponderings are wrong. i had since found this out, but a comment reminded me of my old answer: you want to specify the destination size, even though it matches the source size:
DrawImage(bitmap, 0, 0, bitmap->GetWidth(), bitmap->GetHeight());
The reason is dpi differences between the dpi of the bitmap and the dpi of the destination. GDI+ will perform scaling to get the image to come out at the right "size" (i.e. in inches).
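If you want to check whether that DPI mismatch is what you are hitting, GDI+ can report both resolutions; this is only a sketch, assuming the bitmap pointer from the question and a Graphics object named graphics:

// If the image resolution and the Graphics resolution differ, DrawImage(bitmap, 0, 0)
// rescales to keep the physical size constant, which is where the hidden interpolation cost comes from.
REAL imageDpiX   = bitmap->GetHorizontalResolution();
REAL imageDpiY   = bitmap->GetVerticalResolution();
REAL displayDpiX = graphics.GetDpiX();
REAL displayDpiY = graphics.GetDpiY();
// Passing an explicit destination rectangle in pixels sidesteps the DPI conversion:
graphics.DrawImage(bitmap, Gdiplus::Rect(0, 0, (INT)bitmap->GetWidth(), (INT)bitmap->GetHeight()));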
What i've learned on my own since last October is that you really want to draw a "cached" version of your bitmap. There is a CachedBitmap class in GDI+. There are some tricks to using it, but in the end i have a functioning bit of (Delphi) code that does it.
The caveat is that the CachedBitmap can become invalid, meaning it can't be used to draw. This happens if the user changes resolutions or color depths (e.g. Remote Desktop). In that case the draw will fail, and you have to re-create the CachedBitmap:
class procedure TGDIPlusHelper.DrawCachedBitmap(image: TGPImage;
      var cachedBitmap: TGPCachedBitmap;
      Graphics: TGPGraphics; x, y: Integer; width, height: Integer);
var
   b: TGPBitmap;
begin
   if (image = nil) then
   begin
      //i've chosen to not throw exceptions during paint code - it gets very nasty
      Exit;
   end;

   if (graphics = nil) then
   begin
      //i've chosen to not throw exceptions during paint code - it gets very nasty
      Exit;
   end;

   //Check if we have to invalidate the cached image because of size mismatch
   //i.e. if the user has "zoomed" the UI
   if (CachedBitmap <> nil) then
   begin
      if (CachedBitmap.BitmapWidth <> width) or (CachedBitmap.BitmapHeight <> height) then
         FreeAndNil(CachedBitmap); //nil'ing it will force it to be re-created down below
   end;

   //Check if we need to create the "cached" version of the bitmap
   if CachedBitmap = nil then
   begin
      b := TGDIPlusHelper.ResizeImage(image, width, height);
      try
         CachedBitmap := TGPCachedBitmap.Create(b, graphics);
      finally
         b.Free;
      end;
   end;

   if (graphics.DrawCachedBitmap(cachedBitmap, x, y) <> Ok) then
   begin
      //The calls to DrawCachedBitmap failed
      //The API is telling us we have to recreate the cached bitmap
      FreeAndNil(cachedBitmap);
      b := TGDIPlusHelper.ResizeImage(image, width, height);
      try
         CachedBitmap := TGPCachedBitmap.Create(b, graphics);
      finally
         b.Free;
      end;
      graphics.DrawCachedBitmap(cachedBitmap, x, y);
   end;
end;
The cachedBitmap is passed in by reference. On the first call to DrawCachedBitmap the cached version will be created; you then pass it in on subsequent calls, e.g.:
Image imgPrintInvoice = new Image.FromFile("printer.png");
CachedBitmap imgPrintInvoiceCached = null;
...
int glyphSize = 16 * (GetCurrentDpi() / 96);
DrawCachedBitmap(imgPrintInvoice , ref imgPrintInvoiceCached , graphics,
0, 0, glyphSize, glyphSize);
i use the routine to draw glyphs on buttons, taking into account the current DPI. The same could have been used by the Internet Explorer team to draw images when the user is running high dpi (IE is very slow drawing zoomed images, because it uses GDI+).
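For anyone following along in C++ rather than Delphi, a rough sketch of the same recreate-on-failure pattern with the GDI+ C++ classes (without the resizing step) might look like this; it is only an illustration, not a drop-in translation of the helper above:

// Keep a CachedBitmap alongside the source Bitmap; rebuild it whenever drawing fails
// (e.g. after a display mode change), as the Delphi helper above does.
void DrawCached(Gdiplus::Graphics &g, Gdiplus::Bitmap &source,
                Gdiplus::CachedBitmap *&cached, int x, int y)
{
    if (cached == nullptr)
        cached = new Gdiplus::CachedBitmap(&source, &g);

    if (g.DrawCachedBitmap(cached, x, y) != Gdiplus::Ok)
    {
        delete cached;                                   // cache no longer valid
        cached = new Gdiplus::CachedBitmap(&source, &g); // re-create against the current Graphics
        g.DrawCachedBitmap(cached, x, y);
    }
}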
/*
First, sorry for my English; the code is partly in Polish, but it's simple to understand.
I had the same problem and I found the best solution. Here it is.
Don't use: Graphics graphics(hdc); graphics.DrawImage(gpBitmap, 0, 0); It is slow.
Use: GetHBITMAP(Gdiplus::Color(), &g_hBitmap) for an HBITMAP and draw using my function ShowBitmapStretch().
You can resize it and it is much faster! Artur Czekalski / Poland
*/
//--------Global-----------
Bitmap *g_pGDIBitmap;           //for loading the picture
int gRozXOkna, gRozYOkna;       //size of the working window
int gRozXObrazu, gRozYObrazu;   //size of the picture X,Y
HBITMAP g_hBitmap = NULL;       //for displaying in the window
//------------------------------------------------------------------------------
int ShowBitmapStretch(HDC hdc, HBITMAP hBmp, int RozX, int RozY, int RozXSkal, int RozYSkal, int PozX, int PozY)
{
  if (hBmp == NULL) return -1;
  HDC hdc_mem = CreateCompatibleDC(hdc); //create a memory device context
  if (NULL == hdc_mem) return -2;
  //The BMP has to be attached to hdc_mem, i.e. the bitmap is selected into our memory context
  if (DeleteObject(SelectObject(hdc_mem, hBmp)) == NULL) return -3;
  SetStretchBltMode(hdc, COLORONCOLOR); //important! for smoothness
  if (StretchBlt(hdc, PozX, PozY, RozXSkal, RozYSkal, hdc_mem, 0, 0, RozX, RozY, SRCCOPY) == 0) return -4;
  if (DeleteDC(hdc_mem) == 0) return -5;
  return 0; //OK
}
//---------------------------------------------------------------------------
void ClearBitmaps(void)
{
  if (g_hBitmap) { DeleteObject(g_hBitmap); g_hBitmap = NULL; }
  if (g_pGDIBitmap) { delete g_pGDIBitmap; g_pGDIBitmap = NULL; }
}
//---------------------------------------------------------------------------
void MyOpenFile(HWND hWnd, const WCHAR *szFileName)
{
  ClearBitmaps(); //Important!
  g_pGDIBitmap = new Bitmap(szFileName); //load a picture from file
  if (g_pGDIBitmap == 0) return;
  //---Check if the picture was loaded
  gRozXObrazu = g_pGDIBitmap->GetWidth();
  gRozYObrazu = g_pGDIBitmap->GetHeight();
  if (gRozXObrazu == 0 || gRozYObrazu == 0) return;
  //---Create the bitmap used for displaying; DO IT ONCE HERE!
  g_pGDIBitmap->GetHBITMAP(Gdiplus::Color(), &g_hBitmap); //creates a GDI bitmap from this Bitmap object
  if (g_hBitmap == 0) return;
  //---We need to force the window to redraw itself
  InvalidateRect(hWnd, NULL, TRUE);
  UpdateWindow(hWnd);
}
//---------------------------------------------------------------------------
void MyOnPaint(HDC hdc, HWND hWnd) //in case of WM_PAINT; DO IT MANY TIMES
{
  if (g_hBitmap)
  {
    double SkalaX = 1.0, SkalaY = 1.0; //scale
    if (gRozXObrazu > gRozXOkna || gRozYObrazu > gRozYOkna ||  //picture too big, so shrink it
        (gRozXObrazu < gRozXOkna && gRozYObrazu < gRozYOkna))  //picture too small, it can be enlarged
    {
      SkalaX = (double)gRozXOkna / (double)gRozXObrazu; //e.g. 0.7 for shrinking
      SkalaY = (double)gRozYOkna / (double)gRozYObrazu; //e.g. 1.7 for enlarging
      if (SkalaY < SkalaX) SkalaX = SkalaY; //ALWAYS pick the stronger scaling, i.e. the smaller value, and store it in SkalaX
    }
    if (ShowBitmapStretch(hdc, g_hBitmap, gRozXObrazu, gRozYObrazu, (int)(gRozXObrazu*SkalaX), (int)(gRozYObrazu*SkalaX), 0, 0) < 0) return;
  }
}
Try using a copy of the Bitmap loaded from the file. On some files the FromFile function returns a "slow" image, but its copy will draw faster.
Bitmap *bitmap = Bitmap::FromFile("XXXX",...);
Bitmap *bitmap2 = bitmap->Clone(0, 0, bitmap->GetWidth(), bitmap->GetHeight(), bitmap->GetPixelFormat()); // make a copy (GDI+ C++ has no Bitmap(Bitmap*) constructor)
DrawImage(bitmap2,0,0,width,height);
I did some research and wasn't able to find a way to render images with GDI/GDI+ faster than
Graphics.DrawImage/DrawImageUnscaled
while staying as simple as they are.
Until I discovered
ImageList.Draw(GFX, Point, Index)
and yeah, it's really fast and simple.
