Thanks for the answers. Actually, I am not puzzled that drawing 1024x768 pixels is slower than drawing 100x100 pixels... that logic is simple enough.
What puzzles me is that DrawImage's interpolation algorithm can be this slow when plenty of better algorithms exist, while its decoder apparently can decode a JPG directly at a given resolution, which is really cool. I searched for some time but did not find any free library that can do this...
It is really strange!
I added the following code to my OnPaint method. c:\1.jpg is a 5 MB JPG file, about 4000x3000.
//--------------------------------------------------------------
HDC hdc = pDC->GetSafeHdc();
Bitmap* bitmap = Bitmap::FromFile(L"c:\\1.jpg", true);
Graphics graphics(hdc);
graphics.SetInterpolationMode(InterpolationModeNearestNeighbor);
graphics.DrawImage(bitmap, 0, 0, 200, 200);
The above is really fast, even real-time! I didn't think decoding a 5 MB JPG could be that fast!
//--------------------------------------------------------------
HDC hdc = pDC->GetSafeHdc();
Bitmap* bitmap = Bitmap::FromFile(L"c:\\1.jpg", true);
Graphics graphics(hdc);
graphics.SetInterpolationMode(InterpolationModeNearestNeighbor);
graphics.DrawImage(bitmap, 0, 0, 2000, 2000);
The above code becomes really slow.
//--------------------------------------------------------------
If I move bitmap = Bitmap::FromFile(L"c:\\1.jpg", true); into the constructor and leave only
Graphics graphics(hdc);
graphics.SetInterpolationMode(InterpolationModeNearestNeighbor);
graphics.DrawImage(bitmap, 0, 0, 2000, 2000);
in the OnPaint method, the code is still fairly slow.
//------------------------------------------------------------------
Compared with decoding, the DrawImage step is really slow...
Why, and how did they do that? Did Microsoft pay the people in charge of the decoder double the salary of the people writing DrawImage?
So, what you're really wondering is why
graphics.DrawImage(bitmap,0,0,200,200);
is faster than
graphics.DrawImage(bitmap,0,0,2000,2000);
Correct?
Well, the fact that you are drawing 100 times more pixels in the second case could have something to do with it.
You don't need to fully decode JPGs if you're scaling down by a factor of 8. JPG images consist of 8x8 pixel blocks, DCT-transformed. The average value of a block is the (0,0) coefficient of its DCT, so scaling down by a factor of 8 is merely a matter of throwing away all the other coefficients. Scaling down even further (e.g. 4000 -> 200) is just a matter of scaling down from 4000 to 500 this way, and then scaling normally from 500 to 200 pixels.
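Incidentally, libjpeg is a free library that exposes exactly this trick: you can ask the decoder for a 1/8-size image before decompression starts, and it then only needs the DC coefficient of each block. A minimal sketch (error handling omitted; the output format depends on the file):
#include <cstdio>
#include <vector>
#include <jpeglib.h>

std::vector<unsigned char> DecodeEighth(const char* path, int* w, int* h)
{
    jpeg_decompress_struct cinfo;
    jpeg_error_mgr jerr;
    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_decompress(&cinfo);

    FILE* f = std::fopen(path, "rb");
    jpeg_stdio_src(&cinfo, f);
    jpeg_read_header(&cinfo, TRUE);

    // Request a 1/8-scale decode: only the DC coefficient of each
    // 8x8 DCT block is needed, so most of the IDCT work is skipped.
    cinfo.scale_num = 1;
    cinfo.scale_denom = 8;
    jpeg_start_decompress(&cinfo);

    int width = cinfo.output_width;
    int height = cinfo.output_height;
    int channels = cinfo.output_components;
    *w = width;
    *h = height;
    std::vector<unsigned char> pixels((size_t)width * height * channels);
    while (cinfo.output_scanline < (unsigned)height) {
        unsigned char* row = &pixels[(size_t)cinfo.output_scanline * width * channels];
        jpeg_read_scanlines(&cinfo, &row, 1);
    }
    jpeg_finish_decompress(&cinfo);
    jpeg_destroy_decompress(&cinfo);
    std::fclose(f);
    return pixels;
}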
It could be possible that the decoding is deferred until needed. That's why it is so fast.
Maybe in the 200x200 case GDI+ only decodes enough blocks to paint 200x200, and in the 2000x2000 case it decodes more.
Graphics routines always contain some obscure optimizations; you can never know.
Maybe Reflector will tell you?
Just a guess, but could you try drawing at 4000x3000 or 2000x1500? Perhaps the fact that 4000 and 3000 are both divisible by 200 is speeding up the first case, while 3000 not being divisible by 2000 slows the second one down (although this really would be weird).
Generally, do some profiling or time measurement. If 2000x2000 is about 100 times slower than 200x200, everything is fine. And don't worry if 2000x2000 is too slow: if your screen is 1024x768 you can't see the whole image anyway, so you'd better pick the part of the image that is visible on the screen and draw only that; 1024x768 is about five times fewer pixels than 2000x2000.
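A sketch of that last suggestion with GDI+, using the DrawImage overload that takes a source rectangle (srcX/srcY are assumed to come from your scroll position; this is not the poster's actual code):
HDC hdc = pDC->GetSafeHdc();
Graphics graphics(hdc);
CRect client;
GetClientRect(&client);
// Destination: the whole client area, drawn 1:1 with no scaling.
Rect dest(0, 0, client.Width(), client.Height());
// Source: a client-sized window into the big 4000x3000 bitmap.
graphics.DrawImage(bitmap, dest,
                   srcX, srcY, client.Width(), client.Height(),
                   UnitPixel);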
I create a big image stitched out of many single microscope images.
Suddenly (after several months of working properly), the stitched overview images became blurry and contain strange structural artefacts like askew lines (not the rectangles; those are due to imperfect stitching).
If I open any particular tile at full size, it is not blurry and the artefacts are hardly observable. (Consider that the image below is already scaled 4x.)
The overview image is created manually by scaling each tile using QImage::scaled and copying all of them into the corresponding regions of the big image. I'm not using OpenCV's stitching.
I assume this happens because of the image contents, because most of the overview images are OK.
The question is: how can I prevent such hardly observable artefacts from becoming very clearly visible after scaling? Is there some means to do so in OpenCV or QImage?
Is there an algorithm to determine whether image content could lead to such an effect for a given scale factor?
Many thanks in advance!
Are you sure the camera is calibrated properly? That the lighting is uniform? Is the lens clear? Do you have electrical components that interfere with the camera connection?
If you add up image frames of a uniform material (or a non-uniform material moved randomly for a significant time), the resulting integrated image should be completely uniform.
If your produced image is not uniform, especially if you get systematic noise (like the apparent sinusoidal noise in the provided pictures), write a calibration function that transforms image -> calibrated image.
Filtering in Fourier space is another way to remove the noise, but considering that the image is rotated you will lose precision, and you'll be cutting off components of the real signal too. The following empirical method will reduce the noise in your particular case significantly:
1. ground_output: a composite image with the per-pixel sum of >10 frames (more is better) over a uniform material (e.g. an excited slab of phosphorus).
2. ground_input: the average (or sqrt(sum of px^2)) of ground_output.
3. calib_image: ground_input /(per px) ground_output. Saved for the session, or persisted in a file (important: make sure there is no lossy compression such as JPEG!).
4. work_input: the images to work on.
5. work_output = work_input *(per px) calib_image: images calibrated for systematic noise.
If you can't create a perfect ground_input target, such as having a uniform material on hand, do not worry too much. If you move any material uniformly (or randomly) for long enough, it will act as a uniform material for this purpose (think of a blurred photo).
This method has the added advantage of calibrating out the solitary faulty pixels that CCD cameras have (e.g. NormalPixel.value(signal)).
If you want to have more fun you can always fit the calibration function to something more complex than a zero-intercept line (steps 3 and 5).
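A rough sketch of steps 1-5 using OpenCV (which the question mentions); all the names here are illustrative, not from the original post, and single-channel frames are assumed:
#include <opencv2/opencv.hpp>
#include <vector>

// Steps 1-3: build the calibration image once from frames of a uniform target.
cv::Mat MakeCalibImage(const std::vector<cv::Mat>& groundFrames)
{
    cv::Mat groundOutput = cv::Mat::zeros(groundFrames[0].size(), CV_32F);
    for (const cv::Mat& frame : groundFrames) {
        cv::Mat f32;
        frame.convertTo(f32, CV_32F);
        groundOutput += f32;                            // per-pixel sum
    }
    double groundInput = cv::mean(groundOutput)[0];     // average level
    cv::Mat calibImage;
    cv::divide(groundInput, groundOutput, calibImage);  // ground_input / ground_output
    return calibImage;  // persist losslessly (e.g. PNG or TIFF), never JPEG
}

// Steps 4-5: apply it to every working image.
cv::Mat Calibrate(const cv::Mat& workInput, const cv::Mat& calibImage)
{
    cv::Mat in32, workOutput;
    workInput.convertTo(in32, CV_32F);
    cv::multiply(in32, calibImage, workOutput);         // work_input * calib_image
    return workOutput;
}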
I suggest scaling the image with some other software to verify if the artifacts are in fact caused by Qt or are inherent in the image you've captured.
The askew lines look a lot like analog TV interference, or CCTV noise induced by 50 or 60 Hz power lines running alongside the signal cable, or some other electrical interference on the signal.
If the image distortion is caused by signal interference, then you can try to mitigate it by moving the signal lines away from whatever could be the source of the problem, or fit something to filter out the noise (baluns, for example).
I've seen many Mandelbrot image generators draw a low-resolution version of the fractal and then continuously improve it. Is this a tiling algorithm? Here is an example: http://neave.com/fractal/
Update: I've found this page about recursively subdividing and calculating the Mandelbrot set: http://www.metabit.org/~rfigura/figura-fractal/math.html. Maybe it's possible to use a kd-tree to subdivide the image?
Update 2: http://randomascii.wordpress.com/2011/08/13/faster-fractals-through-algebra/
Update 3: http://www.fractalforums.com/programming/mandelbrot-exterior-optimization/15/
Author of Fractal eXtreme and the randomascii blog post linked in the question here.
Fractal eXtreme does a few things to give a gradually improving fractal image:
Start from the middle, not from the top. This is a trivial change that many early fractal programs ignored. The center should be the area the user cares the most about. This can either be starting with a center line, or spiraling out. Spiraling out has more overhead so I only use it on computationally intense images.
Do an initial low-res pass with 8x8 blocks (calculating one pixel out of 64). This gives a coarse initial view that is gradually refined at 4x4, 2x2, then 1x1 resolutions. Note that each pass does three times as many pixels as all previous passes combined -- don't recalculate the original points (see the sketch after this list). Subsequent passes also start at the center, because that is more important.
A multi-pass method lends itself well to guessing. If four pixels in two rows have the same value then the pixels in-between probably have the same value, so don't calculate them. This works extremely well on some images. A cleanup pass at the end to look for pixels that were miscalculated is necessary and usually finds a few errors, but I've never seen visible errors after the cleanup pass, and this can give a 10x+ speedup. This feature can be disabled. The success of this feature (guess percentage) can be viewed in the status window.
When zooming in (double-click to double the magnification), the previously calculated pixels can be used as a starting point, so that only three quarters of the pixels need calculating. This doesn't work when the required precision increases, but these discontinuities are rare.
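A minimal sketch of the multi-pass idea above (the center-first ordering and the guessing are left out for brevity; iterate() stands in for the real escape-time computation):
#include <vector>

void RenderProgressive(int width, int height, std::vector<int>& iters,
                       int (*iterate)(double cx, double cy))
{
    for (int step = 8; step >= 1; step /= 2) {
        for (int y = 0; y < height; y += step) {
            for (int x = 0; x < width; x += step) {
                // Skip points already computed by a coarser pass.
                if (step < 8 && x % (step * 2) == 0 && y % (step * 2) == 0)
                    continue;
                double cx = -2.5 + 3.5 * x / width;   // map the pixel to the
                double cy = -1.25 + 2.5 * y / height; // usual view rectangle
                iters[y * width + x] = iterate(cx, cy);
            }
        }
        // After each pass, the step-sized blocks can be displayed as-is,
        // giving the gradually sharpening image the question asks about.
    }
}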
More sophisticated algorithms are definitely possible. Curve following, for instance.
Having fast math also helps. The high-precision routines in FX are fully unwound assembly language (generated by C# code) that uses 64-bit multiplies.
FX also has a couple of checks for points within the two biggest bulbs, to avoid calculating them at all. It also watches for cycles in the calculations -- if the exact same point shows up again, the calculations will repeat forever, so the point must be inside the set.
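For reference, the standard membership tests for those two regions (the main cardioid and the period-2 bulb) look like the following; this is the textbook formula, not FX's actual code. Points inside never escape, so the iteration loop can be skipped entirely:
bool InsideMainCardioidOrBulb(double x, double y)
{
    // Main cardioid: q * (q + x - 1/4) <= y^2 / 4, with q = (x - 1/4)^2 + y^2.
    double q = (x - 0.25) * (x - 0.25) + y * y;
    if (q * (q + (x - 0.25)) < 0.25 * y * y)
        return true;
    // Period-2 bulb: circle of radius 1/4 centered at (-1, 0).
    return (x + 1.0) * (x + 1.0) + y * y < 0.0625;
}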
To see this in action visit http://www.cygnus-software.com/
I think that site is not as clever as you give it credit for. I think what happens on a zoom is this:
Take the previous image and scale it up using a standard interpolation method. This gives you the 'blurry' zoomed-in image. Click the zoom-in button several times to see this best.
Then, in concentric circles starting from the central point, recalculate squares of the image in full resolution for the new zoom level. This 'sharpens' the image progressively from the centre outwards. Because you're probably looking at the centre, you see the improvement straight away.
You can more clearly see what it's doing by zooming far in, then dragging the image in a diagonal direction, so that almost all the screen is undrawn. When you release the drag, you will see the image rendered progressively in squares, in concentric circles from the new centre.
I haven't checked, but I don't think it's doing anything clever to treat in-set points differently - it's just that because an entirely-in-set square will be black both before and after rerendering, you can't see a difference.
The old-school Mandelbrot rendering algorithm begins calculating pixels at the top-left position, goes right until it reaches the end of the screen, then moves to the beginning of the next line, like an ordinary typewriter (visually).
The linked algorithm just calculates pixels in a different order, and when it calculates one, it quickly makes assumptions about certain neighboring pixels and later goes back to redraw them properly. That's when you see the improvement; think of it as displaying a progressive JPEG. If you zoom into the set, certain pixel values will remain the same (they don't need to be recalculated); the interim pixels will be guessed, quickly drawn, and later recalculated.
A continuously improving Mandelbrot is just for your eyes; it will never finish earlier than a properly calculating per-pixel algorithm which can detect "islands".
I have a QImage of size 12x12 in GIF format. I want to rotate it by an arbitrary angle at a very high frequency. My application involves a robot, so when it changes its orientation (which it does very frequently), my QImage in the simulation should rotate as well, but the transformation causes loss of information. I am doing it something like this:
robot_transform.rotate(angle);
*robot2 = robot->transformed(robot_transform, Qt::SmoothTransformation);
*robot2 = robot2->scaled(12, 12, Qt::KeepAspectRatio, Qt::SmoothTransformation);
I need suggestions on what's wrong with this approach and, secondly, whether there is a more optimal approach for the desired application.
Thanks
I would increase the resolution of the source image to at least double. Rotating an image to non-90-degree angles causes loss of pixel information; a higher-res source can compensate for that.
Most sprite-based animations use pre-rendered images for each possible angle.
The problem is the scaling afterwards; you need to crop the center of the image instead. You can do this with QImage::copy.
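A sketch combining both suggestions: keep a higher-resolution source (assumed here to be a 48x48 version of the sprite), always rotate the original rather than an already-rotated image, and crop with copy() instead of scaling:
QTransform t;
t.rotate(angle);
QImage rotated = source.transformed(t, Qt::SmoothTransformation);
// transformed() grows the canvas to fit the rotated bounds, so crop the
// centre region back out with copy() instead of scaling the whole thing.
QImage centre = rotated.copy((rotated.width()  - source.width())  / 2,
                             (rotated.height() - source.height()) / 2,
                             source.width(), source.height());
// One final downscale to the 12x12 display size.
QImage sprite = centre.scaled(12, 12, Qt::KeepAspectRatio,
                              Qt::SmoothTransformation);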
I'm trying to write a simple game that has some dogs and cats moving around. Right now my dogs and cats are just red and blue rectangles, and I need to make them prettier. There are two options for me:
Using an Image.
Draw them by some graphic class.
But I'm not sure which I should choose!
If I use images, I must draw several of them (the objects are animated), but it keeps things simple.
Using graphics functions gives me more power (in case there is ever more than just a dog and a cat, OO will help me a lot), but it will make my brain hurt.
And most importantly: which is faster?
BTW I'm using Qt.
I'd most definitely use a bitmap.
Rendering a bitmap image is usually faster because it can be fully cached in memory unless it's large (which it shouldn't be, in your case). The underlying mechanics simply involve a blazingly fast memcpy (or the like) once the image has been read from disk.
Drawing a dog using vector graphics would incur a large performance overhead from function calls and math/transformations, and could end up effectively needing more memory than a bitmap.
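Since the asker is using Qt, a cached-bitmap version might look like this (the resource paths are hypothetical):
#include <QWidget>
#include <QPainter>
#include <QPaintEvent>
#include <QPixmap>

class Scene : public QWidget {
protected:
    void paintEvent(QPaintEvent*) override {
        QPainter p(this);
        // drawPixmap is a plain blit; the sprites were rasterised once at load.
        p.drawPixmap(dogPos, dogSprite);
        p.drawPixmap(catPos, catSprite);
    }
public:
    QPixmap dogSprite{":/sprites/dog.png"};  // hypothetical resource paths
    QPixmap catSprite{":/sprites/cat.png"};
    QPoint dogPos{10, 10};
    QPoint catPos{60, 40};
};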
I want to draw a DIB onto an HDC at the same size.
I am using the following, where des and src are the same size:
::StretchDIBits(hdc,
des.left,des.top,des.right - des.left,des.bottom - des.top,
src.left, GetHeight() - src.bottom, src.right - src.left,src.bottom - src.top,
m_pImg->accessPixels(),m_pImg->getInfo(), DIB_RGB_COLORS, SRCCOPY);
But I find it slow. Because des is the same size as src, I just need to copy the DIB onto the DC.
Is there any method faster than StretchDIBits?
Just as with:
StretchBlt (slow) vs. BitBlt (faster)
StretchDIBits (slow) vs. ? (faster)
The speed difference comes from any necessary color conversion, plus the generality needed to handle stretching (even if your target size is the same as your source size).
If you're drawing the image just once, then I think the function you're looking for is SetDIBitsToDevice.
If you care about the speed because you're drawing the same DIB multiple times, then you can improve performance by copying the DIB to a compatible memory DC once, and then BitBlt-ing from the memory DC to the screen (or printer) each time you need it. Use CreateCompatibleDC to create the memory DC, and then use StretchDIBits or SetDIBitsToDevice to get the image on it. After that, you can use BitBlt directly. You might also look into using a DIBSECTION, which gives a compromise in performance between a true DIB and a compatible DC.
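A sketch of that caching approach, reusing the question's m_pImg accessors (hwnd, w, and h are assumed; error handling omitted):
// One-time setup: copy the DIB into a compatible memory DC.
HDC screen = ::GetDC(hwnd);
HDC memDC = ::CreateCompatibleDC(screen);
HBITMAP hbm = ::CreateCompatibleBitmap(screen, w, h);
HBITMAP oldBmp = (HBITMAP)::SelectObject(memDC, hbm);
::SetDIBitsToDevice(memDC, 0, 0, w, h, 0, 0, 0, h,
                    m_pImg->accessPixels(), m_pImg->getInfo(), DIB_RGB_COLORS);
::ReleaseDC(hwnd, screen);

// Every paint: a plain blit, no conversion or stretching.
::BitBlt(hdc, des.left, des.top, w, h, memDC, 0, 0, SRCCOPY);

// Teardown when done.
::SelectObject(memDC, oldBmp);
::DeleteObject(hbm);
::DeleteDC(memDC);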