Alternative to GDI+

I like GDI+ because of its high performance, and because it's included with Windows XP. However, its Blur and Effect classes are only available in GDI+ 1.1, which only ships with Windows Vista or later. Even though Microsoft plans to drop support for Windows XP soon, a large percentage of users are still sticking with XP, and if you make any consumer-targeted software, you have to support it. Unfortunately, GDI+ 1.1 is not redistributable under XP.
I tried a couple of open-source image libraries. However, when it comes to performance (for example, the Gaussian blur operation), they are significantly slower than GDI+.
Can anyone recommend a better alternative to GDI+ with XP support?

I'm surprised that you find the GDI+ blur appealing based on its performance.
Note that unlike its GDI predecessor, GDI+ is not hardware accelerated (it renders on the CPU) - see this article for some details of GDI/GDI+ on XP, Vista, and Windows 7, including some basic rendering benchmarks comparing the two.
As Abdias Software mentions, the WPF BlurEffect is a good solution, as it uses DirectX for rendering.
The other option for high-performance Gaussian blurring is to implement a GPU-based blur (via a shader in some GPU-accelerated API, e.g. OpenGL/GLSL, DirectX, or Direct2D). For example: http://callumhay.blogspot.com/2010/09/gaussian-blur-shader-glsl.html

GDI+ is already filed under legacy graphics.
The alternative MS embraces now is Windows Presentation Foundation, or WPF for short. It is also available under XP and has better performance than GDI+.
Or, as we did in the old days, write the code from scratch (it isn't for everyone, though). Alternatively, you can manipulate the buffers directly by locking the bitmap and walking the byte array to apply convolutions or averaging (as used in blurring); see the sketch below.
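To make the buffer-manipulation idea concrete, here is a minimal sketch in C++ using GDI+'s LockBits. It applies a single horizontal box-average pass (a crude building block of a blur; a real blur would add a vertical pass and possibly repeat). It assumes a 32bpp ARGB bitmap and omits error handling.

    #include <windows.h>
    #include <gdiplus.h>
    #include <vector>
    using namespace Gdiplus;

    // One horizontal box-average pass over a 32bpp ARGB bitmap.
    void BoxBlurRows(Bitmap& bmp, int radius)
    {
        Rect rect(0, 0, (INT)bmp.GetWidth(), (INT)bmp.GetHeight());
        BitmapData data;
        bmp.LockBits(&rect, ImageLockModeRead | ImageLockModeWrite,
                     PixelFormat32bppARGB, &data);

        const int w = (int)data.Width, h = (int)data.Height;
        for (int y = 0; y < h; ++y)
        {
            BYTE* row = (BYTE*)data.Scan0 + y * data.Stride;
            std::vector<BYTE> src(row, row + w * 4);  // unblurred copy of the row
            for (int x = 0; x < w; ++x)
            {
                int sum[3] = {0, 0, 0}, n = 0;
                for (int dx = -radius; dx <= radius; ++dx)
                {
                    int sx = x + dx;
                    if (sx < 0 || sx >= w) continue;
                    const BYTE* p = &src[sx * 4];     // layout: B, G, R, A
                    sum[0] += p[0]; sum[1] += p[1]; sum[2] += p[2];
                    ++n;
                }
                BYTE* q = row + x * 4;
                q[0] = (BYTE)(sum[0] / n);
                q[1] = (BYTE)(sum[1] / n);
                q[2] = (BYTE)(sum[2] / n);            // alpha is left untouched
            }
        }
        bmp.UnlockBits(&data);
    }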
As a note: GDI+ does support convolutions through its Matrix class.
There is also DirectX, which is lower-level and higher-performing.
Personally I like and prefer GDI+, and I use buffer manipulation where needed. I am not worried that either it or XP will go away anytime soon, even when MS drops support.

Related

Resizing images (JPEG or decompressed image)

In my last question I asked whether there was a better way to rotate images than I had thought of. I ended up discovering jpegtran and have since found libjpeg-turbo.
Now I am looking for a better way to resize the images (JPEGs) than ImageMagick and GraphicsMagick.
Is there a specialized command-line tool that resizes images more efficiently than ImageMagick or GraphicsMagick? Maybe the resizing can be done on the GPU using OpenCL or OpenGL?
The provided hardware is the same as in the other post:
Intel Atom D525 (1.8 GHz)
Mobility Radeon HD 5430 Series
4 GB of RAM
SSD Vertility 3
Check out this link: http://leocharre.com/articles/faster-image-resizing-in-linux/
In particular, the author mentions that imgresize is faster than ImageMagick and that epeg is extremely fast.
epeg (http://www.systhread.net/texts/200507epeg1.php) seems quite well documented for generating thumbnails. If the quality is good enough, this could be the solution; a sketch of its API follows below.
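For reference, a thumbnailing pass with epeg's C API looks roughly like the sketch below. The function names are taken from memory of the Enlightenment epeg headers, so treat them as an assumption and check against your installed Epeg.h.

    #include <Epeg.h>

    /* Hedged sketch: downscale in.jpg to a 200x150 thumbnail. epeg is fast
       because it decodes at reduced size using libjpeg's DCT scaling. */
    int main(void)
    {
        Epeg_Image *im = epeg_file_open("in.jpg");
        if (!im) return 1;
        epeg_decode_size_set(im, 200, 150);   /* target decode size */
        epeg_quality_set(im, 85);             /* output JPEG quality */
        epeg_file_output_set(im, "thumb.jpg");
        epeg_encode(im);
        epeg_close(im);
        return 0;
    }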
OpenCL is a standard for cross-platform, parallel programming of modern processors found in personal computers, servers, and handheld/embedded devices. It's directly supported by ATI. You'll need the AMD APP SDK (formerly known as the AMD Stream SDK) to get GPU support (also check out this getting-started guide). A kernel sketch of a GPU-side resize follows below.
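As an illustration of the GPU route, an OpenCL kernel can lean on the texture unit's hardware bilinear filtering to do the resampling. Something along these lines (illustrative only; the host-side image setup and enqueue are omitted):

    // Illustrative OpenCL kernel, held as a C++ string: bilinear resize
    // via the sampler's hardware linear filtering.
    static const char* kResizeSrc = R"CLC(
    __constant sampler_t smp = CLK_NORMALIZED_COORDS_TRUE |
                               CLK_ADDRESS_CLAMP_TO_EDGE |
                               CLK_FILTER_LINEAR;
    __kernel void resize(__read_only image2d_t src,
                         __write_only image2d_t dst,
                         int dst_w, int dst_h)
    {
        int x = get_global_id(0), y = get_global_id(1);
        if (x >= dst_w || y >= dst_h) return;
        /* Sample the source at the normalized centre of this output pixel. */
        float2 uv = (float2)((x + 0.5f) / dst_w, (y + 0.5f) / dst_h);
        write_imagef(dst, (int2)(x, y), read_imagef(src, smp, uv));
    }
    )CLC";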
Take a look at Intel's IPP - Integrated Performance Primitives. It's a multi-threaded software library of functions for multimedia and data processing applications. Among other features, it has functions to resize images (bilinear, nearest neighbor, etc.). Unfortunately, it is not free (the cheapest version costs $199).
VIPS is a free image processing system. It claims that, compared to most image processing libraries, VIPS needs little memory and runs quickly, especially on machines with more than one CPU. See the Speed and Memory Use page for a simple benchmark against other similar systems.
You can actually do a lot of bulk processing like this with GIMP's CLI options.
http://www.gimp.org/tutorials/Basic_Batch/
There are also djpeg and cjpeg from the Independent JPEG Group, which can rescale an image to an M/N fraction. Not perfect, but very fast.
Simply use FFmpeg. It can resize, convert, change quality, and so on.
It also works with almost all known types of videos/audios/pictures.
It works on Linux/Unix too, and its source code (written in C) is open.
You can get it here (Windows/compiled exe) or here (source code and so on).
If you are developing a program, I recommend using the standard GDI+ library.
It does everything with pictures.

Radeon HD 4850 and OpenCL: will cl_khr_fp64 work on this video card?

This video card (Radeon HD 4850) conforms only to OpenCL 1.0, per AMD's compatibility table. I need some hardware to conduct intensive financial calculations with doubleN types (double2, double4, etc.; no floats at all!). According to this card table, the card is able to work with double types. Now I have the opportunity to buy it at quite an attractive price.
I'd greatly appreciate it if an answerer has real experience working with this card for OpenCL with the fp64 extension. Of course, if there are problems with this card, please mention them here.
Thank you and sorry for my English.
I haven't used this card with DP before, but if the spec says it is supported, then it's worth a try.
In my opinion, you should go with a newer model card, though. There are a lot of cheap cards out there that will outperform the 4850, and they will support some new features as well.
This card supports double precision, but the 4xxx series doesn't include local memory on the chip. Since the standard mandates local memory support, it is emulated with global memory and is very slow. Many algorithms require local memory to obtain a good speed-up, so a newer card (5xxx or higher) is a lot better.
In addition, some combinations of older cards and older SDK versions only support double precision through the cl_amd_fp64 extension (not the official cl_khr_fp64 extension), because some small details of the standard are not supported. For the most part this doesn't matter much, except that you need to change the extension name in your code to make it work with doubles; the sketch below shows a runtime check.
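A small host-side helper (a sketch, untested) can pick whichever fp64 extension the device actually reports and return the matching pragma line to prepend to the kernel source:

    #include <CL/cl.h>
    #include <string>
    #include <vector>

    // Returns the pragma to prepend to the kernel source, or an empty
    // string if the device reports no double-precision extension at all.
    std::string Fp64Pragma(cl_device_id dev)
    {
        size_t n = 0;
        clGetDeviceInfo(dev, CL_DEVICE_EXTENSIONS, 0, NULL, &n);
        std::vector<char> buf(n);
        clGetDeviceInfo(dev, CL_DEVICE_EXTENSIONS, n, buf.data(), NULL);
        std::string ext(buf.begin(), buf.end());

        if (ext.find("cl_khr_fp64") != std::string::npos)
            return "#pragma OPENCL EXTENSION cl_khr_fp64 : enable\n";
        if (ext.find("cl_amd_fp64") != std::string::npos)
            return "#pragma OPENCL EXTENSION cl_amd_fp64 : enable\n";
        return "";
    }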
As a general tip, I would avoid the 4xxx series if you intend to do serious GPGPU development. Keep in mind also that the newer 7xxx series is much more optimized for GPU computation than both the 5xxx and 6xxx series, closing much of the gap with NVIDIA's cards. So, if you can, aim for a 7xxx with double-precision support.

Developing with OpenCL on ATI and NVIDIA at the same time

Our workgroup is slowly trying out a bit of OpenCL in a side project. So far everybody has been working on an NVIDIA Quadro FX 580. Now we are planning to buy new computers for new colleagues, and instead of the FX 580 we could buy an ATI FirePro V4800, which costs only 15 EUR more and gives us 1 GB instead of 512 MB of RAM, which will be beneficial for our data-intensive tasks.
So, how much trouble is it to develop OpenCl code at the same time on Nvidia and ATI?
I read the following SO question, Running OpenCL on hardware from mixed vendors, which was very pessimistic about developing on/for different vendors. On the other hand, that question is already a year old.
What do you recommend?
I have previously worked extensively with the CUDA programming language.
I have been planning to start developing apps using OpenCL. As you mentioned, one of the best features of OpenCL is that it runs on many vendors' hardware (Intel, AMD, and NVIDIA).
One project I came across that uses OpenCL extensively for large-scale development is http://sourceforge.net/projects/hypgad/. It might be a good idea to look at this group's source code to see how they developed their application for so much different hardware, including the Sony Cell processor.
Another approach would be to use PyOpenCL, which provides a higher level of abstraction than OpenCL and can significantly reduce the coding effort.
Do you need the code to run unchanged on both pieces of hardware? If so, you may have to develop for a limited subset of common functions.
If you can run slightly different code on each, you will probably get better performance. In CUDA/OpenCL you generally have to tune the algorithms for the amount of RAM and the number of GPU engines anyway, so it shouldn't be much more work to also tweak for NVIDIA/AMD.
The biggest problem is workgroup sizes. Some ATI cards I have used crash at sizes above 64, but then it may be the Apple OS X 10.6 drivers I am using.
Developing for both ATI and NVIDIA is actually not too difficult, so long as you avoid using any part of either vendor's SDK. Stick to OpenCL as it is defined in the OpenCL spec (www.khronos.org/opencl) and your code will stay syntactically portable. Due to differences in the underlying architectures, performance portability may be an issue: local and global worksizes really have to be determined independently for each card to maximize performance (the sketch below shows one way to query them). Another thing to pay attention to is the types being used. Vector types (float2, float4) are especially useful on ATI cards, as each processing element actually contains 4 execution units (one for each RGB color channel, plus alpha).
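On the worksize point, a portable approach is to ask the runtime for each device's limits instead of hard-coding a value. A sketch (the doubling heuristic is just a starting point; benchmark per card):

    #include <CL/cl.h>

    // Query the kernel's work-group limit on this device and pick a
    // power-of-two local size below it.
    size_t PickLocalSize(cl_kernel kernel, cl_device_id dev)
    {
        size_t kernelMax = 0;
        clGetKernelWorkGroupInfo(kernel, dev, CL_KERNEL_WORK_GROUP_SIZE,
                                 sizeof(kernelMax), &kernelMax, NULL);
        size_t local = 1;
        while (local * 2 <= kernelMax && local < 256)
            local *= 2;   // capped conservatively; tune per card
        return local;
    }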

OpenCL vs. DirectCompute?

I'm looking for comparisons between OpenCL and DirectCompute, but I haven't found anything. OpenCL's advantages of being cross-platform and having a wider range of supported GPUs don't matter to me. I'm fine with coding on Windows against DX11 GPUs only. Assuming that, what are the pros and cons of each API?
I know this question was raised before, but I'm looking for more details.
I'm not interested in CUDA, since I don't want to restrict myself to only Nvidia hardware.
Probably the biggest difference for a coder is that DirectCompute is programmed in a language similar to HLSL, while OpenCL is programmed in a C-like language.
Another difference to consider is that, generally, for commodity-level GPUs, DirectX support is better (faster and less buggy) than OpenGL support on Windows. This may translate into more stable support for DirectCompute, but really, that is just speculation.
Well, the major advantage of OpenCL is that it is not limited to graphics cards. You can make use of your multicore CPU, graphics card, and potentially any number of other hardware acceleration devices (DSPs, etc.) all from the same program; the enumeration sketch below makes this concrete.
I'm not sure whether DirectCompute allows that freedom.
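For instance, listing every OpenCL device of any type from a single program takes only a few calls (a sketch; error handling omitted):

    #include <CL/cl.h>
    #include <stdio.h>

    int main(void)
    {
        cl_platform_id platforms[8];
        cl_uint np = 0;
        clGetPlatformIDs(8, platforms, &np);

        for (cl_uint p = 0; p < np; ++p) {
            cl_device_id devs[16];
            cl_uint nd = 0;
            clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 16, devs, &nd);
            for (cl_uint d = 0; d < nd; ++d) {
                char name[256];
                clGetDeviceInfo(devs[d], CL_DEVICE_NAME,
                                sizeof(name), name, NULL);
                printf("%s\n", name);   // CPUs, GPUs, and accelerators alike
            }
        }
        return 0;
    }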
The OpenCL cross-platform-ness is not just a detail, as the host code (the code that calls the OpenCL API and submits kernels) can itself be cross-platform (see link text, link text...).
Write once, run on any GPGPU, anywhere.
Otherwise, the OpenCL tooling is really getting better, with an ATI Stream plugin for Visual Studio, the NVIDIA and ATI SDKs that contain tons of samples, and so on.
Another option now is C++ AMP, which gives you modern C++ syntax without the need for a separate compiler, while still preserving hardware portability. Please follow the links from here for more info, and feel free to post questions as you have them: http://blogs.msdn.com/b/nativeconcurrency/archive/2011/09/13/c-amp-in-a-nutshell.aspx (a minimal sketch of the syntax follows below).
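For a flavour of C++ AMP, here is a minimal sketch that scales a vector on whatever DirectX 11 accelerator is available (a hypothetical helper, not taken from the linked post):

    #include <amp.h>
    #include <vector>

    void Scale(std::vector<float>& data, float factor)
    {
        using namespace concurrency;
        // Wrap the vector; AMP manages the copy to/from the accelerator.
        array_view<float, 1> av((int)data.size(), data);
        parallel_for_each(av.extent, [=](index<1> i) restrict(amp)
        {
            av[i] *= factor;        // runs on the GPU, in plain C++ syntax
        });
        av.synchronize();           // bring the results back to the vector
    }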
I use OpenCL because I can easily port my app to Linux, while with DirectCompute this is not possible.
I also think that the performance of the OpenCL implementations will improve with time (to the same level as CUDA on NVIDIA cards), and that the (driver) bugs will (hopefully ;)) be eliminated over time.

Is GDI+ just a layer on top of GDI, or something new?

When GDI+ came out, I remember all the brouhaha about how it was the "new, faster, better" way to display stuff in Windows. But every time I looked at it, it seemed to me that it was really just a COM wrapper around GDI.
Is that true? Or is GDI+ really an independent graphical library that simply shares some paradigms with GDI?
Personally, I'm not sure how it could be independent, but I never saw a definite statement one way or another.
GDI+ is built on top of GDI and adds several more features. For example, GDI+ adds support for transparency, anti-aliasing, bitmap stretching, etc.
GDI+ is mainly an object-based API, while GDI is a function-based API. Most of the functionality in GDI+ is not hardware accelerated (it is handled in software), in contrast with GDI. For example, in GDI, BitBlt is handled directly by the hardware; GDI+'s bitmap painting functions are not.
GDI+ is a powerful API, but be careful about its performance.
GDI+ is available in C++, COM, and .NET.
Many GDI functions are accelerated by the graphics hardware, and some GDI+ routines may use GDI underneath. But most of GDI+ is independent of GDI.
An important, and telling, example is text rendering. In GDI+, text rendering is done completely in software; the anti-aliasing, glyph pixel-fitting, and other effects are done without the video card.
Microsoft's Chris Jackson had an interesting blog post where he profiled the speed difference between text rendering in GDI and GDI+:
"...my GDI code path was rendering approximately 99,000 glyphs per second, while my GDI+ code path was rendering approximately 16,000 glyphs per second."
Another example is line drawing. GDI+ supports anti-aliased line/polygon and circle/ellipse drawing, while GDI does not.
GDI+ is not COM. GDI+ has an underlying "flat" API that is callable from C (and therefore from virtually any language), plus an object-oriented wrapper in C++ that just calls the flat API. There are also wrappers in .NET (System.Drawing) and Delphi that likewise just call the flat API. It works completely differently from GDI in that you don't select objects (pens, brushes, fonts) into a device context, but rather pass them to the drawing functions; the snippet below contrasts the two. It does not have much in common with GDI. I don't know, though, whether the implementation of GDI+ uses GDI, but it likely doesn't, because it has so many features that are just not available in GDI.
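The difference in drawing model is easy to see side by side. Illustrative C++ only, assuming a valid hdc and GDI+ already initialized via GdiplusStartup:

    // GDI: create a pen, select it into the device context, then draw.
    HPEN pen = CreatePen(PS_SOLID, 2, RGB(255, 0, 0));
    HGDIOBJ old = SelectObject(hdc, pen);
    MoveToEx(hdc, 10, 10, NULL);
    LineTo(hdc, 200, 120);
    SelectObject(hdc, old);
    DeleteObject(pen);

    // GDI+: no selection step; the pen is passed to the drawing call,
    // and anti-aliasing (unavailable in GDI) is a one-line option.
    Gdiplus::Graphics g(hdc);
    g.SetSmoothingMode(Gdiplus::SmoothingModeAntiAlias);
    Gdiplus::Pen gpPen(Gdiplus::Color(255, 255, 0, 0), 2.0f);
    g.DrawLine(&gpPen, 10, 10, 200, 120);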
Unfortunately, it is slower than GDI. It's very powerful, though.
As decasteljau pointed out in the meantime, the performance issues might come from the fact that it is not rendered in hardware, unlike OpenVG or WPF. I recently used XNA for that reason in a real-time graphics application.
