I know that OpenCL has a runtime scheduler.
But I couldn't find out how to do scheduling in OpenCL.
Does anyone know about the OpenCL scheduler?
I am just curious about the workflow of OpenCL.
Does the OpenCL runtime do some kind of JIT compilation?
I will explain my understanding as best I can:
1. From the kernel file, a SPIR file is generated.
2. The host program then asks the OpenCL runtime to execute the SPIR file.
3. The OpenCL runtime schedules the work, translates the SPIR into a form each device can understand, and sends it to the device.
If what I've explained is correct, then I think in step 3 there has to be some kind of JIT compilation for each device. I suppose this is done for portability, but doesn't it hurt performance?
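For illustration, here is a minimal sketch of the source-based path, where clBuildProgram is the point at which the driver compiles the kernel for the particular device in the context (a SPIR binary goes through an analogous device-specific build step). The kernel and variable names are just examples, and error handling is omitted:

    /* Minimal sketch: the driver compiles the kernel for the specific device
     * at clBuildProgram time, i.e. a run-time (JIT-style) compilation step. */
    #include <CL/cl.h>
    #include <stdio.h>

    static const char *src =
        "__kernel void add_one(__global float *x) {"
        "    x[get_global_id(0)] += 1.0f;"
        "}";

    int main(void) {
        cl_platform_id platform;
        cl_device_id device;
        clGetPlatformIDs(1, &platform, NULL);
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_DEFAULT, 1, &device, NULL);

        cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);

        /* Device-specific compilation happens here. */
        cl_int err = clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
        printf("build status: %d\n", err);

        /* To avoid paying for the compile on every run, the result can be
         * cached via clGetProgramInfo(CL_PROGRAM_BINARIES) and reloaded
         * later with clCreateProgramWithBinary. */
        return 0;
    }

The run-time build is a one-off cost per program and device; caching the built binary avoids repeating it on later runs.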
Do Nvidia GPUs support an OpenCL-aware MPI, with functionality similar to their CUDA-aware MPI?
Also, does AMD's OpenCL for its GPUs provide such functionality?
I would have thought a simple Google search would answer this, but surprisingly I did not find much information on it, while there is a ton of information on CUDA-aware MPI!
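For context, without a GPU-aware MPI the usual pattern is to stage device buffers through host memory yourself; that staging is exactly what an OpenCL-aware MPI would hide. A rough sketch of the manual approach, where queue, dev_buf, nbytes and peer are placeholders for your own state:

    /* Sketch, assuming no OpenCL-aware MPI is available: device buffers are
     * staged through host memory before and after the MPI calls, so MPI only
     * ever sees ordinary host pointers. */
    #include <CL/cl.h>
    #include <mpi.h>
    #include <stdlib.h>

    void send_device_buffer(cl_command_queue queue, cl_mem dev_buf,
                            size_t nbytes, int peer) {
        void *host_buf = malloc(nbytes);
        /* blocking read: device -> host */
        clEnqueueReadBuffer(queue, dev_buf, CL_TRUE, 0, nbytes, host_buf, 0, NULL, NULL);
        MPI_Send(host_buf, (int)nbytes, MPI_BYTE, peer, 0, MPI_COMM_WORLD);
        free(host_buf);
    }

    void recv_device_buffer(cl_command_queue queue, cl_mem dev_buf,
                            size_t nbytes, int peer) {
        void *host_buf = malloc(nbytes);
        MPI_Recv(host_buf, (int)nbytes, MPI_BYTE, peer, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        /* blocking write: host -> device */
        clEnqueueWriteBuffer(queue, dev_buf, CL_TRUE, 0, nbytes, host_buf, 0, NULL, NULL);
        free(host_buf);
    }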
I'm learning OpenCL and I have a compatible x86 CPU, but my GPU doesn't support OpenCL at all.
So when I call clGetDeviceIDs, it returns nothing.
As I'm just learning this framework and I'm not looking for optimization or high performance, is it necessary to get a new system, as long as OpenCL programs can run on my current platform?
Thanks in advance :)
http://www.acooke.org/cute/Developing0.html describes how I worked with a CPU (only) a few years ago. Basically, the AMD OpenCL driver worked with my Intel CPU.
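As a quick check that a CPU-only setup works, you can walk every installed platform and ask it specifically for CPU devices; with a CPU runtime installed (such as the AMD or Intel one mentioned here), this should report a device even though the GPU has no OpenCL support. A minimal sketch:

    /* Sketch: walk every installed platform and ask it for CPU devices.
     * With a CPU runtime installed, this reports a usable device even when
     * the GPU has no OpenCL support at all. */
    #include <CL/cl.h>
    #include <stdio.h>

    int main(void) {
        cl_platform_id plats[8];
        cl_uint nplat = 0;
        clGetPlatformIDs(8, plats, &nplat);

        for (cl_uint p = 0; p < nplat && p < 8; ++p) {
            cl_device_id dev;
            char name[256];
            if (clGetDeviceIDs(plats[p], CL_DEVICE_TYPE_CPU, 1, &dev, NULL) != CL_SUCCESS)
                continue;
            clGetDeviceInfo(dev, CL_DEVICE_NAME, sizeof(name), name, NULL);
            printf("CPU device found: %s\n", name);
        }
        return 0;
    }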
I've been doing some research into OpenCL and the possibility of using it on a project. The question I have is: is there a way, in a C++ application, to run OpenCL code on a CPU that is unsupported by the OpenCL SDKs? I know Java has Aparapi; however, I'm wondering how to run OpenCL code in a C++ application without hardware that is supported by the SDKs. There is some code I would like to write as OpenCL kernels to take advantage of OpenCL's parallelism where available, but I'm unsure whether I would be able to run it on older hardware (still x86, but not recent). Could anyone explain to me how this can be done, or whether it is even a problem at all to run OpenCL code on older systems?
Thanks,
Peter
I would say the best way to approach this is to check whether the system supports OpenCL via API calls such as clGetPlatformIDs; if it turns out there is no OpenCL device, run the required code as a normal C/C++ function, otherwise run it as an OpenCL kernel. For portability you need to write the program logic twice: once in a .cl file and once as a normal C/C++ method/function.
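For example, something along these lines (process_with_opencl() and process_on_host() are hypothetical placeholders for the two versions of your logic, and a real check would loop over all platforms rather than just the first):

    /* Sketch of the fallback described above: if no OpenCL platform exposes a
     * usable device, run the plain host implementation instead of the kernel. */
    #include <CL/cl.h>
    #include <stdbool.h>
    #include <stddef.h>

    void process_with_opencl(float *data, size_t n);  /* enqueues the .cl kernel */
    void process_on_host(float *data, size_t n);      /* same logic in plain C/C++ */

    static bool have_opencl_device(void) {
        cl_uint nplat = 0;
        if (clGetPlatformIDs(0, NULL, &nplat) != CL_SUCCESS || nplat == 0)
            return false;
        cl_platform_id plat;
        clGetPlatformIDs(1, &plat, NULL);
        cl_uint ndev = 0;
        return clGetDeviceIDs(plat, CL_DEVICE_TYPE_ALL, 0, NULL, &ndev) == CL_SUCCESS
               && ndev > 0;
    }

    void process(float *data, size_t n) {
        if (have_opencl_device())
            process_with_opencl(data, n);
        else
            process_on_host(data, n);
    }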
Some friends and I want to use OpenCL. For this we are looking to buy a new computer, and we have been asking ourselves whether AMD or Intel is better for use with OpenCL. The graphics card will be an Nvidia and we have no choice on the graphics card, so we started out wanting to buy an Intel CPU, but after some research we figured that AMD CPUs may be better with OpenCL. We couldn't find benchmarks comparing the two.
So here are our questions:
Is AMD better than Intel with OpenCL?
Does it matter for OpenCL performance to have an Nvidia card with an AMD CPU?
Thank you,
GrWEn
You shouldn't care as much about which CPU you use as about which GPU you use. You would need to choose between an AMD/ATI GPU and an nVidia GPU.
I would personally recommend an nVidia GPU since, in addition to OpenCL support, you can experiment with their more proprietary CUDA technology, which offers a far richer development experience than OpenCL does today. While you're at it, take a look at the new AMP technology just announced by Microsoft for C++, which aims to bring language extensions akin to nVidia's CUDA. nVidia also has offerings for the enterprise with their Tesla GPUs, several vendors offer GPU clusters, and you can even get a GPU compute cluster on Amazon EC2 now, all based on nVidia hardware.
You want to buy a new computer with your friends? What kind of project do you plan to do? The hardware question is answered by the needs you have. If you give some more information, we can provide better suggestions.
As written before, the CPU is not the important point as long as you do not want to buy a multiprocessor, multicore system such as a four-socket machine with quad-core processors. The difference in performance comes mostly from the GPUs used, and there you can find cards for every need, from a cheap GPU up to the nVidia Tesla cards.
It is definitely not a problem to run an nVidia board on an AMD system; I do it here. You can also use the OpenCL devices from the AMD multi-core CPU and the nVidia GPU in parallel.
One thing to pay attention to: if you plan to buy a powerful system to run your software on (like a web server), every developer of your OpenCL software still needs a system for testing, so every developer needs at least a modern multi-core CPU with an OpenCL SDK. Where the OpenCL kernels are developed does not matter; OpenCL is platform independent.
Both Intel and AMD have good OpenCL support for their CPUs, so currently it does not really matter which you choose. If you want to use the embedded GPU on AMD Fusion or Intel Sandy Bridge, then I suggest you go for Fusion, since Intel does not have an OpenCL driver for their GPUs (yet). Depending on what you are going to use OpenCL for, I could suggest a GPU: sometimes NVidia is faster, sometimes AMD.
AMP, CUDA, RenderScript and the many, many others all work nicely, but they don't work on all hardware as OpenCL does. CUDA certainly has advantages, but by the time you have learnt OpenCL, I can assure you the tools around it will have caught up.
The CPU has no influence on GPU OpenCL performance.
You might also want to try running the OpenCL kernels on the CPU. Check out the Intel OpenCL compiler beta. You can even run kernels on both the CPU and the GPU.
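As a sketch of what "both CPU and GPU" looks like in practice: the same kernel source can be built for a CPU device and a GPU device, each with its own context and command queue (they may come from different platforms, e.g. the Intel CPU runtime plus the nVidia GPU driver, so they cannot share one context in general). Names and the kernel body are illustrative, error handling omitted:

    /* Sketch: build the same kernel source for a CPU device and a GPU device
     * and give each its own context and queue. */
    #include <CL/cl.h>
    #include <stddef.h>

    static const char *src =
        "__kernel void scale(__global float *x, float f) {"
        "    x[get_global_id(0)] *= f;"
        "}";

    static cl_command_queue make_queue(cl_device_type type, cl_context *ctx_out,
                                       cl_kernel *kern_out) {
        cl_platform_id plats[8];
        cl_uint nplat = 0;
        clGetPlatformIDs(8, plats, &nplat);

        for (cl_uint p = 0; p < nplat && p < 8; ++p) {
            cl_device_id dev;
            if (clGetDeviceIDs(plats[p], type, 1, &dev, NULL) != CL_SUCCESS)
                continue;
            cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
            cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
            clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
            *kern_out = clCreateKernel(prog, "scale", NULL);
            *ctx_out = ctx;
            return clCreateCommandQueue(ctx, dev, 0, NULL);
        }
        return NULL;  /* no device of this type installed */
    }

    int main(void) {
        cl_context cpu_ctx, gpu_ctx;
        cl_kernel cpu_kern, gpu_kern;
        cl_command_queue cpu_q = make_queue(CL_DEVICE_TYPE_CPU, &cpu_ctx, &cpu_kern);
        cl_command_queue gpu_q = make_queue(CL_DEVICE_TYPE_GPU, &gpu_ctx, &gpu_kern);
        /* From here: create buffers in each context, set kernel args, and call
         * clEnqueueNDRangeKernel on cpu_q and gpu_q. The two queues execute
         * independently, so the CPU and GPU can work on their shares in parallel. */
        return 0;
    }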