What is the best way to perform an FFT with OpenCL? (My GPU is an Intel HD Graphics 4600.)
I found the clFFT library, but got stuck installing it. I read the documentation, but I don't understand a couple of things. It requires some dependencies, one of them being the AMD APP SDK, and as far as I can tell that is only for AMD GPUs? Can I use it with Intel HD Graphics?
Related
I'm trying to install Caffe and I wonder if I can use cuDNN with AMD/OpenCL, because my graphics card is AMD.
https://github.com/BVLC/caffe/tree/opencl
I'm afraid this won't work: cuDNN is an extension of CUDA, which is proprietary to NVIDIA. Thus, a non-NVIDIA GPU does not support CUDA and therefore does not support cuDNN.
With a non-NVIDIA card you cannot run CUDA code (the main Caffe branch), but you should be able to use the OpenCL GPU code. You should give the OpenCL branch a chance.
The short answer is that if your graphics card is AMD then you'll have to use OpenCL, not cuDNN. You cannot make them work together.
cuDNN and OpenCL are competing technologies, so it doesn't even make sense to try to use them together.
If instead you are asking whether you can use NVIDIA's cuDNN library on AMD hardware, the answer is no. It just isn't compatible. cuDNN was made specifically to work on NVIDIA hardware and to take advantage of the unique properties of that chipset.
There is an OpenCL variant of cuDNN from Intel:
https://github.com/01org/clDNN/
Since it is OpenCL-based, it should also work on AMD GPUs (although I haven't tested it myself).
I am not sure you can really use it with an AMD graphics card, since clDNN is built for DL inference specifically on Intel graphics (HD and Iris). See, for example, the OpenVINO toolkit (designed by Intel): it uses clDNN under the hood in its GPU plugin to accelerate inference on Intel GPUs.
The GPU plugin page says:
clDNN is an open source performance library for Deep Learning (DL) applications intended for acceleration of Deep Learning Inference on Intel® Processor Graphics including Intel® HD Graphics and Intel® Iris® Graphics.
I have two GPUs installed on my machine. I am working with a library that uses OpenCL acceleration, supports only one GPU, and is not configurable: I cannot tell it which one to use. For some reason this library chooses the GPU that I do not want.
How can I disable this GPU so that it is no longer exposed as an OpenCL device?
I want to do this so that only one GPU is exposed and the library is forced to use it.
Note: any option that involves changing or editing the library is not available to me.
P.S. I am on Windows 10 with an Intel processor and an Intel GPU plus an NVIDIA GPU.
On Windows the OpenCL ICD system uses Registry entries to find all of the installed OpenCL platforms.
Solution: using RegEdit you can back up and then remove the entry for the GPU you do not want to use. The registry location is HKEY_LOCAL_MACHINE\SOFTWARE\Khronos\OpenCL\Vendors.
Reference: https://www.khronos.org/registry/cl/extensions/khr/cl_khr_icd.txt
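To verify the result, a small host program like the following (a rough sketch; error checking omitted, compiled against the OpenCL headers and import library from any installed SDK) can list the platforms the ICD loader still sees after the registry edit:

    /* Rough sketch: list the OpenCL platforms the ICD loader can still see
       after the registry edit. Error checking omitted for brevity. */
    #include <CL/cl.h>
    #include <stdio.h>

    int main(void)
    {
        cl_platform_id platforms[8];
        cl_uint count = 0;
        clGetPlatformIDs(8, platforms, &count);

        for (cl_uint i = 0; i < count; ++i) {
            char name[256] = {0};
            clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME,
                              sizeof(name), name, NULL);
            printf("Platform %u: %s\n", i, name);
        }
        return 0;
    }

After removing the unwanted vendor's entry, only the remaining platform (and its devices) should be reported, so the library has no other GPU left to pick.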
Is it possible to enable OpenCL on an A10-7800 without using it for the X server? I have a Linux box that I use for GPGPU programming. A discrete GeForce 740 card is used both for the X server and for running the OpenCL and CUDA programs I develop. I would also like the option of running OpenCL code on the APU's integrated GPU cores.
Everything I've read so far implies that if I want to use the APU for OpenCL, I have to install Catalyst and, AFAIK, that means using it for the X server. Is this true? Would there be an advantage to using the APU for my X server and the GeForce solely for GPGPU code?
I had a similar goal, so I built a system with an AMD APU (4 regular cores + 6 GPU cores) and an NVIDIA discrete graphics board. Sorry to say, it wasn't easy to make it work: I asked a question on the Ask Ubuntu forum, didn't get any answers, experimented a lot with the hardware and software setup, and finally posted my own answer to my question.
I'll describe my setup again here; who knows what might happen to my self-answered question on Ask Ubuntu?
First, I had to enable the integrated graphics hardware via a BIOS flag. On my motherboard (ASUS A88X-PRO) this flag is called IGFX Multi-Monitor.
The second step was to find the right mix of low-level graphics driver and high-level OpenCL implementation. The low-level driver for AMD processors is called AMD Catalyst and has the file name fglrx. I didn't install this driver from the Ubuntu Software Center; instead I used version 15.302, downloaded directly from the AMD site. I had to install a significant number of prerequisites for this driver. The most important finding was that I had to skip running the aticonfig command after the fglrx installation: this command configures the X server to use the driver for graphics output, and I didn't want that.
Then I installed the AMD APP SDK version 3.0 (release 130.136; earlier releases didn't work with my fglrx), which is the OpenCL implementation from AMD. The clinfo command now reports both the CPU and the GPU with the correct number of cores.
So I have a hybrid AMD processor supported by OpenCL, with all graphics output handled by a discrete graphics card with an NVIDIA processor.
Good luck!
I maintain a Linux server (openSUSE, but the distribution shouldn't matter) containing both an NVIDIA and a (discrete) AMD GPU. It's headless, so technically I do not know whether the X server will create additional problems, but I don't think so. You can always configure xorg.conf to use exactly the driver you want. Or, for that matter, install Catalyst but delete the X server driver file itself, which is not the same file that OpenCL needs.
There is one problem with a mixed-vendor system that I noticed, however: AMD's OpenCL driver (ICD) will go looking for a libGL.so library, I guess in order to do OpenCL/OpenGL interop. If it finds any of the NVIDIA-supplied libGL.so files, it gets confused and hangs, at least on my machine. I "solved" this by deleting all libGL.so files (I do not need them on a headless compute server), but that might not be an acceptable solution for you. Maybe you can arrange things so that the AMD-supplied libGL.so files take precedence, possibly by installing the AMD driver last.
I am quite new to the world of GPU computing, so I would really like someone to explain the very basics to me. I have two Intel chipsets with the following GPUs:
GMA4500
HD Graphics
I am interested in running algebraic and bitwise functions on huge data sets on a GPU, such as transposing an array or bitwise-shifting the rows of an array. The goal, of course, is to gain more performance.
My main question is: how can I program such operations on these GPUs? In the past I have used CUDA to program an NVIDIA video card, but I understand from previous topics that I can't use CUDA on Intel GPUs. Thanks in advance!
Update 1
I found out that Intel supports OpenCL for HD Graphics. More precisely, the Intel SDK for OpenCL Applications provides a comprehensive development environment for OpenCL applications on Intel® platforms, including compatible drivers, code samples, and development tools such as the Code Builder, an optimization guide, and support for optimization tools.
The SDK supports OpenCL 1.2 on 3rd and 4th generation Intel® Core™ processors with Intel® HD Graphics and Intel® Iris™ Graphics Family, Intel® Atom™ Processors with Intel HD Graphics, Intel® Xeon® processors, and Intel® Xeon Phi™ coprocessors.
OpenCL is the standard, cross-vendor API for GPGPU programming, roughly analogous to NVIDIA's proprietary CUDA.
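To make this concrete, here is a rough sketch of what OpenCL kernels for the two operations mentioned above might look like (the kernel names, argument names, and the choice of a row-major uint matrix are assumptions made purely for illustration):

    // Naive out-of-place transpose of a row-major matrix.
    __kernel void transpose(__global const uint *in,
                            __global uint *out,
                            const uint rows,
                            const uint cols)
    {
        uint x = get_global_id(0);   // column index
        uint y = get_global_id(1);   // row index
        if (x < cols && y < rows)
            out[x * rows + y] = in[y * cols + x];
    }

    // Shift every element of each row left by 'bits' bits, in place.
    __kernel void shift_rows(__global uint *data,
                             const uint cols,
                             const uint bits)
    {
        uint x = get_global_id(0);   // column index
        uint y = get_global_id(1);   // row index
        data[y * cols + x] = data[y * cols + x] << bits;
    }

On the host side you would build these with clBuildProgram and launch them over a 2D NDRange with clEnqueueNDRangeKernel; the Intel OpenCL runtime then compiles them for the HD Graphics device at run time.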
So, the Intel SDK works with Intel CPUs, GPUs, and Xeon Phi.
The AMD SDK works with AMD GPUs and CPUs.
I would like to develop an application that targets an Intel CPU and an AMD GPU.
Can anyone suggest a development strategy to achieve this?
Thanks.
Edit: I would like to run both CPU and GPU kernels concurrently on the same system.
When you get the list of available platforms, in the case of an Intel CPU and an AMD GPU you will have two platforms, each with its own ID.
Usually that's it: you create devices and so on, using the appropriate platform ID in each case.
If you are using Windows, it is not difficult to see in the debugger that different platforms correspond to different OpenCL libraries (just dig into the cl_platform_id structure); both DLLs are loaded.
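A minimal sketch of that two-platform setup (untested; error checking omitted, and vendor strings matched loosely) might look like this:

    /* Minimal sketch: pick the CPU from the Intel platform and the GPU from
       the AMD platform, then give each its own context and command queue so
       kernels on both can be enqueued concurrently. */
    #include <CL/cl.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        cl_platform_id platforms[8];
        cl_uint num_platforms = 0;
        clGetPlatformIDs(8, platforms, &num_platforms);

        cl_device_id cpu = NULL, gpu = NULL;
        for (cl_uint i = 0; i < num_platforms; ++i) {
            char vendor[256] = {0};
            clGetPlatformInfo(platforms[i], CL_PLATFORM_VENDOR,
                              sizeof(vendor), vendor, NULL);
            if (strstr(vendor, "Intel"))
                clGetDeviceIDs(platforms[i], CL_DEVICE_TYPE_CPU, 1, &cpu, NULL);
            else if (strstr(vendor, "Advanced Micro Devices"))
                clGetDeviceIDs(platforms[i], CL_DEVICE_TYPE_GPU, 1, &gpu, NULL);
        }

        /* One context and one queue per device. */
        cl_context cpu_ctx = clCreateContext(NULL, 1, &cpu, NULL, NULL, NULL);
        cl_context gpu_ctx = clCreateContext(NULL, 1, &gpu, NULL, NULL, NULL);
        cl_command_queue cpu_q = clCreateCommandQueue(cpu_ctx, cpu, 0, NULL);
        cl_command_queue gpu_q = clCreateCommandQueue(gpu_ctx, gpu, 0, NULL);

        printf("CPU device: %s, GPU device: %s\n",
               cpu ? "found" : "missing", gpu ? "found" : "missing");
        /* ... build programs and enqueue kernels on cpu_q and gpu_q ... */
        return 0;
    }

Because each device gets its own context and command queue, kernels enqueued on the two queues can execute at the same time, which covers the "run both CPU and GPU kernels concurrently" requirement from the edit above.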
Put your OpenCL code (not necessarily the kernel) in a library, and build and link DLL files for the AMD and Intel (and NVIDIA) devices. Create a new program and dynamically load the appropriate library based on which platforms the user has installed.
Kind of a pain in the butt, but it works in LabVIEW, so it should work in other languages.
If you are using Windows, you can use LoadLibrary and put the library in a folder that is in your PATH (Windows Environment Variable) or in the same folder as the .EXE.
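As an illustrative sketch of that approach (the DLL names and the exported RunKernels function are made up for this example):

    /* Illustrative sketch: load a vendor-specific backend DLL at run time
       and call an exported function from it. The DLL names and the
       "RunKernels" entry point are hypothetical. */
    #include <windows.h>
    #include <stdio.h>

    typedef int (*run_kernels_fn)(void);

    int main(void)
    {
        /* Pick whichever build matches the installed OpenCL platform. */
        HMODULE lib = LoadLibraryA("ocl_amd.dll");
        if (!lib)
            lib = LoadLibraryA("ocl_intel.dll");
        if (!lib) {
            printf("No OpenCL backend DLL found\n");
            return 1;
        }

        run_kernels_fn run = (run_kernels_fn)GetProcAddress(lib, "RunKernels");
        int status = run ? run() : -1;

        FreeLibrary(lib);
        return status;
    }

The idea, per the answer above, is that each vendor-specific DLL links against that vendor's OpenCL SDK and exports the same entry point, so the main program does not need to know in advance which vendor it ends up using.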