Custom OpenCL Platform/Device

I am quite new to OpenCL and would like to do a bit of experimenting. Specifically, I want to know if anyone can point me in the right direction for creating a custom platform or device with an OpenCL interface attached. The goal is to build a simple simulator/debugger that runs alongside the GPU and CPU. Are there any official documents on developing custom OpenCL platforms, devices, etc.? Also, are there any good online resources that cover this area?
Thank you for any help.

Yes, you can write your own OpenCL driver that shows up as a platform with devices on a system. Khronos has released sample ICD (Installable Client Driver) source code you can use; it is part of the OpenCL API registry.

What you want can be achieved by writing your own OpenCL implementation, i.e. a library that exposes the interface defined by the OpenCL standard. If you want this for debugging and/or device emulation, you intercept the device-query calls, forward them to an underlying OpenCL implementation (e.g., Intel's), add your additional device to the result, and return that to the program which requested the devices from your library. Your proxy implementation can then simply pass all operations involving real devices through to the underlying OpenCL implementation, while operations involving your virtual device are handled by your own code.

Light Weight Bluetooth LE library in C

I have been looking around for a simple Bluetooth LE library in C that lets me scan for BLE devices, connect, and receive periodic notifications for a given service UUID from the BLE device. Something that works directly with Bluetooth sockets and libbluetooth (built from BlueZ), without using D-Bus. Pairing and security functionality are not required.
I came across https://github.com/labapart/gattlib. It looks good, but it uses the D-Bus API and depends on libdbus, glib, and so on. Using it would add roughly 5 MB of libraries, so I decided to go without D-Bus. We do not have space on our device for a 5 MB Bluetooth stack on the compressed rootfs image: our total rootfs image is 9 MB, so a Bluetooth stack with D-Bus would by itself be more than 50% of it.
There is also https://github.com/edrosten/libblepp, which is in C++ and doesn't use D-Bus. However, it would require writing a C wrapper for use from C programs, carries the overhead of C++ constructs such as compiler-generated copy constructors and assignment operators, and has cross-compilation issues.
Target board is Xilinx Zynq running Linux and the build system is buildroot.
Please suggest.
Thanks
Found a solution, it may be of help for someone...
After searching and going through Linux conference and IoT conference videos on YouTube, I figured out that BlueZ ships lightweight executables whose code lives in BlueZ's src/shared folder. btgattclient.c, when compiled, produces the gatt-client executable, which provides the same functionality as gatttool and does not depend on bluetoothd or D-Bus. Its only dependency is glib-2.0.
This is helpful when we need lightweight tools on a system with no bluetoothd running or no D-Bus library installed.
Thanks
If you want to use BlueZ for BLE communication, the only supported API is the D-Bus API. Everything else is either discouraged or deprecated.
If you want something more minimal and/or to avoid BlueZ entirely, you can use the HCI_CHANNEL_USER feature in Linux to get raw access to the HCI connection in the kernel. With this you can use any Bluetooth host stack, or write a minimal one of your own if you only need an extremely small subset.
Note, though, that questions asking for software library recommendations are off-topic on Stack Overflow because they invite opinion-based answers.

OpenCL: Writing to pointer in main memory

Is it possible, using OpenCL's DMA capabilities, to write to a main memory address that is passed into the cl program? I understand doing so would likely break the program, but the intent here is to run a GPU process and then overwrite the address space of the CPU program used to run it, so breakage is expected.
Thanks!
Which version of the OpenCL API are you targeting?
In OpenCL 2.0 and above you can use Shared Virtual Memory (SVM) to share an address space between the host and device(s) on platforms that support it.
You can get more information about it in the Intel OpenCL SVM overview.
If you are using an earlier version, or your hardware does not support it, you can use pinned memory by passing the appropriate flags to clCreateBuffer, in particular CL_MEM_USE_HOST_PTR or CL_MEM_ALLOC_HOST_PTR; see the clCreateBuffer entry in the Khronos reference.
Note that CL_MEM_USE_HOST_PTR comes with alignment restrictions on the host pointer.
In general, in OpenCL, when and how the DMA is used depends on the hardware platform, so you should refer to the vendor documentation for details.

AES Encrypting for ESP8266 implemented on Software or Hardware? How to implement?

I need to write a basic encryption program for the ESP8266. I read the datasheet (https://www.espressif.com/sites/default/files/documentation/0a-esp8266ex_datasheet_en.pdf), and it says the supported encryption methods are WEP/TKIP/AES. My main question is: is the AES method implemented in software or in hardware? This module is very limited (36 KB RAM, ~90 MHz CPU clock), so the algorithm is heavy to process. If AES is implemented in hardware, I think the task gets simpler, but I don't know how to use it. The examples I found on the web use a #include "AES.h" library, and I don't know whether that is backed by hardware or software. The ESP8266 site doesn't answer this question. So I want to know which it is, and how (or where to find help) to implement this.
P.S.: I don't want to use Arduino.
Also, I've already used https://github.com/CHERTS/esp8266-devkit/tree/master/Espressif/examples/ESP8266, but only for small jobs.
It's a software implementation. The RTOS SDK contains two implementations of AES, one of them shared with the basic SDK - all in software:
https://github.com/CHERTS/esp8266-devkit/blob/master/Espressif/ESP8266_RTOS_SDK/third_party/mbedtls/library/aes.c
https://github.com/CHERTS/esp8266-devkit/blob/master/Espressif/ESP8266_RTOS_SDK/third_party/ssl/crypto/ssl_aes.c
https://github.com/CHERTS/esp8266-devkit/blob/master/Espressif/ESP8266_SDK/third_party/mbedtls/library/aes.c
In addition, there's an implementation optimized for the AES-NI instruction set: https://github.com/CHERTS/esp8266-devkit/blob/master/Espressif/ESP8266_RTOS_SDK/third_party/mbedtls/library/aesni.c
However, AES-NI is only available on certain Intel and AMD CPUs, so that file will not be compiled for the ESP8266.
There are no signs of a hardware implementation.

OpenCL, Vulkan, Sycl

I am trying to understand the OpenCL ecosystem and how Vulkan comes into play.
I understand that OpenCL is a framework to execute code on GPUs as well as CPUs, using kernels that may be compiled to SPIR-V.
Vulkan can also be used as a compute API and consumes the same SPIR-V intermediate language.
SYCL is a new specification that allows writing OpenCL code as proper standard-conforming C++14. It is my understanding that there are no free implementations of this specification yet.
Given that,
How does OpenCL relate to Vulkan? I understand that OpenCL is higher level and abstracts the devices, but does (or could) it use Vulkan internally, instead of relying on vendor-specific drivers?
Vulkan is advertised as both a compute and graphics API, yet I found very few resources for the compute part. Why is that?
Vulkan has performance advantages over OpenGL. Is the same true for Vulkan vs OpenCL? (OpenCL is sadly notorious for being slower than CUDA.)
Does SYCL use OpenCL internally, or could it use Vulkan? Or does it use neither and instead rely on low-level, vendor-specific APIs?
How does OpenCL relate to Vulkan? I understand that OpenCL is higher level and abstracts the devices, but does (or could) it use Vulkan internally?
They're not related to each other at all.
Well, they do technically use the same intermediate language (SPIR-V), but Vulkan forbids the Kernel execution model, and OpenCL forbids the Shader execution model. Because of that, you can't just take a module meant for OpenCL and stick it in Vulkan, or vice versa.
Vulkan is advertised as both a compute and graphics API, however I found very little resources for the compute part - why is that?
Because the Khronos Group likes misleading marketing blurbs.
Vulkan is no more of a compute API than OpenGL. It may have Compute Shaders, but they're limited in functionality. The kind of stuff you can do in an OpenCL compute operation is just not available through OpenGL/Vulkan CS's.
Vulkan CS's, like OpenGL's CS's, are intended to be used for one thing: to support graphics operations. To do frustum culling, build indirect graphics commands, manipulate particle systems, and other such things. CS's operate at the same numerical precision as graphical shaders.
Vulkan has performance advantages over OpenGL. Is the same true for Vulkan vs OpenCL?
The performance of a compute system is based primarily on the quality of its implementation. It's not OpenCL that's slow; it's your OpenCL implementation that's slower than it possibly could be.
Vulkan CS's are no different in this regard. The performance will be based on the maturity of the drivers.
Also, there's the fact that, again, there's a lot of stuff you can do in an OpenCL compute operation that you cannot do in a Vulkan CS.
Does SYCL use OpenCL internally or could it use Vulkan?
From the Khronos Group:
SYCL (pronounced ‘sickle’) is a royalty-free, cross-platform abstraction layer that builds on the underlying concepts, portability and efficiency of OpenCL...
So yes, it's built on top of OpenCL.
How does OpenCL relate to Vulkan?
Both can pipeline separable work from host to GPU and GPU to host, using queues to reduce communication overhead across multiple threads; classic DirectX/OpenGL could not.
OpenCL: initial release August 28, 2009. Broader hardware support. Pointers are allowed, but only for use on the device. You can use local memory shared between threads. Much easier to get a hello world started. Has API overhead for commands unless they are device-side queued. You can choose implicit multi-device synchronization or explicit management. Bugs are mostly fixed for 1.2, but I don't know about version 2.0.
Vulkan: initial release February 16, 2016 (with progress going back to 2014). Narrower hardware support. Can SPIR-V handle pointers? Maybe not. No local-memory option? Harder to get a hello world started. Less API overhead. Can you choose implicit multi-device management? Still buggy for Dota 2 and some other games. Using both the graphics and compute pipelines at the same time can hide even more latency.
OpenCL predates Vulkan by about seven years, so it cannot have had Vulkan inside it all along. And if such layering were possible, why wasn't it done for OpenGL? (Maybe because of pressure from PhysX/CUDA?)
Vulkan is advertised as both a compute and graphics API, however I found very little resources for the compute part - why is that?
It needs more time, just like OpenCL did.
You can check info about compute shaders here:
https://www.khronos.org/registry/vulkan/specs/1.0/xhtml/vkspec.html#fundamentals-floatingpoint
Here is an example of particle system managed by compute shaders:
https://github.com/SaschaWillems/Vulkan/tree/master/computeparticles
Below that, there are raytracer and image-processing examples too.
Vulkan has performance advantages over OpenGL. Is the same true for Vulkan vs OpenCL?
Vulkan doesn't need to synchronize with another API; it's about command-buffer synchronization between command queues.
OpenCL needs to synchronize with OpenGL or DirectX (or Vulkan?) before using a shared buffer (CL-GL or DX-CL interop buffers). This has an overhead, and you need to hide it using buffer swapping and pipelining. If no shared buffer exists, it can run concurrently with OpenGL or DirectX on modern hardware.
OpenCL is sadly notorious for being slower than CUDA
It was, but it has matured and now challenges CUDA, especially with much wider hardware support, from gaming GPUs to FPGAs, as of version 2.1. For example, Intel could one day put an FPGA into a Core i3 and expose it as a many-core (soft x86 core IP) CPU model, closing the gap between GPU and CPU performance, or simply let an OpenCL physics implementation use at least 90% of the die area instead of the 10-20% a soft core uses effectively.
At the same price, AMD GPUs compute faster in OpenCL, and at the same compute power, Intel iGPUs draw less power. (Edit: except when algorithms are sensitive to cache performance, where Nvidia has the upper hand.)
Besides, I wrote an SGEMM OpenCL kernel that ran on an HD 7870 at 1.1 TFLOPS, then checked the internet and saw an SGEMM benchmark on a GTX 680 reaching the same performance with a popular CUDA library (and the GTX 680/HD 7870 price ratio was 2). (Edit: Nvidia's cc3.0 doesn't use the L1 cache when reading global arrays, and my kernel was purely local/shared memory plus some registers, "tiled".)
Does SYCL use OpenCL internally or could it use Vulkan? Or does it use neither and instead rely on low-level, vendor-specific APIs?
Here,
https://www.khronos.org/assets/uploads/developers/library/2015-iwocl/Khronos-SYCL-May15.pdf
says
Provides methods for dealing with targets that do not have OpenCL (yet!)
A fallback CPU implementation is debuggable!
so it can fall back to a pure threaded version (similar to Java's Aparapi).
Can access OpenCL objects from SYCL objects
Can construct SYCL objects from OpenCL object
Interop with OpenGL remains in SYCL
- Uses the same structures/types
So it uses OpenCL (maybe not directly, but through an upgraded driver communication?); it develops in parallel with OpenCL but can fall back to threads.
from the smallest OpenCL 1.2 embedded device to the most advanced OpenCL 2.2 accelerators

Do most modern kernels use DMA for network IO with generic Ethernet controllers?

In most modern operating systems like Linux and Windows, is network IO typically accomplished using DMA? This is concerning generic Ethernet controllers; I'm not asking about things that require special drivers (such as many wireless cards, at least in Linux). I imagine the answer is "yes," but I'm interested in any sources (esp. for the Linux kernel), as well as resources providing more general information. Thanks.
I don't know that there really is such a thing as a generic network interface controller, but the nearest thing I know of -- the NE2000 interface specification, implemented by a large number of cheap controllers -- appears to have at least some limited DMA support, and more sophisticated controllers are likely to include more sophisticated features.
The question should be a bit different:
Does a typical network adapter have a DMA controller on board?
After finding the answer to that question (I guess in 99.9% of cases it will be yes), you should ask about the specific driver for each card. I assume that any decent driver will fully utilize the hardware's capabilities (i.e., DMA support in our case), but the question about the OS is not relevant, since no OS can force a driver to implement DMA support. High-level OSes like Windows and Linux provide primitives that make DMA easier to implement, but implementing it is the driver's responsibility.
