OpenCL execute kernel while copying data to CPU - opencl

I am learning OpenCL and I have heard that it is possible to compute on the GPU and copy data at the same time. I have a task like this:
queue.enqueueNDRangeKernel(ker, cl::NullRange, cl::NDRange(1024*1024));
queue.enqueueReadBuffer(buff, true, 0, 1024*1024, &buffer[0]);
Am I able to somehow execute these operations at once, i.e. copy the first results back to the CPU while the kernels with higher indices are still executing?
I would like to do something like:
for(int i=0; i<1024; ++i){
    queue.enqueueNDRangeKernel(ker, cl::NDRange(i*1024), cl::NDRange(1024));
    queue.enqueueReadBuffer(buff, true, i*1024, 1024, &buffer[i*1024]);
}
But with the kernels and reads executing asynchronously. Is something like this possible? Are two queues and kernel completion events the correct solution?
Thank you for your time.

Yes, using separate command queues for upload, compute, and download (and events to synchronize!) is the correct way to overlap copy and compute. On some pro-level hardware you can even overlap upload and download because they have two DMA engines.

If you read through the spec you'll see you can answer your own question. In particular, look at the 'cl_event' parameter to several OpenCL functions.
Also if you look carefully at your own code you'll see you set the blocking parameter to true (which should really be CL_TRUE if you want to block, though maybe that's handled by your queue object?). You'll want to change that and use events instead, and use the necessary clFlush() between getting an event and making use of it in an event list.
Finally, assuming you're executing the kernel multiple times with new data each time, you can queue up multiple instances of the kernel, though this necessitates holding more data in memory on the device, so you may need to be careful you don't run out of memory.
Edit: If you are queuing up multiple instances, you will want to use either CL_QUEUE_OUT_OF_ORDER_EXEC_MODE_ENABLE or multiple command queues (or even both). I find the former easier to use with proper event usage, but it really comes down to personal preference.
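
For illustration, here is a minimal sketch of that overlap using the C++ bindings (CL/cl.hpp and <vector>) and two in-order queues. It reuses ker, buff and buffer from the question, assumes a context ctx and a device dev, and treats offsets and sizes as raw byte counts; error handling is omitted.

// Separate queues: one feeds the GPU, the other drains results, so the
// read of chunk i can overlap with the kernels of later chunks.
cl::CommandQueue computeQueue(ctx, dev);
cl::CommandQueue copyQueue(ctx, dev);

const size_t chunk = 1024;
std::vector<cl::Event> kernelDone(1024);

for (int i = 0; i < 1024; ++i) {
    // Launch the kernel for this chunk and capture its completion event.
    computeQueue.enqueueNDRangeKernel(ker, cl::NDRange(i * chunk),
                                      cl::NDRange(chunk), cl::NullRange,
                                      NULL, &kernelDone[i]);
    computeQueue.flush();   // make sure the kernel is actually submitted

    // Non-blocking read (CL_FALSE) that waits only on this chunk's kernel.
    std::vector<cl::Event> wait(1, kernelDone[i]);
    copyQueue.enqueueReadBuffer(buff, CL_FALSE, i * chunk, chunk,
                                &buffer[i * chunk], &wait, NULL);
}
copyQueue.finish();   // after this, every chunk is in host memory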

Related

How are you supposed to update a texture per frame in Vulkan?

I'm trying to work with 2D in Vulkan along with 3D, so right now I'm testing out updating a texture every frame with whatever 2D content is going on. I've got something of a texture updater working; the problem is that it's very slow and probably not the way it's supposed to be done. Is there a better way of getting this done? The code is based on the https://vulkan-tutorial.com/ code.
https://vulkan-tutorial.com/code/26_depth_buffering.cpp
void UpdateTexture()
{
    vkDeviceWaitIdle(device);
    vkFreeMemory(device, textureImageMemory, nullptr);

    VkBuffer stagingBuffer;
    VkDeviceMemory stagingBufferMemory;
    createBuffer(imageSize, VK_BUFFER_USAGE_TRANSFER_SRC_BIT, VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT, stagingBuffer, stagingBufferMemory);

    void* data;
    vkMapMemory(device, stagingBufferMemory, 0, imageSize, 0, &data);
    memcpy(data, pixel2.data(), static_cast<size_t>(imageSize));
    vkUnmapMemory(device, stagingBufferMemory);

    createImage(texWidth, texHeight, VK_FORMAT_R8G8B8A8_SRGB, VK_IMAGE_TILING_OPTIMAL, VK_IMAGE_USAGE_TRANSFER_DST_BIT | VK_IMAGE_USAGE_SAMPLED_BIT, VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT, textureImage, textureImageMemory);
    transitionImageLayout(textureImage, VK_FORMAT_R8G8B8A8_SRGB, VK_IMAGE_LAYOUT_UNDEFINED, VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL);
    copyBufferToImage(stagingBuffer, textureImage, static_cast<uint32_t>(texWidth), static_cast<uint32_t>(texHeight));
    transitionImageLayout(textureImage, VK_FORMAT_R8G8B8A8_SRGB, VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL, VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL);

    vkDestroyBuffer(device, stagingBuffer, nullptr);
    vkFreeMemory(device, stagingBufferMemory, nullptr);

    createTextureImageView();
    createDescriptorPool();
    createDescriptorSets();
    createCommandBuffers();
}
This code looks like a direct translation of some OpenGL code, and not particularly good/modern OpenGL code at that.
There's a lot wrong in this code, but most of it boils down to over-synchronization.
First, you should always view any call to vkDeviceWaitIdle as the wrong thing to do. The only exception would be when you are preparing to destroy the VkDevice itself. There is no other reason to do a full CPU/GPU sync like that.
Presumably, this synchronization exists so that you can be sure the GPU is finished using the image before modifying it. This is the wrong thing to do. You should instead employ multiple-buffering. That is, you should have two images that you use. One is currently being used in a rendering process, while the other is being transferred into.
Instead of doing a full device sync, you instead synchronize with the batch you sent two frames ago. That is, if you're wanting to transfer data for use by frame 10, then you must first do a fence-sync operation with the batch you sent in frame 8. Frame 9 is still being processed, but frame 8 is probably done by now. So the synchronization shouldn't hurt too much.
Second, never allocate memory in the middle of an operation like this. Memory gets allocated early in your application, and you leave it allocated until it's time to destroy your application. If you need a staging buffer, then keep it around and reuse it in subsequent frames. Make sure to allocate sufficient storage up-front.
Whatever your createBuffer call is doing, it seems very much like a bad idea. Vulkan is not OpenGL; Vulkan separated memory from buffers/textures that use it for a reason. Creating APIs that hide this separation basically throws all of that away.
Similarly, never unmap memory, unless you're about to destroy that memory object. There's no problem in Vulkan (or OpenGL) with leaving a piece of memory mapped indefinitely. Just map the entire memory's range and leave it mapped. Indeed, you could just pass the mapped pointer directly to your image loader, depending on how the memory gets written by the image loading code (if it tries to read data from this pointer, there could be trouble).
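
To make the persistent-staging idea more concrete, here is a rough sketch, not drop-in code for the question; stagingMapped, frameFences, MAX_FRAMES_IN_FLIGHT and frameIndex are illustrative names.

// Done once at startup: map the staging allocation and keep it mapped.
void* stagingMapped = nullptr;

void InitStaging()
{
    vkMapMemory(device, stagingBufferMemory, 0, VK_WHOLE_SIZE, 0, &stagingMapped);
}

// Per frame: wait only on the fence of the submission that last used this
// frame's slot (with MAX_FRAMES_IN_FLIGHT = 2 that is the batch from two
// frames ago), then overwrite the staging memory. No vkDeviceWaitIdle and
// no per-frame allocation.
void UpdateTexture(uint32_t frameIndex)
{
    vkWaitForFences(device, 1, &frameFences[frameIndex], VK_TRUE, UINT64_MAX);
    vkResetFences(device, 1, &frameFences[frameIndex]);   // fences created signaled

    memcpy(stagingMapped, pixel2.data(), static_cast<size_t>(imageSize));

    // Record (or reuse) a command buffer that does transitionImageLayout ->
    // copyBufferToImage -> transitionImageLayout for this frame's texture
    // image, and submit it with frameFences[frameIndex] as its fence.
}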
Lastly, the commands doing the transfer need to be synchronized with the commands that consume the image. How this happens depends on which queues are being used to do the transfer.
And of course, if you want optimal performance, you may want to check to see if your implementation can read from linear images in your shader. If it can, then you may not need staging at all; you can just write the data directly to the memory in Vulkan's image format, and use it directly.
Employing all of the above is going to add a lot of complexity to your application. But that's how it's supposed to work.
A naive way consists of using the CPU to compute the update, depending on time or data, and then uploading the new data for the shader, such as an MVP transformation matrix. But this is inefficient: lots of syncing, too low a refresh rate, and the CPU is overloaded in a loop.
So people recommend using many buffers, sometimes mentioning old drivers. If someone can clarify that, it would be nice. I have a naive and probably wrong guess: if they knew the frame rate exactly, they could calculate the time for each frame and dispatch several frames in advance. But that confuses me, because the frame rate is dynamic, especially for new screens with FreeSync, which have dynamic refresh rates.
I have thought of a third possibility. One can use the clock directly in the shader. GL_EXT_shader_realtime_clock provides clockRealtimeEXT. It has no defined unit and will wrap when it exceeds the maximum value, but it is said to be "globally coherent by all invocations on the GPU". During initialization, you can measure its rate using a uniform buffer and then assume the rate stays constant; you also have to manage the wrapping.
Then, if you can write your shaders as a function of time, for example for a translation, that would be efficient. You just need the initial data. Remember that one must avoid if conditions in shaders.

Is it possible to write with several processors to the same file, at the end of the file, in an ordered way?

I have 2 processors (this is an example), and I want these 2 processors to write to a file. I want them to write at the end of the file, but not in a mixed pattern like this:
[file content]
proc0
proc1
proc0
proc1
proc0
proc1
(and so on..)
I'd like to make them write following this kind of pattern:
[file content]
proc0
proc0
proc0
proc1
proc1
proc1
(and so on..)
Is it possible? If so, what's the setting to use?
The sequence in which your processes have outputs ready to report is, essentially, unknowable in advance. Even repeated runs of exactly the same MPI program will show differences in the ordering of outputs. So something, somewhere, is going to have to impose an ordering on the writes to the file.
A very common pattern, the one Wesley has already mentioned, is to have all processes send their outputs to one process, often process 0, and let it deal with the writing to file. This master-writer could sort the outputs before writing, but this creates a couple of problems: allocating space to store output before writing it and, more difficult to deal with, determining when a collection of output records can be sorted and written to file and the output buffers reused. How long does the master-writer wait, and how does it know whether a process is still working?
So it's common to have the master-writer write outputs as it gets them and for another program to order the output file as desired after the parallel program has finished. You could tack this on to your parallel program as a step after mpi_finalize or you could use a completely separate program (such as sort on a Linux machine). Of course, for this to work each output record has to contain some sequencing information on which to sort.
Another common pattern is to only have one process which does any writing at all, that is, none of the other processes do any output at all. This completely avoids the non-determinism of the sequencing of the writing.
Another pattern, less common partly because it is more difficult to implement and partly because it depends on underlying mechanisms which are not always available, is to use MPI-IO. With MPI-IO, multiple processes can write to different parts of a file as if simultaneously. To actually write simultaneously, the program needs to be executing on hardware, a network and an operating system which support parallel I/O. It can be tricky to implement this even with the right platform, especially when the volume of output from processes is uncertain.
In my experience here on SO people asking question such as yours are probably at too early a stage in their MPI experience to be tackling parallel i/o, even if they have access to the necessary hardware.
I disagree with High Performance Mark. MPI-IO isn't so tricky in 2014 (as long as you have access to any file system besides NFS -- install PVFS if you need a cheap, easy parallel file system).
If you know how much data each process has, you can use MPI_SCAN to efficiently compute how much data was written by "earlier" processes, then use MPI_FILE_WRITE_AT_ALL to carry out the I/O efficiently. Here's one way you might do this:
long long incr, new_offset;
incr = count * datatype_size;
MPI_Scan(&incr, &new_offset, 1, MPI_LONG_LONG_INT,
         MPI_SUM, MPI_COMM_WORLD);
new_offset -= incr;   /* MPI_Scan is inclusive, so drop this rank's own size */
MPI_File_write_at_all(mpi_fh, (MPI_Offset)new_offset, buf, count,
                      datatype, &status);
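For completeness, here is a small self-contained sketch of the same idea that also appends after whatever is already in the file (the file name and per-rank data are made up for the example):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Each rank produces its own block of output; here just a few lines of text. */
    char buf[64];
    int count = snprintf(buf, sizeof(buf), "proc%d\nproc%d\nproc%d\n", rank, rank, rank);

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "out.txt",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* The current file size is where the appended block starts. */
    MPI_Offset end;
    MPI_File_get_size(fh, &end);

    /* Offset of this rank's chunk = end of file + sizes of all lower ranks. */
    long long incr = count, my_offset = 0;
    MPI_Scan(&incr, &my_offset, 1, MPI_LONG_LONG_INT, MPI_SUM, MPI_COMM_WORLD);
    my_offset -= incr;   /* drop this rank's own contribution */

    MPI_File_write_at_all(fh, end + (MPI_Offset)my_offset, buf, count,
                          MPI_CHAR, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}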
The answer to your question is no. If you do things that way, you'll end up with jumbled output from all over the place.
However, you can get the same thing by sending your output to a single processor and having it do all of the writing itself. For example, at the end of your application, just have everything sent to rank 0 and have rank 0 write it all to a file.
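A minimal sketch of that funnel-to-rank-0 pattern (the file name and message sizes are made up; receiving in ascending rank order is what imposes the ordering):

#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    char line[64];
    snprintf(line, sizeof(line), "proc%d\nproc%d\nproc%d\n", rank, rank, rank);

    if (rank == 0) {
        FILE *f = fopen("out.txt", "a");
        fputs(line, f);                       /* rank 0's own output first */
        for (int src = 1; src < size; ++src) {
            MPI_Recv(line, sizeof(line), MPI_CHAR, src, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            fputs(line, f);
        }
        fclose(f);
    } else {
        MPI_Send(line, (int)strlen(line) + 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}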

What is the correct way in OpenCL to concatenate results of work-groups?

Suppose that in an OpenCL kernel, each work-group outputs unknown amount of data. Is there any efficient way to align that output on the global memory so that there are no holes in it?
One method might be to use atomic_add() to acquire an index into an array, once you know how large a chunk your workgroup requires. In OpenCL 1.0, this type of operation required an extension (cl_khr_global_int32_base_atomics). These operations may be very slow, possibly going as far as locking the whole global memory bus (whose latency we tend to avoid like the plague), so you probably don't want to use it on a per item basis. A downside to this scheme is that you don't know the order your results will be stored, as workgroups can (and will) execute out of order.
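As a device-side illustration of that reservation pattern, here is a sketch of a kernel that compacts positive values using one global atomic per work-group (the kernel name, arguments and filter are made up for the example; the host must zero the counter buffer before each launch):

// Each work-group counts its results in local memory, reserves a contiguous
// slot in packed_out with a single atomic_add on the global counter, then
// writes its results into that slot.
__kernel void pack_positive(__global const float *input,
                            __global float *packed_out,
                            volatile __global uint *counter)
{
    __local uint local_count;   // results produced by this work-group
    __local uint group_base;    // start of this group's slot in packed_out

    if (get_local_id(0) == 0)
        local_count = 0;
    barrier(CLK_LOCAL_MEM_FENCE);

    float v = input[get_global_id(0)];
    int keep = (v > 0.0f);
    uint my_slot = 0;
    if (keep)
        my_slot = atomic_inc(&local_count);   // per-item index inside the group
    barrier(CLK_LOCAL_MEM_FENCE);

    if (get_local_id(0) == 0)
        group_base = atomic_add(counter, local_count);   // one global atomic per group
    barrier(CLK_LOCAL_MEM_FENCE);

    if (keep)
        packed_out[group_base + my_slot] = v;
}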
Another approach is to simply not store the data contiguously, but allocate enough for each workgroup. Once they finish, you can run a second batch of work to rearrange the data as required (most likely into a second buffer, as memmove-like tricks don't parallelize easily). If you're passing the data back to the CPU, feel free to just run all clEnqueueReadBuffer calls in one batch before waiting for them to complete. A command queue with CL_QUEUE_OUT_OF_ORDER_EXEC_MODE_ENABLE might help a bit; you can use the cl_event arguments to specify dependencies when they occur.

Can OpenCL chain multiple passes without returning to the CPU?

I want to auto scale some data. So, I want to pass through all the data and find the maximum extents of the data. Then I want to go through the data, do calculations, and send the results to OpenGL for rendering. Is this type of multipass thing possible in OpenCL? Or does the CPU have to direct the "find extents" calc, get the results, and then direct the other calc with that?
It sounds like you would need two OpenCL kernels, one for calculating the min and max and the other to actually scale the data. Using OpenCL command queues and events you can queue up these two kernels in order and store the results from the first in global memory, reading those results in the second kernel. The semantics of OpenCL command queues and events (assuming you don't have out-of-order execution enabled) will ensure that one completes before the other without any interaction from your host application (see clEnqueueNDRangeKernel).
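For example, with the C++ bindings the whole thing can be enqueued up front; the in-order queue guarantees the second kernel sees the extents written by the first. The kernel names and arguments here are hypothetical, and the min/max kernel would of course need a proper reduction inside.

// Assumes a built cl::Program "program", a context ctx, a device dev and N data points.
cl::CommandQueue queue(ctx, dev);

cl::Buffer data(ctx, CL_MEM_READ_WRITE, N * sizeof(float));
cl::Buffer extents(ctx, CL_MEM_READ_WRITE, 2 * sizeof(float));   // {min, max}

cl::Kernel findExtents(program, "find_extents");
findExtents.setArg(0, data);
findExtents.setArg(1, extents);

cl::Kernel scaleData(program, "scale_data");
scaleData.setArg(0, data);
scaleData.setArg(1, extents);   // second pass reads the first pass's result

queue.enqueueNDRangeKernel(findExtents, cl::NullRange, cl::NDRange(N));
queue.enqueueNDRangeKernel(scaleData, cl::NullRange, cl::NDRange(N));
// No host round trip needed; only call queue.finish() if the host must wait.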

Multiple OpenCL Kernels

I just wanted to ask, if somebody can give me a heads up on what to pay attention to when using several simple kernels after each other.
Can I use the same CommandQueue? Can I just call clCreateProgramWithSource several times to get a different cl_program each time? What did I forget?
Thanks!
You can either create and compile several programs (and create kernel objects from those), or you can put all kernels into the same program (clCreateProgramWithSource takes several strings, after all) and create all your kernels from that one. Either should work fine using the same CommandQueue. Using more than one CommandQueue to execute kernels which should execute serially on the same device is not a good idea anyway, because in that case you have to manually wait for event completion instead of asynchronously enqueuing all kernels and then waiting on the result (at least some operations should execute in parallel on device and host, so waiting at the last possible moment is generally faster and easier).
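A short sketch with the C API the question mentions, assuming context, device and queue already exist and kernelSourceA/kernelSourceB are your (hypothetical) source strings:

// One program built from several source strings; several kernels from it.
const char *sources[] = { kernelSourceA, kernelSourceB };
cl_int err;

cl_program program = clCreateProgramWithSource(context, 2, sources, NULL, &err);
clBuildProgram(program, 1, &device, NULL, NULL, NULL);

cl_kernel kernA = clCreateKernel(program, "kernel_a", &err);
cl_kernel kernB = clCreateKernel(program, "kernel_b", &err);

// Same in-order queue: kernel_b will not start before kernel_a has finished.
size_t global = 1024;
clEnqueueNDRangeKernel(queue, kernA, 1, NULL, &global, NULL, 0, NULL, NULL);
clEnqueueNDRangeKernel(queue, kernB, 1, NULL, &global, NULL, 0, NULL, NULL);
clFinish(queue);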
