Using multiple programs in OpenCL

I am writing a piece of code that utilizes the GPU using OpenCL. I have succeeded in making a kernel that performs vector addition (in a function called VecAdd), so I know it is working. Suppose I want to make a second kernel for vector subtraction, VecSub. How should I go about that? Or, more specifically: can I use the same context for both the VecAdd and VecSub kernels?

Hi @debruss, welcome to Stack Overflow!
Yes, you certainly can run multiple Kernels in the same Context.
You can define the Kernels in the same Program or in multiple Programs. You could even run them simultaneously in two different Command Queues, or in a single Command Queue configured for out-of-order execution.
There is an example (in Rust) of defining and running two Kernels in a Program here: opencl2_kernel_test.rs.
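To make this concrete, here is a minimal host-side sketch in C (error checking omitted, buffer setup and kernel arguments elided; the kernel bodies are placeholders rather than your actual code): one Context, one Program that contains both Kernels, two Kernel objects, and a single Command Queue.

#define CL_TARGET_OPENCL_VERSION 200
#include <CL/cl.h>

const char *src =
    "__kernel void VecAdd(__global const float *a, __global const float *b, __global float *c) {\n"
    "    size_t i = get_global_id(0); c[i] = a[i] + b[i];\n"
    "}\n"
    "__kernel void VecSub(__global const float *a, __global const float *b, __global float *c) {\n"
    "    size_t i = get_global_id(0); c[i] = a[i] - b[i];\n"
    "}\n";

int main(void) {
    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    /* One context shared by everything below. */
    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);

    /* One program, built once, provides both kernels. */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel vec_add = clCreateKernel(prog, "VecAdd", NULL);
    cl_kernel vec_sub = clCreateKernel(prog, "VecSub", NULL);

    /* Both kernels can be enqueued on this queue, or on two queues in the same
       context (use clCreateCommandQueue on OpenCL 1.x platforms). */
    cl_command_queue queue = clCreateCommandQueueWithProperties(ctx, device, NULL, NULL);

    /* ... create buffers with clCreateBuffer(ctx, ...), set arguments with
       clSetKernelArg(), then clEnqueueNDRangeKernel(queue, vec_add, ...) and
       clEnqueueNDRangeKernel(queue, vec_sub, ...) against the same buffers ... */

    clReleaseKernel(vec_add);
    clReleaseKernel(vec_sub);
    clReleaseProgram(prog);
    clReleaseCommandQueue(queue);
    clReleaseContext(ctx);
    return 0;
}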

Related

Is it possible to run arbitrary Python or R scripts on a "Spark with Yarn" cluster?

I'm trying to set up a cluster for some big data work. I'm not sure whether a 'Spark with YARN' cluster can run Python or R scripts.
If it is possible, what is the simplest way to run those scripts?
Thanks.
You should look into Hadoop Streaming which allows you to run Hadoop jobs created using an arbitrary programming language. You simply need to provide a pair of executables (e.g. Python scripts) - one for the map phase (going from input data to a set of intermediate key-value pairs), and one for the reduce phase (going from those intermediate key-value pairs to the output of your job).
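As an illustration, here is a minimal word-count-style pair of scripts in R (the file names and the word-count logic are purely illustrative); Hadoop Streaming feeds raw input lines to the mapper on stdin and feeds the key-sorted mapper output to the reducer on stdin:

#!/usr/bin/env Rscript
# mapper.R -- read raw lines from stdin, emit tab-separated key/value pairs
stdin_con <- file("stdin", open = "r")
while (length(line <- readLines(stdin_con, n = 1)) > 0) {
  for (word in strsplit(tolower(line), "[^a-z]+")[[1]]) {
    if (nzchar(word)) cat(word, "\t1\n", sep = "")
  }
}
close(stdin_con)

#!/usr/bin/env Rscript
# reducer.R -- input arrives sorted by key, so sum each run of identical keys
stdin_con <- file("stdin", open = "r")
current_key <- NULL
current_sum <- 0L
while (length(line <- readLines(stdin_con, n = 1)) > 0) {
  parts <- strsplit(line, "\t", fixed = TRUE)[[1]]
  key <- parts[1]
  val <- as.integer(parts[2])
  if (!is.null(current_key) && key != current_key) {
    cat(current_key, "\t", current_sum, "\n", sep = "")
    current_sum <- 0L
  }
  current_key <- key
  current_sum <- current_sum + val
}
if (!is.null(current_key)) cat(current_key, "\t", current_sum, "\n", sep = "")
close(stdin_con)

Make both scripts executable and submit them with the hadoop-streaming jar that ships with your distribution, passing them via -files and naming them as -mapper and -reducer, together with the -input and -output paths on HDFS.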

How to use specific GPU device for learning in keras R?

I'm Da-Bin.
I want to use a specific GPU device for training with Keras in R, not Python.
When I tried to train two programs at the same time, one program worked fine but the other did not.
It seems the second program waits until the first one has finished training.
I have two 1080 Ti GPUs, and I want to use a specific device for each program.
But the multi_gpu_model function is meant for using two or more devices, right?
Can I use the multi_gpu_model function for training on just one device?
How can I find the device names to pass to the "gpus=" parameter?
And how can I make each program use a specific device?
By default the GPU names are "/gpu:0", "/gpu:1", ..., "/gpu:(n-1)" if you have n GPUs.
You can pass a list of GPU names instead of an integer for the gpus parameter of the multi_gpu_model function.
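For example (a minimal sketch assuming the keras and tensorflow R packages and two CUDA GPUs; the tiny model is only a placeholder), you can either hide one card from each R process before TensorFlow initialises, or pin the model to a device by name:

# Option 1: give each R process its own card. Set this before TensorFlow is
# initialised; the remaining visible GPU then appears as "/gpu:0" in that process.
Sys.setenv(CUDA_VISIBLE_DEVICES = "0")   # use "1" in the second program

library(keras)
library(tensorflow)

# Option 2: pin the model explicitly to one device by name.
with(tf$device("/gpu:0"), {
  model <- keras_model_sequential() %>%
    layer_dense(units = 32, activation = "relu", input_shape = c(10)) %>%
    layer_dense(units = 1)
})

model %>% compile(optimizer = "adam", loss = "mse")
# model %>% fit(x_train, y_train, epochs = 5)   # x_train / y_train: your own data

With one process per GPU you do not need multi_gpu_model at all; it is only for splitting a single model across two or more cards.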

Sharing a data.table in memory for parallel computing

Following the post about data.table and parallel computing, I'm trying to find a way to parallelize an operation on a data.table.
I have a data.table with 4 million rows of 14 variables and would like to share it in common memory, so that operations on it can be parallelized with the "parallel" package's parLapply without having to copy the table to each node in the cluster (which is what parLapply does). At the moment the cost of moving the data.table around is bigger than the benefit of the parallel computation.
I found the "bigmemory" package as an answer for sharing memory, but it doesn't maintain the "data.table" structure of the data. So does anyone know a way to:
1) put the data.table in shared memory
2) maintain the "data.table"-structure of the data by doing so
3) use parallel processing on this data.table?
Thanks in advance!
Old question, but here is an answer since nobody else has answered and it might be helpful. I assume the problem you are having is that you are on Windows and have to use the PSOCK cluster type. Unfortunately, on Windows this means you have to copy the data to each node. However, there is a workaround: get hold of Docker and spin up an Rserve instance in a Docker VM (e.g. stevenpollack/docker-rserve). Since this will be Linux-based, you can create a FORK cluster on the Docker VM. Then, using your native R instance, you can send only one copy of the data over to the Rserve instance (check out the RSclient library), do your parallelized job on the VM, and collect the results back into your native R session.
The "complete" solution, shared read and write access from multiple processes, and their problems is discussed here: https://github.com/Rdatatable/data.table/issues/3104
As rookie mentioned, if you fork an R process (with parallel::makeCluster(type = "FORK") or future::plan(multicore); note that this does not work reliably in RStudio), the operating system will reuse memory pages that are not modified by the child process. So your workers will share the same memory as long as they don't modify it (copy-on-write). But this only works if all parallel workers are on the same machine, and fork() has its own problems (although this might be going too far if you simply want to run some parallel analysis).
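A minimal sketch of that copy-on-write route (Linux/macOS only; the table and the grouping column are made up for illustration):

library(data.table)
library(parallel)

# Large table created once in the parent process.
dt <- data.table(grp = sample(1:14, 4e6, replace = TRUE), x = rnorm(4e6))

# FORK workers inherit dt through copy-on-write instead of having it serialised
# and shipped to every node, as a PSOCK cluster would require.
cl <- makeCluster(4, type = "FORK")
res <- parLapply(cl, 1:14, function(g) {
  # Read-only access keeps the memory pages shared; writing to dt would copy them.
  dt[grp == g, mean(x)]
})
stopCluster(cl)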
Meanwhile, you might find the packages feather and fst interesting. feather provides a file format that can be read by both R and Python, and if I understood the docs correctly, feather::feather() gives you a file-backed, read-only data frame, albeit not a data.table. This allows moving data between those two languages.
fst employs the Zstandard compression algorithm to achieve very fast reading and writing speeds to disk. You can read in part of a fst file using the fst() function (instead of read_fst()), so every worker could read just the part of the table that it needs. Concurrent writing to a fst file is not possible; you would need to save every result in its own file and concatenate them afterwards.
Alternatively, for concurrent reading and writing, you could switch to a database, albeit that is slower than data.table. See SO/SQLite concurrent access
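And a sketch of the fst route (the file name and chunk boundaries are arbitrary); concurrent reads of the same file are fine, while each worker should write its results to its own file:

library(data.table)
library(fst)

dt <- data.table(grp = sample(1:14, 4e6, replace = TRUE), x = rnorm(4e6))

# Write the table to disk once; fst files support random access by row range.
write_fst(dt, "big_table.fst")

# A file-backed handle: nothing is loaded until rows are requested.
ft <- fst("big_table.fst")

# Each worker materialises only the slice it needs, e.g. the first million rows.
chunk <- as.data.table(ft[1:1000000, ])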

Spawn processes that run in parallel in R

I am writing a script that needs to run continuously, storing information in a MySQL database.
However, at some point of the day I would like to produce some summaries of the data being collected, but writing this into the same script would stop the data collection while the summaries are computed. Here's a sketch of the problem:
while (1==1) {
  # get data and store it on the relational database
  # At some point of the day (or time interval) do some summaries
  if (time == certain_time) {
    source("analyze_data.R")
  }
}
The problem is that I would like the data collection not to stop, with the summaries being executed by another core of the computer.
I have seen references to the packages parallel and multicore, but my impression is that they are useful for repetitive tasks applied over vectors or lists.
You can use parallel to fork a process, but you are right that the program will wait for all the forked processes to come back together before proceeding (that is essentially the use case of parallel).
Why not run two separate R programs, one that collects the data and one that reads it to produce the summaries? Then you simply run one continuously in the background and the other at set times. The problem then becomes one of getting the data out of the continuously running collection program and into the summary program.
Do the logic outside of R:
Write two scripts: one with the while loop storing data, the other with the summary check. Run the while loop in one process and just leave it running.
Meanwhile, run your other (checking) script on demand to crunch the data, or put it in a cron job; see the sketch below.
There are robust tools outside of R to handle this kind of thing; why do it inside R?
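A minimal sketch of that two-script setup (connection details, the table and the fake measurement are placeholders; assumes the DBI and RMySQL packages); because the database sits between them, the two processes never block each other:

# collector.R -- started once and left running, e.g. with: Rscript collector.R
library(DBI)
con <- dbConnect(RMySQL::MySQL(), dbname = "mydb", host = "localhost",
                 user = "user", password = "secret")
repeat {
  obs <- data.frame(ts = Sys.time(), value = rnorm(1))  # stand-in for the real measurement
  dbWriteTable(con, "observations", obs, append = TRUE, row.names = FALSE)
  Sys.sleep(60)                                         # collect once a minute
}

# summarise.R -- run on demand, or from cron / Task Scheduler, in a separate process
library(DBI)
con <- dbConnect(RMySQL::MySQL(), dbname = "mydb", host = "localhost",
                 user = "user", password = "secret")
daily <- dbGetQuery(con, "SELECT DATE(ts) AS day, AVG(value) AS mean_value
                          FROM observations GROUP BY DATE(ts)")
dbDisconnect(con)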

Running two commands in parallel in R on Windows

I have tried reading around on the net about using parallel computing in R.
My problem is that I want to utilize all the cores on my PC, and after reading different resources I am not sure I need packages like multicore for my purposes, which unfortunately does not work on Windows.
Can I simply split my very large data sets into several sub-datasets, run the same function on each, and have that function run on different cores? They don't need to talk to each other, and I just need the output from each. Is that really impossible to do on Windows?
Suppose I have a function called timeanalysis() and two datasets, 1 and 2. Can't I call the same function twice and tell it to use a different core each time?
timeanalysis(1)
timeanalysis(2)
I have found the snowfall package to be the easiest to use on Windows for parallel tasks.
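For instance, a minimal sketch assuming your timeanalysis() function and the two datasets already exist in the workspace:

library(snowfall)

# Start two worker R processes; this uses sockets, so it works on Windows,
# where fork-based parallelism (multicore) is unavailable.
sfInit(parallel = TRUE, cpus = 2)

# Make the analysis function available on the workers.
sfExport("timeanalysis")

# Each element of the list is handled by a different worker, so the two
# datasets are analysed side by side on separate cores.
results <- sfLapply(list(dataset1, dataset2), timeanalysis)

sfStop()

The same pattern also works with base R's parallel package (makeCluster(2) plus parLapply()) if you prefer not to add another dependency.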
