How to use a specific GPU device for learning in keras R?

I'm Da-Bin.
I want to use a specific GPU device for training with keras in R, not Python.
When I tried running two training programs simultaneously, one program worked fine but the other did not.
It seems the second program waits until the first finishes training.
I have two GTX 1080 Ti GPUs, and I want to use a specific device for each program.
But the multi_gpu_model function is meant for using two or more devices at once, right?
Can I use the multi_gpu_model function to train on a single device?
How can I find the device name to pass to the "gpus=" parameter?
And how can I assign a specific device to each program?

By default the GPU device names are "/gpu:0", "/gpu:1", ..., "/gpu:(n-1)" if you have n GPUs.
For the gpus parameter of the multi_gpu_model function, you can pass a list of specific GPU IDs instead of an integer count.
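For example, a sketch in R keras (the model here is a made-up placeholder; note that per the Keras docs multi_gpu_model needs at least two replicas, so it cannot pin a model to a single card):

library(keras)

# Placeholder model for illustration
model <- keras_model_sequential() %>%
  layer_dense(units = 32, activation = "relu", input_shape = c(10)) %>%
  layer_dense(units = 1)

# Replicate the model on both cards; gpus can be an integer count
# or a list of specific GPU IDs, e.g. list(0L, 1L)
parallel_model <- multi_gpu_model(model, gpus = 2)
parallel_model %>% compile(loss = "mse", optimizer = "adam")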

Related

Using multiple programs in OpenCL

I am writing a piece of code that utilizes the GPU using OpenCL. I succeeded in making a kernel that performs vector addition (in a function called VecAdd), so I know it is working. Suppose I want to make a second kernel for vector subtraction, VecSub. How should I go about that? Or more specifically: can I use the same context for both the VecAdd and VecSub functions?
Hi @debruss, welcome to StackOverflow!
Yes, you certainly can run multiple Kernels in the same Context.
You can define the Kernels in the same or multiple Programs. You could even run them simultaneously in two different Command Queues, or in a single Command Queue configured for out-of-order execution.
There is an example (in Rust) of defining and running two Kernels in a Program here: opencl2_kernel_test.rs.
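Since this page is otherwise about R, here is a sketch of the same idea from R with the OpenCL package. This is hedged: the function names follow that package's newer API (oclPlatforms(), oclDevices(), oclContext(), oclSimpleKernel(), oclRun()), the kernel-argument convention (output buffer first, then an element count supplied by oclRun()) follows the package documentation and may differ across versions, and the kernel bodies are illustrative.

library(OpenCL)

p   <- oclPlatforms()
d   <- oclDevices(p[[1]])
ctx <- oclContext(d[[1]])   # one context, shared by both kernels

# Two kernels built against the same context
vec.add <- oclSimpleKernel(ctx, "vec_add", "
  __kernel void vec_add(__global numeric* out, const unsigned int n,
                        __global numeric* a, __global numeric* b) {
    size_t i = get_global_id(0);
    if (i < n) out[i] = a[i] + b[i];
  }", "numeric")

vec.sub <- oclSimpleKernel(ctx, "vec_sub", "
  __kernel void vec_sub(__global numeric* out, const unsigned int n,
                        __global numeric* a, __global numeric* b) {
    size_t i = get_global_id(0);
    if (i < n) out[i] = a[i] - b[i];
  }", "numeric")

a <- as.clBuffer(runif(1000), ctx)
b <- as.clBuffer(runif(1000), ctx)
sums  <- as.numeric(oclRun(vec.add, 1000, a, b))
diffs <- as.numeric(oclRun(vec.sub, 1000, a, b))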

How to use a specific GPU device for learning in keras R?

I have two GPU devices (GTX 1080 Ti).
I work in RStudio with keras.
I want to use a specific GPU device for each script.
For this, some people recommend using with(tf$device("/gpu:1"), ...).
But I am using R, not Python, and I can't find the tf object or the device function.
Another recommendation is to use os.environ["CUDA_VISIBLE_DEVICES"] = "1".
But I cannot find the os.environ function either.
Please, anybody who can help: tell me in more detail what code solves this problem.
Thank you.
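For reference, a sketch of the R equivalents of those two Python suggestions, assuming the tensorflow and keras R packages are installed (the placeholder model is made up):

# Equivalent of os.environ["CUDA_VISIBLE_DEVICES"] = "1": set the
# variable with Sys.setenv() before keras/tensorflow initialize, so
# this R session only sees the second card
Sys.setenv(CUDA_VISIBLE_DEVICES = "1")   # use "0" in the other script
library(keras)
library(tensorflow)

# Equivalent of Python's "with tf.device(...)": the tensorflow R
# package exposes the same API through the tf object
with(tf$device("/gpu:1"), {
  model <- keras_model_sequential() %>%   # placeholder model
    layer_dense(units = 32, activation = "relu", input_shape = c(10)) %>%
    layer_dense(units = 1)
})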

Read C++ binary file in R

Can I read a binary file written by C++ in R?
I have been using Rcpp in my R package, and the simulations typically generate a large amount of data. I am planning to write the output to binary files in C++ and then read those back in R. This works if I write text files, but I didn't find a solution for binary files. The program sometimes crashes abruptly if I pass data using many NumericVectors (I have yet to fully understand the memory management in Rcpp).
Can this approach enable me to share larger datasets between C++ and R compared to what is possible by passing vectors? In C++, the maximum vector size is limited by RAM and the address bus (maybe?), but I think R is able to load larger vectors using swap. Am I correct or am I misunderstanding the concepts?
Yes you can. But it's "complicated".
You are embarking on a topic called binary serialization. There is a lot of work out there. In essence you are somewhere on the continuum between:
minimal: open a file, write out N binary items; then on the other side read N binary items back. We did something similar at work years ago, where we wrote some metadata with <rows,cols,version> and then a binary blob of rows * cols doubles to attach to a matrix (see the sketch below).
maximal: use a fully descriptive meta language like Protocol Buffers or MessagePack to describe the binary content, write it in C++ (using the appropriate library) and read it back in R (using the corresponding packages---I am involved with one of each: RProtoBuf and RcppMsgPack).
And a lot in between. If you really only need to communicate between C(++) and R, you could try the RData / rds format. There is one library, librdata, and I experimented with it (and filed some bug reports and made some pull requests). I might start there.
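To make the minimal end of that continuum concrete, here is a sketch entirely in R (on the C++ side the same bytes would be written with std::ofstream::write); the file name and the <rows,cols> header layout are illustrative:

# Write: a header of two 32-bit ints <rows, cols>, then rows*cols
# doubles in column-major order
m <- matrix(rnorm(6), nrow = 3, ncol = 2)
con <- file("sim_output.bin", "wb")
writeBin(dim(m), con, size = 4)         # header: rows, cols
writeBin(as.vector(m), con, size = 8)   # payload: doubles
close(con)

# Read it back and reshape into a matrix
con  <- file("sim_output.bin", "rb")
dims <- readBin(con, what = "integer", n = 2, size = 4)
m2   <- matrix(readBin(con, what = "double", n = prod(dims), size = 8),
               nrow = dims[1], ncol = dims[2])
close(con)
stopifnot(all.equal(m, m2))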
So in short: do some research, figure out what to do and then do it :)
PS If you call C++ via Rcpp from R then you may not need files at all. We can pass large objects back and forth -- the limit may be your RAM.

Simple OpenCL example in R with R code?

Is it possible to use OpenCL but with R code? I still don't have a good understanding of OpenCL and GPU programming. For example, suppose I have the following R code:
aaa <- function(x) mean(rnorm(1000000))
sapply(1:10, aaa)
I like that I can use mclapply as a drop-in replacement for lapply. Is there a way to do that with OpenCL? Or to use OpenCL as a backend for mclapply? I'm guessing this is not possible, because I have not been able to find an example, so I have two questions:
Is this possible, and if so, can you give a complete example using my function aaa above?
If this is not possible, can you please explain why? I do not know much about GPU programming. I view GPUs just like CPUs, so why can't I run R code in parallel on them?
I would start by looking at the High Performance Computing CRAN task view, in particular the Parallel computing: GPUs section.
There are a number of packages listed there which take advantage of GPGPU for specific tasks that lend themselves to massive parallelisation (e.g. gputools, HiPLARM). Most of these use NVIDIA's own CUDA rather than OpenCL.
There is also a more generic OpenCL package, but it requires you to learn how to write OpenCL code yourself, and merely provides an interface to that code from R.
It isn't possible because GPUs work differently than CPUs, which means you can't give them the same instructions that you'd give a CPU.
Nvidia puts on a good show with this video describing the difference between CPU and GPU processing. Essentially, the difference is that GPUs typically have orders of magnitude more cores than CPUs.
Your example is one that can be extended to GPU code because it is highly parallel.
Here's some code to create random numbers on the GPU (although they aren't normally distributed): http://cas.ee.ic.ac.uk/people/dt10/research/rngs-gpu-mwc64x.html
Once you create the random numbers, you could break them into chunks, sum each of the chunks in parallel, and then add the sums of the chunks to get the overall sum (see: Is it possible to run the sum computation in parallel in OpenCL?).
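The two-stage chunked reduction is easy to picture in plain R; the GPU version would assign each chunk to a work-group, but the structure is the same (chunk size and count are arbitrary here):

x       <- runif(1e6)
chunks  <- split(x, ceiling(seq_along(x) / 1e4))   # 100 chunks
partial <- vapply(chunks, sum, numeric(1))         # stage 1: per-chunk sums (the parallelizable part)
total   <- sum(partial)                            # stage 2: combine the partial sums
stopifnot(all.equal(total, sum(x)))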
I realize that your code would build the random-number vector and its sum serially, and parallelize that operation 10 times, but with GPU processing, having a mere 10 tasks isn't very efficient since you'd leave so many cores idle.

Running two commands parallel to each other in R on Windows

I have tried reading around on the net about using parallel computing in R.
My problem is that I want to utilize all the cores on my PC, and after reading different resources I am not sure whether I need packages like multicore for my purposes, which unfortunately does not work on Windows.
Can I simply split my very large data sets into several sub-datasets and run the same function on each of them on different cores? They don't need to talk to each other, and I just need the output from each. Is that really impossible to do on Windows?
Suppose I have a function called timeanalysis() and two datasets, 1 and 2. Can't I call the same function twice and tell it to use a different core each time?
timeanalysis(1)
timeanalysis(2)
I have found the snowfall package to be the easiest to use on Windows for parallel tasks.
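A sketch with snowfall, assuming timeanalysis() and any data it needs already exist in your workspace:

library(snowfall)

sfInit(parallel = TRUE, cpus = 2)       # one socket worker per dataset
sfExport("timeanalysis")                # ship the function (and any data it needs) to the workers
results <- sfLapply(1:2, timeanalysis)  # each call runs on its own core
sfStop()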
