Intel MKL and Oracle R Distribution

I am trying to test the multi-threading advantages of using Oracle R Distribution. I have a workstation with a 12 core CPU and 32 GB of RAM available that I'd really like to exploit.
I've downloaded the latest Oracle R Distribution and the 30-day trial of Intel MKL 11.1. I've set my PATH per the Oracle documentation, and when I run Sys.BlasLapack() in RStudio, it reports Intel Math Kernel Library (Intel MKL).
However, my jobs aren't running any faster. Do I need to run one of the .bat files to actually compile and set parameters for the MKL? I don't have Visual Studio, and I can't find anything on the web explaining how to do this. Any pointers? I am using Windows 7 Professional.

Short answer: run the benchmark from here under both the standard BLAS and the Intel MKL to confirm the MKL is actually being used. The MKL will only improve performance for some operations.
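As an even quicker sanity check than a full benchmark script, a large dense matrix multiply spends nearly all its time in BLAS calls, so its wall-clock time should drop sharply if the MKL is active (a rough sketch; the matrix size and expected speedup depend on your machine):

```r
# Quick BLAS sanity check: dense matrix multiply is almost entirely BLAS time.
# With a multi-threaded BLAS (MKL/OpenBLAS) this should run several times
# faster on a 12-core machine than with R's single-threaded reference BLAS.
set.seed(1)
n <- 2000L
A <- matrix(rnorm(n * n), n, n)
B <- matrix(rnorm(n * n), n, n)
system.time(C <- A %*% B)  # compare the "elapsed" time across BLAS builds
```

Run the same snippet under your open-source R install and under Oracle R Distribution; if the elapsed times are essentially identical, the MKL is probably not being picked up.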
To actually get the full power of the Oracle R implementation you would have to use the embedded R functions, the ones prefixed with ore.
In Oracle R Enterprise, embedded R execution is the ability to store R scripts in Oracle Database and to invoke such scripts, which then execute in one or more R engines that run in the database and that are dynamically started and managed by the database.
We have tried out ORE in the office with Oracle running on an Exadata box; we began to see a performance lift only when the datasets were extremely large.
If your goal is to take advantage of a more powerful BLAS, you don't actually need Oracle R to do that. On a Unix distribution you can build open-source R with the --with-blas option (see this link). I believe the same approach works on Windows, although I've never compiled R from source there.
Not all R functions run faster with a different BLAS; in particular, most modeling functions like glm don't use the BLAS. To check the performance of your system under different BLAS implementations I have used scripts from this site; they run much faster when the Intel MKL is in use. You could try one on your Oracle R Distribution and compare with your open-source install to confirm that ORE is using the Intel BLAS.
Overall, I did not get much day-to-day performance improvement out of installing the Intel BLAS when I tried it. Revolution Analytics makes a big deal of how their non-free distribution of R leverages the Intel MKL, but they had to rewrite many R functions to take advantage of the increased speed.
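To see which BLAS/LAPACK libraries a given R session is actually linked against (rather than inferring it from timings), recent R versions expose this directly; a small sketch, noting that on some Windows builds the reported paths may be empty:

```r
# Show which BLAS/LAPACK shared libraries this R session is linked against.
# The BLAS/LAPACK fields are available in sessionInfo() since R 3.4.0.
si <- sessionInfo()
si$BLAS       # e.g. a path containing "mkl" if the Intel MKL is in use
si$LAPACK
La_version()  # LAPACK version string
```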

Related

Trouble setting up R, OpenMP and glmmTMB

I am looking at a couple of complex models that seem to need a lot of computational power. I am currently using the R package "glmmTMB" to account for spatio-temporal autocorrelation and random effects. In theory, glmmTMB should be able to run much faster using parallelization: https://cran.r-project.org/web/packages/glmmTMB/vignettes/parallel.html
If your OS supports OpenMP parallelization and R was installed using OpenMP, glmmTMB will automatically pick up the OpenMP flags from R's Makevars and compile the C++ model with OpenMP support. If the flag is not available, then the model will be compiled with serial optimization only.
Instead of running these models on my personal machine, I decided to set up a virtual machine in an HPC environment. How can I install R with OpenMP support on Ubuntu 20.04? I couldn't find anything on this topic.
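Since glmmTMB reads the OpenMP flags from R's own build configuration, you can check from inside R whether your install already has them, before deciding to compile anything from source (a sketch; the standard Ubuntu r-base binaries normally do define these flags):

```r
# Check whether this R build defines OpenMP compiler flags. Packages such
# as glmmTMB pick these up from R's Makeconf when compiling C++ code.
mk <- readLines(file.path(R.home("etc"), "Makeconf"))
grep("OPENMP", mk, value = TRUE)
# A line like "SHLIB_OPENMP_CXXFLAGS = -fopenmp" means OpenMP is available;
# an empty value after "=" means the build has no OpenMP support.
```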

R does not engage my computer's GPU when running complex code

I am running RStudio (64-bit) on a Windows 10 laptop with an Nvidia GPU in it; however, when I run code, specifically R Shiny apps, it takes a long time. The laptop has a GPU, but Task Manager shows the GPU is not being utilized. Would the GPU make my program run faster? I do not know much about hardware, so forgive my ignorance.
In answer to your question: the GPU would have no impact on the speed of your code as it stands.
By default, most R code is single-threaded, meaning it will only use one CPU core. There are various ways to do parallel processing (using more than one core) in R, and there are also packages that can make use of GPUs. However, it sounds like you are using neither.
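For completeness, here is a minimal sketch of multi-core parallelism using the base-R parallel package (makeCluster/parLapply work on Windows too, unlike mclapply):

```r
# Base-R parallelism sketch: spread independent tasks over worker processes.
library(parallel)
cl <- makeCluster(4)                        # start 4 worker processes
res <- parLapply(cl, 1:8, function(i) i^2)  # apply the function in parallel
stopCluster(cl)                             # always shut the workers down
unlist(res)
```

Whether this helps depends on your code: only independent, CPU-bound tasks benefit, and a slow Shiny app is often waiting on something else entirely.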
There are various ways you could restructure your application to make it more efficient, but how to go about that is specific to your code. I would suggest asking a separate question about it.
Also, Hadley's excellent book, Advanced R, covers techniques for profiling and benchmarking your code to improve performance: http://adv-r.had.co.nz/Profiling.html
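Before optimizing anything, it is worth finding out where the time actually goes; a minimal sketch using base R's sampling profiler (the example function is made up for illustration):

```r
# Minimal profiling sketch: rank functions by the time spent in them.
f <- function(n) {            # a deliberately slow toy function
  x <- 0
  for (i in 1:n) x <- x + log(i)
  x
}
out <- tempfile()
Rprof(out)                    # start the sampling profiler
invisible(f(1e6))
Rprof(NULL)                   # stop profiling
summaryRprof(out)$by.self     # functions ranked by self time
```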

Microsoft Open R (Revolution R) using Two CPUs each with Multiple Cores

Good morning.
I know it's relatively easy to spread the computation of a Monte Carlo simulation across multiple cores on one CPU using Microsoft Open R (running Windows 10), but can you run the processing across two CPUs, each with, say, 12 cores, on one machine?
Thank you
So if you are using the Microsoft RevoUtilsMath package, you will get multi-threading "for free" on a multi-processor, multi-core machine. See more here. There are also CRAN packages available to support multicore; a couple of examples: here and here.
If you use Microsoft R Server, the RevoScaleR functions are parallel.
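If RevoUtilsMath is available, the MKL thread count can be inspected and adjusted from R; a sketch assuming Microsoft R Open with that package installed (the function names come from RevoUtilsMath and are not part of base R):

```r
# Sketch assuming Microsoft R Open with the RevoUtilsMath package installed.
library(RevoUtilsMath)
getMKLthreads()    # how many threads the MKL is currently using
setMKLthreads(24)  # e.g. use all 24 cores across both CPU sockets
```

The MKL and the OS scheduler do not distinguish between cores on one socket and cores on two; threads are spread across whatever cores the machine exposes.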

Using Openblas with R in Reproducible R container

I am using R for reproducible scientific machine learning and hyperparameter optimization. I stumbled upon the fact that other BLAS implementations such as OpenBLAS/ATLAS/MKL can speed up this costly optimization. But the results are slightly different under each BLAS; even if the optimization is forced onto a single thread, the results deviate from default R.
So I want to try using Docker to contain the experiment. I have multiple questions.
Is it better to compile R from source instead of using binaries?
If I compile from source, will it lead to the same configuration as the Debian binaries?
Since results differ for each BLAS, and there is a tool called ReproBLAS from Berkeley, is it a good idea to use it with R?
When you compile R using "--with-blas=-lopenblas", is OpenBLAS single-threaded or multi-threaded in that case?
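On the threading question: whether OpenBLAS is multi-threaded depends on how the OpenBLAS library itself was built, not on R's --with-blas flag, and the thread count can be pinned at runtime. A sketch using the CRAN package RhpcBLASctl (exact behaviour is build-dependent, so treat this as an assumption to verify in your container):

```r
# Pin the BLAS to one thread for reproducibility, via the CRAN package
# RhpcBLASctl (works with OpenBLAS and the MKL on most builds).
library(RhpcBLASctl)
blas_get_num_procs()     # threads the BLAS is currently configured to use
blas_set_num_threads(1)  # force single-threaded BLAS
# Alternatively, set OPENBLAS_NUM_THREADS=1 in the Docker image's
# environment before R starts.
```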

Does the integration of Intel® Parallel Studio XE 2013 for Linux* and R lead to significant performance improvements?

I am running some resource-intensive computations in R. I use for-loops, bootstrap simulations and so on. I have already integrated the Intel® Math Kernel Library for Linux* with R, and it seems that this has led to a significant improvement in computation times. I am now thinking about integrating Intel® Parallel Studio XE 2013 for Linux* with R. This means passing the different compilers that ship with it to R:
(1) Will the integration of Intel® Parallel Studio XE 2013 for Linux* and R lead to significant performance improvements?
(2) Could you give some examples in which situations I would have a benefit?
Thank you!
Very rough order of magnitude:
a parallel/multi-core BLAS such as the MKL will scale sublinearly in the number of cores, but only for the parts of your computation that are actually BLAS calls, i.e. not for your basic "for-loops, bootstrap simulations and so on"
byte-compiling your R code may give you up to a factor of two, maybe three
after that you may need heavier weapons, such as Rcpp, which can give 50-, 70-, 90-fold speedups on code involving "for-loops, bootstrap simulations and so on", which is why it is so popular with, e.g., the MCMC crowd
similarly, the Intel TBB and other parallel tricks will require rewrites of your code.
There is no free lunch.
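To make the Rcpp route concrete, here is a minimal sketch (it needs a C++ toolchain; the function and its body are made-up illustrations, not code from the question):

```r
# Rcpp sketch for loop-heavy code: the C++ loop below does the same work
# as an equivalent R for-loop, typically orders of magnitude faster.
library(Rcpp)
cppFunction('
double sumSqC(int n) {
  double s = 0;
  for (int i = 1; i <= n; ++i) s += (double)i * i;
  return s;
}')
sumSqC(10)  # 385, same as sum((1:10)^2)
```

The speedup comes from removing the per-iteration interpreter overhead, which is exactly the cost that dominates "for-loops, bootstrap simulations and so on".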
