R performance on Ryzen+Ubuntu: openBLAS/MKL, Rcpp and other improvements?

I'm trying to get the maximum out of a Ryzen 9 3950X 16-core machine on Ubuntu 20.04, running Microsoft R 3.5.2 with Intel MKL (using the Sys.setenv(MKL_DEBUG_CPU_TYPE=5) workaround) and Rcpp-compiled code.
The following are the main operations that I'd like to optimize for:
Fast multivariate random normal (for which I use the Armadillo version):
#include <RcppArmadillo.h>
// [[Rcpp::depends(RcppArmadillo)]]
using namespace Rcpp;
// [[Rcpp::export]]
arma::mat mvrnormArma(int n, arma::vec mu, arma::mat sigma) {
  int ncols = sigma.n_cols;
  // n draws of standard normals, one row per draw
  arma::mat Y = arma::randn(n, ncols);
  // shift by mu and scale by the Cholesky factor of sigma
  return arma::repmat(mu, 1, n).t() + Y * arma::chol(sigma);
}
Fast SVD (I found that base::svd performs better than any Rcpp implementation I've tried so far, including arma::svd("dc"), probably due to the different U, S, V dimensions returned).
Fast matrix multiplications for various results (I found code written in C, rewrote all of it in base R, and am seeing vast improvements thanks to multicore execution versus the previous single-core performance; can base R matrix operations be improved further? A rough timing sketch follows below).
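A simple way to compare BLAS builds on exactly these operations is to time them directly from R; a minimal sketch, where the matrix sizes are arbitrary and purely illustrative:
## Timing sketch for the BLAS-bound operations above; sizes are arbitrary.
set.seed(1)
n <- 4000
a <- matrix(rnorm(n * n), n)
s <- crossprod(matrix(rnorm(200 * 200), 200)) / 200   # a positive-definite sigma
system.time(a %*% a)           # dense multiply
system.time(crossprod(a))      # t(a) %*% a, usually faster than the explicit form
system.time(svd(a[, 1:500]))   # base::svd on a tall thin block
system.time(chol(s))           # the Cholesky factor used inside mvrnormArma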
I've tried various setups with R 4.0.2 and OpenBLAS (through the ropenblas package), played with various Intel MKL releases, and read up on AMD's BLIS and libflame (which I don't even know how to test with R).
Overall, this setup outperforms a laptop with an i7-8750H and Microsoft R 3.5.1 (with working MKL) by around 2x; given 16 cores versus 6 (and faster RAM), I was expecting at least a 3-3.5x improvement (based, e.g., on Cinebench and similar performance benchmarks).
How can this setup be further improved?
My main issues/questions:
First, I've noticed that the current setup, when run with one worker, uses around 1000-1200% CPU according to top. Through experimentation, I've found that spawning two parallel workers uses most of the CPU, around 85-95%, and delivers the best performance. Three workers, for example, use the full 100% but bottleneck somewhere, drastically reducing performance for some reason.
I'm guessing this is a limitation coming either from R/MKL or from how the Rcpp code is compiled, since 10-12 cores seems oddly specific. Can this be improved with some hints when compiling the Rcpp code?
Secondly, I'm sure I'm not using the optimal BLAS/LAPACK/etc. libraries for the job. My guess is that a properly compiled R 4.0.2 should be significantly faster than Microsoft R 3.5.2, but I have absolutely no idea what I am missing, whether AVX/AVX2 is actually being used, and what else I should try on this machine.
Lastly, I have seen zero guides for calling/working with AMD BLIS/libflame from R. If this is trivial, I would appreciate any hints on what to look into.

Until other (hopefully much better) answers pop up, I'll post my latest findings here, arrived at mostly by guesswork. Hopefully someone with a similar machine will find this useful. I'll try to expand the answer if any additional improvements come up.
Guides for a clean R build. They seem somewhat outdated, but hopefully not much is missing:
Speed up RcppArmadillo: How to link to OpenBlas in an R package
OpenBLAS and IntelMKL examples + Rstudio
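Whichever build ends up installed, a quick way to confirm which BLAS/LAPACK the running R binary is actually linked against (the paths and timings shown are whatever your build produces; the matrix size is arbitrary):
sessionInfo()          # prints the "BLAS:" and "LAPACK:" shared-library paths (R >= 3.4)
La_version()           # LAPACK version string reported by R
m <- matrix(rnorm(3000^2), 3000)
system.time(m %*% m)   # quick sanity timing for comparing builds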
OpenBLAS works terribly on my Ryzen + Ubuntu configuration; the 0.3.10 build, compiled with Zen 2 hints, uses all the CPU cores, but badly: top reports 3200% usage for the R instance, yet total CPU utilisation doesn't rise above 20-30%. The result is at least 3x slower than with Intel MKL.
Intel MKL. Versions up to 2019 work with the MKL_DEBUG_CPU_TYPE workaround; I can confirm that intel-mkl-64bit-2019.5-075 works. For later versions, starting with 2020.0-088, a different workaround is needed. In my benchmarks the performance did not improve, but this may change with future MKL releases.
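For completeness, a quick way to check from inside R that the workaround is in effect; note that MKL may read the variable only once, so setting it in ~/.Renviron or the shell before R starts is the safer route:
Sys.getenv("MKL_DEBUG_CPU_TYPE")        # expect "5" with pre-2020 MKL releases
Sys.setenv(MKL_DEBUG_CPU_TYPE = "5")    # only reliable if done before the first BLAS call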
The 10-12 core hardcoded cap per instance appears to be controlled by several environment variables. I found the following list in an old guide; these may change with later versions, but they seem to work with 2019.5-075:
export MKL_NUM_THREADS=2
export OMP_NESTED="TRUE"
export MKL_DOMAIN_NUM_THREADS="MKL_BLAS=1"
export OMP_NUM_THREADS=1
export MKL_DYNAMIC="TRUE"
export OMP_DYNAMIC="FALSE"
Playing around with various configurations, I found that, for the specific benchmark I tested, reducing the number of threads per worker and spawning more workers increased performance drastically (around 3-4 fold). Even though reported CPU usage was similar across configurations, 2 workers using 16 threads each (totalling ~70% CPU utilisation) were much slower than 16 workers using 2 threads each (at similar CPU utilisation). Results may vary between tasks, so these seem to be the go-to parameters to tune for any longer task.
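A sketch of how one might compare such configurations on one's own workload, using a fresh PSOCK cluster per configuration so that the MKL_NUM_THREADS value set in the master is inherited by the worker processes (the task below is just an invented BLAS-heavy stand-in):
library(parallel)

run_config <- function(workers, blas_threads, tasks = 32) {
  Sys.setenv(MKL_NUM_THREADS = blas_threads)   # inherited by the child R processes
  cl <- makeCluster(workers)
  on.exit(stopCluster(cl), add = TRUE)
  system.time(
    parLapply(cl, seq_len(tasks), function(i) {
      a <- matrix(rnorm(2000 * 2000), 2000)    # invented BLAS-heavy stand-in task
      sum(crossprod(a))
    })
  )[["elapsed"]]
}

run_config(workers = 2,  blas_threads = "16")  # few workers, many BLAS threads
run_config(workers = 16, blas_threads = "2")   # many workers, few BLAS threads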
AMD BLIS. I'm testing this as an alternative to MKL BLAS, currently only experimenting, but performance seems to be on par with Intel MKL with all the fixes applied. I checked through perf that BLIS was actually being used; for my benchmarks the calls observed were bli_dgemmsup_rd_haswell_asm_6x8m, bli_daxpyv_zen_int10, and others. I'm not yet sure whether my BLIS build settings were optimal. The takeaway could be that both MKL and BLIS are pushing the maximum out of the CPU for my specific benchmarks, or at least that both libraries are similarly well optimized.
An important downside of sticking with AMD BLIS, noticed only after months of use: there seem to be unresolved issues with BLIS, or with the LAPACK packed into the AMD libraries, that I can't pin down. I've seen random, non-reproducible matrix multiplication issues (essentially, running into this problem) that disappear when switching back to the MKL libraries. I can't say whether the problem lies in how I build R or in the libraries themselves, so treat this as a warning.
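One crude check for that kind of problem (purely illustrative, not from the original post): run the same product twice and compare; with identical inputs and a healthy BLAS the two results should normally match bit for bit.
a <- matrix(rnorm(2000^2), 2000)
b <- matrix(rnorm(2000^2), 2000)
r1 <- a %*% b
r2 <- a %*% b
identical(r1, r2)      # normally TRUE; FALSE hints at the issue described above
max(abs(r1 - r2))      # normally exactly 0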

Related

OpenCL BLAS in julia

I am really amazed by the Julia language, having implemented lots of machine learning algorithms in it for my current project. Even though Julia 0.2 manages to get some great results out of my 2011 MBA, outperforming all other solutions on similar Linux hardware (due to the vecLib BLAS, I suppose), I would certainly like more. I am in the process of buying a Radeon 5870 and would like to push my matrix operations there. I use basically only simple BLAS operations such as matmul, additions and transpositions. I use Julia's compact syntax A' * B + C and would certainly like to keep it.
Is there any way (or pending milestone) to get those basic operations to execute on the GPU? I have roughly 2500x2500 single precision matrices, so I expect a significant speedup.
I don't believe that GPU integration into the core of Julia is planned at this time. One of the key issues is that there is substantial overhead moving the data to and from the GPU, making a drop-in replacement for BLAS operations infeasible.
I expect that most of the progress in this area will actually come from the package ecosystem, in particular packages under the JuliaGPU organization. I see there is a CLBLAS package in there.

Advice about inversion of large sparse matrices

I just got a Windows box set up with two 64-bit Intel Xeon X5680 3.33 GHz processors (6 cores each) and 12 GB of RAM. I've been using SAS on some large data sets, but it's just too slow, so I want to set up R to do parallel processing. I want to be able to carry out matrix operations, e.g., multiplication and inversion. Most of my data are not huge, in the 3-4 GB range, but one file is around 50 GB. It's been a while since I used R, so I looked around on the web, including the CRAN HPC task view, to see what was available. I think a foreach loop and the bigmemory package will be applicable. I came across this post, Is there a package for parallel matrix inversion in R, which had some interesting suggestions. I was wondering if anyone has experience with the HiPLAR packages. It looks like HiPLARM adds functionality to the Matrix package and HiPLARb adds new functions altogether. Which of these would be recommended for my application? Furthermore, there is a reference to the PLASMA library. Is this of any help? My matrices have a lot of zeros, so I think they could be considered sparse. I didn't see any examples of how to pass data from R to PLASMA, and the PLASMA docs say it does not support sparse matrices, so I'm thinking I don't need that library. Am I on the right track here? Any suggestions on other approaches?
EDIT: It looks like HiPLAR and the pbdR packages will not be helpful. I'm leaning more toward bigmemory, although it looks like I/O may be a problem: http://files.meetup.com/1781511/bigmemoryRandLinearAlgebra_BryanLewis.pdf. That article mentions a package called vam for virtual associative matrices, but it must be proprietary. Would the ff package be of any help here? My R skills are just not current enough to know which direction to pursue. I'm pretty sure I can read the data using bigmemory, but I'm not sure the processing will be very fast.
If you want to use HiPLAR (MAGMA and PLASMA libraries in R), it is only available for Linux at the moment. For this and many other things, I suggest switching your OS to the penguin.
That being said, Intel MKL optimization can do wonders for these sorts of operations. For most practical uses, it is the way to go. Python built with MKL optimization, for example, can process large matrices about 20x faster than IDL, which was designed specifically for image processing. R has similarly shown vast improvements when built with MKL. You can also install R Open from Revolution Analytics, which includes MKL optimization, but I am not sure it has quite the same effect as building R yourself using the Intel tools: https://software.intel.com/en-us/articles/build-r-301-with-intel-c-compiler-and-intel-mkl-on-linux
I would definitely consider the type of operations you are looking to perform. GPU processes lend themselves well to high parallelism (many of the same small computations running at once, as with matrix algebra), but they are limited by bus speeds. Intel MKL is similar in that it helps use all of your CPU cores, but it is really optimized for Intel CPU architecture, so it should provide basic memory optimization too. I think that is the simplest route. HiPLAR is certainly the future, as it is CPU-GPU by design, especially with highly parallel heterogeneous architectures making their way into consumer systems, though I think most consumer systems today cannot fully utilize this.
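Since the matrices are described as mostly zeros, here is a minimal sparse sketch with the Matrix package (which ships with R); the size and density below are invented. It solves a system rather than forming a dense inverse:
library(Matrix)
set.seed(1)
A <- rsparsematrix(1e4, 1e4, density = 0.001) + 10 * Diagonal(1e4)  # keep it well conditioned
b <- rnorm(1e4)
x <- solve(A, b)       # sparse factorization; no dense inverse is ever formed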
Cheers,
Adam

Parallel processing in R limited

I'm running Ubuntu 12.04 and R 2.15.1 with the parallel and doParallel packages. When I run anything in parallel I'm limited to 100% of a core, when I should have up to 800%, since I am running it on 8 cores. The system monitor shows that each child process is getting only about 12%.
What is going on that is limiting my execution speed?
The problem may be that the R process is restricted to one core (and the subprocesses inherit that).
Try this:
> system(sprintf("taskset -p 0xffffffff %d", Sys.getpid()))
pid 3064's current affinity mask: fff
pid 3064's new affinity mask: fff
Now, if on your machine the current affinity mask reports 1, then this was the problem, and the line above should solve it (i.e., the second line should report fff or similar).
Simon Urbanek wrote a function mcaffinity that allows this control for multicore. As far as I know, it's still in R-devel.
For details, see e.g. this discussion on R-sig-hpc.
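In current R versions, mcaffinity() is available in the parallel package on platforms that support affinity control (Linux); a quick illustrative check:
library(parallel)
mcaffinity()        # current affinity mask as a vector of CPU ids (NULL if unsupported)
mcaffinity(1:8)     # allow this process to run on CPUs 1-8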
Update, and addition to Xin Guo's answer:
If you use implicit parallelization via OpenBLAS and explicit parallelization (via parallel/snow/multicore) together, you may want to change the number of threads that OpenBLAS uses depending on whether you are inside an explicitly parallel part or not.
This is possible (with OpenBLAS under Linux; I'm not aware of any other of the usual optimized BLAS libraries that provides a function to set the number of threads at runtime); see Simon Fuller's blog post for details.
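A sketch of that pattern using the RhpcBLASctl package (not mentioned in this answer; assumed to be installed), dropping the BLAS thread count inside the explicitly parallel part and restoring it afterwards:
library(parallel)
library(RhpcBLASctl)

blas_set_num_threads(1)                      # single-threaded BLAS inside the forks
res <- mclapply(1:8, function(i) {
  a <- matrix(rnorm(500 * 500), 500)
  sum(a %*% a)                               # toy BLAS work in each worker
}, mc.cores = 8)
blas_set_num_threads(blas_get_num_procs())   # restore full threading for serial code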
I experienced the same problem because of the libblas.so(.3gf) packages, and I do not know if this also causes your problem. When R starts, it calls the BLAS installed on your system for linear algebra computations. I have libopenblas.so(.3gf), which is highly optimized and built with the "CPU affinity" option: when you do numerical vector or matrix computations, OpenBLAS spawns 8 threads and pins each of them to one specific, fixed CPU to speed up the code. However, with this setting your system is told that all CPUs are very busy, so when further parallel tasks come in, the system tries to squeeze them onto a single CPU so as not to interfere with the busy ones.
So this was my solution, which worked: I downloaded the OpenBLAS source and compiled it with the file Makefile.rule changed: there is a line "#NO_AFFINITY = 1", and I just deleted the "#" so that after compilation no affinity option is selected. Then I installed the package and the problem was solved.
For a reference on this, see https://github.com/ipython/ipython/issues/840
Please note that this is a trade-off. Removing CPU affinity loses some efficiency in numerical computations; that is why, although the OpenBLAS maintainer (Dr. Xianyi Zhang) knows about the problem, he still publishes the code with CPU affinity as the default option.
My guess is that you probably had the wrong code. I would like to post one example copied from http://www.r-bloggers.com/parallel-r-loops-for-windows-and-linux/ :
library(doMC)                       # also attaches foreach and iterators
registerDoMC()                      # register the detected cores as workers

x      <- iris[which(iris[, 5] != "setosa"), c(1, 5)]
trials <- 10000

r <- foreach(icount(trials), .combine = cbind) %dopar% {
  ind     <- sample(100, 100, replace = TRUE)
  result1 <- glm(x[ind, 2] ~ x[ind, 1], family = binomial(logit))
  coefficients(result1)
}
and you can define how many cores you want to use in parallel:
options(cores=4)

MPI and OpenMP. Do I even have a choice?

I have a linear algebra code that I am trying to get to run faster. It's an iterative algorithm with a loop and matrix-vector multiplications inside it.
So far, I have used MATMUL (the Fortran intrinsic), DGEMV, and tried writing my own MV code in OpenMP, but the algorithm is doing no better in terms of scalability. Speed-ups are barely 3.5-4x irrespective of how many processors I allot to it (I have tried up to 64).
Profiling shows significant time being spent in the matrix-vector product; the rest is fairly nominal.
My question is:
I have a shared-memory system with tons of RAM and processors. I have tried tweaking the OpenMP implementation of the code (including the matrix-vector part), but it has not helped. Will it help to code in MPI? I am not a pro at MPI, but the ability to fine-tune the message communication might help a bit, though I can't be sure. Any comments?
More generally, from the literature I have read, MPI = distributed, OpenMP = shared, but can they perform well in each other's territory? Like MPI in shared memory? Will it work? Will it be better than the OpenMP implementation if done well?
You're best off just using a linear algebra package that is already well optimized for a multicore environment and using that for your matrix-vector multiplication. The ATLAS package, GotoBLAS (if you have Nehalem or older; sadly it's no longer being updated), or vendor BLAS implementations (like MKL for Intel CPUs, ACML for AMD, or vecLib for Apple, which all cost money) all have good, well-tuned, multithreaded BLAS implementations. Unless you have excellent reason to believe that you can do better than those full-time development teams, you're best off using them.
Note that you'll never get the parallel speedup with DGEMV that you do with DGEMM, simply because the vector is much smaller than a second matrix and so there's less work to do; but you can still do quite well, and you'll find you get much better performance with these libraries than with anything hand-rolled unless you were already doing multi-level cache blocking.
You can use MPI in a shared environment (though not OpenMP in a distributed one). However, achieving a good speedup depends a lot more on your algorithms and data dependencies than the technology used. Since you have a lot of shared memory, I'd recommend you stick with OpenMP, and carefully examine whether you're making the best use of your resources.

R package that automatically uses several cores?

I have noticed that R only uses one core while executing one of my programs which requires lots of calculations. I would like to take advantage of my multi-core processor to make my program run faster.
I have not yet investigated the question in depth, but I would appreciate your comments, because I do not have a strong background in computer science and it is difficult for me to find easily understandable information on the subject.
Is there a package that allows R to automatically use several cores when needed?
I guess it is not that simple.
R can only make use of multiple cores with the help of add-on packages, and only for some types of operation. The options are discussed in detail in the High Performance Computing task view on CRAN.
Update: From R version 2.14.0, add-on packages are not necessarily required, thanks to the inclusion of the parallel package as a recommended package shipped with R. parallel includes functionality from the multicore and snow packages, largely unchanged.
The easiest way to take advantage of multiple processors is the multicore package, which includes the function mclapply(). mclapply() is a multicore version of lapply(), so any process that can use lapply() can easily be converted to an mclapply() process. However, multicore does not work on Windows. I wrote a blog post about this last year which might be helpful. The doSMP package that Revolution Analytics created is NOT a multi-threaded version of R; it is effectively a Windows version of multicore.
If your work is embarrassingly parallel, it's a good idea to get comfortable with the lapply() style of structuring. That will give you an easy segue into mclapply() and even distributed computing using the same abstraction; a minimal example follows.
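A minimal illustration of that conversion (the toy task and the core count are made up):
library(parallel)
slow_task <- function(i) mean(rnorm(1e6))          # stand-in for a real computation
res <- lapply(1:100, slow_task)                    # single core
res <- mclapply(1:100, slow_task, mc.cores = 4)    # same call shape, multiple cores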
Things get much more difficult for operations that are not "embarrassingly parallel".
[EDIT]
As a side note, RStudio is getting increasingly popular as a front end for R. I love RStudio and use it daily. However, it needs to be noted that RStudio does not play nicely with multicore (at least as of Oct 2011; I understand that the RStudio team is going to fix this). This is because RStudio does some forking behind the scenes and these forks conflict with multicore's attempts to fork. So if you need multicore, you can write your code in RStudio, but run it in a plain-Jane R session.
On this question you always get very short answers. The easiest solution, in my opinion, is the snowfall package, based on snow; that is, for a single Windows computer with multiple cores. See also the article by Knaus et al. for a simple example. Snowfall is a wrapper around the snow package and lets you set up a multicore cluster with a few commands. It's definitely less hassle than most of the other packages (I haven't tried all of them).
On a side note, there are indeed only a few tasks that can be parallelized, for the very simple reason that you have to be able to split up the work before multicore computation makes sense. The apply family is obviously a logical choice for this: multiple independent computations, which is crucial for multicore use. Anything else is not always as easily parallelized.
Read also this discussion on sfApply and custom functions.
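A minimal snowfall sketch (the cpus value and the toy task are illustrative):
library(snowfall)
sfInit(parallel = TRUE, cpus = 4)                     # start a local cluster
res <- sfLapply(1:100, function(i) mean(rnorm(1e6)))  # parallel lapply equivalent
sfStop()                                              # shut the cluster down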
Microsoft R Open includes multi-threaded math libraries to improve the performance of R. It works on Windows, Unix and Mac. It's open source and can be installed in a separate directory if you have an existing R (CRAN) installation, and you can use the popular RStudio IDE with it as well. From its inception, R was designed to use only a single thread (processor) at a time, and even today R works that way unless linked with multi-threaded BLAS/LAPACK libraries.
The multi-core machines of today offer parallel processing power. To take advantage of this, Microsoft R Open includes multi-threaded math libraries.
These libraries make it possible for many common R operations, such as matrix multiplication/inversion, matrix decompositions, and some higher-level matrix operations, to compute in parallel and use all of the available processing power to reduce computation times.
Please check the links below:
https://mran.revolutionanalytics.com/rro/#about-rro
http://www.r-bloggers.com/using-microsoft-r-open-with-rstudio/
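On Microsoft R Open the MKL thread count can be inspected and adjusted from R; the snippet below assumes the RevoUtilsMath package that MRO bundles (it is not available in plain CRAN R):
if (requireNamespace("RevoUtilsMath", quietly = TRUE)) {
  RevoUtilsMath::getMKLthreads()      # how many MKL threads are currently in use
  RevoUtilsMath::setMKLthreads(4)     # cap MKL threads, e.g. alongside explicit parallelism
}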
As David Heffernan said, take a look at the blog of Revolution Analytics. But you should know that most packages are for Linux, so if you use Windows it will be much harder.
Anyway, take a look at these sites:
Revolution: here you will find a lecture about parallelization in R. The lecture is actually very good but, as I said, most tips are for Linux.
And this thread here on Stack Overflow discusses some implementations on Windows.
The future package makes it extremely simple to work in R using parallel and distributed processing. More info here. If you want to apply a function to elements in parallel, the future.apply package provides a quick way to use the "apply" family of functions (e.g. apply(), lapply(), and vapply()) in parallel.
Example:
library("future.apply")
library("stats")
x <- 1:10
# Single core
y <- lapply(x, FUN = quantile, probs = 1:3/4)
# Multicore in parallel
plan(multiprocess)
y <- future_lapply(x, FUN = quantile, probs = 1:3/4)
