I found that using one of BLAS/ATLAS/MKL/OpenBLAS will give a speed improvement in R. However, will it also improve R packages that are written in C or C++?
For example, the R package glmnet is implemented in FORTRAN and the R package rpart is implemented in C++. Will simply installing BLAS/etc. improve the execution time, or do we have to rebuild the package (building new C code) against BLAS/etc.?
It is frequently stated, including in a comment here, that "you have to recompile R" to use a different BLAS or LAPACK library. That is wrong.
You do not have to recompile R, provided it is built against the shared-library versions of BLAS and LAPACK.
I have a package and vignette on CRAN which use this fact to provide a benchmarking framework in which different BLAS and LAPACK versions are timed against each other simply by installing different ones (one command on Debian/Ubuntu) and running the benchmarks -- this is so straightforward that it can be automated in a package such as this.
The results in that package will give you an idea of the possible speed differences. Exactly how they pan out depends on your computer, your data (size), your problem, etc. But if, say, your problem uses LAPACK functions which benefit from running multithreaded, then installing OpenBLAS may help. That is true for any R package using LAPACK, as they will all use the same LAPACK installation accessed through R, and that installation can be changed.
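As a rough illustration of the kind of benchmark involved (not the package's actual code), you can time a few dense linear-algebra operations and re-run the same script after switching BLAS/LAPACK implementations:
set.seed(42)
n <- 2000L
X <- matrix(rnorm(n * n), n, n)
system.time(crossprod(X))         # calls into BLAS
system.time(solve(crossprod(X)))  # adds a LAPACK factorisation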
Related
I have been trying to create an R library that dynamically loads a dependency built with Intel OpenMP. When the library is loaded, there is a clash with the OpenMP library.
Using KMP_DUPLICATE_LIB_OK=TRUE gets me past the loading error, but the program crashes once it reaches a parallel section.
Unfortunately, neither compiling R with Intel OpenMP nor compiling the dependency with GNU OpenMP is an option (because I want it to work with the standard R distribution, and some external dependencies linked statically have to use Intel OpenMP).
However, I can recompile the dependency with some compatibility flags or modify how linking is done (but in the end it has to be loaded dynamically from the R library). Setting environment variables is also an option (I am thinking about https://software.intel.com/en-us/node/522775, but none of the options seems to help so far).
The R library is written in C, and I doubt that the fact that it is R which ultimately loads it really matters.
Any idea how to handle this?
I'm developing a new R package to release to CRAN and would like to invoke the system() command directly within its source code. For example, I would like to use the gzip utility directly within my R package:
write.csv(mydat, "mydat.csv")
system("gzip mydat.csv", wait=FALSE)
Even more importantly, I would like to leverage other existing command-line utilities directly within my R package. And by command-line utilities, I mean actual large command-line software programs that are not trivial to rewrite in R.
So my question is: What are some best practices for specifying the usage of external (not R) command-line libraries during the development of an R package?
For example, the Imports and Depends fields in an R package's DESCRIPTION file are only good for specifying the use of existing R packages within your R package. It would be a nuisance for users to have to manually install some non-R command-line library with a package manager (e.g., brew), and this would go against the best practice of self-contained work within the RStudio IDE. Besides, there is no guarantee that such a roundabout approach would work reproducibly, given the difficulty of properly matching full paths to the command-line executable, coordinating with the RStudio IDE, etc.
Likewise, tools such as https://cran.r-project.org/web/packages/ssh.utils/index.html only serve basic command-line needs within the R environment, and hence do not address the need to use large command-line software programs.
Note: the R package that I'm developing is not for personal use. It is intended for public release on CRAN and hence should comply with their checks. However, I could not find any specification from CRAN regarding the use of the system() command, particularly in the context of leveraging actual large command-line software programs that are not trivial to rewrite in R.
I would like to use the gzip utility directly within my R package
That is a code smell. Your package would then need to determine, by means of configure (or similar), whether such programs exist. So why bother? In this example, and on my box:
edd@don:~$ grep GZIP /etc/R/Renviron
R_GZIPCMD=${R_GZIPCMD-'/bin/gzip -n'}
edd@don:~$
You have access to it via most file-saving commands such as saveRDS(), the gzcon() and gzfile() functions and so on. See this older answer of mine.
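For instance, a compressed file can be written entirely from R without shelling out to gzip (a minimal sketch, reusing the mydat data frame from the question):
write.csv(mydat, gzfile("mydat.csv.gz"))  # gzip-compressed CSV, no external gzip needed
saveRDS(mydat, "mydat.rds")               # serialised R object; gzip-compressed by default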
For truly external programs you can rely on system(). See Christoph's seasonal package, which relies on our underlying x13binary binary package.
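If you do shell out to an external program, it is worth checking at run time that it is actually installed; a minimal sketch (gzip is just a stand-in for whatever utility your package needs):
gzip <- Sys.which("gzip")          # "" if the program is not found on the PATH
if (nzchar(gzip)) {
  system2(gzip, c("-n", "mydat.csv"), wait = FALSE)
} else {
  stop("gzip not available on this system")
}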
Is there any way to switch between OpenBLAS and ATLAS libraries from a running R session? I am using Ubuntu 12.04.
Thank you
The R documentation states that in order to use another BLAS library, this needs to be specified at configure time. This means that R needs to be rebuilt from source if you want to switch libraries, so it is not possible to switch between BLAS libraries in a running R session.
No need to rebuild R, but yes, do it when R is not running. See here.
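The switch itself happens outside R (on Debian/Ubuntu, e.g. via update-alternatives); from a freshly started session you can then check which libraries were actually picked up. A sketch, noting that which of these helpers exist depends on your R version:
sessionInfo()              # recent R versions print the BLAS and LAPACK paths in use
extSoftVersion()["BLAS"]   # BLAS in use, where available
La_version()               # LAPACK version in use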
This is a follow-up to a question I posted earlier. To summarize, I am writing an R package called Slidify, which makes use of several external, non-R-based libraries. My earlier question was about how to manage dependencies.
Several solutions were proposed, of which the most attractive was to package the external libraries as a separate R package and make it a dependency of Slidify. This is the strategy followed by the package xlsx, which packages its Java dependencies in a separate package, xlsxjars.
An alternative is for me to provide the external libraries as a download and to ship an install_libraries function within Slidify, which would automatically fetch the required files and download them into the package directory. I could also add an update_libraries function which would update them if things change.
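A rough sketch of what such a helper could look like (install_libraries and the download URL are hypothetical, not Slidify's actual API):
install_libraries <- function(url = "https://example.org/slidify-libraries.zip") {
  # fetch the external support files and unpack them into the installed package directory
  dest <- file.path(system.file(package = "slidify"), "libraries")
  dir.create(dest, showWarnings = FALSE, recursive = TRUE)
  zipfile <- tempfile(fileext = ".zip")
  download.file(url, zipfile, mode = "wb")
  unzip(zipfile, exdir = dest)
  invisible(dest)
}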
My question is: are there any specific advantages to doing the CRAN dance for external libraries which are not R based? Am I missing something here?
As discussed in the comment-thread, for a package like slidify with a number of large, (mostly) fixed, and portable files, a "resource" package makes more sense:
- you will know the path where it is installed (as the package itself will tell you)
- users can't accidentally put it somewhere else
- you get CRAN tests
- you get CRAN distribution, mirrors, ...
- users already know install.packages() etc.
- the more nimble development of your package using these fixed parts is not held back by the large support files
I have developed a big library of functions in R.
For the moment I just load ("source") the functions at the beginning of all my scripts.
I have seen that I can create packages.
My question is: will that improve the execution time of my functions (by transforming interpreted code into machine language)?
What does package creation do? Does it create binaries?
Thanks
fred
There isn't an R compiler that turns your code into machine code. Packaging your R code won't improve its execution time massively. It also won't create binaries for you - you need to build those from the package tarball (or get CRAN or similar to build them for you). There is now a byte compiler for R, and R packages are byte-compiled by default. Speed improvements are in general modest - don't expect C-like speed.
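As a small illustration, you can byte-compile a function by hand with the compiler package that ships with R; the speed-up is usually modest:
library(compiler)
f  <- function(n) { s <- 0; for (i in seq_len(n)) s <- s + i; s }
fc <- cmpfun(f)           # byte-compiled version of f
system.time(f(1e6))
system.time(fc(1e6))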
Packaging R code does just that: it packages the R code, code to be compiled (C, Fortran, etc.), man pages, documentation, tests, etc. into a standard format that can be distributed to users and installed/built on multiple architectures.
Packages can take advantage of features like lazy loading, such that R objects (your functions, say) are only loaded when needed, whereas source() loads them all into the global environment (by default).
If you don't intend to distribute your code then there are few benefits to packaging it just for your own use; but if you do package it and write documentation and examples/tests, you might be alerted to changes in the package code that break examples or cause tests to fail. That way you are better informed about the reliability of your code, even if it is only you using it!