Speeding up package load in Julia

I wrote a program to solve a linear program in Julia using GLPKMathProgInterface and JuMP. The Julia code is called by a Python program that runs multiple instances of the Julia code through multiple command line calls. While I'm extremely happy with the performance of the actual solver, the initialization is extremely slow. I was wondering if there were approaches to speed this up.
For example, if I just save the following to a file
@time using DataFrames, CSV, GLPKMathProgInterface, JuMP, ArgParse
and run it
mylabtop:~ me$ julia test.jl
12.270137 seconds (6.54 M allocations: 364.537 MiB, 3.05% gc time)
This seems extremely slow. Is there some good way to speed up loading these modules, like a precompile step I could do once?

Since you haven't gotten any answers yet, let me give you the general first-order answers - although I hope someone more qualified will answer your question in more detail (and correct me if I'm wrong).
1) Loading packages in Julia is, as of this writing, sometimes rather slow. It has been discussed many times and you can expect improvements in the future. AFAIK this will happen in early 1.x releases after 1.0 is out. Have a look at this thread.
2) Since you typically only have to pay the loading time cost once per Julia session one approach is to keep the session running for as long as possible. You can execute your script with include("test.jl") from within the session. Let me also mention the amazing Revise.jl - it's hardly possible to overemphasize this package!
3) (I have no experience with this more difficult approach.) There is PackageCompiler.jl which allows you to compile a package into your system image. Read this blog post by Simon.
4) (Not recommended) There has also been the highly experimental static-julia, which statically compiles your script into a shared library and executable.
Hope that helps.

Related

Could someone explain what "compiling" in R is, and why it would speed up this code?

I am working on some R code written by a previous student. The code is extremely computationally intensive, and for that reason he appears to have gone to great lengths to minimise the time it took in any way possible.
One example is the following section:
# Now lets compile these functions, for a modest speed boost.
Sa <- cmpfun(Sa)
Sm <- cmpfun(Sm)
sa <- cmpfun(sa)
sm <- cmpfun(sm)
h <- cmpfun(h)
li <- cmpfun(lli)
ll <- cmpfun(ll)
He appears to have used the compiler package to do this.
I have never heard of compiling in R, and I am interested in what it does and why it would help speed up the code. I am having trouble finding material that would explain it for a novice like me.
The compiler package has been part of R since version 2.13.0. Compiling an R function produces a byte-code version that may run faster. There are a number of ways of compiling; all base R functions are compiled by default.
You can compile individual functions via cmpfun(). Alternatively, you can call enableJIT(3) once and R code is then compiled automatically.
I've found compiling R code gives a modest, essentially cost-free speed boost - see Efficient R Programming for a timed example.
It appears that byte compilation will be turned on by default in R 3.4.X.
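For a concrete feel, here is a minimal sketch (the toy function f is made up for illustration) comparing an interpreted function against its byte-compiled copy; note that on R 3.4+ the JIT compiles functions by default, so the difference there may be negligible:
library(compiler)

# A deliberately loop-heavy toy function (hypothetical example)
f <- function(n) {
  s <- 0
  for (i in seq_len(n)) s <- s + i %% 7
  s
}

fc <- cmpfun(f)       # byte-compiled copy of f

system.time(f(1e6))   # interpreted version
system.time(fc(1e6))  # byte-compiled version, typically modestly faster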

Will putting functions in one file improve speed?

If I write all my functions into one file that I use for multiple scripts, will sourcing the file containing the functions once at the top of my script improve my speed? If I call source(fn.r) for example, will I be able to call the functions I created as they are already saved in the workspace? I am trying to reduce the time it takes for the script to run and improve performance. Any other tips regarding improving speed are welcome aswell
Sourcing the file loads any functions within that script. Sourcing doesn't have much impact on the speed at which those functions run, as they would be in memory regardless, but you should look at the R compiler for an easy way to get a moderate speed boost.
See this blog post about the compiler
"the performance gain for various made-up functions can range between 2x to 5x times faster running time. This is great for the small amount of work ... it requires ... Moreover, by combining C/C++ code with R code (through the {Rcpp} and {Inline} packages) you can improve your code’s running time by a factor of 80 ... relative to interpreted code. But to be fair to R, the code that is used for such examples is often unrealistic code examples that is often not representative of real R work. Thus, effective speed gains can be expected to be smaller."
The easiest way to use the compiler is to place this at the beginning of your script. R will then automatically compile any function you create.
require(compiler)
enableJIT(3)
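Putting the two ideas together, a minimal sketch of that pattern might look like this (my_functions.r and foo() are hypothetical names):
require(compiler)
enableJIT(3)              # auto-compile functions as they are created

source("my_functions.r")  # defines foo(), bar(), ... in the workspace
result <- foo(10)         # call the sourced functions as usual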

lineprof equivalent for Rcpp

The lineprof package in R is very useful for profiling which parts of a function take up time and allocate/free memory.
Is there a lineprof() equivalent for Rcpp?
I currently use std::chrono::steady_clock and the like to get chunk timings out of an Rcpp function. Alternatives? Does the RStudio IDE provide some help here?
To supplement Dirk's answer...
If you are working on OS X, the Time Profiler Instrument, part of Apple's Instruments set of instrumentation tools, is an excellent sampling profiler.
Just to fix ideas:
A sampling profiler lets you answer the question, what code paths does my program spend the most time executing?
A (full) cache profiler lets you answer the question, which are the most frequently executed code paths in my program?
These are different questions -- it's possible that your hottest code paths are already optimized enough that, even though the total number of instructions executed in that path is very high, the amount of time required to execute them might be relatively low.
If you want to use Instruments to profile C++ code / routines used in an R package, the easiest way to go about this is:
Create a target pointed at your R executable, with appropriate command line arguments to run whatever functions you wish to profile.
Set the command line arguments to run the code that will exercise your C++ routines -- for example, Rcpp:::test(), to instrument all of the Rcpp test code.
Click the big red Record button, and off you go!
I'll leave the rest of the instructions on understanding Instruments + the Time Profiler to your google-fu + the documentation, but (if you're on OS X) you should be aware of this tool.
See any decent introduction to high(er) performance computing, e.g. some slides from an (older) presentation on my talks page, which include worked examples for both KCacheGrind (part of the KDE frontend to Valgrind) as well as Google Perftools.
In a more abstract sense, you need to come to terms with the fact that C++ != R and that not all tools have identical counterparts. In particular Rprof, the R profiler which several CRAN profiling packages build on top of, relies on the fact that R is interpreted. C++ is not, so things will be different. But profiling compiled code is about as old as compiling and debugging itself, so you will find numerous tutorials.
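If you only need coarse, whole-call timings from the R side before reaching for a native profiler, a different (and much blunter) approach is to benchmark the exported routine with the microbenchmark package; the sumC() function below is made up for illustration:
library(Rcpp)
library(microbenchmark)

# Compile a tiny C++ routine inline (illustrative only)
cppFunction('
double sumC(NumericVector x) {
  double total = 0;
  for (int i = 0; i < x.size(); ++i) total += x[i];
  return total;
}')

x <- runif(1e6)
# This times whole calls from R; it says nothing about which chunk
# inside the C++ code is hot - that is what the profilers above are for
microbenchmark(sumC(x), sum(x), times = 100)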

Why is R slowing down as time goes on, when the computations are the same?

So I think I don't quite understand how memory works in R. I've been running into problems where the same piece of code gets slower later in the week (using the same R session - sometimes even when I clear the workspace). I've tried to develop a toy problem that I think reproduces the "slowing down" effect I have been observing when working with large objects. Note the code below is somewhat memory intensive (don't blindly run it without adjusting n and N to match what your setup can handle). Note that it will likely take you about 5-10 minutes before you start to see this slowing-down pattern (possibly even longer).
N=4e7 #number of simulation runs
n=2e5 #number of simulation runs between calculating time elapsed
meanStorer=rep(0,N);
toc=rep(0,N/n);
x=rep(0,50);
for (i in 1:N){
  if(i%%n == 1){tic=proc.time()[3]}
  x[]=runif(50);
  meanStorer[i] = mean(x);
  if(i%%n == 0){toc[i/n]=proc.time()[3]-tic; print(toc[i/n])}
}
plot(toc)
meanStorer is certainly large, but it is pre-allocated, so I am not sure why the loop slows down as time goes on. If I clear my workspace and run this code again, it will start just as slowly as the last few calculations finished! I am using RStudio (in case that matters). Also, here is some of my system information:
OS: Windows 7
System Type: 64-bit
RAM: 8gb
R version: 2.15.1 ($platform yields "x86_64-pc-mingw32")
Here is a plot of toc, prior to using pre-allocation for x (i.e. using x=runif(50) in the loop)
Here is a plot of toc, after using pre-allocation for x (i.e. using x[]=runif(50) in the loop)
Is ?rm not doing what I think it's doing? What's going on under the hood when I clear the workspace?
Update: with the newest version of R (3.1.0), the problem no longer persists, even when increasing N to N=3e8 (note R doesn't allow vectors too much larger than this).
It is quite unsatisfying that the fix is just updating R to the newest version, though, because I can't seem to figure out why there were problems in version 2.15. It would still be nice to know what caused them, so I am going to continue to leave this question open.
As you state in your updated question, the high-level answer is that you were using an old version of R with a bug; with the newest version of R (3.1.0), the problem no longer persists.

R package that automatically uses several cores?

I have noticed that R only uses one core while executing one of my programs which requires lots of calculations. I would like to take advantage of my multi-core processor to make my program run faster.
I have not yet investigated the question in depth, but I would appreciate your comments because I do not have a good knowledge of computer science and it is difficult for me to find easily understandable information on the subject.
Is there a package that allows R to automatically use several cores when needed?
I guess it is not that simple.
R can only make use of multiple cores with the help of add-on packages, and only for some types of operation. The options are discussed in detail on the High Performance Computing Task View on CRAN.
Update: From R Version 2.14.0 add-on packages are not necessarily required due to the inclusion of the parallel package as a recommended package shipped with R. parallel includes functionality from the multicore and snow packages, largely unchanged.
The easiest way to take advantage of multiprocessors is the multicore package which includes the function mclapply(). mclapply() is a multicore version of lapply(). So any process that can use lapply() can be easily converted to an mclapply() process. However, multicore does not work on Windows. I wrote a blog post about this last year which might be helpful. The package Revolution Analytics created, doSMP, is NOT a multi-threaded version of R. It's effectively a Windows version of multicore.
If your work is embarrassingly parallel, it's a good idea to get comfortable with the lapply() type of structuring. That will give you easy segue into mclapply() and even distributed computing using the same abstraction.
Things get much more difficult for operations that are not "embarrassingly parallel".
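For the embarrassingly parallel case, a minimal sketch of the lapply() to mclapply() switch might look like this (slow_task() is a made-up stand-in for real work; mclapply() relies on forking, so on Windows only mc.cores = 1 works):
library(parallel)

slow_task <- function(i) { Sys.sleep(0.5); i^2 }        # hypothetical per-element job

res_serial   <- lapply(1:8, slow_task)                  # one core
res_parallel <- mclapply(1:8, slow_task, mc.cores = 4)  # up to four cores (Linux/macOS)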
[EDIT]
As a side note, RStudio is getting increasingly popular as a front end for R. I love RStudio and use it daily. However, it needs to be noted that RStudio does not play nicely with multicore (at least as of Oct 2011... I understand that the RStudio team is going to fix this). This is because RStudio does some forking behind the scenes, and these forks conflict with multicore's attempts to fork. So if you need multicore, you can write your code in RStudio, but run it in a plain-Jane R session.
On this question you always get very short answers. The easiest solution, in my view, is the snowfall package, which is based on snow - that is, for a single (e.g. Windows) computer with multiple cores. See also the article by Knaus et al. for a simple example. snowfall is a wrapper around the snow package and allows you to set up a multicore cluster with a few commands. It's definitely less hassle than most of the other packages (I didn't try all of them).
On a side note, there are indeed only a few tasks that can be parallelized, for the very simple reason that you have to be able to split up the task before multicore calculation makes sense. The apply family is obviously a logical choice for this: multiple and independent computations, which is crucial for multicore use. Anything else is not always that easily parallelized.
Read also this discussion on sfApply and custom functions.
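A hedged sketch of the snowfall workflow, with a made-up worker function, might look like this:
library(snowfall)

# Start a local socket cluster with 4 CPUs (this also works on Windows)
sfInit(parallel = TRUE, cpus = 4)

square_slowly <- function(i) { Sys.sleep(0.2); i^2 }  # hypothetical task

sfExport("square_slowly")             # ship the function to the workers
res <- sfLapply(1:20, square_slowly)  # parallel lapply() across the cluster

sfStop()                              # shut the cluster down again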
Microsoft R Open includes multi-threaded math libraries to improve the performance of R. It works on Windows, Unix, and Mac. It's open source and can be installed in a separate directory if you have an existing R (CRAN) installation, and you can also use the popular RStudio IDE with it. From its inception, R was designed to use only a single thread (processor) at a time. Even today, R works that way unless linked with multi-threaded BLAS/LAPACK libraries.
The multi-core machines of today offer parallel processing power. To take advantage of this, Microsoft R Open includes multi-threaded math libraries.
These libraries make it possible for many common R operations, such as matrix multiply/inverse, matrix decomposition, and some higher-level matrix operations, to compute in parallel and use all of the processing power available to reduce computation times.
Please check the below link:
https://mran.revolutionanalytics.com/rro/#about-rro
http://www.r-bloggers.com/using-microsoft-r-open-with-rstudio/
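To see the effect for yourself, one small sketch is to time a BLAS-bound operation such as a dense matrix multiply: with a multi-threaded BLAS (as shipped with Microsoft R Open) it uses all cores, whereas with the reference BLAS it stays on one.
n <- 2000
m <- matrix(rnorm(n * n), nrow = n)
system.time(m %*% m)  # elapsed time drops when BLAS is multi-threaded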
As David Heffernan said, take a look at the Revolution Analytics blog. But you should know that most packages are for Linux, so if you use Windows it will be much harder.
Anyway, take a look at these sites:
Revolution. Here you will find a lecture about parallelization in R. The lecture is actually very good, but, as I said, most tips are for Linux.
And this thread here at Stack Overflow discusses some implementations on Windows.
The package future makes it extremely simple to work in R using parallel and distributed processing. More info here. If you want to apply a function to elements in parallel, the future.apply package provides a quick way to use the "apply" family functions (e.g. apply(), lapply(), and vapply()) in parallel.
Example:
library("future.apply")
library("stats")
x <- 1:10
# Single core
y <- lapply(x, FUN = quantile, probs = 1:3/4)
# Multicore in parallel
plan(multiprocess)
y <- future_lapply(x, FUN = quantile, probs = 1:3/4)
