Using GPU in optimization in R (OpenCL)?

Based on an idea from another thread, I was hoping you could help me out with this idea or push me in the right direction.
I have seen an example of OpenCL, which didn't look too complicated for basic calculations, so I hope to rewrite the numerical-gradient function that the optimization routine uses in OpenCL and plug it into the optimizer, so that every time I optimize some function, the independent calculations are done on the GPU.
Idea: use the GPU for the calculation of functionals and gradients during optimization (e.g. in nlminb()).
Problems:
1. How do I tap into the optimization routine? (I can't seem to locate the C file that does the optimisation.)
2. Can I just replace the gradient calculation with what I prepare for the GPU?
3. Has anyone got something similar to work? Any ideas or notes?
Thank you and have a nice day!
PS: If you think it wouldn't speed up the optimization, or that it's hard to code / hard to do, etc., please let me know! I'm a very inexperienced and lousy "programmer".

You could compile R linked against one of the OpenCL-optimized BLAS libraries. But judging by the attempts to speed up R with other BLAS libraries, the gains might be limited to special cases. Yours may be one of them, though.
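On question 2: yes, in the sense that both optim() and nlminb() accept a user-supplied gradient function, so there is no need to dig into the optimizer's C sources. A minimal sketch, assuming the GPU part only has to return a numeric gradient vector (the objective and gradient below are made-up placeholders):

    f <- function(x) sum((x - 1)^2)    # stand-in objective

    grad_f <- function(x) {
      # Replace this body with a call into your GPU code (e.g. an OpenCL
      # kernel wrapped in R); here it is just the analytic gradient on the CPU.
      2 * (x - 1)
    }

    optim(c(5, -3), fn = f, gr = grad_f, method = "BFGS")
    nlminb(c(5, -3), objective = f, gradient = grad_f)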

Related

Efficient and fast way of solving linear programming problems in R

I am working with a very large dataset, typically involving a few million combinations.
I want to solve the assignment problem (maximise the sum).
I have tried solving it on a small test set using adagio::assignment and clue::solve_LSAP.
I wasn't able to install the "lpSolve" package on my system; it threw a segmentation fault.
I wanted to know which of these is faster, or whether there is any other method that does it faster.
Thanks.
An LP formulation is not a good way to solve the assignment problem, whichever library you use. You have to use the Hungarian algorithm, and it looks like solve_LSAP does exactly that.
No need to try anything else IMHO.
EDIT: An efficient implementation of the Hungarian method is O(n^3), which is extremely fast as exact optimization algorithms go. If solve_LSAP is not fast enough for your problem (assuming it is implemented correctly), it is very unlikely that any exact method will work.
You will have to use some sort of heuristic to approximate the solution.
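For reference, a minimal sketch of the solve_LSAP() route (the benefit matrix below is a made-up stand-in for your data; maximum = TRUE handles the maximisation directly):

    library(clue)

    set.seed(1)
    benefit <- matrix(runif(100 * 100), nrow = 100)    # stand-in for your benefit matrix

    sol <- solve_LSAP(benefit, maximum = TRUE)         # Hungarian method, maximising the sum
    sum(benefit[cbind(seq_len(nrow(benefit)), sol)])   # value of the chosen assignment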

R bindings for SLEPC / PETSC?

Has anyone had much luck with bindings between R and SLEPC? I'm looking for faster SVD and eigenvalue algorithms in R.
Update: I'm generally interested in both scenarios (speed vs. scale).
I don't follow:
- SLEPC is for scalability, i.e. very large problems.
- You ask for faster SVDs.
If you are after speed, I'd consider:
- fast implementations, either something like RcppEigen, or
- an accelerated BLAS such as OpenBLAS, which can be a drop-in for R if R was built the right way.
But if you are really after large problems, maybe the new pbd-r.org project, which wraps ScaLAPACK, can be of help.
So please let us know whether you're after speed or scale.
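As a rough illustration of the "speed" side: a dense SVD in base R runs on whatever BLAS/LAPACK R is linked against, so timing it before and after swapping in something like OpenBLAS shows what the drop-in buys you (the matrix size here is made up):

    set.seed(42)
    m <- matrix(rnorm(2000 * 500), nrow = 2000)
    system.time(s <- svd(m))   # uses the BLAS/LAPACK R was built or linked with
    head(s$d)                  # leading singular values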

Lua alternative to optim()

I'm currently looking for a Lua alternative to the R programming language's optim() function. Does anyone know how to deal with this?
http://numlua.luaforge.net/ looks interesting but doesn't seem to have minimization. The most promising lead seems to be a Lua wrapper for GSL, which has a variety of multidimensional minimization algorithms included.
With derivatives
- BFGS (method="BFGS" in optim) and two conjugate gradient methods (Fletcher-Reeves and Polak-Ribiere), which are two of the three options available for method="CG" in optim.
Without derivatives
- the Nelder-Mead simplex (method="Nelder-Mead", the default in optim).
More specifically, see here for the Lua shell documentation covering minimization.
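For reference, here is a minimal R sketch of the optim() calls the Lua code would need to replicate, using the Rosenbrock function as a stand-in objective (my example, not from the question):

    rosen <- function(x) (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2
    rosen_grad <- function(x) c(
      -2 * (1 - x[1]) - 400 * x[1] * (x[2] - x[1]^2),
       200 * (x[2] - x[1]^2)
    )

    optim(c(-1.2, 1), rosen, method = "Nelder-Mead")        # derivative-free, the optim default
    optim(c(-1.2, 1), rosen, rosen_grad, method = "BFGS")   # quasi-Newton, uses the gradient
    optim(c(-1.2, 1), rosen, rosen_grad, method = "CG")     # conjugate gradient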
I agree with @Zack that you should use existing implementations if at all possible, and that you might need a little more background knowledge to know which algorithms will be useful for your particular problems ...
R's implementation of optim isn't actually written in R. If you type "optim" with no parentheses at the prompt, it'll dump out the definition of the function, and you can see that after some error checking and argument shuffling it invokes an .Internal routine (coded in C and/or Fortran) to do all the real work.
So your best bet is to find a C library for mathematical optimization -- sorry, I have no recommendations -- and wrap that into Lua. I doubt anyone has written native-Lua code for this, and I would not recommend trying to code it yourself; doing mathematical optimization efficiently is still an active domain of basic research, and the best-so-far algorithms are decidedly nontrivial to implement.
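You can see this for yourself at the R prompt (exactly what the wrapper ends up calling varies a little across R versions, but the pattern is the same):

    optim         # prints the R-level definition of the function
    body(optim)   # the same body as an R language object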

Rllvm and compiler packages: R compilation

This is a fairly general question about the future of R: is there any hope of seeing a merger of compiler and Rllvm (from Omegahat), or another JIT compilation scheme for R? (I know there is Ra, but it hasn't been updated recently.)
In my tests the speed gain from compiler is marginal for "complicated" functions...
What matters isn't how complicated a function is but what kinds of computations it performs. The compiler will make the most difference for functions dominated by interpreter overhead, such as ones that perform mostly simple operations on scalar or other small data. In cases like that I have seen a factor of 3 for artificial examples and a bit better than a factor of 2 for some production code. Functions that spend most of their time in operations implemented in native code, like linear algebra operations, will see little benefit.
This is just the first release of the compiler and it will evolve over time. LLVM is one of several possible directions we will look at, but probably not for a while. In any case, I would expect something like LLVM to provide further improvements in cases where the current compiler already makes a difference, but not to add much in cases where it does not.
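A small illustration of that point (a sketch; in recent R versions the JIT byte-compiles functions automatically, so compiler::enableJIT(0) may be needed to see the uncompiled baseline, and exact timings will vary):

    library(compiler)
    enableJIT(0)                  # disable automatic JIT so the contrast is visible

    slow_sum <- function(n) {     # dominated by interpreter overhead
      s <- 0
      for (i in seq_len(n)) s <- s + i * i
      s
    }
    fast_sum <- cmpfun(slow_sum)  # byte-compiled version of the same function

    system.time(slow_sum(1e6))
    system.time(fast_sum(1e6))    # typically a solid factor faster

    m <- matrix(rnorm(500 * 500), 500)
    system.time(crossprod(m))     # mostly native code: compiling would gain little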
(Moving from a comment to an answer ...)
This sounds more like a question for the R development mailing list. Based on my general impressions I would say "probably not". Are your complicated functions already based on heavily vectorized (and hence efficient) functions? I think a more promising direction for not-so-easily-automatically-optimized situations is the increasing ease of embedding C++ (i.e. via Rcpp, with inline if necessary).
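A minimal sketch of the Rcpp route mentioned above (assuming Rcpp and a working C++ toolchain are installed; the function itself is just an illustrative stand-in):

    library(Rcpp)

    cppFunction('
      double sum_sq(int n) {
        double s = 0.0;
        for (int i = 1; i <= n; ++i) s += (double) i * i;
        return s;
      }
    ')

    sum_sq(1000000L)   # the loop runs in compiled C++ rather than in the R interpreter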

Quickly cross-check complex math results?

I am doing matrix operations on large matrices in my C++ program, and I need to verify the results I get. Up until now I have used WolframAlpha for this, but my inputs are now very large and the web interface does NOT accept such large values (the text field is limited).
I am looking for a better way to quickly cross-check these math problems.
I know there is MATLAB, but I have never used it, and I don't know whether it will suffice for my needs or how steep the learning curve would be.
Is this the time to make the jump, or are there other solutions?
If you don't mind using python, numpy might be an option.
Apart from the license costs, MATLAB is the state-of-the-art numerical math tool. Octave is a free, open-source alternative with a similar syntax. For both tools the learning curve is quite gentle!
WolframAlpha is a web interface to Wolfram Mathematica, and the command syntax is exactly the same. If you have access to Mathematica at your university, it would be the smoothest choice for you, since you already have experience with WolframAlpha.
You may also try some packages to convert Mathematica commands to MATLAB:
ToMatlab
Mathematica Symbolic Toolbox for MATLAB 2.0
Let us know in more detail what your validation process is: what does your data look like, and which commands have you used in WolframAlpha? Then we can help you with a MATLAB alternative.
