The aim is to implement a fast version of orthogonal projective non-negative matrix factorization (OPNMF) in R. I am translating the Matlab code available here.
I implemented a vanilla R version, but it is much slower (about 5.5x) than the Matlab implementation on my data (~225000 x 150) for a 20-factor solution.
So I thought using C++ might speed things up, but its speed is similar to the R version. I think this can be optimized, but I am not sure how, as I am a newbie to C++. Here is a thread that discusses a similar problem.
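For reference, a minimal sketch of the vanilla R update loop (it mirrors the multiplicative update, clamping, and normalization used in the C++ version below; not necessarily identical to my actual script):

opnmf_r <- function(X, W0, tol = 1e-5, maxiter = 10000, eps = 1e-16) {
  W <- W0
  diffW <- Inf
  for (i in seq_len(maxiter)) {
    Wold <- W
    XXW   <- X %*% crossprod(X, W)                 # X (X' W)
    W     <- W * XXW / (W %*% crossprod(W, XXW))   # multiplicative update
    W[W < eps] <- eps                              # floor tiny entries
    W     <- W / norm(W, type = "2")               # normalise by the 2-norm
    diffW <- norm(Wold - W, type = "F") / norm(Wold, type = "F")
    if (diffW < tol) break
  }
  list(W = W, iter = i, diffW = diffW)
}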
Here is my RcppArmadillo implementation.
// [[Rcpp::export]]
Rcpp::List arma_opnmf(const arma::mat & X, const arma::mat & W0, double tol=0.00001, int maxiter=10000, double eps=1e-16) {
  arma::mat W = W0;
  arma::mat Wold = W;
  arma::mat XXW = X * (X.t()*W);
  double diffW = 9999999999.9;
  Rcout << "The value of maxiter : " << maxiter << "\n";
  Rcout << "The value of tol : " << tol << "\n";
  int i;
  for (i = 0; i < maxiter; i++) {
    XXW = X * (X.t()*W);
    W = W % XXW / (W * (W.t() * XXW));
    //W = W % (X*(X.t()*W)) / (W*((W.t()*X)*(X.t()*W)));
    arma::uvec idx = find(W < eps);
    W.elem(idx).fill(eps);
    W = W / norm(W, 2);
    diffW = norm(Wold - W, "fro") / norm(Wold, "fro");
    if (diffW < tol) {
      break;
    } else {
      Wold = W;
    }
    if (i % 10 == 0) {
      Rcpp::checkUserInterrupt();
    }
  }
  return Rcpp::List::create(Rcpp::Named("W") = W,
                            Rcpp::Named("iter") = i,
                            Rcpp::Named("diffW") = diffW);
}
This suggested issue confirms that Matlab is quite fast, so is there no hope when using R / C++?
The tests were made on Windows 10 and Ubuntu 16 with R version 4.0.0.
EDIT
After the interesting comments in the answer below, I am posting additional details. I ran tests on a Windows 10 machine with R 3.5.3 (as that's what Microsoft provides), and the comparison shows that RcppArmadillo with Microsoft's R is the fastest.
R
   user  system elapsed
 213.76    7.36  221.42

R with RcppArmadillo
   user  system elapsed
 179.88    3.44  183.43

Microsoft R Open
   user  system elapsed
 167.33    9.96   45.94

Microsoft R Open with RcppArmadillo
   user  system elapsed
  85.47    4.66   23.56
Are you aware that this code is "ultimately" executed by a pair of libraries called LAPACK and BLAS?
Are you aware that Matlab ships with a highly optimised one? Are you aware that on all systems R runs on you can change which LAPACK/BLAS is being used?
The difference matters greatly. Just this morning a friend posted this tweet contrasting the same R code running on the same Windows computer but in two different R environments. The six-times faster one "simply" uses a parallel LAPACK/BLAS implementation.
Here, you haven't even told us which operating system you are on. You can get OpenBLAS (which uses parallelism) for all OSs that R runs on. You can even get the Intel MKL (which IIRC is what Matlab uses too) fairly easily on some OSs. For Ubuntu/Debian I published a script on GitHub that does it in one step.
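As a quick check, recent versions of R will tell you which BLAS and LAPACK libraries a session is actually linked against (the exact output depends on your installation):

sessionInfo()   # reports the BLAS and LAPACK libraries in use (R >= 3.4)
La_version()    # the LAPACK version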
Lastly, many years ago I "inherited" a fast program running in Matlab on a (then-large-ish) Windows computer. I rewrote the Matlab part (carefully and slowly, it's effort) in C++ using RcppArmadillo, leading to a few factors of improvement -- and because we could run that (now open source) code in parallel from R on the same computer, another few factors. Together it was orders of magnitude, turning a day-long simulation into something that ran in a few minutes. So "yes, you can".
Edit: As you have access to Ubuntu, you can switch from basic LAPACK/BLAS to OpenBLAS via a single command, though I am no longer that familiar with Ubuntu 16.04 (as I run 20.04 myself).
Edit 2: Picking up the comparison from Josef's tweet, the Docker r-base container I also maintain (as part of the Rocker Project) can use OpenBLAS. So once we add it, e.g. via apt-get install libopenblas-dev, the timing of a simple repeated matrix crossproduct moves from
root@0eb44b1fcc06:/# Rscript -e 'v <- matrix(1:1e6,1e3); system.time(replicate(10, crossprod(v,v)))'
   user  system elapsed
  9.289   0.084   9.373
root@0eb44b1fcc06:/#
to
root@67bd334f53d4:/# Rscript -e 'v <- matrix(1:1e6,1e3); system.time(replicate(10, crossprod(v,v)))'
   user  system elapsed
  2.259   2.370   0.447
root@67bd334f53d4:/#
which is substantial.
Related
I am trying to run the code below in VS Code for Julia (or directly in Julia). It is a simple example that computes the maximum likelihood estimator of the mean and the variance of a normal distribution (source):
using JuMP, Ipopt, Random

Random.seed!(1234)
n = 1_000
data = randn(n)
mle = Model(optimizer_with_attributes(Ipopt.Optimizer, "print_level" => 0))
@NLparameter(mle, problem_data[i = 1:n] == data[i])
μ0 = randn()
σ0 = rand() + 1
@info "Starting guess, mean: $μ0, std: $σ0"
@variable(mle, μ, start = μ0)
@variable(mle, σ >= 0.0, start = σ0)
@NLexpression(mle, loglikelihood,
    -(n / 2) * (log(2π) + 2 * log(σ)) - inv(2 * σ^2) * sum((xi - μ)^2 for xi in problem_data)
)
@NLobjective(mle, Max, loglikelihood)
optimize!(mle)
This is a nonlinear optimization problem using JuMP, and when running optimize!(mle) I get 'terminal process terminated with exit code 3221226356' in VS Code. Similarly, when I run it directly in Julia, it just shuts down entirely. (I have the latest versions. I have also tried on a different computer and everything works fine there.) Any help would be greatly appreciated!
P.S. I have seen it might have to do with a 'heap corruption problem', but I have no idea what that means or how to solve it.
This has been cross-posted on the Julia discourse, we'll continue to debug it there: https://discourse.julialang.org/t/cant-run-simple-jump-example/67938
It's either:
An issue in VS-Code (although "when I run it directly in Julia" may rule this out)
An issue with Ipopt, which is either due to it installing an old version, or a weird incompatibility with this user's system
Either way, this is likely hard to debug.
I am trying to learn how to work with nls.lm in the R package minpack.lm by using the Rosenbrock function to see if the algorithm converges to the global minimum at (x, y) = (1, 1). I do so both with and without the analytic Jacobian. In both instances, I get a warning telling me that the algorithm has reset the maximum number of iterations specified in the call to nls.lm to 1024:
Warning messages:
1: In nls.lm(par = initpar, fn = objective_rosenbrock, jac = gradient_rosenbrock, :
resetting `maxiter' to 1024!
2: In nls.lm(par = initpar, fn = objective_rosenbrock, jac = gradient_rosenbrock, :
lmder: info = -1. Number of iterations has reached `maxiter' == 1024.
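For context, the call looks roughly like this (a minimal sketch; the residual and Jacobian functions shown here are stand-ins for my actual objective_rosenbrock / gradient_rosenbrock):

library(minpack.lm)

# Rosenbrock as a least-squares problem: residuals r1 = 1 - x, r2 = 10 * (y - x^2)
objective_rosenbrock <- function(par) {
  c(1 - par[1], 10 * (par[2] - par[1]^2))
}

# Jacobian of the residuals with respect to (x, y)
gradient_rosenbrock <- function(par) {
  rbind(c(-1,            0),
        c(-20 * par[1], 10))
}

initpar <- c(-1.2, 1.0)
fit <- nls.lm(par = initpar,
              fn = objective_rosenbrock,
              jac = gradient_rosenbrock,
              control = nls.lm.control(maxiter = 10000))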
As a result, the algorithm never quite reaches (1, 1) given my initial guess of (-1.2, 1.0). I found the source code for the library on GitHub, and the following lines are pertinent here:
https://github.com/cran/minpack.lm/blob/master/src/nls_lm.c
OS->maxiter = INTEGER_VALUE(getListElement(control, "maxiter"));
if (OS->maxiter > 1024) {
    OS->maxiter = 1024;
    warning("resetting `maxiter' to 1024!");
}
Is there any logic to why the maximum number of iterations is capped at 1024? Something to do with bits and 2^10? I would like to use the library for a different application, but this cap on iterations might prevent that. Any insight would be appreciated.
Git blame says that this code limiting the max iterations was introduced in version 1.1-0, in 2008. The NEWS file for the package only goes back as far as version 1.1-6. I can't find the code in any public repo other than the one you point to (which is only a CRAN mirror; it doesn't contain any comments/commit messages/etc. from developers that might give us clues.)
Other than contacting the maintainer I think it's going to be hard to figure out what the rationale is for this limit.
I do have some guesses though.
The only places that maxiter is actually used in the code are here and here - in R code, not Fortran or C code, so it seems extremely unlikely that we are dealing with something like a 10-bit unsigned integer type (which seems an unlikely choice in any case). I think the limitation is there because we also have a buffer defined for holding trace information here:
double rsstrace[1024];
which, as you can see, is hard-coded to a length of 1024. Presumably bad things would happen if we tried to stuff 1025 iterations' worth of tracing information into this array ...
My suggestions:
change all instances of '1024' in the code to something larger and see what happens. There are only four:
$ find . -type f -exec grep -Hn 1024 {} \;
./src/nls_lm.c:141: if(OS->maxiter > 1024) {
./src/nls_lm.c:142: OS->maxiter = 1024;
./src/nls_lm.c:143: warning("resetting `maxiter' to 1024!");
./src/minpack_lm.h:20: double rsstrace[1024];
it would be best to #define MAXITER 2048 (or whatever) in src/minpack_lm.h and use that instead of the numerical value.
Contact the maintainer (maintainer("minpack.lm")) and ask them about this issue.
I have written some recursive code in R.
Before invoking R, I set the stack size to 96 MB at the shell with:
ulimit -s 96000
I invoked R with maximum protection pointer stack size of 500000 with:
R --max-ppsize 500000
And I changed the maximum recursion depth to 500000:
options(expression = 500000)
I used both the binary R package from the Arch Linux repositories (without memory profiling) and a binary I compiled myself with the memory profiling option. Both are version 3.4.2.
I used two versions of the code, with and without gc().
The problem is that R exits with a "node stack overflow" error while only 16 MB of the 93 MB of total available stack is used and the depth is just below one percent of the expressions option of 5e5:
size current direction eval_depth
93388800 16284704 1 4958
Error: node stack overflow
The change in current stack usage between the last two iterations was around 10K. The only passed and saved object is a numeric vector of 19 items.
The recursive portion of the code is below:
network_recursive <- function(called)
{
  print(Cstack_info())
  callers <- list_caller[[called + 1]]   # get the callers of the called
  callers <- callers[!bool[callers + 1]] # subset for nofriends - new friends
  new_friend_no <- length(callers)       # number of new friends
  print(list(called, callers))
  if (new_friend_no > 0)                 # if1 still new friends
  {
    friends <<- friends + new_friend_no  # increment friend no
    print(friends)
    bool[callers + 1] <<- T              # toggle friends
    sapply(callers, network_recursive)   # recurse network control
  } # close if1
  print("end of recursion")
}
What may be the reason for this stack overflow?
Some notes on the R source code, related to the issue.
The portion of the code that triggers the error is lines 5987-5988 from src/main/eval.c:
5975 #ifdef USE_BINDING_CACHE
5976 if (useCache) {
5977 R_len_t n = LENGTH(constants);
5978 # ifdef CACHE_MAX
5979 if (n > CACHE_MAX) {
5980 n = CACHE_MAX;
5981 smallcache = FALSE;
5982 }
5983 # endif
5984 # ifdef CACHE_ON_STACK
5985 /* initialize binding cache on the stack */
5986 vcache = R_BCNodeStackTop;
5987 if (R_BCNodeStackTop + n > R_BCNodeStackEnd)
5988 nodeStackOverflow();
5989 while (n > 0) {
5990 SETSTACK(0, R_NilValue);
5991 R_BCNodeStackTop++;
5992 n--;
5993 }
5994 # else
5995 /* allocate binding cache and protect on stack */
5996 vcache = allocVector(VECSXP, n);
5997 BCNPUSH(vcache);
5998 # endif
5999 }
6000 #endif
Off the top of my head, I see that you used options(expression = 500000), but the field in the list returned by "options()" is called 'expressions' (with an s). If you typed it in the way you described in your question, then the 'expressions' field remained at 5000, not the 500000 you intended to set it as. So this might be why you maxed out while only using what you thought was 1% of the stack depth.
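In other words, to actually raise the limit the call needs to be:

options(expressions = 500000)  # note the trailing "s"
getOption("expressions")       # check that the new value took effect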
The node stack has its own limit, which is fixed (defined in Defn.h, R_BCNODESTACKSIZE). If you have a real example where the limit is too small, please submit a bug report, we could increase it or also add a command line option for it. The "node stack" is used by the byte-code interpreter, which interprets byte-code produced by the byte-code compiler. Cstack_info() does not display the node stack usage. The node stack is not allocated on the C stack.
Programs based on deep recursion will be very slow in R anyway, as function calls are quite expensive. For practical purposes, when a limit related to recursion depth is hit, it might be better to rewrite the program to avoid recursion rather than increasing the limits.
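For this particular function, a sketch of an iterative rewrite (assuming the same list_caller, bool, and friends objects as in the question) could process nodes from an explicit queue instead of recursing:

network_iterative <- function(start) {
  queue <- start
  while (length(queue) > 0) {
    called <- queue[1]
    queue  <- queue[-1]
    callers <- list_caller[[called + 1]]    # callers of the current node
    callers <- callers[!bool[callers + 1]]  # keep only the new friends
    if (length(callers) > 0) {
      friends <<- friends + length(callers) # increment friend count
      bool[callers + 1] <<- TRUE            # mark them as seen
      queue <- c(queue, callers)            # enqueue instead of recursing
    }
  }
}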
Just as an experiment, one might disable the just-in-time compiler and thereby reduce the stress on the node stack. It won't be eliminated completely, because some packages are already compiled at installation by default, including base and recommended packages, so e.g. sapply is compiled. On the other hand, this might increase the stress on other limits, and the program will run even slower.
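The JIT experiment is a one-liner, run before the recursive code is defined and called:

compiler::enableJIT(0)  # 0 disables the just-in-time compiler; the default level is 3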
For learning purposes, I wrote a small Python C module that is supposed to perform an IPC CUDA memcopy to transfer data between processes. For testing, I wrote equivalent programs: one using theano's CudaNdarray, and the other using pycuda. The problem is, even though the test programs are nearly identical, the pycuda version works while the theano version does not. It doesn't crash: it just produces incorrect results.
Below is the relevant function in the C module. Here is what it does: every process has two buffers, a source and a destination. Calling _sillycopy(source, dest, n) copies n elements from each process's source buffer to the neighboring process's dest array. So, if I have two processes, 0 and 1, process 0 will end up with process 1's source buffer and process 1 will end up with process 0's source buffer.
Note that to transfer cudaIpcMemHandle_t values between processes, I use MPI (this is a small part of a larger project which uses MPI). _sillycopy is called by another function, "sillycopy", which is exposed to Python via the standard Python C API methods.
void _sillycopy(float *source, float* dest, int n, MPI_Comm comm) {
  int localRank;
  int localSize;
  MPI_Comm_rank(comm, &localRank);
  MPI_Comm_size(comm, &localSize);

  // Figure out which process is to the "left".
  // m() performs a mod and treats negative numbers
  // appropriately
  int neighbor = m(localRank - 1, localSize);

  // Create a memory handle for *source and do a
  // wasteful Allgather to distribute to other processes
  // (could just use an MPI_Sendrecv, but irrelevant right now)
  cudaIpcMemHandle_t *memHandles = new cudaIpcMemHandle_t[localSize];
  cudaIpcGetMemHandle(memHandles + localRank, source);
  MPI_Allgather(
      memHandles + localRank, sizeof(cudaIpcMemHandle_t), MPI_BYTE,
      memHandles, sizeof(cudaIpcMemHandle_t), MPI_BYTE,
      comm);

  // Open the neighbor's mem handle so we can do a cudaMemcpy
  float *sourcePtr;
  cudaIpcOpenMemHandle((void**)&sourcePtr, memHandles[neighbor], cudaIpcMemLazyEnablePeerAccess);

  // Copy!
  cudaMemcpy(dest, sourcePtr, n * sizeof(float), cudaMemcpyDefault);
  cudaIpcCloseMemHandle(sourcePtr);
  delete [] memHandles;
}
Now here is the pycuda example. For reference, calling int() on a_gpu and b_gpu returns the underlying buffer's memory address on the device.
import sillymodule # sillycopy lives in here
import simplempi as mpi
import pycuda.driver as drv
import numpy as np
import atexit
import time
mpi.init()
drv.init()
# Make sure each process uses a different GPU
dev = drv.Device(mpi.rank())
ctx = dev.make_context()
atexit.register(ctx.pop)
shape = (2**26,)
# allocate host memory
a = np.ones(shape, np.float32)
b = np.zeros(shape, np.float32)
# allocate device memory
a_gpu = drv.mem_alloc(a.nbytes)
b_gpu = drv.mem_alloc(b.nbytes)
# copy host to device
drv.memcpy_htod(a_gpu, a)
drv.memcpy_htod(b_gpu, b)
# A few more host buffers
a_p = np.zeros(shape, np.float32)
b_p = np.zeros(shape, np.float32)
# Sanity check: this should fill a_p with 1's
drv.memcpy_dtoh(a_p, a_gpu)
# Verify that
print(a_p[0:10])
sillymodule.sillycopy(
int(a_gpu),
int(b_gpu),
shape[0])
# After this, b_p should have all one's
drv.memcpy_dtoh(b_p, b_gpu)
print(b_p[0:10])
And now the theano version of the above code. Rather than using int() to get the buffers' address, the CudaNdarray way of accessing this is via the gpudata attribute.
import os
import simplempi as mpi
mpi.init()
# selects one gpu per process
os.environ['THEANO_FLAGS'] = "device=gpu{}".format(mpi.rank())
import theano.sandbox.cuda as cuda
import time
import numpy as np
import time
import sillymodule
shape = (2 ** 24, )
# Allocate host data
a = np.ones(shape, np.float32)
b = np.zeros(shape, np.float32)
# Allocate device data
a_gpu = cuda.CudaNdarray.zeros(shape)
b_gpu = cuda.CudaNdarray.zeros(shape)
# Copy from host to device
a_gpu[:] = a[:]
b_gpu[:] = b[:]
# Should print 1's as a sanity check
print(np.asarray(a_gpu[0:10]))
sillymodule.sillycopy(
a_gpu.gpudata,
b_gpu.gpudata,
shape[0])
# Should print 1's
print(np.asarray(b_gpu[0:10]))
Again, the pycuda code works perfectly and the theano version runs, but gives the wrong result. To be precise, at the end of the theano code, b_gpu is filled with garbage: neither 1's nor 0's, just random numbers as though it were copying from a wrong place in memory.
My original theory regarding why this was failing had to do with CUDA contexts. I wondered whether theano was doing something with them that meant the CUDA calls made in sillycopy were run under a different CUDA context than the one used to create the gpu arrays. I don't think this is the case because:
I spent a lot of time digging deep in theano's code and saw no funny business being played with contexts
I would expect such a problem to result in a bad crash rather than an incorrect result, and it does not crash.
A secondary thought is whether this has to do with the fact that theano spawns several threads, even when using a cuda backend, which can be verified by running "ps huH p ". I don't know how threads might affect anything, but I have run out of obvious things to consider.
Any thoughts on this would be greatly appreciated!
For reference: the processes are launched in the normal OpenMPI way:
mpirun --np 2 python test_pycuda.py
How can I obtain this information:
Total Memory
Free Memory
Memory used by the currently running application?
I think Qt should have a platform-independent way to query memory information, but I can't find it. So what can I do if I want to make a platform-independent application that shows memory state?
Unfortunately, there is nothing built into Qt for this. You must do this per-platform.
Here are some samples to get you started. I had to implement this in one of my apps just last week. The code below is still very much in development; there may be errors or leaks, but it might at least point you in the correct direction. I was only interested in total physical RAM, but the other values are available in the same way. (Except perhaps memory in use by the current application ... not sure about that one.)
Windows (GlobalMemoryStatusEx)
MEMORYSTATUSEX memory_status;
ZeroMemory(&memory_status, sizeof(MEMORYSTATUSEX));
memory_status.dwLength = sizeof(MEMORYSTATUSEX);
if (GlobalMemoryStatusEx(&memory_status)) {
    system_info.append(
        QString("RAM: %1 MB")
            .arg(memory_status.ullTotalPhys / (1024 * 1024)));
} else {
    system_info.append("Unknown RAM");
}
Linux (/proc/meminfo)
QProcess p;
p.start("awk", QStringList() << "/MemTotal/ { print $2 }" << "/proc/meminfo");
p.waitForFinished();
QString memory = p.readAllStandardOutput();
system_info.append(QString("; RAM: %1 MB").arg(memory.toLong() / 1024));
p.close();
Mac (sysctl)
QProcess p;
p.start("sysctl", QStringList() << "kern.version" << "hw.physmem");
p.waitForFinished();
QString system_info = p.readAllStandardOutput();
p.close();
Much better on POSIX OSes (Linux, Solaris, perhaps recent macOS...):
getrusage(...): especially look at ru_maxrss.
getrlimit(...): but I did not find any useful info in it.
sysconf(...): _SC_PAGESIZE, _SC_PHYS_PAGES, _SC_AVPHYS_PAGES
sysinfo(...): totalram, freeram, sharedram, totalswap, ...
So many treasures on POSIX systems are not available on Windows.
This is currently not possible in Qt. You would need to ifdef the different OS memory calls.