Hello everyone, and thanks in advance! I've had a bit of an interesting journey with this problem. Here I figured out how to create a file-backed big matrix using the bigmemory package. This 7062-row by 364520-column matrix is the constraint matrix in a linear programming problem I'm trying to solve using the Rsymphony package. The code is below, and the constraint matrix is called mat:
Rsymph <- Rsymphony_solve_LP(obj
,mat[1:nrow(mat),1:ncol(mat)]
,dir
,rhs
,types="B",max=F, write_lp=T)
Unfortunately, when I run this, Rsymphony tries to bring the file-backed matrix into memory, and I don't have enough RAM. The only reason I created the big matrix with bigmemory in the first place was to use as little RAM as possible. Is there any way, either with this code or with another linear programming function, to complete this with the amount of memory I have available? Thanks.
This was my concern before. By running mat[1:nrow(mat), 1:ncol(mat)] you are converting the big.matrix into a regular, in-memory matrix. The function will need to be rewritten so it is compatible with big.matrix objects. If you look at the source code for Rsymphony_solve_LP you will find the following call:
out <- .C("R_symphony_solve",
as.integer(nc),
as.integer(nr),
as.integer(mat$matbeg),
as.integer(mat$matind),
as.double(mat$values),
as.double(col_lb),
as.double(col_ub),
as.integer(int),
if(max) as.double(-obj) else as.double(obj),
obj2 = double(nc),
as.character(paste(row_sense, collapse = "")),
as.double(rhs),
double(),
objval = double(1L),
solution = double(nc),
status = integer(1L),
verbosity = as.integer(verbosity),
time_limit = as.integer(time_limit),
node_limit = as.integer(node_limit),
gap_limit = as.double(gap_limit),
first_feasible = as.integer(first_feasible),
write_lp = as.integer(write_lp),
write_mps = as.integer(write_mps))
This C function will need to be rewritten for it to be compatible with big.matrix objects. If the use of this function is critically important to you, there are some examples on the Rcpp Gallery website of how to access big.matrix objects using Rcpp and RcppArmadillo. I am sorry to say there is no easy solution beyond this right now: you either need to get more RAM or start writing some more code.
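For reference, here is a minimal sketch of that access pattern, following the Rcpp Gallery example for bigmemory (the function name big_col_sum and the assumption that the matrix stores doubles are mine). It only shows how to read a file-backed big.matrix from C++ without copying it into RAM; hooking this up to SYMPHONY itself is the part that still has to be written.

library(bigmemory)
library(Rcpp)

# Compile a small C++ helper that reads a big.matrix in place via its
# external pointer (mat@address) instead of materialising it as an R matrix.
cppFunction(depends = c("bigmemory", "BH"),
            includes = "#include <bigmemory/MatrixAccessor.hpp>",
            code = '
  double big_col_sum(SEXP bigmat_addr, int col) {
    Rcpp::XPtr<BigMatrix> pMat(bigmat_addr);   // wrap the external pointer
    MatrixAccessor<double> acc(*pMat);         // column-major accessor
    double s = 0.0;
    for (long i = 0; i < pMat->nrow(); ++i)
      s += acc[col - 1][i];                    // acc[column][row], zero-based column
    return s;
  }')

# Example usage with the file-backed constraint matrix from the question:
# big_col_sum(mat@address, 1)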
Introduction
I'm doing research in computational contact mechanics, in which I try to solve a PDE using a finite difference method. Long story short, I need to solve a linear system like Ax = b.
The suspects
In the problem, the matrix A is sparse, and so I defined it accordingly. On the other hand, both x and b are dense arrays.
In fact, x is defined as x = A\b, the potential solution of the problem.
So the least one might expect from this solution is that Ax is close to b in some sense. To my great surprise, I find that
julia> norm(A*x-b) # Frobenius or 2-norm
5018.901093242197
The vector x does not solve the system! I've tried a lot of tricks to discover what is going on, but no clues as of now. My first candidate is that I've found a bug; however, I need more evidence to make this assertion.
The hints
Here are some tests that I've done to try to pinpoint the error:
If you convert A to dense, the solution changes completely, and in fact it returns the correct solution.
I have repeated the process above in MATLAB, and it seems to work well with both sparse and dense matrices (that is, MATLAB's sparse result does not agree with Julia's).
Not all sparse matrices cause a problem. I have tried other initial conditions and the solver seems to work quite well. I am not able to predict what property of the matrix could be causing this discrepancy. However:
A has a condition number of 120848.06, which is quite high, although MATLAB doesn't seem to complain. Also, the absolute error of the computed solution relative to the true solution is huge.
How to reproduce this "bug"
Download the .csv files at the following link
Run the following code in the folder containing the files (install the packages if necessary):
using DelimitedFiles, LinearAlgebra, SparseArrays;
A = readdlm("A.csv", ',');
b = readdlm("b.csv", ',');
x = readdlm("x.csv", ',');
A_sparse = sparse(A);
println(norm(A_sparse\b - x)); # You should get something close to zero, x is the solution of the sparse implementation
println(norm(A_sparse*x - b)); # You should get something not close to zero, something is not working!
Final words
It might easily be the case that I'm missing something. Are there any other implementations apart from the usual A\b to test against?
To solve a sparse square system, Julia chooses to do a sparse LU decomposition. For the specific matrix A in the question, this decomposition is numerically ill-conditioned, as evidenced by cond(lu(A_sparse).U) == 2.879548971708896e64. This in turn causes the solve routine to make numerical errors.
A quick solution is to use a QR decomposition instead, by running x = qr(A_sparse)\b.
The solve or LU routines might need to be fixed to handle this case, or at least the maintainers need to know about this issue, so opening an issue on the GitHub repo might be a good idea.
(This is a rewrite of my comment on the question.)
I wrote a program using the unsupervised K-means algorithm to try to compress images. It now works, but in comparison to Python it's incredibly slow! Specifically, it's the rowNorms() calculation that's slow. The array X has 350000+ elements.
This is the particular function:
find_closest_centroids <- function(X, centroids) {
  m <- nrow(X)
  c <- integer(m)
  for (i in 1:m) {
    distances <- rowNorms(sweep(centroids, 2, X[i, ]))
    c[i] <- which.min(distances)
  }
  return(c)
}
In Python I am able to do it like this:
import numpy as np

def find_closest_centroids(X, centroids):
    m = len(X)
    c = np.zeros(m)
    for i in range(m):
        distances = np.linalg.norm(X[i] - centroids, axis=1)
        c[i] = np.argmin(distances)
    return c
Any recommendations?
Thanks.
As dvd280 noted in his comment, R tends to do worse than many other languages in terms of performance. If you are content with the performance of your code in Python but need the function available from R, you might want to look into the reticulate package, which provides an interface to Python much like the Rcpp package mentioned by dvd280 does for C++.
If you still want to implement this natively in R, be mindful of the data structures you use. For row-wise operations, data frames are a poor choice, as they are stored as lists of columns. I'm not sure about the data structures in your code, but rowNorms() seems to be a matrix method. You might get more mileage out of a list-of-rows structure.
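If X and centroids are plain numeric matrices with observations in rows (an assumption on my part; the name find_closest_centroids_vec is mine), you can also drop the per-row loop entirely and compute all distances in one shot, which is usually where the big speed-ups in R come from:

# Vectorized sketch: assign each row of X to its nearest centroid without a loop.
# It uses ||x - c||^2 = ||x||^2 - 2*x.c + ||c||^2; the ||x||^2 term is constant
# within a row, so it can be dropped when taking the row-wise minimum.
find_closest_centroids_vec <- function(X, centroids) {
  cross   <- X %*% t(centroids)                  # m x k matrix of dot products x.c
  cent_sq <- rowSums(centroids^2)                # length-k vector of ||c||^2
  d2      <- sweep(-2 * cross, 2, cent_sq, `+`)  # m x k matrix of -2*x.c + ||c||^2
  max.col(-d2, ties.method = "first")            # row-wise argmin via base R's argmax
}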
If you feel like getting into dplyr, you might find this vignette on row-wise operations helpful. Make sure you have the latest version of the package, as the vignette is based on dplyr 1.0.
The data.table package tends to yield the best performance for large data sets in R, but I'm not familiar with it, so I can't give you any further directions on that.
Right up front: this is an issue I encountered when submitting an R package to CRAN, so I
don't have control over the stack size (as the issue occurred on one of CRAN's platforms)
can't provide a reproducible example (as I don't know the exact configuration on CRAN)
Problem
When trying to submit the cSEM.DGP package to CRAN the automatic pretest (for Debian x86_64-pc-linux-gnu; not for Windows!) failed with the NOTE: C stack usage 7975520 is too close to the limit.
I know this is caused by a function with three arguments whose body is about 800 lines long. The function body consists of additions and multiplications of these arguments. It is the function varzeta6(), which you can find here (from line 647 onwards).
How can I address this?
Things I can't do:
provide a reproducible example (at least I would not know how)
change the stack size
Things I am thinking of:
try to break the function into smaller pieces, but I don't know how best to do that
somehow precompile(?) the function (to be honest, I am just guessing) so CRAN doesn't complain
Let me know your ideas!
Details / Background
The reason why varzeta6() (and varzeta4() / varzeta5(), and even more so varzeta7()) is so long and R-inefficient is that it is essentially copy-pasted from Mathematica (after simplifying the Mathematica code as much as possible and adapting it to be valid R code). Hence, the code is by no means R-optimized (which @MauritsEvers rightly pointed out).
Why do we need Mathematica? Because what we need is the general form of the model-implied construct correlation matrix of a recursive structural equation model with up to 8 constructs, as a function of the parameters of the model equations. In addition, there are constraints.
To get a feel for the problem, let's take a system of two equations that can be solved recursively:
Y2 = beta1*Y1 + zeta1
Y3 = beta2*Y1 + beta3*Y2 + zeta2
What we are interested in is the covariances E(Y1*Y2), E(Y1*Y3), and E(Y2*Y3) as functions of beta1, beta2, and beta3, under the constraints that
E(Y1) = E(Y2) = E(Y3) = 0,
E(Y1^2) = E(Y2^2) = E(Y3^2) = 1
E(Yi*zeta_j) = 0 (with i = 1, 2, 3 and j = 1, 2)
For such a simple model, this is rather trivial:
E(Y1*Y2) = E(Y1*(beta1*Y1 + zeta1)) = beta1*E(Y1^2) + E(Y1*zeta1) = beta1
E(Y1*Y3) = E(Y1*(beta2*Y1 + beta3*(beta1*Y1 + zeta1) + zeta2)) = beta2 + beta3*beta1
E(Y2*Y3) = ...
But you can see how quickly this gets messy when you add Y4, Y5, and so on up to Y8.
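As a quick sanity check on the two-equation example, here is a small simulation sketch (the beta values are made up, and the zetas are assumed normal with variances chosen so that Var(Yi) = 1):

# Verify E(Y1*Y2) = beta1 and E(Y1*Y3) = beta2 + beta3*beta1 by simulation.
set.seed(1)
n <- 1e6
beta1 <- 0.4; beta2 <- 0.3; beta3 <- 0.2
Y1    <- rnorm(n)
zeta1 <- rnorm(n, sd = sqrt(1 - beta1^2))                   # Var(zeta1) = 1 - beta1^2
Y2    <- beta1 * Y1 + zeta1
vz2   <- 1 - beta2^2 - beta3^2 - 2 * beta1 * beta2 * beta3  # so that Var(Y3) = 1
zeta2 <- rnorm(n, sd = sqrt(vz2))
Y3    <- beta2 * Y1 + beta3 * Y2 + zeta2
c(cov(Y1, Y2), beta1)                  # both approximately 0.40
c(cov(Y1, Y3), beta2 + beta3 * beta1)  # both approximately 0.38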
In general, the model-implied construct correlation matrix can be written as (the expression actually looks more complicated because we also allow for up to 5 exogenous constructs; this is why varzeta1() already looks complicated, but ignore this for now):
V(Y) = (I - B)^-1 V(zeta)(I - B)'^-1
where I is the identity matrix and B is a lower triangular matrix of model parameters (the betas). V(zeta) is a diagonal matrix. The functions varzeta1(), varzeta2(), ..., varzeta7() compute the main diagonal elements. Since we constrain Var(Yi) to always be 1, the variances of the zetas follow. Take, for example, the equation Var(Y2) = beta1^2*Var(Y1) + Var(zeta1) --> Var(zeta1) = 1 - beta1^2. This looks simple here, but it becomes extremely complicated when we take the variance of, say, the 6th equation in such a chain of recursive equations, because Var(zeta6) depends on all previous covariances between Y1, ..., Y5, which are themselves dependent on their respective previous covariances.
OK, I don't know if that makes things any clearer. Here are the main points:
The code for varzeta1(), ..., varzeta7() is copy-pasted from Mathematica and hence not R-optimized.
Mathematica is required because, as far as I know, R cannot handle symbolic calculations.
I could R-optimize "by hand" (which is extremely tedious).
I think the structure of the varzetaX() functions must be taken as given. The question therefore is: can I somehow use these functions anyway?
One conceivable approach is to try to convince the CRAN maintainers that there's no easy way for you to fix the problem. This is a NOTE, not a WARNING; the CRAN Repository Policy says:
In principle, packages must pass R CMD check without warnings or significant notes to be admitted to the main CRAN package area. If there are warnings or notes you cannot eliminate (for example because you believe them to be spurious) send an explanatory note as part of your covering email, or as a comment on the submission form
So, you could take a chance that your well-reasoned explanation (in the comments field on the submission form) will convince the CRAN maintainers. In the long run it would be best to find a way to simplify the computations, but it might not be necessary to do it before submission to CRAN.
This is a bit too long as a comment, but hopefully this will give you some ideas for optimising the code for the varzeta* functions; or at the very least, it might give you some food for thought.
There are a few things that confuse me:
All varzeta* functions have arguments beta, gamma and phi, which seem to be matrices. However, in varzeta1 you don't use beta, yet beta is the first function argument.
I struggle to link the details you give at the bottom of your post with the code for the varzeta* functions. You don't explain where the gamma and phi matrices come from, nor what they denote. Furthermore, seeing that the betas are the model's parameter estimates, I don't understand why beta should be a matrix.
As I mentioned in my earlier comment, I would be very surprised if these expressions cannot be simplified. R can do a lot of matrix operations quite comfortably; there shouldn't really be a need to pre-calculate individual terms.
For example, you can use crossprod and tcrossprod to calculate cross products, and %*% implements matrix multiplication.
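For instance (a generic illustration, not tied to the varzeta code):

# crossprod(X, Y) computes t(X) %*% Y and tcrossprod(X, Y) computes X %*% t(Y),
# in both cases without explicitly forming the transpose.
X <- matrix(rnorm(6), 2, 3)
Y <- matrix(rnorm(6), 2, 3)
all.equal(crossprod(X, Y), t(X) %*% Y)    # TRUE
all.equal(tcrossprod(X, Y), X %*% t(Y))   # TRUE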
Secondly, a lot of mathematical operations in R are vectorised. I already mentioned that you can simplify
1 - gamma[1,1]^2 - gamma[1,2]^2 - gamma[1,3]^2 - gamma[1,4]^2 - gamma[1,5]^2
as
1 - sum(gamma[1, ]^2)
since the ^ operator is vectorised.
Perhaps more fundamentally, this seems somewhat of an XY problem to me where it might help to take a step back. Not knowing the full details of what you're trying to model (as I said, I can't link the details you give to the cSEM.DGP code), I would start by exploring how to solve the recursive SEM in R. I don't really see the need for Mathematica here. As I said earlier, matrix operations are very standard in R; analytically solving a set of recursive equations is also possible in R. Since you seem to come from the Mathematica realm, it might be good to discuss this with a local R coding expert.
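To make that concrete, here is a sketch of how I would attack it numerically rather than symbolically. It is my own reformulation, not the package's code: it ignores the exogenous (gamma/phi) part, assumes B is strictly lower triangular, and assumes every Yi is standardised to unit variance; the function name model_implied_V is made up.

# Numeric alternative to the long varzeta* expressions, under the assumptions above.
model_implied_V <- function(B) {
  p <- nrow(B)
  A <- solve(diag(p) - B)        # (I - B)^-1, lower triangular with unit diagonal
  var_zeta <- numeric(p)
  for (i in seq_len(p)) {
    # Var(Y_i) = sum_j A[i, j]^2 * Var(zeta_j); the earlier Var(zeta_j) are already
    # known, so the constraint Var(Y_i) = 1 pins down Var(zeta_i).
    known <- if (i > 1) sum(A[i, 1:(i - 1)]^2 * var_zeta[1:(i - 1)]) else 0
    var_zeta[i] <- 1 - known
  }
  V <- A %*% diag(var_zeta, p) %*% t(A)  # V(Y) = (I - B)^-1 V(zeta) (I - B)'^-1
  list(V = V, var_zeta = var_zeta)
}

# Example with the two-equation system from your post (hypothetical betas):
B <- matrix(0, 3, 3)
B[2, 1] <- 0.4                   # beta1
B[3, 1] <- 0.3; B[3, 2] <- 0.2   # beta2, beta3
model_implied_V(B)               # V[2,1] = beta1, V[3,1] = beta2 + beta3*beta1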
If you must use those scary varzeta* functions (and I really doubt that), an option may be to rewrite them in C++ and then compile them with Rcpp to turn them into R functions. Perhaps that will avoid the C stack usage limit?
Here is a piece of R code that writes to each element of a matrix in a reference class. It runs incredibly slowly, and I’m wondering if I’ve missed a simple trick that will speed this up.
nx <- 2000
ny <- 10

ref_matrix <- setRefClass(
  "ref_matrix",
  fields = list(data = "matrix")
)

out <- ref_matrix(data = matrix(0.0, nx, ny))
# tracemem(out$data)

for (iy in 1:ny) {
  for (ix in 1:nx) {
    out$data[ix, iy] <- ix + iy
  }
}
It seems that each write to an element of the matrix triggers a check that involves a copy of the entire matrix. (Uncommenting the tracemem() call shows this.) Now, I've found a discussion that seems to confirm this:
https://r-devel.r-project.narkive.com/8KtYICjV/rd-copy-on-assignment-to-large-field-of-reference-class
and this also seems to be covered by Speeding up field access in R reference classes
but in both of these the behaviour can be bypassed by not declaring a class for the field, and this works for the example in the first link, which uses a 1D vector, b, that can just be set as b <<- 1:10000. But I've not found an equivalent way of creating a 2D array without using an explicit "matrix" instance.
Am I just missing something simple, or is this actually not possible?
Let me add a couple of things. First, I’m very new to R, so could easily have missed something. Second, I’m really just curious about the way reference classes work in this case and whether there’s a simple way to use them efficiently; I’m not looking for a really fast way to set the elements of a matrix - I can do that by not having the matrix in a reference class at all, and if I really care about speed I can write a C routine to do it and call it from R.
Here’s some background that might explain why I’m interested in this, which you’re welcome to ignore.
I got here by wanting to see how different languages, and even different compiler options and different ways of coding the same operation, compared for efficiency when accessing 2D rectangular arrays. I’ve been playing with a test program that creates two 2D arrays of the same size, and calls a subroutine that sets the first to the elements of the second plus their index values. (Almost any operation would do, but this one isn’t completely trivial to optimise.) I have this in a number of languages now, C, C++, Julia, Tcl, Fortran, Swift, etc., even hand-coded assembler (spoiler alert: assembler isn’t worth the effort any more) and thought I’d try R. The obvious implementation in R passes the two arrays to a subroutine that does the work, but because R doesn’t normally pass by reference, that routine has to make a copy of the modified array and return that as the function value. I thought using a reference class would avoid the relatively minor overhead of that copy, so I tried that and was surprised to discover that, far from speeding things up, it slowed them down enormously.
Use outer:
out$data <- outer(1:nx, 1:ny, `+`)
Also, don't use reference classes (or R6 classes) unless you actually need reference semantics. KISS and all that.
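If you do want to keep the reference class, a sketch of the usual workaround (using the same out object as above) is to fill an ordinary local matrix and assign it to the field once, so the copy/check machinery you saw with tracemem() runs a single time instead of on every element write:

# Fill a plain local matrix (cheap element writes), then do one field assignment.
tmp <- matrix(0.0, nx, ny)
for (iy in 1:ny) {
  for (ix in 1:nx) {
    tmp[ix, iy] <- ix + iy
  }
}
out$data <- tmp  # single assignment into the reference-class field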
I have the following piece of code:
Y.hat.tr <- array(0, c(nXtr, 2))
for (i in 1:nXtr) {
  # print(i)
  Y.hat.tr[i, 2] <- ktr[, i] %*% solve(K + a * In) %*% Ytr
  # Y.hat.tr[i, 2] <- ktr[, i] %*% chol2inv(chol(K + a * In)) %*% Ytr
}
Y.hat.tr[, 1] <- Ytr
My problem is that nXtr = 300 and ktr is a 300x300 matrix. This routine takes approximately 30 seconds to run in R version 3.0.1. I have tried various approaches to reduce the run time, but to no avail.
Any ideas would be gratefully received. If any other information is required, please let me know.
I have now taken the solve(K + a*In) %*% Ytr out of the loop, which has helped, but I was hoping to somehow vectorise this piece of code. Having thought about it for a while, and after looking through various posts, I cannot see how this can be done.
Maybe I am missing something (and without sample or simulated data to test on it is harder to check), but isn't your loop equivalent to:
Y.hat.tr[,2] <- t(ktr) %*% solve(K + a*In) %*% Ytr
?
Removing the loop altogether and using internal vectorized code may speed things up.
Also, you are using solve with one argument; you can often speed things up by using solve with two arguments (fewer internal calculations), something like:
t(ktr) %*% solve( K + a*In, Ytr )
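Putting the two suggestions together (a sketch using the same variable names as in your code), the whole computation collapses to:

# Column 1 is Ytr, column 2 is the vectorized prediction; solve() with two
# arguments avoids forming the explicit inverse of K + a*In.
Y.hat.tr <- cbind(Ytr, t(ktr) %*% solve(K + a * In, Ytr))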
Your loop is of the type called embarrassingly parallel, which means that if you want to keep the loop and are working on a computer with more than one core (or have easy access to a cluster), then you could use the parallel package (perhaps most simply via the foreach package) to run the calculations in parallel, which can sometimes greatly speed up the process.