The R chol2inv() method gives me strange results [closed]

Closed 5 years ago: needs debugging details; not accepting answers.
I'm trying to invert a large (449x449) covariance matrix, which is symmetric and positive definite.
(The inversion is part of fitting a Gaussian process to the Mauna Loa CO2 dataset.)
The inversion is quite slow, so I wanted to use chol2inv instead of solve.
But chol2inv gives me a very strange result: a matrix very close to zero (its entries sum to about 10^-13).
Why would chol2inv give me this?

It sounds like you have used chol2inv incorrectly. It takes the upper-triangular Cholesky factor, not the covariance matrix itself, as input. So if A is your covariance matrix, you want
chol2inv(chol(A))
not
chol2inv(A)
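To see the difference, here is a minimal sketch with a small random SPD matrix (the names are arbitrary):

```r
set.seed(1)
X <- matrix(rnorm(20), 5, 4)
A <- crossprod(X) + diag(4)       # small symmetric positive-definite matrix

inv_chol  <- chol2inv(chol(A))    # correct: pass the upper Cholesky factor
inv_solve <- solve(A)             # reference inversion

all.equal(inv_chol, inv_solve)    # TRUE: the two inverses agree

bad <- chol2inv(A)                # wrong input: meaningless result, as in the question
```

Since chol() exploits symmetry, the chol() + chol2inv() route is also typically faster than a generic solve() for a matrix of this size.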
Just found out that this issue was answered twice, long ago:
Comparing matrix inversions in R - what is wrong with the Cholesky method? (in 2014)
matrix inversion R (in 2013)

Related

I want to obtain eigenvalues of symmetric matrix in Julia in O(nmr) [closed]

Closed 12 months ago: off-topic (seeking recommendations); not accepting answers.
I am a beginner at Julia. I want to obtain the r smallest eigenvalues and eigenvectors of an input symmetric n x n matrix X, in increasing order. I have heard the computational complexity is O(n^2 r).
n is around 1000-20000 and r is around 100-1000. How can I obtain the eigenvalues and eigenvectors within O(nmr)?
I'm not an expert on this, but I would start by trying the methods in the LinearAlgebra stdlib. The LinearAlgebra.eigen function is specialized for the input matrix types SymTridiagonal, Hermitian, and Symmetric, and lets you specify how many values/vectors you want:
If you have a dense symmetric matrix A and want the r smallest eigenvalues and eigenvectors (returned in increasing order):
(evals, evecs) = eigen(Symmetric(A), 1:r)
You can also use eigvals and eigvecs if you just need the eigenvalues or eigenvectors, and eigen! if you want to save some memory.
BTW, Symmetric(A) doesn't create a new matrix; it is just a wrapper around A that tells the compiler A is symmetric and only accesses the part of A on and above the diagonal.
If the version in LinearAlgebra is not fast enough for this quite general case, that should probably be reported on Julia's GitHub. There may be faster implementations for more specialized cases, but for general dense symmetric matrices the implementation in the stdlib should be close to optimal.

Computationally singular matrix inverse error? [closed]

Closed 2 years ago: needs debugging details; not accepting answers.
This error comes up when I try to invert a matrix:
Error in solve.default(x) :
  system is computationally singular: reciprocal condition number = 6.85861e-18
What are the ways to solve this? I am using the matrix.inverse function to find the inverse.
Given a matrix M, one relatively safe option is ginv from package MASS, which computes the Moore-Penrose generalized inverse and avoids the error in your post, e.g.,
MASS::ginv(M)
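A minimal sketch with a hypothetical singular matrix (its second row is a multiple of the first, so solve() would fail):

```r
library(MASS)

M <- matrix(c(1, 2,
              2, 4), nrow = 2, byrow = TRUE)   # rank 1, exactly singular

# try(solve(M))      # errors: system is exactly singular
Mg <- ginv(M)        # Moore-Penrose pseudoinverse always exists

# The pseudoinverse satisfies M %*% Mg %*% M == M (up to rounding)
all.equal(M %*% Mg %*% M, M)   # TRUE
```

Note that when M really is singular, ginv() returns a pseudoinverse, not a true inverse; whether that is appropriate depends on why the matrix is singular (a tiny reciprocal condition number like yours often signals collinearity in the data, which is worth investigating first).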

How to get the second smallest eigenvalue of the laplacian with R? [closed]

Closed 7 years ago: needs debugging details; not accepting answers.
I'm trying to use R to get the second smallest eigenvalue of the Laplacian of a graph, but I only know how to do it in Matlab. I have searched the web, but I only ever find how to use the R function eigen.
Can somebody tell me how to write such a line of code, please?
In Matlab, for example, the lines I use are:
[~, D] = eigs(lap, 2, 'sa'); % getting the two smallest eigenvalues of the laplacian (lap); 'sa' means Smallest Algebraic
lambda2 = D(2, 2); % getting the second smallest eigenvalue
Thanks in advance for your helpful comments.
A <- cbind(c(1,-1,0), c(-1,1,1), c(0.5,0.5,0.5))
ei <- eigen(A)
ei$values[length(ei$values) - 1]
gives the second smallest eigenvalue of matrix A: eigen returns eigenvalues in decreasing order, so the next-to-last entry is the second smallest.
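For completeness, a self-contained base-R sketch that builds a small graph Laplacian and extracts its second smallest eigenvalue (the Fiedler value). The graph here, a path on 4 vertices, is just an illustration:

```r
# Adjacency matrix of the path graph 1-2-3-4
A <- matrix(0, 4, 4)
edges <- rbind(c(1, 2), c(2, 3), c(3, 4))
A[edges] <- 1
A <- A + t(A)

L <- diag(rowSums(A)) - A                  # graph Laplacian L = D - A

ev <- eigen(L, symmetric = TRUE)$values    # returned in decreasing order
lambda2 <- rev(ev)[2]                      # second smallest eigenvalue
lambda2                                    # 2 - sqrt(2) for this path graph
```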

Fitting repeated measures in R [closed]

Closed 9 years ago: needs details or clarity; not accepting answers.
Fitting repeated measures in R: convergence issues. I have the following fit, for one of many datasets, and it doesn't converge; other sets do. The same dataset and model work in SAS. Could I get some direction on how to make this work in R? What should I look at (matrices, option settings, a reference on this topic for R/S-PLUS ...)?
fit.gls <- gls(resp~grpnum+grpnum/day,data=long, corr=cormat,na.action=na.omit)
Error in glsEstimate(object, control = control) :
computed "gls" fit is singular, rank 62
I have read the following and still trying to work thru it...
Converting Repeated Measures mixed model formula from SAS to R
The problem is the data. gls needs to invert a matrix to work (see Wikipedia for the formula used to estimate the coefficients). For your particular data set, that matrix is not invertible.
You can allow for singular values to be allowed with the control argument:
fit.gls <- gls(resp~grpnum+grpnum/day,data=long, corr=cormat,na.action=na.omit,
control = list(singular.ok = TRUE))
Be careful with this as you may get bad results! Always check the model fit afterwards.
Look at the help for gls and glsControl for more details about the options.
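One way to see where the singularity comes from is to check the rank of the fixed-effects design matrix yourself. A minimal sketch, with hypothetical stand-in data for the questioner's long data set:

```r
# Toy stand-in for `long`: two groups observed on three days each
long <- data.frame(grpnum = factor(rep(1:2, each = 3)),
                   day    = rep(1:3, times = 2))

# Build the same fixed-effects design gls() would use and check its rank.
# A rank below ncol(X) means some terms are confounded, which is what
# produces the "computed gls fit is singular" error.
X <- model.matrix(~ grpnum + grpnum/day, data = long)
qr(X)$rank == ncol(X)   # TRUE here; FALSE would signal rank deficiency
```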

predict.lars command for lasso regression: what are the "s" and "p" parameters? [closed]

Closed 9 years ago: not about programming as defined in the help center; not accepting answers.
In help(predict.lars) we can read that the parameter s is "a value, or vector of values, indexing the path. Its values depends on the mode= argument. By default (mode="step"), s should take on values between 0 and p (e.g., a step of 1.3 means .3 of the way between step 1 and 2.)"
What does "indexing the path" mean? Also, s must take a value between 0 and p, but what is p? The parameter p is not mentioned anywhere else in the help file.
I know this is basic, but there is not a single question up on SO about predict.lars.
It is easiest to use the mode="norm" option. In this case, s is a bound on the L1 norm of the coefficient vector; each such bound corresponds to some value of the regularization penalty (\lambda).
To understand mode=step, you need to know a little more about the LARS algorithm.
One problem that LARS can solve is the L1-regularized regression problem: min ||y-Xw||^2+\lambda|w|, where y are the outputs, X is a matrix of input vectors, and w are the regression weights.
A simplified explanation of how LARS works is that it greedily builds a solution to this problem by adding or removing dimensions from the regression weight vector.
Each of these greedy steps can be interpreted as a solution to a L1 regularized problem with decreasing values of \lambda. The sequence of these steps is known as the path.
So, given the LARS path, to get the solution for a user-supplied \lambda, you iterate along the path until the next element's implicit \lambda is less than the input \lambda, then take a partial step (\lambda decreases linearly between steps). In mode="step", s simply indexes these steps, and p is the total number of steps in the path; a fractional s interpolates between consecutive steps.
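The fractional-step interpolation can be illustrated without the lars package itself. Assuming a hypothetical coefficient path beta_path whose row k+1 holds the weights after step k, s = 1.3 means 0.3 of the way from step 1 to step 2:

```r
# Hypothetical coefficient path: row k+1 holds the weights after step k
beta_path <- rbind(step0 = c(0.0, 0.0),
                   step1 = c(0.5, 0.0),
                   step2 = c(0.8, 0.3))

# Interpolate the coefficients at s = 1.3, i.e. 0.3 of the way
# from step 1 to step 2 (rows 2 and 3, since row 1 is step 0)
s <- 1.3
lo <- floor(s)
frac <- s - lo
beta_s <- (1 - frac) * beta_path[lo + 1, ] + frac * beta_path[lo + 2, ]
beta_s   # 0.59 0.09
```

This is the same rule the help file describes with "a step of 1.3 means .3 of the way between step 1 and 2"; predict.lars applies it to the actual fitted path.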
