Is there a way to calculate the determinant of a complex matrix?
F4<-matrix(c(1,1,1,1,1,1i,-1,-1i,1,-1,1,-1,1,-1i,-1,1i),nrow=4)
det(F4)
Error in determinant.matrix(x, logarithm = TRUE, ...) :
determinant not currently defined for complex matrices
library(Matrix)
determinant(Matrix(F4))
Error in Matrix(F4) :
complex matrices not yet implemented in Matrix package
Error in determinant(Matrix(F4)) :
error in evaluating the argument 'x' in selecting a method for function 'determinant'
If you use prod(eigen(F4)$values), I'd recommend
prod(eigen(F4, only.values = TRUE)$values)
instead, since it skips the unneeded computation of the eigenvectors.
Note that qr() is only advisable if you are interested in the absolute value, i.e. Mod():
prod(abs(Re(diag(qr(x)$qr))))
gives Mod(determinant(x)).
(In X = QR, |det(Q)| = 1, and the diagonal of R is real, in R at least.)
BTW: Did you notice the caveat
"Often, computing the determinant is not what you should be doing to solve a given problem."
on the help(determinant) page?
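Putting both suggestions together, a minimal sketch using the F4 from the question (the -16i in the comments is my own hand computation from the Vandermonde structure of this 4x4 DFT matrix, not output from the original post):

prod(eigen(F4, only.values = TRUE)$values)   # det(F4) via eigenvalues: 0-16i, up to rounding
prod(abs(Re(diag(qr(F4)$qr))))               # Mod(det(F4)) via QR: 16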
If you know that the characteristic polynomial of a matrix A splits into linear factors, then det(A) is the product of the eigenvalues of A, and you can use eigenvalue functions like this to work around your problem. I suspect you'll still want something better, but this might be a start.
I have tried log, square root, and arcsine transformations on my data and nothing worked.
I tried to use boxcox and I got the error "response variable must be positive".
library(MASS)  # boxcox() lives in MASS
md <- lm(Score1 ~ Location1 + Site1 + Trial1 + Stage1)
summary(md)
plot(md, which = 1)
bc <- boxcox(md, plotit = TRUE, lambda = seq(0.5, 1.5, by = 0.1))
This is what I ran in R when I got the error message.
Any idea how I can fix my code?
The Box-Cox transformation is defined as BC(y) = (y^lambda - 1)/lambda (and as log(y) for lambda == 0). This transformation is not generally well-defined for negative y values (because it requires raising negative values to a non-integer power, which generates complex values in most cases).
The car package provides similar transformations that allow negative input, specifically the Yeo-Johnson transformation (see Wikipedia link above) and an adjusted version of the Box-Cox transformation; these are available via car::yjPower() and car::bcnPower() respectively (see the documentation for more details).
(Now that you know the names of some alternatives, you can also search for other R functions/packages that provide them.)
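For illustration, here is a minimal sketch of the Yeo-Johnson route; the vector y is made-up data standing in for your Score1 (powerTransform() and yjPower() are the car functions mentioned above):

library(car)
y <- c(-2.1, 0.4, 3.7, 1.2, -0.5, 2.8)       # made-up data with negative values
pt <- powerTransform(y, family = "yjPower")  # estimate a Yeo-Johnson lambda
y_tr <- yjPower(y, coef(pt))                 # transformed response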
In Matlab there are cond and rcond, and in LAPACK too. Is there any routine in Eigen to find the condition number of a matrix?
I have a Cholesky decomposition of a matrix and I want to check if it is close to singularity, but cannot find a similar function in the docs.
UPDATE:
I think I can use something like this algorithm, which makes use of the triangular factorization. The method in Ilya's answer below is more accurate, so I will mark it as correct.
Probably the easiest way to compute the condition number is via the expression:
cond(A) = max(sigma) / min(sigma)
where sigma is the array of singular values, the result of an SVD. The Eigen author suggests this code:
// singularValues() is sorted in decreasing order, so the condition
// number is the first singular value divided by the last one
Eigen::JacobiSVD<Eigen::MatrixXd> svd(A);
double cond = svd.singularValues()(0)
            / svd.singularValues()(svd.singularValues().size() - 1);
Other, less efficient, ways are:
cond(A) = max(lambda) / min(lambda)
cond(A) = norm2(A) * norm2(A^-1)
where lambda is the array of eigenvalues (note the eigenvalue ratio equals the 2-norm condition number only for symmetric positive definite matrices).
It looks like the Cholesky decomposition does not directly help here, but I can't tell for sure at the moment.
You could use the Gershgorin circle theorem to get a rough estimate.
But as Ilya Popov has already pointed out, calculating the eigenvalues/singular values is more reliable. However, it doesn't make sense to calculate all eigenvalues, which gets very expensive. You only need the largest and the smallest eigenvalues; for that you can use the power method for the largest and inverse iteration for the smallest eigenvalue (a sketch follows after this answer).
Or you could use a library that can do this already, e.g. Spectra.
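For illustration, a minimal power-iteration sketch. It is written in R for brevity, since the algorithm itself is language-independent; power_iter is a hypothetical helper, and the same loop is straightforward to write with Eigen types:

power_iter <- function(A, iters = 1000) {
  v <- rnorm(ncol(A))                   # random start vector
  for (i in seq_len(iters)) {
    v <- A %*% v                        # multiply by A ...
    v <- v / sqrt(sum(v^2))             # ... and renormalize
  }
  drop(t(v) %*% A %*% v)                # Rayleigh quotient ~ dominant eigenvalue
}

A <- crossprod(matrix(rnorm(25), 5))    # symmetric positive definite test matrix
lambda_max <- power_iter(A)
lambda_min <- 1 / power_iter(solve(A))  # inverse iteration, here via A^-1
lambda_max / lambda_min                 # condition number estimate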
You can use norms. In my robotics experience, this is computationally faster than singular values:
pseudoInverse(matrix).norm() * matrix.norm()
I found this to be 2.6X faster than singular values for 6x6 matrices. It's also recommended in this book:
B. Siciliano, and O. Khatib, Springer Handbook of Robotics. Berlin: Springer Science and Business Media, 2008, p. 236.
Basically I am trying to solve the following definite integral in Maple, from theta = 0 to theta0 = 45. I am trying to find an actual numerical value, but I need to find the integral first. I don't know how to ask Maple to help me solve an integral that contains two different variables (theta and theta0). All I am trying to do is find the period of oscillation of a pendulum, but I have been instructed to use only this method and equation.
From the equation d²θ/dt² = -(g/L) sin(θ) we find:
P = 4 sqrt(L/(2g)) ∫ (0 to θ0) dθ/sqrt(cos(θ) - cos(θ0))
L= 1
g= 9.8
To simplify the value before the integral I did the following:
>L:=1;
>g:=9.8;
>evalf(4*sqrt(L/(2*g)));
>M:=%;
So the integral to solve simplifies to:
P = M ∫ (0 to θ0) dθ/sqrt(cos(θ) - cos(θ0))
When I try to evaluate the integral by itself I get the error:
"Error, index must evaluate to a name when indexing a module".
I am trying to figure out how Maple wants me to enter in the integral so it will solve it.
I have tried the following as well as similar combinations of variables:
int(1/sqrt[cos(t)-cos(45)],t=0..45);
I can't figure out how to make Maple solve the definite integral for me, given that it is cos(theta) - cos(theta0) in the denominator instead of just one variable. When I try different values for the integral I also get the following error:
Error, index must evaluate to a name when indexing a module
I must be overlooking something considerable to keep getting this error. Thanks in advance for any help or direction! :)
As acer noted in his comment, Maple syntax doesn't use square brackets for function calls. The proper syntax for your task is:
int(1/sqrt(cos(t)-cos(Pi/4)),t=0..Pi/4);
Notice that Maple works in radians, so I replaced your 45 with Pi/4.
If you need a numerical value you can use evalf:
evalf(int(1/sqrt(cos(t)-cos(Pi/4)),t=0..Pi/4));
Maple's answer is 2.310196615.
If you need to evaluate with a generic variable theta0, you can define a function as:
myint:=theta0->int(1/sqrt(cos(t)-cos(theta0)),t=0..theta0);
Then just call it as, e.g.,
myint(Pi/4);
and for a numerical evaluation:
evalf(myint(Pi/4));
I am solving a simple optimization problem. The data set has 26 columns and over 3000 rows.
The source code looks like:
Means <- colMeans(Returns)
Sigma <- cov(Returns)
invSigma1 <- solve(Sigma)
And everything works perfectly, but when I try to do the same for a shorter period (only 261 rows), the solve function throws the following error:
solve(Sigma)
Error in solve.default(Sigma) :
Lapack routine dgesv: system is exactly singular
It's weird, because when I do the same with some random numbers:
Returns<-matrix(runif(6786,-1,1), nrow=261)
Means <- colMeans(Returns)
Sigma <- cov(Returns)
invSigma <- solve(Sigma)
no error occurs at all. Could someone explain to me where the problem could be and how to treat it?
Thank you very much,
Alex
Using solve with a single parameter is a request to invert a matrix. The error message is telling you that your matrix is singular and cannot be inverted.
I guess your code in the second case involves a singular (i.e. non-invertible) matrix somewhere, and the solve function needs to invert it. This has nothing to do with the size, but with the fact that some of your vectors are (probably) collinear.
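For illustration, a tiny made-up reproduction of that situation, where one column is an exact linear combination of two others:

set.seed(1)
Returns2 <- cbind(a = rnorm(261), b = rnorm(261))
Returns2 <- cbind(Returns2, c = Returns2[, "a"] + Returns2[, "b"])  # collinear column
Sigma2 <- cov(Returns2)
try(solve(Sigma2))  # fails: system is exactly (or computationally) singular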
LAPACK is the linear algebra package that R (like almost everything else) uses underneath; solve() calls its dgesv routine, which raises this kind of error when the matrix you passed as a parameter is singular.
As an addendum: dgesv performs an LU decomposition which, for your matrix, forces a division by 0. Since this is ill-defined, it throws the error. This happens when the matrix is singular, or when it is singular on your machine (due to rounding, a really small number can be treated as 0).
If the matrix you're using contains mostly integers and is not big, I'd suggest you check its determinant. If it's big, then take a look at this link.
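You can also diagnose (near-)singularity in R before inverting; a minimal sketch, with Sigma standing in for your covariance matrix:

rcond(Sigma)    # reciprocal condition number; values near 0 mean near-singular
qr(Sigma)$rank  # rank < ncol(Sigma) signals collinear columns
# if Sigma really is singular, one possible workaround is a pseudo-inverse:
# invSigma <- MASS::ginv(Sigma)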
I can understand your question. The problem is that your matrix is singular. You can see that the first number and the last number of your matrix are the same.
Disclaimer
This is not strictly a programming question, but most programmers sooner or later have to deal with math (especially algebra), so I think the answer could turn out to be useful to someone else in the future.
Now the problem
I'm trying to check if m vectors of dimension n are linearly independent. If m == n you can just build a matrix using the vectors and check if the determinant is != 0. But what if m < n?
Any hints?
See also this video lecture.
Construct a matrix of the vectors (one row per vector), and perform a Gaussian elimination on this matrix. If any of the matrix rows cancels out, they are not linearly independent.
The trivial case is when m > n: in this case they cannot be linearly independent.
Construct a matrix M whose rows are the vectors and determine the rank of M. If the rank of M is less than m (the number of vectors) then there is a linear dependence. In the algorithm to determine the rank of M you can stop the procedure as soon as you obtain one row of zeros, but running the algorithm to completion has the added bonanza of providing the dimension of the spanning set of the vectors. Oh, and the algorithm to determine the rank of M is merely Gaussian elimination.
Watch out for numerical instability. See the warning at the beginning of chapter two in Numerical Recipes.
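A minimal R sketch of that rank check (qr() performs the elimination; the matrix is made up):

M <- rbind(c(1, 2, 3, 4),
           c(2, 4, 6, 8),   # twice the first row, so the set is dependent
           c(0, 1, 0, 1))
qr(M)$rank             # 2, less than m = 3
qr(M)$rank < nrow(M)   # TRUE: the rows are linearly dependent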
If m < n, you will have to perform some operation on them (there are multiple possibilities: Gaussian elimination, orthogonalization, etc.; almost any transformation which can be used for solving equations will do) and check the result (e.g. Gaussian elimination => a zero row or column, orthogonalization => a zero vector, SVD => a zero singular value).
However, note that this is a bad question for a programmer to ask, and a bad problem for a program to solve, because every linearly dependent set of m vectors has a set of linearly independent vectors arbitrarily nearby (i.e. the problem is numerically unstable).
I have been working on this problem these days.
Previously, I found some algorithms for Gaussian or Gauss-Jordan elimination, but most of them only apply to square matrices, not general ones.
To apply them to a general matrix, one of the best answers might be this:
http://rosettacode.org/wiki/Reduced_row_echelon_form#MATLAB
You can find both pseudo-code and source code in various languages.
As for me, I translated the Python source code to C++, because the C++ code provided in the above link is somewhat complex and inappropriate to implement in my simulation.
Hope this will help you, and good luck ^^
If computing power is not a problem, probably the best way is to find the singular values of the matrix. Basically you need to find the eigenvalues of M'*M and look at the ratio of the largest to the smallest. If the ratio is not very big, the vectors are independent.
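In R terms, a minimal sketch of this check (reusing the made-up M from the sketch above):

s <- svd(M)$d         # singular values, largest first
s[1] / s[length(s)]   # a huge (or infinite) ratio means (nearly) dependent rows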
Another way to check that m row vectors are linearly independent, when put in a matrix M of size m×n, is to compute
det(M * M^T)
i.e. the determinant of an m×m square matrix (the Gram matrix of the rows). It will be zero if and only if M has some dependent rows. However, Gaussian elimination should in general be faster.
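A one-line R sketch of this Gram-determinant test (again with the made-up M from above):

det(M %*% t(M))   # zero up to rounding error, since the rows of M are dependent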
Sorry man, my mistake...
The source code provided in the above link turns out to be incorrect: at least the Python code I have tested, and the C++ code I transformed from it, do not generate the right answer all the time. (For the example in the above link, though, the result is correct :) -- )
To test the Python code, simply replace the mtx with
[30,10,20,0],[60,20,40,0]
and the returned result would be like:
[1,0,0,0],[0,1,2,0]
Nevertheless, I have found a way out of this. This time I translated the MATLAB source code of the rref function to C++. You can run MATLAB and use the type rref command to get the source code of rref.
Just notice that if you are working with some really large or really small values, make sure you use the long double datatype in C++. Otherwise the result will be truncated and will be inconsistent with the MATLAB result.
I have been conducting large simulations in ns2, and all the observed results are sound.
Hope this will help you and anyone else who has encountered the problem...
A very simple way, though not the most computationally efficient, is to simply remove random rows until m = n and then apply the determinant trick.
m < n: remove rows (make the vectors shorter) until the matrix is square, and then
m = n: check whether the determinant is 0 (as you said)
m > n (the number of vectors is greater than their length): they are linearly dependent (always).
The reason, in short, is that Av = 0 is then a system of n equations in m > n unknowns, so it always has a nontrivial solution. For a better explanation, see Wikipedia, which explains it better than I can.