R error QP: not a positive definite matrix?

I'm trying to solve the following problem in R using the quadprog package:
min: vec %*% p + t(p) %*% mat %*% p
st: p >= 0
where
mat <- matrix(c(1162296,0,0,0,0,1,0,0,951.7089,0,1,0,-951.7089,0,0,1),4)
vec <- c(6341934541.1,175800.1,-356401.7,14398073047.1)
I've used
library(quadprog)
solve.QP(2*mat,-vec, diag(4), integer(4))
but I keep getting the following error:
Error in solve.QP(2*mat, -vec, diag(4), integer(4)) :
matrix D in quadratic function is not positive definite!
However, clearly
> eigen(mat)$values > 0
[1] TRUE TRUE TRUE TRUE
What am I doing wrong? How come this error keeps showing up?

Your matrix mat is not symmetric. The quadprog package is designed to solve quadratic programs, which, by definition, require a symmetric matrix in the highest-order term.
To solve this problem as written, you will need to use a general constrained optimization algorithm. For example, you can try constrOptim as follows:
# system matrices
mat <- matrix(c(1162296,0,0,0,0,1,0,0,951.7089,0,1,0,-951.7089,0,0,1),4)
vec <- c(6341934541.1,175800.1,-356401.7,14398073047.1)
# an initial value
p0 <- c(1,1,1,1)
# the objective function
objective <- function(p) {
vec %*% p + t(p) %*% mat %*% p
}
# solve -- warning! without additional work you won't know if this is a global minimum solution.
solution <- constrOptim(p0, objective, NULL, diag(4), c(0,0,0,0))
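As a small follow-up sketch (not part of the original answer), you can inspect what constrOptim returned, using the objects defined above:
solution$convergence # 0 means constrOptim reported convergence
solution$par # the candidate minimiser p; all components should be >= 0
objective(solution$par) # objective value at the candidate solution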

Related

How to solve an objective function having an exponential term with a different base in CVXR?

I am using CVXR to solve a concave objective function. The decision variable (x) is one-dimensional, and the objective function is the sum of two logarithmic terms, where the second term contains exponentials with different constant bases a and b (e.g., a^x, b^x).
My full objective function is:
(-x*sum(ln(y))) + ln((1-x)/((a^(1-x))-(b^(1-x))))
where y is a given 1-D vector of data.
When I add the second term (the one containing a^(1-x) and b^(1-x)) to the objective function, I keep getting:
Error in a^(1 - x): non-numeric argument to binary operator
Is there any atom function in CVXR that can be used to code constant^x?
Here is my code:
library(CVXR)
a <- 7
b <- 0.3
M <- 1000
x_i # is a given vector of 1-D data
x <- Variable(1)
nominator <- (1-x)
denominator <- (1/((a^(1-x))-(b^(1-x))))
obj <- (-x*sum(log(x_i)) + M*log(nominator/denominator)) # change M to the length of x_i later
constr <- list(x>0)
prob <- Problem(Maximize(obj), constr)
result <- solve(prob)
alpha_hat <- result$getValue(x)
Please tell me what I am doing wrong. I appreciate your help in advance.
Do some math:
2 = e^(log 2)
2^x = (e^(log 2))^x = e^(log(2)*x)
So, you can try:
denominator <- 1/(exp(log(a)*(1-x)) - exp(log(b)*(1-x)))
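As a quick sanity check of that identity in plain R (the values below are arbitrary and not part of the original answer):
a <- 7; xx <- 0.4 # plain numbers, just to check the algebra
all.equal(a^(1 - xx), exp(log(a) * (1 - xx))) # TRUE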

Solve indeterminate equation system in R

I have an equation system that I want to solve using numerical methods, getting an approximate solution from a given starting point. Let me explain.
I have a vector of constants ,X, of values:
X <- (c(1,-2,3,4))
and a vector W of weights:
W <- (c(0.25,0.25,0.25,0.25))
I want the sum of the components of W to be 1 (sum(W) == 1), and the element-wise sum of products of X and W to equal a given number N (sum(W*X) == N).
Is there an easy way to do this in R? I have it working in Excel using Solver, but I need to automate it.
Here is your constant and your target value:
x <- c(1, -2, 3, 4)
n <- 10
You need a function to minimize. Its first line encodes each of your conditions, and its second line combines the errors into a single score. You may want to change the second line; for example, you could weight one error term more heavily than the other using sum(c(1, 5) * errs ^ 2).
fn <- function(w)
{
errs <- c(sum(w) - 1, sum(x * w) - n)
sum(errs ^ 2)
}
The simplest thing is to start with all the weights the same value.
init_w <- rep.int(1 / length(x), length(x))
Use optim to optimize.
optim(init_w, fn)
## $par
## [1] 0.1204827 -1.2438883 1.1023338 1.0212406
##
## $value
## [1] 7.807847e-08
##
## $counts
## function gradient
## 111 NA
##
## $convergence
## [1] 0
##
## $message
## NULL
The par element contains your weights.
There is no unique solution for this problem. If you try other initial values for w you will most likely get different results from optim.
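For example, a quick sketch of re-running optim from a random starting point (your exact numbers will differ with a different seed):
set.seed(1) # illustrative seed
init_w2 <- runif(4) / 4
optim(init_w2, fn)$par # different weights, but fn is again driven close to zero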
The problem can be formulated as solving an underdetermined system of linear equations.
A <- matrix(c(rep(1,4),x), nrow=2,byrow=TRUE)
b <- matrix(c(1,n), nrow=2)
We seek a solution that satisfies A %*% w = b, but which one? The minimum-norm solution? Or maybe some other one? There are infinitely many solutions. Solutions can be expressed using the pseudo-inverse of the matrix A; use the MASS package for this.
library(MASS)
Ag <- ginv(A)
The minimum norm solution is
wmnorm <- Ag %*% b
And check with A %*% wmnorm - b and fn(wmnorm).
See the section "Matrix solutions" of the Wikipedia page "System of linear equations".
The solutions are given by
Az <- diag(nrow=nrow(Ag)) - Ag %*% A
w <- wmnorm + Az %*% z
where z is an arbitrary vector of ncol(Az) elements.
And now generate some solutions and check
xb <- wmnorm
z <- runif(4)
wsol.2 <- xb + Az %*% z
wsol.2
A %*% wsol.2 - b
fn(wsol.2)
z <- runif(4)
wsol.3 <- xb + Az %*% z
wsol.3
A %*% wsol.3 - b
fn(wsol.3)
You'll see that these two solutions are valid when passed to fn, and are quite different from the solution found by optim. You could test this by choosing a different starting point init_w, for example init_w1 <- runif(4)/4.

Calculating 'hat' matrix in R

In calculating the 'hat' matrix in weighted least squares, part of the calculation is
X^T*W*X
However, I am unsure how one would do this in R
See the following example:
x <- matrix(c(1,2,3,4,5,6),nrow=3,ncol=2,byrow=T)
xt <- t(x)
w <- as.vector(c(7,8,9))
xt*w%*%x
Which gives the error:
Error in xt * w %*% x : non-conformable arrays
Is there anything basic I have misunderstood?
EDIT
xt%*%w%*%x
gives the error:
Error in xt %*% w %*% x : non-conformable arguments
w needs to be 3x3, so use diag to construct w as a matrix with those values on the diagonal instead of using a vector:
x <- matrix(c(1,2,3,4,5,6),nrow=3,ncol=2,byrow=T)
xt <- t(x)
w <- diag(c(7,8,9))
xt %*% w %*% x
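If you need the full weighted hat matrix rather than just X^T W X, a minimal sketch (assuming x has full column rank, and using the x and w defined just above) is:
H <- x %*% solve(t(x) %*% w %*% x) %*% t(x) %*% w # H = X (X^T W X)^{-1} X^T W
diag(H) # the leverages of the three observations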
I am a little rusty on regressions, but I think the hatvalues function is what you are looking for. ?hatvalues provides a useful list of other diagnostics.
In your R code, w is a vector. It should be a diagonal matrix:
Replace this line:
w <- as.vector(c(7,8,9))
by this:
w <- as.vector(c(7,8,9))*diag(3)
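A brief end-to-end check of this fix, reusing the x from the question:
x <- matrix(c(1,2,3,4,5,6), nrow = 3, ncol = 2, byrow = TRUE)
w <- as.vector(c(7,8,9)) * diag(3) # now a 3x3 diagonal matrix
t(x) %*% w %*% x # X^T W X now computes without a conformability error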

How to check if a matrix has an inverse in the R language

How do you determine whether a matrix has an inverse in R?
Is there a function in R that, given a matrix as input, returns something like TRUE (this matrix has an inverse) or FALSE (it doesn't)?
Using abs(det(M)) > threshold as a way of determining if a matrix is invertible is a very bad idea. Here's an example: consider the class of matrices cI, where I is the identity matrix and c is a constant. If c = 0.01 and I is 10 x 10, then det(cI) = 10^-20, but (cI)^-1 most definitely exists and is simply 100I. If c is small enough, det() will underflow and return 0 even though the matrix is invertible. If you want to use determinants to check invertibility, check instead if the modulus of the log determinant is finite using determinant().
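For instance, a small sketch of that log-determinant check (an illustrative example, not from the original answer), using a more extreme c so that det() really does underflow:
M <- 1e-40 * diag(10)
det(M) # 0: underflow, even though M is clearly invertible
determinant(M, logarithm = TRUE)$modulus # about -921, finite, so M is invertible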
You can try using the is.singular.matrix function from the matrixcalc package.
To install package:
install.packages("matrixcalc")
To load it:
library(matrixcalc)
To create a matrix:
mymatrix<-matrix(rnorm(4),2,2)
To test it:
is.singular.matrix(mymatrix)
If the matrix is invertible it returns FALSE, and if the matrix is singular/non-invertible it returns TRUE.
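For an explicit example (not in the original answer), compare a matrix with linearly dependent columns against the identity:
library(matrixcalc)
is.singular.matrix(matrix(c(1, 2, 2, 4), 2, 2)) # TRUE: second column is twice the first
is.singular.matrix(diag(2)) # FALSE: the identity matrix is invertible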
@MAB has a good point. This uses solve(...) to decide whether the matrix is invertible:
f <- function(m) inherits(try(solve(m), silent = TRUE), "matrix")
x <- matrix(rep(1,25),nc=5) # singular
y <- matrix(1+1e-10*rnorm(25),nc=5) # very nearly singular matrix
z <- 0.001*diag(1,5) # non-singular, but very small determinant
f(x)
# [1] FALSE
f(y)
# [1] TRUE
f(z)
# [1] TRUE
In addition to the solution given by @josilber in the comments (i.e. abs(det(M)) > 1e-10), you can also use solve(M) %*% M for a square matrix, or ginv in the MASS package, which gives the generalized inverse of a matrix.
To get TRUE or FALSE you can simply combine any of those methods with tryCatch and any like this:
out <- tryCatch(solve(X) %*% X, error = function(e) e)
any(class(out) == "error")
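A usage sketch of that idea wrapped into a function (the function name is illustrative; inherits() is a slightly more robust class test than comparing class() with ==):
is_invertible <- function(M) {
  out <- tryCatch(solve(M) %*% M, error = function(e) e)
  !inherits(out, "error")
}
is_invertible(diag(3)) # TRUE
is_invertible(matrix(1, 3, 3)) # FALSE: a matrix of all ones is singular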

A function for calculating the eigenvalues of a matrix in R

I want to write a function like eigen() for calculating the eigenvalues and eigenvectors of an arbitrary matrix. I wrote the following code for calculating eigenvalues, and I need a function or method to solve the resulting equation.
eig <- function(x){
if(nrow(x)!=ncol(x)) stop("dimension error")
ff <- function(lambda){
for(i in 1:nrow(x)) x[i,i] <- x[i,i] - lambda
det(x) # the characteristic polynomial det(x - lambda*I)
}
ff # still need a way to find the lambda values where ff(lambda) == 0
}
I need to solve det(x - lambda*I) = 0, which is a polynomial equation in lambda, to find the eigenvalues. Is there any way to do this?
Here is one solution using uniroot.all:
library(rootSolve)
myeig <- function(mat){
myeig1 <- function(lambda) {
y = mat
diag(y) = diag(mat) - lambda
return(det(y))
}
myeig2 <- function(lambda){
sapply(lambda, myeig1)
}
uniroot.all(myeig2, c(-10, 10))
}
R > x <- matrix(rnorm(9), 3)
R > eigen(x)$values
[1] -1.77461906 -1.21589769 -0.01010515
R > myeig(x)
[1] -1.77462211 -1.21589767 -0.01009019
Computing the determinant is a bad idea because it is not numerically stable; you can easily get Inf even for a moderately large matrix. I suggest reading the following answers (otherwise you will have no idea what my code is doing):
Are eigenvectors returned by R function eigen() wrong?
eigenvectors when A-lx is singular with no solution
then use either of the following
NullSpace(A - diag(lambda, nrow(A)))
nullspace(A - diag(lambda, nrow(A)))
The solution from @liuminzhao won't work if there are two repeated eigenvalues. The function will fail to find the roots, because the characteristic polynomial of the matrix does not change sign there (it touches zero but does not cross the zero line), and sign changes are what rootSolve::uniroot.all() looks for when searching for roots. So you need another way to find a local minimum (e.g. with optim()). Moreover, it will fail to determine the number of repeated eigenvalues.
A better way is to find the characteristic polynomial, which is easily done with pracma::charpoly(), and then use polyroot().
par <- pracma::charpoly(M) # find parameters of the CP of matrix M
par <- par[length(par):1] # reverse order for polyroot()
roots <- Re(polyroot(par)) # keep real part of the polyroot()
pracma::charpoly() is not too complicated in itself; see its source code, starting at the line a1 <- a.
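A minimal end-to-end sketch of this route, using a small symmetric test matrix chosen purely for illustration (assumes pracma is installed):
library(pracma)
M <- matrix(c(2, 1, 1, 2), 2)
par <- charpoly(M) # coefficients of the characteristic polynomial, highest degree first
par <- par[length(par):1] # reverse order for polyroot(), as above
Re(polyroot(par)) # the roots 1 and 3, matching eigen(M)$values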
