Solve Underdetermined System of Equations for Sparse Solution - linear-algebra

A and C are m x n rectangular matrices.
B is an n x n square matrix.
B is not symmetric.
B and C are known.
AB = C.
B is singular.
I could use the Moore-Penrose pseudoinverse of B to get A = CB+.
But that seems to give an A with many non-zero elements.
If I want an A (among all possible solutions) that is quite sparse, what solvers can I try?
Should I use BDCSVD, as described here?
Thanks.

Under-determined systems usually have an infinite number of solutions. Unless you impose some additional conditions (restrictions), in the form of additional equations, you won't obtain a single numerical solution.
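If sparsity is the extra condition you care about, a common choice is to add an l1 penalty and solve each row of AB = C as a lasso problem. Below is a rough sketch of that idea using the glmnet package; the example matrices, the penalty weight lambda and the threshold are placeholder assumptions, and the result is an approximate (penalized) solution rather than an exact one.
# Rough sketch: promote sparsity in A by solving each row of A B = C
# as an l1-penalized (lasso) regression.
library(glmnet)
set.seed(3)
n <- 20; m <- 10
B <- matrix(rnorm(n * n), n, n)
B[, n] <- B[, 1]                         # make B singular, as in the question
A_true <- matrix(0, m, n)
A_true[cbind(1:m, sample(n, m))] <- 1    # a sparse A used to generate C
C <- A_true %*% B
A_hat <- matrix(0, m, n)
for (i in 1:m) {
  # row i of A satisfies t(B) %*% a_i = C[i, ]; the lasso picks a sparse a_i
  fit <- glmnet(t(B), C[i, ], alpha = 1, lambda = 0.01,
                intercept = FALSE, standardize = FALSE)
  A_hat[i, ] <- as.matrix(coef(fit))[-1, 1]  # drop the (zero) intercept row
}
sum(abs(A_hat) > 1e-6)                   # count of non-zero entries recovered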

Related

Questions about SVD, Singular Value Decomposition

I am not a mathematician, so I need to understand what SVD does and WHY, more than how exactly it works from the math perspective. (I do at least understand what the decomposition is.)
This guy on YouTube gave the only human explanation of SVD, saying that the U matrix maps "user to concept correlation", the Sigma matrix defines the strength of each concept, and V maps "movie to concept correlation", given that the initial matrix M has users in the rows and movie ratings in the columns.
He also mentioned two concepts specifically, "sci-fi" and "romance" movies. See the picture below.
My questions are:
1. How does SVD know the number of concepts? He, as a human, mentioned two (sci-fi and romance), but the resulting matrices contain 3 concepts (for example matrix U, the one with blue titles, has 3 columns, not 2).
2. How does SVD know what the concept is after all? If I shuffle the columns randomly, how does SVD then know what is sci-fi and what is romance? I suppose there is no rule that the concepts must be grouped together in column order. What if the sci-fi movies are the first and last columns, rather than the first 3 columns of the initial matrix M?
3. What is the practical usage of the U, Sigma or V matrices (except that you can multiply them to get back the initial matrix M)?
4. Is there any other possible human explanation of SVD than the one the guy above provided, or are "matrices of correlations" the only possible interpretation?
As was pointed out in the comments, you may well get better explanations elsewhere. However, since the question is still open, here is my tuppence worth.
Throughout I'll suppose that A is m x n where m >= n, i.e. that A has at least as many rows as columns.
First of all, there are many forms of the SVD, differing in the sizes of the matrices. They all share the fundamental properties that
A = U*S*V'
S is diagonal
U and V have orthonormal columns (i.e. U'*U = I, V'*V = I)
Perhaps the most useful from a theoretical point of view is the 'full fat' svd where we have that U is mxm, S is mxn and V is nxn. However this has rather a lot of elements that don't really contribute to A. For example S being diagonal we can write
S = ( S1 )    (where S1 is n x n)
    ( 0  )
If we divide up U into
U = ( U1 U2 )    (where U1 is m x n and U2 is m x (m-n))
Then it's straightforward to calculate that
U*S = U1*S1
and so we can throw away the last m-n columns of U and the last m-n rows of S, and still recover A.
Moreover some of the diagonal elements of S1 may be 0; suppose in fact that p < n of them are non-zero. Then we can write
S1 = ( S2 0 )
     ( 0  0 )
And arguing as above for U, and analogously for V', we can in fact throw away all but the first p columns of U, all of S but S2, and all but the first p rows of V', and still recover A.
This latter is the form of the SVD (the 'thin' SVD) in your question:
U is m x p
S is p x p
V' is p x n
where p is the number of non-zero singular values of A. This is my answer to your 1.
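As a small R illustration of this thin form (my addition, with arbitrary sizes): svd() returns the economy-size factors, and you can drop the columns beyond p yourself.
# Economy-size SVD in R and recovery of A from the first p terms only.
set.seed(42)
m <- 6; n <- 4
A <- matrix(rnorm(m * 2), m, 2) %*% matrix(rnorm(2 * n), 2, n)  # rank 2 by construction
s <- svd(A)             # s$u is m x n, s$d holds n singular values, s$v is n x n
p <- sum(s$d > 1e-10)   # number of numerically non-zero singular values (2 here)
A_rec <- s$u[, 1:p] %*% diag(s$d[1:p], p) %*% t(s$v[, 1:p])
max(abs(A - A_rec))     # essentially zero: the first p terms recover A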
By convention the elements of S decrease as you move down the diagonal. To achieve this the routine that calculates the svd in effect works with a version of A with shuffled columns. This shuffling is undone by incorporating the shuffle in the U and V' output. This is my answer to your 2: however you shuffle A, it will be in effect shuffled again to ensure that the singular values decrease down the diagonal.
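A quick R check of that point (my own illustration with a made-up ratings matrix): permuting the columns of M changes U and V accordingly but leaves the singular values, i.e. the 'strength' of each concept, untouched.
# Shuffling the movie columns does not change the singular values.
set.seed(1)
M <- matrix(rpois(7 * 5, lambda = 3), 7, 5)   # small fake user-by-movie ratings
M_shuffled <- M[, sample(ncol(M))]            # randomly permute the columns
all.equal(svd(M)$d, svd(M_shuffled)$d)        # TRUE: same singular values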
I struggle to answer 3, because I suspect that our ideas of 'practical' are rather different.
One thing that I think is practical is to find simpler approximations to A. The reconstruction of A can be written
A = Sum{ 1 <= i <= p | U[i]*S[i]*V[i]' }
where the S[i] are the diagonal elements of S, the U[i] are the columns of U and the V[i] those of V.
We might want to use a simpler model for A, for example to simplify it down to just one term. That is, we might wonder how much we would lose by using fewer 'concepts'. The 'thin' SVD above has already done this in the sense that it has thrown away all the columns that make no contribution to A. In an extreme case, we might wonder what we would get if we reduced to just one concept. This approximation is found by taking just the first term of the sum above. This extends to however many terms, say q, we want to allow: we just take the first q terms of the sum above.
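A short R sketch of that truncation (my illustration; the matrix and q are arbitrary):
# Keep only the first q terms of the sum to get a rank-q approximation of A.
set.seed(2)
A <- matrix(rnorm(8 * 5), 8, 5)
s <- svd(A)
q <- 1                                   # 'just one concept'
A_q <- s$u[, 1:q, drop = FALSE] %*% (s$d[1:q] * t(s$v[, 1:q, drop = FALSE]))
norm(A - A_q, "F") / norm(A, "F")        # relative error of the rank-q model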
I'm sorry, I can't answer 4.

Is there any way in R to speed up this matrix product: A' * B * A (B positive semidefinite)?

I wonder whether this operation (in R)
t(A) %*% B %*% A
where B is a positive semidefinite matrix, can be simplified via any algebraic trick in order to make it faster (beyond crossprod(A, B) %*% A or Rcpp).
If you want an algebraic trick all you have is that B is positive-semidefinite, i.e. you can find its square root/eigenvalue decomposition.
How you use that fact depends on what you are actually doing.
For instance, maybe you could work with A in the eigenbasis of B whereby the matrix multiplication you want becomes much more simple. Or perhaps only a few of the eigenvalues of B are significant and you can exclude most of the eigenvectors in your representation of the square root of B.
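For instance, here is a hedged sketch of the square-root idea in R; whether it is actually faster depends on whether the decomposition of B can be reused or truncated, and the sizes below are arbitrary.
# If B = V diag(d) V' with d >= 0, then t(A) %*% B %*% A = crossprod(M)
# where M = diag(sqrt(d)) %*% t(V) %*% A, so one symmetric crossprod
# replaces the two general products.
set.seed(1)
n <- 300
A <- matrix(rnorm(n * n), n, n)
B <- crossprod(matrix(rnorm(n * n), n, n))   # an example positive semidefinite B
e <- eigen(B, symmetric = TRUE)
d <- pmax(e$values, 0)                       # clip tiny negative eigenvalues
M <- sqrt(d) * (t(e$vectors) %*% A)          # scale row i of t(V) %*% A by sqrt(d[i])
res_eig    <- crossprod(M)                   # t(A) %*% B %*% A via the square root
res_direct <- t(A) %*% B %*% A
max(abs(res_eig - res_direct))               # agreement up to rounding error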

Quadratic Objective with Quadratic Constraints in R: Rsolnp?

I am trying to minimize a quadratic objective with quadratic constraints; is Rsolnp the way to go? I have been reading the documentation and it seems like all the examples use equations instead of vector manipulation. All of my parameters are vectors and matrices and they can be quite large. Here is my problem:
X <- (Cf + H) %*% A
Y <- (Cf + H - R) %*% B
I want to find the H that minimizes Y %*% Dmat %*% t(Y) for a given value of X %*% Dmat %*% t(X).
Cf, R, A, Dmat and B are matrices of constants.
The values of H should be between 0 and 1.
Is it possible to use Rsolnp to find the vector H even though the input functions will all return other vectors?
I ended up using AUGLAG and COBYLA to solve this problem.
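For reference, here is a hedged sketch of how the Rsolnp route could have been set up (this is not the AUGLAG/COBYLA solution actually used; all matrices, sizes and the constraint target below are placeholder assumptions, and H is treated as a 1 x n row vector).
# Minimize Y Dmat Y' subject to X Dmat X' equal to a given target, with 0 <= H <= 1.
library(Rsolnp)
set.seed(1)
n <- 4
Cf   <- matrix(runif(n), 1, n)
R    <- matrix(runif(n), 1, n)
A    <- matrix(rnorm(n * n), n, n)
B    <- matrix(rnorm(n * n), n, n)
Dmat <- crossprod(matrix(rnorm(n * n), n, n))   # positive semidefinite
obj <- function(h) {                 # objective: Y %*% Dmat %*% t(Y)
  Y <- (Cf + h - R) %*% B
  as.numeric(Y %*% Dmat %*% t(Y))
}
eqcon <- function(h) {               # constrained quantity: X %*% Dmat %*% t(X)
  X <- (Cf + h) %*% A
  as.numeric(X %*% Dmat %*% t(X))
}
target <- eqcon(rep(0.25, n))        # pick an attainable target for the example
fit <- solnp(pars = rep(0.5, n), fun = obj,
             eqfun = eqcon, eqB = target,
             LB = rep(0, n), UB = rep(1, n))
fit$pars                             # the estimated H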

Efficient use of Choleski decomposition in R

This question is related to this one and this one
I have two full rank matrices A1, A2 each
of dimension p x p and a p-vector y.
These matrices are closely related in the sense that
matrix A2 is a rank one update of matrix A1.
I'm interested in the vector
β2 | (β1, y, A1, A2, A1^(-1))
where
β2 = (A2' A2)^(-1) (A2' y)
and
β1 = (A1' A1)^(-1) (A1' y)
Now, in a previous question here I have been advised
to estimate β2 by the Choleski approach since the Choleski
decomposition is easy to update using R functions such as chud()
in package SamplerCompare.
Below are two functions to solve linear systems in R: the first one uses the Choleski approach (the one I can update efficiently) and the second one the solve() function.
fx01 <- function(ll,A,y) chol2inv(chol(crossprod(A))) %*% crossprod(A,y)
fx03 <- function(ll,A,y) solve(A,y)
p <- 5
A <- matrix(rnorm(p^2),p,p)
y <- rnorm(p)
system.time(lapply(1:1000,fx01,A=A,y=y))
system.time(lapply(1:1000,fx03,A=A,y=y))
My question is: for p small, both functions seem to be comparable (actually fx01 is even faster). But as I increase p, fx01 becomes increasingly slower, so that for p = 100, fx03 is three times as fast as fx01.
What is causing the performance deterioration of fx01, and can it be improved/solved? (Maybe my implementation of the Choleski is too naive? Shouldn't I be using functions of the Choleski constellation such as backsolve, and if so, how?)
A %*% B is the R lingo for matrix multiplication of A by B.
crossprod(A,B) is the R lingo for A' B (ie transpose of A matrix
multiplying the matrix/vector B).
solve(A,b) solves for x the linear system A x=b.
chol(A) is the Choleski decomposition of a PSD matrix A.
chol2inv computes (X' X)^(-1) from the (R part) of the QR decomposition of X.
Your 'fx01' implementation is, as you mentioned, somewhat naive and is performing far more work than the 'fx03' approach. In linear algebra (my apologies for the main StackOverflow not supporting LaTeX!), 'fx01' performs:
B := A' A in roughly n^3 flops.
L := chol(B) in roughly 1/3 n^3 flops.
L := inv(L) in roughly 1/3 n^3 flops.
B := L' L in roughly 1/3 n^3 flops.
z := A' y in roughly 2n^2 flops.
x := B z in roughly 2n^2 flops.
Thus, the cost looks very similar to 2n^3 + 4n^2, whereas your 'fx03' approach uses the default 'solve' routine, which likely performs an LU decomposition with partial pivoting (2/3 n^3 flops) and two triangular solves (plus pivoting) in 2n^2 flops. Your 'fx01' approach therefore performs three times as much work asymptotically, and this agrees remarkably well with your experimental results. Note that if A were real symmetric or complex Hermitian, an LDL^T (LDL') factorization and solve would only require half as much work.
With that said, I think that you should replace your Cholesky update of A' A with a more stable QR update of A, as I just answered in your previous question. A QR decomposition costs roughly 4/3 n^3 flops and a rank-one update to a QR decomposition is only O(n^2), so this approach only makes sense for general A when there is more than just one related solve that is simply a rank-one modification.
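To the side question about backsolve: if you do stay with the Cholesky of A' A, a less naive version skips the explicit inverse from chol2inv and uses two triangular solves instead. A sketch (my addition, under the same setup as fx01):
# Solve A'A x = A'y with one Cholesky factorisation and two triangular solves,
# instead of explicitly inverting A'A with chol2inv().
fx02 <- function(ll, A, y) {
  R <- chol(crossprod(A))                    # A'A = R'R, with R upper triangular
  z <- forwardsolve(t(R), crossprod(A, y))   # solve R' z = A' y
  backsolve(R, z)                            # solve R x = z
}
p <- 100
A <- matrix(rnorm(p^2), p, p)
y <- rnorm(p)
max(abs(fx02(1, A, y) - solve(A, y)))        # small: both compute the same solution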

Solving an equation using Matlab

S=solve(strcat('a*gamma(1+(1/b))=',int2str(m)),strcat('a*a*gamma(1+(2/b))=',int2str(c)));
The values of the variables m and c are known. How can one solve for a and b?
I guess a and b are arbitrary constants. You can declare them as syms. If you really need to solve for a and b, use two equations in two unknowns or the solve() function in MATLAB.
Try the optimization toolkit if you have it:
f = @(a,b) (a(1)*gamma(1+(1/a(2))) - b(1))^2 + (a(1)^2*gamma(1+(2/a(2))) - b(2))^2;
X = fminsearch(@(a) f(a,b), [1;1])   % with b = [m; c]
