How do I solve a system of linear equations using SageMath?

I have a system of linear equations; however, I DO NOT want the answer to be a number - I want it in terms of the parameters.
ax+by= m
cx+dy= n
I don't have values for any of the constants, so for the system above I'd just want the answer x = (md-nb)/(ad-bc) and y = (mc-na)/(bc-ad).
How do I do this in Sage?

var('a b c d m n x y')
solve([a*x+b*y==m,c*x+d*y==n],x,y)
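# solve() returns the solution symbolically: x and y expressed in terms of a, b, c, d, m and n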

Related

R Derivatives of an Inverse

I have an expression that contains several parts. However, for simplicity, consider only the following part as an MWE:
Let's assume we have the inverse of a matrix Y that I want to differentiate w.r.t. x.
Y is given as I - (x * b * t(b)), where I is the identity matrix, x is a scalar, and b is a vector.
According to The Matrix Cookbook Eq. 59, the partial derivative of an inverse is d(Y^-1)/dx = -Y^-1 (dY/dx) Y^-1.
Normally I would use the function D from the stats package to calculate the derivatives. That is not possible in this case, however, because functions such as solve() (used to specify the inverse of Y) and t() are not in the table of derivatives.
What is the best workaround to circumvent this problem? Are there any other recommended packages that can handle such input?
Example that doesn't work:
f0 <- expression(solve(I - (x * b %*% t(b))))
D(f0, "x")
Example that works:
f0 <- expression(x^3)
D(f0, "x")
3 * x^2
I assume that the question is how to get an explicit expression for the derivative of the inverse of Y with respect to x. In the first section we compute it and in the second section we double check it by computing it numerically and show that the two approaches give the same result.
The span of b and the null space of b' (the orthogonal complement of b) are both eigenspaces of Y, which we can readily verify by noting that Yb = (1 - x(b'b))b and that Yz = z whenever b'z = 0. This also shows that the corresponding eigenvalues are 1 - x(b'b) with multiplicity 1 and 1 with multiplicity n-1 (since the orthogonal complement has that dimension).
Because such a matrix can be expanded as the sum, over eigenvalues, of the eigenvalue times the projection onto its eigenspace, we can express Y as follows, where bb'/(b'b) is the projection onto the eigenspace spanned by b and the factor multiplying it is the corresponding eigenvalue. The remaining terms do not involve x: their eigenvalue is 1 independently of x, and the orthogonal complement of b does not depend on x either.
Y = (1-x(b'b))(bb')/(b'b) + terms not involving x
The inverse of Y is formed by taking the reciprocals of the eigenvalues so:
Yinv = 1/(1-x(b'b)) * (bb')/(b'b) + terms not involving x
and the derivative of that wrt x is:
(b'b) / (1 - x(b'b))^2 * (bb')/(b'b)
Cancelling the b'b and writing the derivative in terms of R code:
1/(1 - x*sum(b*b))^2*outer(b, b)
Double check
Using specific values for b and x we can verify it against the numeric derivative as follows:
library(numDeriv)
x <- 1
b <- 1:3
# Y inverse as a function of x
Yinv <- function(x) solve(diag(3) - x * outer(b, b))
all.equal(matrix(jacobian(Yinv, x = 1), 3),
          1/(1 - x * sum(b * b))^2 * outer(b, b))
## [1] TRUE
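As an additional sanity check (a sketch, reusing the same b and x as above), the Matrix Cookbook identity quoted in the question, d(Y^-1)/dx = -Y^-1 (dY/dx) Y^-1, gives the same matrix: here dY/dx = -bb', so the identity reduces to Yinv bb' Yinv.

x <- 1
b <- 1:3
Yi <- solve(diag(3) - x * outer(b, b))   # Y^-1 at x = 1
# -Y^-1 (dY/dx) Y^-1 with dY/dx = -outer(b, b)
all.equal(Yi %*% outer(b, b) %*% Yi,
          1/(1 - x * sum(b * b))^2 * outer(b, b))   # should be TRUE up to numerical tolerance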

How do I minimize a linear least squares function in R?

I'm reading Deep Learning by Goodfellow et al. and am trying to implement gradient descent as shown in Section 4.5 Example: Linear Least Squares. This is page 92 in the hard copy of the book.
The algorithm is described in detail at https://www.deeplearningbook.org/contents/numerical.html, with the linear least squares example on page 94 of the pdf.
I've tried implementing it in R, and the algorithm as implemented converges on a vector, but this vector does not seem to minimize the least squares function as required. Adding epsilon to the vector in question frequently produces a "minimum" less than the minimum output by my program.
options(digits = 15)
dim_square = 2 ### set dimension of square matrix
# generate a random matrix A and a random vector b
set.seed(1234)
A = matrix(rlnorm(dim_square ^ 2) / 10, nrow = dim_square, ncol = dim_square, byrow = TRUE)
b = rep(rnorm(1), dim_square)
# having fixed A and b, select x randomly
x = rnorm(dim_square) # vector of length dim_square -- supposed to be arbitrary
f = function(x, A, b){
  total_vector = A %*% x + b # this is the function that we want to minimize
  total = 0.5 * sum(abs(total_vector) ^ 2) # L2 norm squared
  return(total)
}
f(x,A,b)
# how close do we want to get?
epsilon = 0.1
delta = 0.01
value = (t(A) %*% A) %*% x - t(A) %*% b
L2_norm = (sum(abs(value) ^ 2)) ^ 0.5
steps = vector()
while(L2_norm > delta){
  x = x - epsilon * value
  value = (t(A) %*% A) %*% x - t(A) %*% b
  L2_norm = (sum(abs(value) ^ 2)) ^ 0.5
  print(L2_norm)
}
minimum = f(x, A, b)
minimum
minimum_minus = f(x - 0.5*epsilon, A, b)
minimum_minus # less than the minimum found by gradient descent! Why?
Following the algorithm on page 94 of the pdf at https://www.deeplearningbook.org/contents/numerical.html, I am trying to find the value of the vector x such that f(x) is minimized. However, as demonstrated by minimum and minimum_minus in my code, minimum is not the actual minimum, as it exceeds minimum_minus.
Any idea what the problem might be?
Original Problem
Finding the value of x that minimizes the quantity Ax - b is, for the invertible square A used here, equivalent to finding the value of x such that Ax - b = 0, i.e. x = (A^-1)*b. This is because the L2 norm is the Euclidean norm, more commonly known as the distance formula; by definition a distance cannot be negative, so its minimum here is identically zero.
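A minimal sketch of that closed-form check, using the A and b generated above (the helper g and the name min_x are introduced here for illustration; it measures 0.5*||Ax - b||^2, the objective the gradient step corresponds to):

min_x <- solve(A, b)                                 # closed-form solution of A x = b
g <- function(x, A, b) 0.5 * sum((A %*% x - b) ^ 2)  # 0.5 * ||A x - b||^2
g(min_x, A, b)   # essentially 0, up to rounding
g(x, A, b)       # value at the gradient-descent estimate, for comparison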
This algorithm, as implemented, actually comes quite close to estimating x. However, because of repeated subtraction and rounding, one quickly runs into the problem of underflow, resulting in the massive oscillation shown below:
[Plot: value of the L2 norm as a function of step size]
[Output: the above algorithm vs. the solve function in R]
Above we have the results of A %*% x followed by A %*% min_x, with x estimated by the implemented algorithm and min_x obtained from R's solve function.
The problem of underflow, well known to those familiar with numerical analysis, is probably best left to the authors of the lower-level libraries equipped to handle it.
To summarize, the algorithm appears to work as implemented. It is important to note, however, that not every function has a minimum (think of a straight line), and that this algorithm can only be expected to find a local, as opposed to a global, minimum.

R programming: Linear Equation

How can I generate, in R code, the list of possible values satisfying a given equation? For example (this is just a random equation; feel free to use any suitable formula):
For positive integers x, y, and z with the formula x^3 + y^2 = z.
How can I test all combinations of x and y less than or equal to 1000, together with z, to see which satisfy the formula?
You can generate all possible values with expand.grid and then subset to the ones meeting your criteria:
vals <- expand.grid(x=seq(1000), y=seq(1000))
subset(vals, x^3 + y^2 == 108)
#        x  y
# 8003   3  9
# 9002   2 10
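If, instead of fixing z at a single value like 108, you want every (x, y) pair whose z = x^3 + y^2 stays within some bound, the same idea works (a sketch; the bound of 1000 is an assumption based on the question):

vals$z <- vals$x^3 + vals$y^2    # z is determined by x and y
subset(vals, z <= 1000)          # keep only the pairs whose z stays within the bound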

plotting matrix equation in R

I'm new to R and I need to plot the quadratic matrix equation:
x^T A x + b^T x + c = 0
in R^2, with A being 2x2, b 2x1, and c a constant. The equation defines a boundary between classes of points. I need to plot that boundary for x0 = -6...6, x1 = -4...6. My first thought was to generate a bunch of points and see where the expression is zero, but that depends on the increment between the points (most likely I won't hit the points where it is exactly zero).
Is there a better way than just generating a bunch of points and seeing where it is zero or multiplying it out? Any help would be much appreciated,
Thank you.
Assuming you have a symmetric matrix A,
e.g.
# A = | a    b/2 |
#     | b/2  c   |
and your equation represents a conic section, you can use the conics package.
What you need is a vector of coefficients c(a, b, c, d, e, f) representing
a*x^2 + b*x*y + c*y^2 + d*x + e*y + f
In your case, say you have
A <- matrix(c(2, 1, 1, 2), nrow = 2)
B <- c(-20, -28)
C <- 10
# create the coefficient vector c(a, b, c, d, e, f)
v <- append(c(diag(A), B, C), A[lower.tri(A)] * 2, 1)
library(conics)
conicPlot(v)
You could easily wrap this expansion in a simple function:
# note this does no checking for symmetry or validity of arguments
expand.conic <- function(A, B, C){
  append(c(diag(A), B, C), A[lower.tri(A)] * 2, 1)
}
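A usage sketch for the helper above (it assumes the conics package is installed, expand.conic has been defined as shown, and reuses the same A, B and C values):

library(conics)
A <- matrix(c(2, 1, 1, 2), nrow = 2)
B <- c(-20, -28)
C <- 10
conicPlot(expand.conic(A, B, C))   # same curve as conicPlot(v) above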

Implementing additional constraints in R's nnls

I am using the R interface to the Lawson-Hanson NNLS implementation, an algorithm for non-negative linear least squares that minimizes ||A x - b||^2 subject to the constraint that all elements of the vector x are ≥ 0. This works fine, but I would like to add further constraints. Of interest to me are:
Also minimize "energy" of x:
||A x - b||^2 + m*||x||^2
Minimize "energy in the x derivative"
||A x - b||^2 + m ||H x||^2, where H is the sum of identity and a matrix with -1 on the first off-diagonal
Most generally, minimize ||A x - b||^2 + m ||H x - f||^2.
Is there a way to coax nnls into doing this by some clever restatement of problems 1.-3. above? The reason I have hope for such a thing is a little throw-away comment in a paper by Whitall et al. (sorry for the paywall) that claims that "fortunately, NNLS can be adopted from the original form above to accommodate something in problem 3".
I take it m is a scalar, right? Consider the simple case m=1; you can generalize for other values of m by letting H* = sqrt(m) H and f* = sqrt(m) f and using the solution method given here.
So now you're trying to minimise ||A x - b||^2 + ||H x - f||^2.
Let A* = [A' | H']' and b* = [b' | f']' (i.e. stack A on top of H and b on top of f), and solve the original non-negative linear least squares problem on ||A* x - b*||^2 with the constraint that all elements of x are ≥ 0.
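A minimal sketch of that stacking trick with the nnls package (the data, H, f and m below are made up purely for illustration; H follows the description in point 2 above):

library(nnls)

set.seed(1)
A <- matrix(rnorm(20), nrow = 10, ncol = 2)    # toy A (10 x 2)
b <- rnorm(10)                                 # toy b
m <- 0.5                                       # penalty weight
n <- ncol(A)
H <- diag(n)
H[cbind(1:(n - 1), 2:n)] <- -1                 # identity plus -1 on the first off-diagonal
f <- rep(0, n)                                 # target for the penalty term

# stack A on top of sqrt(m) * H and b on top of sqrt(m) * f
Astar <- rbind(A, sqrt(m) * H)
bstar <- c(b, sqrt(m) * f)
fit <- nnls(Astar, bstar)
fit$x   # non-negative minimiser of ||A x - b||^2 + m * ||H x - f||^2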
