Implementing additional constraints in R's nnls

I am using the R interface to the Lawson-Hanson NNLS implementation of an algorithm for non-negative linear least squares, which minimizes ||A x - b||^2 subject to the constraint that all elements of the vector x ≥ 0. This works fine, but I would like to add further constraints. Of interest to me are:
1. Also minimize the "energy" of x:
||A x - b||^2 + m ||x||^2
2. Minimize the "energy in the x derivative":
||A x - b||^2 + m ||H x||^2, where H is the sum of the identity and a matrix with -1 on the first off-diagonal
3. Most generally, minimize ||A x - b||^2 + m ||H x - f||^2.
Is there a way to coax nnls into doing this by some clever restatement of problems 1-3 above? The reason I have hope for such a thing is a little throwaway comment in a paper by Whitall et al. (sorry for the paywall) claiming that "fortunately, NNLS can be adapted from the original form above to accommodate" something like problem 3.

I take it m is a scalar, right? Consider the simple case m=1; you can generalize for other values of m by letting H* = sqrt(m) H and f* = sqrt(m) f and using the solution method given here.
So now you're trying to minimize ||A x - b||^2 + ||H x - f||^2.
Let A* = [A' | H']' and b* = [b' | f']' (i.e., stack A on top of H and b on top of f), and solve the original problem of non-negative linear least squares on ||A* x - b*||^2 with the constraint that all elements of the vector x ≥ 0.
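In R, this stacking trick is only a few lines with the nnls package. A minimal sketch, assuming A, b, H, f, and the scalar m are already defined with compatible dimensions:
library(nnls)
# Augment the system: minimizing ||A* x - b*||^2 over x >= 0 is the
# same as minimizing ||A x - b||^2 + m ||H x - f||^2 over x >= 0.
A_star <- rbind(A, sqrt(m) * H)
b_star <- c(b, sqrt(m) * f)
fit <- nnls(A_star, b_star)
fit$x  # the constrained solution
Problem 1 is the special case H = identity and f = 0; problem 2 uses the H described above, again with f = 0.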

Related

While loop in R: need more efficient code

I have written R code to solve the following equations jointly. The equations are given in closed form, but solving them jointly requires an iterative numerical procedure. I further divided the numerator and denominator of (B) by N to obtain arithmetic means.
Here is my code:
y = cbind(Sta, Zta, Ste, Zte)  # combine the variables
St = as.matrix(y[, c(1, 3)])
Stm = c(mean(St[, 1]), mean(St[, 2]))  # arithmetic means of St's
Zt = as.matrix(y[, c(2, 4)])
Ztm = c(mean(Zt[, 1]), mean(Zt[, 2]))  # arithmetic means of Zt's
theta = c(-20, -20)  # starting values for thetas
tol = c(10^-4, 10^-4)
err = c(0, 0)
epscon = -0.1
while (any(abs(err) > tol) | epscon < 0) {
  ### A
  eps = (mean(y[, 2]^2) + mean(y[, 4]^2)) /
    (-mean(y[, 1] * y[, 2]) + theta[1] * mean(y[, 2]) - mean(y[, 3] * y[, 4]) + theta[2] * mean(y[, 4]))
  ### B
  thetan = Stm + (1 / eps) * Ztm
  err = thetan - theta
  theta = thetan
  epscon = 1 - eps
  print(c(eps, theta))
}
The iteration does not stop because the second condition of the while loop is never met: the solution is a positive epsilon, but I would like to get a negative epsilon. This, I guess, requires a grid search or a range of starting values for the thetas.
Can anyone help me code this process differently and more efficiently, or correct my code if there are flaws in it?
Thank you
If I read them correctly, then by linearity your equations have the form
ΘA = a + b / ε
ΘB = c + d / ε
1/ε = e ΘA + f ΘB + g
This is an easy 3x3 linear system.
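Concretely, substituting u = 1/ε turns the three equations into a linear system in (ΘA, ΘB, u) that solve() handles directly. A rough sketch, where a, b, cc, d, e, f, g stand for whatever coefficients your model implies (cc rather than c, to avoid masking R's c() function):
M <- rbind(c( 1,  0, -b),
           c( 0,  1, -d),
           c(-e, -f,  1))
rhs <- c(a, cc, g)
sol <- solve(M, rhs)  # sol = (theta_A, theta_B, u), with u = 1/eps
eps <- 1 / sol[3]
No iteration and no starting values are needed, and you can check the sign of eps directly.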

How do I minimize a linear least squares function in R?

I'm reading Deep Learning by Goodfellow et al. and am trying to implement gradient descent as shown in Section 4.5 Example: Linear Least Squares. This is page 92 in the hard copy of the book.
The algorithm can be viewed in detail at https://www.deeplearningbook.org/contents/numerical.html, with the linear least squares example on page 94 of the pdf.
I've tried implementing in R, and the algorithm as implemented converges on a vector, but this vector does not seem to minimize the least squares function as required. Adding epsilon to the vector in question frequently produces a "minimum" less than the minimum outputted by my program.
options(digits = 15)
dim_square = 2  ### set dimension of square matrix
# generate a random matrix A and vector b
set.seed(1234)
A = matrix(nrow = dim_square, ncol = dim_square, byrow = T, rlnorm(dim_square^2) / 10)
b = rep(rnorm(1), dim_square)
# having fixed A and b, select x randomly
x = rnorm(dim_square)  # vector of length dim_square; supposed to be arbitrary
f = function(x, A, b) {
  total_vector = A %*% x + b  # this is the function that we want to minimize
  total = 0.5 * sum(abs(total_vector)^2)  # L2 norm squared
  return(total)
}
f(x, A, b)
f(x,A,b)
# how close do we want to get?
epsilon = 0.1
delta = 0.01
value = (t(A) %*% A) %*% x - t(A) %*% b
L2_norm = (sum(abs(value)^2))^0.5
steps = vector()
while (L2_norm > delta) {
  x = x - epsilon * value
  value = (t(A) %*% A) %*% x - t(A) %*% b
  L2_norm = (sum(abs(value)^2))^0.5
  print(L2_norm)
}
minimum = f(x, A, b)
minimum
minimum_minus = f(x - 0.5*epsilon, A, b)
minimum_minus # less than the minimum found by gradient descent! Why?
Following page 94 of the pdf at https://www.deeplearningbook.org/contents/numerical.html, I am trying to find the value of the vector x such that f(x) is minimized. However, as demonstrated by minimum and minimum_minus in my code, minimum is not the actual minimum, since it exceeds minimum_minus.
Any idea what the problem might be?
Original Problem
Finding the value of x that minimizes ||Ax - b|| is, for invertible A, equivalent to finding the value of x such that Ax - b = 0, i.e. x = (A^-1) b. This is because the L2 norm is the Euclidean norm, more commonly known as the distance formula; by definition, distance cannot be negative, so its minimum is identically zero.
This algorithm, as implemented, actually comes quite close to estimating x. However, because of repeated subtraction and rounding, one quickly runs into the problem of underflow, resulting in the massive oscillation shown below:
[Figure: value of the L2 norm as a function of step size]
[Figure: the above algorithm vs. the solve function in R]
Above we have the results of A %*% x followed by A %*% min_x, with x estimated by the implemented algorithm and min_x estimated by the solve function in R.
The problem of underflow, well known to those familiar with numerical analysis, is probably best left to the programmers of lower-level libraries, who are best equipped to handle it.
To summarize, the algorithm appears to work as implemented. It is important to note, however, that not every function has a minimum (think of a straight line), and that this algorithm can only be expected to find a local, as opposed to a global, minimum.
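For reference, a quick way to reproduce that comparison after the loop above finishes (a sketch, assuming A, b, and the final x are still in scope):
min_x <- solve(A, b)  # direct solution of A x = b, as described above
A %*% x               # gradient descent's estimate, approximately b
A %*% min_x           # equals b up to floating-point error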

Remainder sequences

I would like to compute the remainder sequence of two polynomials as used by GCD. If I understood the Wikipedia article about Pseudo-remainder sequence, one way to compute it is to use Euclid's algorithm:
gcd(a, b) := if b = 0 then a else gcd(b, rem(a, b))
meaning I will collect the rem() parts. If, however, the coefficients are integers, the intermediate fractions grow very quickly, so there are the so-called "pseudo-remainder sequences" which try to keep the coefficients as small integers.
My question is: if I understood correctly (did I?), the two sequences above differ only by a constant factor, but when I run the following example I get different results. Why? The first remainder differs by a factor of -2, OK, but why is the second one so different? I presume subresultants() works correctly, but why does g % (f % g) not work?
from sympy import Poly, subresultants
from sympy.abc import x, y

f = Poly(x**2*y + x**2 - 5*x*y + 2*x + 1, x, y)
g = Poly(2*x**2 - 12*x + 1, x)

print(subresultants(f, g)[2])
print(subresultants(f, g)[3])

print(f % g)
print(g % (f % g))
which results in
Poly(-2*x*y - 16*x + y - 1, x, y, domain='ZZ')
Poly(-9*y**2 - 54*y + 225, x, y, domain='ZZ')
Poly(x*y + 8*x - 1/2*y + 1/2, x, y, domain='QQ')
Poly(2*x**2 - 12*x + 1, x, y, domain='QQ')
"the two above sequences differ only by constant factor"
For polynomials of one variable, they do. For multivariate polynomials, they don't.
The division of multivariate polynomials is a somewhat tricky business: the result depends on the chosen order of monomials (by default, sympy uses lexicographic order). When you ask it to divide 2*x**2 - 12*x + 1 by x*y + 8*x - 1/2*y + 1/2, it observes that the leading monomial of the divisor is x*y, and there is no monomial in the dividend that is divisible by x*y. So the quotient is zero, and everything is a remainder.
The computation of subresultants (as it's implemented in sympy) treats polynomials in x,y as single-variable polynomials in x whose coefficients happen to come from the ring of polynomials in y. It is certain to produce a sequence of subresultants whose degree with respect to x keeps decreasing until it reaches 0: the last polynomial of the sequence will not have x in it. The degree with respect to y may (and does) go up, since the algorithm has no problem multiplying the terms by any polynomials in y in order to get x to drop out.
The upshot is that both computations work correctly, they just do different things.
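To make the univariate case concrete, here is a rough sketch (my own illustration; the thread itself uses sympy) of collecting the plain Euclidean remainder sequence in R, with polynomials represented as coefficient vectors, highest degree first. No attempt is made to control coefficient growth, which is exactly what pseudo-remainder sequences address:
poly_rem <- function(a, b) {
  # long division of a by b, keeping only the remainder
  while (length(a) >= length(b)) {
    a <- (a - c(a[1] / b[1] * b, rep(0, length(a) - length(b))))[-1]
  }
  a
}
strip0 <- function(p) { while (length(p) > 0 && p[1] == 0) p <- p[-1]; p }
remainder_sequence <- function(a, b) {
  rems <- list()
  while (length(b) > 0) {
    r <- strip0(poly_rem(a, b))
    rems <- c(rems, list(r))
    a <- b
    b <- r
  }
  rems
}
remainder_sequence(c(1, 0, 0, -1), c(1, 0, -1))  # x^3 - 1 vs x^2 - 1: x - 1, then the zero remainder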

Vectorizing code to calculate (squared) Mahalanobis Distance

EDIT 2: this post seems to have been moved from CrossValidated to StackOverflow because it is mostly about programming, which means my fancy MathJax doesn't work anymore. Hopefully this is still readable.
Say I want to calculate the squared Mahalanobis distance between two vectors x and y with covariance matrix S. This is a fairly simple function defined by
M2(x, y; S) = (x - y)^T * S^-1 * (x - y)
With python's numpy package I can do this as
# x, y = numpy.ndarray of shape (n,)
# s_inv = numpy.ndarray of shape (n, n)
diff = x - y
d2 = diff.T.dot(s_inv).dot(diff)
or in R as
diff <- x - y
d2 <- t(diff) %*% s_inv %*% diff
In my case, though, I am given
m by n matrix X
n-dimensional vector mu
n by n covariance matrix S
and want to find the m-dimensional vector d such that
d_i = M2(x_i, mu; S) ( i = 1 .. m )
where x_i is the ith row of X.
This is not difficult to accomplish using a simple loop in python:
d = numpy.zeros((m,))
for i in range(m):
    diff = x[i, :] - mu
    d[i] = diff.T.dot(s_inv).dot(diff)
Of course, given that the outer loop happens in Python rather than in native code in the numpy library, it's not as fast as it could be. n and m are about 3-4 and several hundred thousand, respectively, and I'm doing this somewhat often in an interactive program, so a speedup would be very useful.
Mathematically, the only way I've been able to formulate this using basic matrix operations is
d = diag( X' * S^-1 * X'^T )
where
x'_i = x_i - mu
which is simple to write a vectorized version of, but this is unfortunately outweighed by the inefficiency of calculating a 10-billion-plus element matrix and only taking the diagonal... I believe this operation should be easily expressible using Einstein notation, and thus could hopefully be evaluated quickly with numpy's einsum function, but I haven't even begun to figure out how that black magic works.
So, I would like to know: is there either a nicer way to formulate this operation mathematically (in terms of simple matrix operations), or could someone suggest some nice vectorized (python or R) code that does this efficiently?
BONUS QUESTION, for the brave
I don't actually want to do this once, I want to do it k ~ 100 times. Given:
m by n matrix X
k by n matrix U
Set of n by n covariance matrices each denoted S_j (j = 1..k)
Find the m by k matrix D such that
D_i,j = M(x_i, u_j; S_j)
Where i = 1..m, j = 1..k, x_i is the ith row of X and u_j is the jth row of U.
I.e., vectorize the following code:
# s_inv is (k x n x n) array containing "stacked" inverses
# of covariance matrices
d = numpy.zeros( (m, k) )
for j in range(k):
for i in range(m):
diff = x[i, :] - u[j, :]
d[i, j] = diff.T.dot(s_inv[j, :, :]).dot(diff)
First off, it seems like maybe you're getting S and then inverting it. You shouldn't do that; it's slow and numerically inaccurate. Instead, you should get the Cholesky factor L of S so that S = L L^T; then
M^2(x, y; L L^T)
= (x - y)^T (L L^T)^-1 (x - y)
= (x - y)^T L^-T L^-1 (x - y)
= || L^-1 (x - y) ||^2,
and since L is triangular L^-1 (x - y) can be computed efficiently.
As it turns out, scipy.linalg.solve_triangular will happily do a bunch of these at once if you reshape it properly:
L = np.linalg.cholesky(S)
y = scipy.linalg.solve_triangular(L, (X - mu[np.newaxis]).T, lower=True)
d = np.einsum('ij,ij->j', y, y)
Breaking that down a bit, y[i, j] is the ith component of L^-1 (X_j - \mu). The einsum call then does
d_j = \sum_i y_{ij} y_{ij}
= \sum_i y_{ij}^2
= || y_j ||^2,
like we need.
Unfortunately, solve_triangular won't vectorize across its first argument, so you should probably just loop there. If k is only about 100, that's not going to be a significant issue.
If you are actually given S^-1 rather than S, then you can indeed do this with einsum more directly. Since S is quite small in your case, it's also possible that actually inverting the matrix and then doing this would be faster. As soon as n is a nontrivial size, though, you're throwing away a lot of numerical accuracy by doing this.
To figure out what to do with einsum, write everything in terms of components. I'll go straight to the bonus case, writing S_j^-1 = T_j for notational convenience:
D_{ij} = M^2(x_i, u_j; S_j)
= (x_i - u_j)^T T_j (x_i - u_j)
= \sum_k (x_i - u_j)_k ( T_j (x_i - u_j) )_k
= \sum_k (x_i - u_j)_k \sum_l (T_j)_{k l} (x_i - u_j)_l
= \sum_{k l} (X_{i k} - U_{j k}) (T_j)_{k l} (X_{i l} - U_{j l})
So, if we make arrays X of shape (m, n), U of shape (k, n), and T of shape (k, n, n), then we can write this as
diff = X[np.newaxis, :, :] - U[:, np.newaxis, :]
D = np.einsum('jik,jkl,jil->ij', diff, T, diff)
where diff[j, i, k] = X[i, k] - U[j, k].
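As an aside for R users following this thread, the same Cholesky-plus-triangular-solve idea takes a few lines of base R. A rough sketch, assuming X is an m by n matrix, mu a length-n vector, and S the n by n covariance matrix:
L <- t(chol(S))  # chol() returns the upper factor, so transpose: S = L %*% t(L)
y <- forwardsolve(L, t(sweep(X, 2, mu)))  # column i holds L^-1 (x_i - mu)
d <- colSums(y^2)  # squared Mahalanobis distance of each row of X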
Dougal nailed this one with an excellent and detailed answer, but I thought I'd share a small modification that I found increases efficiency, in case anyone else is trying to implement this. Straight to the point:
Dougal's method was as follows:
def mahalanobis2(X, mu, sigma):
    L = np.linalg.cholesky(sigma)
    y = scipy.linalg.solve_triangular(L, (X - mu[np.newaxis, :]).T, lower=True)
    return np.einsum('ij,ij->j', y, y)
A mathematically equivalent variant I tried is
def mahalanobis2_2(X, mu, sigma):
    # Cholesky decomposition of the inverse of the covariance matrix
    # (doing this in either order should be equivalent)
    linv = np.linalg.cholesky(np.linalg.inv(sigma))
    # just do regular matrix multiplication with this matrix
    y = (X - mu[np.newaxis, :]).dot(linv)
    # same as above, but note the different index at the end, because
    # the matrix y is transposed here compared to above
    return np.einsum('ij,ij->i', y, y)
I ran both versions head-to-head 20 times using identical random inputs and recorded the times (in milliseconds). For X as a 1,000,000 x 3 matrix (with mu and sigma of size 3 and 3x3) I get:
Method 1 (min/max/avg): 30/62/49
Method 2 (min/max/avg): 30/47/37
That's about a 30% speedup for the 2nd version. I'm mostly going to be running this in 3 or 4 dimensions, but to see how it scaled I tried X as a 1,000,000 x 100 matrix and got:
Method 1 (min/max/avg): 970/1134/1043
Method 2 (min/max/avg): 776/907/837
which is about the same improvement.
I mentioned this in a comment on Dougal's answer but adding here for additional visibility:
The first pair of methods above take a single center point mu and covariance matrix sigma and calculate the squared Mahalanobis distance to each row of X. My bonus question was to do this multiple times with many sets of mu and sigma and output a two-dimensional matrix. The set of methods above can be used to accomplish this with a simple for loop, but Dougal also posted a more clever example using einsum.
I decided to compare these methods with each other by using them to solve the following problem: Given k d-dimensional normal distributions (with centers stored in rows of k by d matrix U and covariance matrices in the last two dimensions of the k by d by d array S), find the density at the n points stored in rows of the n by d matrix X.
The density of a multivariate normal distribution is a function of the squared Mahalanobis distance of the point to the mean. Scipy has an implementation of this as scipy.stats.multivariate_normal.pdf to use as a reference. I ran all three methods against each other 10x using identical random parameters each time, with d=3, k=96, n=5e5. Here are the results, in points/sec:
[Method]: (min/max/avg)
Scipy: 1.18e5/1.29e5/1.22e5
Fancy 1: 1.41e5/1.53e5/1.48e5
Fancy 2: 8.69e4/9.73e4/9.03e4
Fancy 2 (cheating version): 8.61e4/9.88e4/9.04e4
where Fancy 1 is the better of the two methods above and Fancy 2 is Dougal's second solution. Since Fancy 2 needs to calculate the inverses of all the covariance matrices, I also tried a "cheating version" where it was passed these as a parameter, but it looks like that didn't make a difference. I had planned on including the non-vectorized implementation, but that was so slow it would have taken all day.
What we can take away from this is that Dougal's first method is about 20% faster than however Scipy does it. Unfortunately, despite its cleverness, the second method is only about 60% as fast as the first. There are probably some other optimizations that can be done, but this is already fast enough for me.
I also tested how this scaled with higher dimensionality. With d=100, k=96, n=1e4:
Scipy: 7.81e3/7.91e3/7.86e3
Fancy 1: 1.03e4/1.15e4/1.08e4
Fancy 2: 3.75e3/4.10e3/3.95e3
Fancy 2 (cheating version): 3.58e3/4.09e3/3.85e3
Fancy 1 seems to have an even bigger advantage this time. Also worth noting is that Scipy threw a LinAlgError 8/10 times, probably because some of my randomly generated 100x100 covariance matrices were close to singular (which may mean that the other two methods are not as numerically stable; I did not actually check the results).

Convert Linear scale to Logarithmic

I have a linear scale that ranges from 0.1 to 10 in increments of 0.1:
|----------[]----------|
0.1 5.0 10
However, the output really needs to be:
|----------[]----------|
0.1 1.0 10 (logarithmic scale)
I'm trying to figure out the formula needed to convert the 5 (for example) to 1.0.
Consequently, if the dial were shifted halfway between 1.0 and 10 (the real value on the linear scale being 7.5), what would the resulting logarithmic value be? I've been thinking about this for hours, but I have not worked with this type of math in quite a few years, so I am really lost. I understand the basic concept that log10(x) = y means 10^y = x, but that's pretty much it.
The pseudo-value of 5.0 would become 10 (or 10^1) while the pseudo-value of 10 would be 10^10. So how do I figure out the pseudo-value and resulting logarithmic value of, let's say, 7.5?
Let me know if additional information is needed.
Thanks for any help provided; this has beaten me.
Notation
As is the convention both in mathematics and programming, the "log" function is taken to be base e, and "exp" is the exponential function. Remember that these functions are inverses of each other; we take them as:
exp : ℝ → ℝ+, and
log : ℝ+ → ℝ.
Solution
You're just solving a simple equation here:
y = a exp bx
Solve for a and b passing through the points x=0.1, y=0.1 and x=10, y=10.
Observe that the ratio y1/y2 is given by
y1/y2 = (a exp bx1) / (a exp bx2) = exp b(x1 - x2),
which allows you to solve for b:
b = log (y1/y2) / (x1 - x2)
The rest is easy.
b = log (10 / 0.1) / (10 - 0.1) = 20/99 log 10 ≈ 0.46516870565536284
a = y1 / exp bx1 ≈ 0.09545484566618341
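As a quick sanity check in R, using the asker's scale from 0.1 to 10 (log() in R is base e, matching the convention above):
b <- log(10 / 0.1) / (10 - 0.1)  # 0.4651687...
a <- 0.1 / exp(b * 0.1)          # 0.0954548...
dial <- function(x) a * exp(b * x)
dial(0.1)  # 0.1, the left endpoint
dial(10)   # 10, the right endpoint
dial(7.5)  # roughly 3.13, the position asked about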
More About Notation
In your career you will find people who use the convention that the log function uses base e, base 10, and even base 2. This does not mean that anybody is right or wrong. It is simply a notational convention and everybody is free to use the notational convention that they prefer.
The convention in both mathematics and computer programming is to use base e logarithm, and using base e simplifies notation in this case, which is why I chose it. It is not the same as the convention used by calculators such as the one provided by Google and your TI-84, but then again, calculators are for engineers, and engineers use different notation than mathematicians and programmers.
The following programming languages include a base-e log function in the standard library.
C log() (and C++, by inclusion)
Java Math.log()
JavaScript Math.log()
Python math.log() (including Numpy)
Fortran log()
C#, Math.Log()
R
Maxima (strictly speaking a CAS, not a language)
Scheme's log
Lisp's log
In fact, I cannot think of a single programming language where log() is anything other than the base-e logarithm. I'm sure such a programming language exists.
I realize this answer is six years too late, but it might help someone else.
Given a linear scale whose values range from x0 to x1, and a logarithmic scale whose values range from y0 to y1, the mapping between x and y (in either direction) is given by the relationship shown in equation 1:
(x - x0) / (x1 - x0) = (log(y) - log(y0)) / (log(y1) - log(y0))    (1)
where,
x0 < x1
{ x | x0 <= x <= x1 }
y0 < y1
{ y | y0 <= y <= y1 }
y1/y0 != 1 ; i.e., log(y1) - log(y0) != 0
y0, y1, y != 0
EXAMPLE 1
The values on the linear x-axis range from 10 to 12, and the values on the logarithmic y-axis range from 300 to 3000. Given y=1000, what is x?
Rearranging equation 1 to solve for x yields
x = (x1 - x0) * (log(y) - log(y0)) / (log(y1) - log(y0)) + x0
  = (12 - 10) * (log(1000) - log(300)) / (log(3000) - log(300)) + 10
  ≈ 11
EXAMPLE 2
Given the values in your question, the values on the linear x-axis range from 0.1 to 10, and the values on the logarithmic y-axis range from 0.1 to 10, and the log base is 10. Given x=7.5, what is y?
Rearranging equation 1 to solve for y yields
log(y) = (x - x0) / (x1 - x0) * (log(y1) - log(y0)) + log(y0)
so that
y = 10^[ (x - x0) / (x1 - x0) * (log(y1) - log(y0)) + log(y0) ]
  = 10^[ (7.5 - 0.1) / (10 - 0.1) * (log(10) - log(0.1)) + log(0.1) ]
  = 10^[ (7.5 - 0.1) / (10 - 0.1) * (1 - (-1)) + (-1) ]
  ≈ 3.13
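Both examples are easy to script. A small R sketch of equation 1 in both directions (the function names are mine; any log base works, since only ratios of logs appear):
lin_to_log <- function(x, x0, x1, y0, y1) {
  # map a linear-scale value x in [x0, x1] onto the log scale [y0, y1]
  exp((x - x0) / (x1 - x0) * (log(y1) - log(y0)) + log(y0))
}
log_to_lin <- function(y, x0, x1, y0, y1) {
  # map a log-scale value y in [y0, y1] back onto the linear scale [x0, x1]
  (x1 - x0) * (log(y) - log(y0)) / (log(y1) - log(y0)) + x0
}
log_to_lin(1000, 10, 12, 300, 3000)  # Example 1: approximately 11
lin_to_log(7.5, 0.1, 10, 0.1, 10)    # Example 2: approximately 3.13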
:: EDIT (11 Oct 2020) ::
For what it's worth, the number base n can be any real-valued positive number. The examples above use logarithm base 10, but the logarithm base could be 2, 13, e, pi, etc. Here's a spreadsheet I created that performs the calculations for any real-valued positive number base. The "solution" cells are colored yellow and have thick borders. In these figures, I picked at random the logarithm base n = 13, i.e., z = log_13(y).
[Figure 1: spreadsheet values]
[Figure 2: spreadsheet formulas]
[Figure 3: mapping of X and Y values]
