EDIT 2: this post seems to have been moved from CrossValidated to StackOverflow due to it being mostly about programming, which means my fancy MathJax doesn't work anymore. Hopefully this is still readable.
Say I want to calculate the squared Mahalanobis distance between two vectors x and y with covariance matrix S. This is a fairly simple function defined by
M2(x, y; S) = (x - y)^T * S^-1 * (x - y)
With python's numpy package I can do this as
# x, y = numpy.ndarray of shape (n,)
# s_inv = numpy.ndarray of shape (n, n)
diff = x - y
d2 = diff.T.dot(s_inv).dot(diff)
or in R as
diff <- x - y
d2 <- t(diff) %*% s_inv %*% diff
In my case, though, I am given
m by n matrix X
n-dimensional vector mu
n by n covariance matrix S
and want to find the m-dimensional vector d such that
d_i = M2(x_i, mu; S) ( i = 1 .. m )
where x_i is the ith row of X.
This is not difficult to accomplish using a simple loop in python:
d = numpy.zeros((m,))
for i in range(m):
    diff = x[i,:] - mu
    d[i] = diff.T.dot(s_inv).dot(diff)
Of course, the fact that the outer loop happens in python instead of in native code in the numpy library means it's not as fast as it could be. n and m are about 3-4 and several hundred thousand respectively, and I'm doing this somewhat often in an interactive program, so a speedup would be very useful.
Mathematically, the only way I've been able to formulate this using basic matrix operations is
d = diag( X' * S^-1 * X'^T )
where
x'_i = x_i - mu
which is simple to write a vectorized version of, but this is unfortunately outweighed by the inefficiency of calculating a 10-billion-plus element matrix and only taking the diagonal... I believe this operation should be easily expressible using Einstein notation, and thus could hopefully be evaluated quickly with numpy's einsum function, but I haven't even begun to figure out how that black magic works.
So, I would like to know: is there either a nicer way to formulate this operation mathematically (in terms of simple matrix operations), or could someone suggest some nice vectorized (python or R) code that does this efficiently?
BONUS QUESTION, for the brave
I don't actually want to do this once, I want to do it k ~ 100 times. Given:
m by n matrix X
k by n matrix U
Set of n by n covariance matrices each denoted S_j (j = 1..k)
Find the m by k matrix D such that
D_i,j = M2(x_i, u_j; S_j)
Where i = 1..m, j = 1..k, x_i is the ith row of X and u_j is the jth row of U.
I.e., vectorize the following code:
# s_inv is (k x n x n) array containing "stacked" inverses
# of covariance matrices
d = numpy.zeros( (m, k) )
for j in range(k):
    for i in range(m):
        diff = x[i, :] - u[j, :]
        d[i, j] = diff.T.dot(s_inv[j, :, :]).dot(diff)
First off, it seems like maybe you're getting S and then inverting it. You shouldn't do that; it's slow and numerically inaccurate. Instead, you should get the Cholesky factor L of S so that S = L L^T; then
M^2(x, y; L L^T)
= (x - y)^T (L L^T)^-1 (x - y)
= (x - y)^T L^-T L^-1 (x - y)
= || L^-1 (x - y) ||^2,
and since L is triangular L^-1 (x - y) can be computed efficiently.
As it turns out, scipy.linalg.solve_triangular will happily do a bunch of these at once if you reshape it properly:
L = np.linalg.cholesky(S)
y = scipy.linalg.solve_triangular(L, (X - mu[np.newaxis]).T, lower=True)
d = np.einsum('ij,ij->j', y, y)
Breaking that down a bit, y[i, j] is the ith component of L^-1 (X_j - mu). The einsum call then does
d_j = sum_i y_{ij} y_{ij}
    = sum_i y_{ij}^2
    = || y_j ||^2,
like we need.
Unfortunately, solve_triangular won't vectorize across its first argument, so you should probably just loop there. If k is only about 100, that's not going to be a significant issue.
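For example, a minimal sketch of that loop for the bonus case might look like this (variable names mirror the question; mahalanobis2_many is just an illustrative name, not something from the question or scipy):
import numpy as np
import scipy.linalg

def mahalanobis2_many(X, U, S):
    # X is (m, n), U is (k, n), S is (k, n, n); returns D of shape (m, k)
    m, k = X.shape[0], U.shape[0]
    D = np.empty((m, k))
    for j in range(k):
        L = np.linalg.cholesky(S[j])                                # S[j] = L L^T
        y = scipy.linalg.solve_triangular(L, (X - U[j]).T, lower=True)
        D[:, j] = np.einsum('ij,ij->j', y, y)                       # squared column norms
    return D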
If you are actually given S^-1 rather than S, then you can indeed do this with einsum more directly. Since S is quite small in your case, it's also possible that actually inverting the matrix and then doing this would be faster. As soon as n is a nontrivial size, though, you're throwing away a lot of numerical accuracy by doing this.
To figure out what to do with einsum, write everything in terms of components. I'll go straight to the bonus case, writing S_j^-1 = T_j for notational convenience:
D_{ij} = M^2(x_i, u_j; S_j)
       = (x_i - u_j)^T T_j (x_i - u_j)
       = sum_k (x_i - u_j)_k ( T_j (x_i - u_j) )_k
       = sum_k (x_i - u_j)_k sum_l (T_j)_{kl} (x_i - u_j)_l
       = sum_{kl} (X_{ik} - U_{jk}) (T_j)_{kl} (X_{il} - U_{jl})
So, if we make arrays X of shape (m, n), U of shape (k, n), and T of shape (k, n, n), then we can write this as
diff = X[np.newaxis, :, :] - U[:, np.newaxis, :]
D = np.einsum('jik,jkl,jil->ij', diff, T, diff)
where diff[j, i, k] = X[i, k] - U[j, k].
Dougal nailed this one with an excellent and detailed answer, but I thought I'd share a small modification that I found increases efficiency, in case anyone else is trying to implement this. Straight to the point:
Dougal's method was as follows:
def mahalanobis2(X, mu, sigma):
    L = np.linalg.cholesky(sigma)
    y = scipy.linalg.solve_triangular(L, (X - mu[np.newaxis,:]).T, lower=True)
    return np.einsum('ij,ij->j', y, y)
A mathematically equivalent variant I tried is
def mahalanobis2_2(X, mu, sigma):
    # Cholesky decomposition of inverse of covariance matrix
    # (Doing this in either order should be equivalent)
    linv = np.linalg.cholesky(np.linalg.inv(sigma))
    # Just do regular matrix multiplication with this matrix
    y = (X - mu[np.newaxis,:]).dot(linv)
    # Same as above, but note different index at end because the matrix
    # y is transposed here compared to above
    return np.einsum('ij,ij->i', y, y)
I ran both versions head-to-head 20x using identical random inputs and recorded the times (in milliseconds). For X as a 1,000,000 x 3 matrix (mu and sigma 3 and 3x3) I get:
Method 1 (min/max/avg): 30/62/49
Method 2 (min/max/avg): 30/47/37
That's about a 30% speedup for the 2nd version. I'm mostly going to be running this in 3 or 4 dimensions but to see how it scaled I tried X as 1,000,000 x 100 and got:
Method 1 (min/max/avg): 970/1134/1043
Method 2 (min/max/avg): 776/907/837
which is about the same improvement.
I mentioned this in a comment on Dougal's answer but adding here for additional visibility:
The first pair of methods above take a single center point mu and covariance matrix sigma and calculate the squared Mahalanobis distance to each row of X. My bonus question was to do this multiple times with many sets of mu and sigma and output a two-dimensional matrix. The set of methods above can be used to accomplish this with a simple for loop, but Dougal also posted a more clever example using einsum.
I decided to compare these methods with each other by using them to solve the following problem: Given k d-dimensional normal distributions (with centers stored in rows of k by d matrix U and covariance matrices in the last two dimensions of the k by d by d array S), find the density at the n points stored in rows of the n by d matrix X.
The density of a multivariate normal distribution is a function of the squared Mahalanobis distance of the point to the mean. Scipy has an implementation of this as scipy.stats.multivariate_normal.pdf to use as a reference. I ran all three methods against each other 10x using identical random parameters each time, with d=3, k=96, n=5e5. Here are the results, in points/sec:
[Method]: (min/max/avg)
Scipy: 1.18e5/1.29e5/1.22e5
Fancy 1: 1.41e5/1.53e5/1.48e5
Fancy 2: 8.69e4/9.73e4/9.03e4
Fancy 2 (cheating version): 8.61e4/9.88e4/9.04e4
where Fancy 1 is the better of the two methods above and Fancy 2 is Dougal's 2nd solution. Since Fancy 2 needs to calculate the inverses of all the covariance matrices, I also tried a "cheating version" where it was passed these as a parameter, but it looks like that didn't make a difference. I had planned on including the non-vectorized implementation, but that was so slow it would have taken all day.
What we can take away from this is that using Dougal's first method is about 20% faster than however Scipy does it. Unfortunately, despite its cleverness, the 2nd method is only about 60% as fast as the first. There are probably some other optimizations that can be done, but this is already fast enough for me.
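As an aside, going from the squared Mahalanobis distances to the densities themselves only takes a couple of extra lines. Here is a minimal sketch, assuming the distances come from mahalanobis2 above (mvn_density is just an illustrative name; the log-determinant is taken from the Cholesky factor):
import numpy as np

def mvn_density(X, mu, sigma):
    dim = X.shape[1]
    m2 = mahalanobis2(X, mu, sigma)            # squared Mahalanobis distances, shape (m,)
    L = np.linalg.cholesky(sigma)
    logdet = 2.0 * np.sum(np.log(np.diag(L)))  # log det(sigma) via the Cholesky factor
    return np.exp(-0.5 * (m2 + logdet + dim * np.log(2.0 * np.pi)))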
I also tested how this scaled with higher dimensionality. With d=100, k=96, n=1e4:
Scipy: 7.81e3/7.91e3/7.86e3
Fancy 1: 1.03e4/1.15e4/1.08e4
Fancy 2: 3.75e3/4.10e3/3.95e3
Fancy 2 (cheating version): 3.58e3/4.09e3/3.85e3
Fancy 1 seems to have an even bigger advantage this time. It's also worth noting that Scipy threw a LinAlgError 8/10 times, probably because some of my randomly-generated 100x100 covariance matrices were close to singular (which may mean that the other two methods are not as numerically stable; I did not actually check the results).
Related
I have a matrix $X$ and I would like to find its first principal component and the corresponding loadings. I would like to do this without computing the covariance matrix of $X$. How can I do so?
This is the standard version, which uses the eigendecomposition of the covariance matrix.
using LinearAlgebra: eigen
using Statistics: mean
function find_principal_component(X)
    n = size(X, 1)
    B = X .- mapslices(mean, X, dims=[1])  # Center columns of X
    evalues, V = eigen(B'B / (n - 1))      # Eigendecomposition of covariance matrix
    PC = V[:, argmax(evalues)]             # Grab principal component
    return B * PC, PC                      # Return loadings and principal component
end
Alternatively, one could use the power method, which still uses the covariance matrix:
using LinearAlgebra: norm

function power_method(X, niter=50)
    pc = randn(size(X, 2))
    pc /= norm(pc)
    M = X'X
    for i in 1:niter
        pc = M * pc
        pc /= norm(pc)
    end
    return X * pc, pc
end
I would like something like the power method, but without needing to compute the covariance matrix, which can be quite costly.
Possible solution
I noticed something interesting. Let r_t be the principal component vector at time t. The idea of the power method is to start with a random r_t and multiply it by X^T X many times to stretch it towards the principal component; in other words, r_{t+1} = X^T X r_t.
Once we have the principal component r_t, the loadings are simply l_t = X r_t. This means we can write r_{t+1} = X^T l_t.
One could therefore start with r_t and l_t initialized randomly and then do
r_{t+1} = normalize(X^T l_t)
l_{t+1} = X r_{t+1}
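For concreteness, here is a minimal numpy sketch of that iteration (purely illustrative; the same few lines translate directly to Julia, and X is assumed to already be centered):
import numpy as np

def first_pc(X, niter=50):
    # Only matrix-vector products with X and X^T are used; X^T X is never formed.
    r = np.random.randn(X.shape[1])
    r /= np.linalg.norm(r)
    for _ in range(niter):
        ell = X @ r                  # loadings: ell = X r
        r = X.T @ ell                # unnormalized update: r = X^T ell
        r /= np.linalg.norm(r)
    return X @ r, r                  # (loadings, principal component)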
In general, you may find singular value decompositions more useful for this.
The definition of the singular value decomposition is
B = U Σ V'
This means that
B'B = V Σ² V'
As a result, your code can avoid the computation of B'B. More importantly, the singular values are always real, and thus you don't have to worry about whether B'B will be exactly symmetric.
Even better, Arpack.svds allows you to compute just the largest few singular values.
Here is a version of your code that uses SVD instead of eigen decomposition:
using LinearAlgebra: eigen
using Statistics: mean
using Arpack: svds
function find_principal_component(X)
    n = size(X, 1)
    # Center columns of X
    B = X .- mapslices(mean, X, dims=[1])
    # Decomposition of covariance matrix via SVD of the centered data
    svd, _ = svds(B / (n - 1), nsv=1)
    # Grab principal component and compute loading
    PC = svd.V[:, 1]
    return B * PC, PC
end
Running this on a large sparse matrix (100k x 1k, 1M non-zeros) gives this speed:
julia> @time find_principal_component(sprandn(100_000, 1_000, 0.01))
25.529426 seconds (18.45 k allocations: 3.015 GiB, 0.02% gc time)
([0.014242904195824286, 0.10635817357717596, -0.010142643763158442, ...])
and on a large non-sparse example (1M x 100 entries):
julia> @time find_principal_component(randn(1_000_000, 100))
4.922949 seconds (1.31 k allocations: 2.280 GiB, 0.02% gc time)
([-0.06629858174095249, 0.6996443876327108, -1.1783642870384952, ...])
Try using KrylovKit.jl. Specifically, eigsolve(X, howmany=1, which=:LM) will give you the eigenvalue with the largest magnitude and the associated eigenvector. Docs are at https://jutho.github.io/KrylovKit.jl/stable/man/eig/
I'm trying to implement a modified version of a Kalman filter, in which I have an n-dimensional normal prior on my vector of n hidden variables, and then a sequence of m independent data vectors which are distributed, with different but known covariance matrices, according to the hidden variables.
More formally, the generalised solution for the posterior distribution of the hidden variables conditional on all previously observed variables reduces to an iterative update of the posterior mean mu_i and covariance Delta_i. As implemented in the code below, the update at step i is
Delta_i = ( Sigma_i^-1 + (Lambda + Delta_{i-1})^-1 )^-1
mu_i    = Delta_i ( Sigma_i^-1 o_i + (Lambda + Delta_{i-1})^-1 mu_{i-1} )
with the prior covariance Delta used in place of Lambda + Delta_{i-1}, and the prior mean mu in place of mu_{i-1}, at the first step. So it is fairly "easy" to compute the final posterior distribution – all you have to do is apply these transformations iteratively, starting with your prior and using the known covariance matrices and observed values at each step i. A potential way to code this, assuming I have a list Sigma with the known covariance matrices, a matrix O with the observed value vectors, and the other variables stored in Lambda, Delta, and mu, is:
inv_Sigma = solve(Sigma[[1]])
inv_Lambda_plus_Delta = solve(Delta)
Delta_i = solve(inv_Sigma + inv_Lambda_plus_Delta)
mu_i = Delta_i %*% (inv_Sigma %*% O[, 1] + inv_Lambda_plus_Delta %*% mu)
if (n > 1) {
  for (i in seq(2, n)) {
    inv_Sigma = solve(Sigma[[i]])
    inv_Lambda_plus_Delta = solve(Lambda + Delta_i)
    Delta_i = solve(inv_Sigma + inv_Lambda_plus_Delta)
    mu_i = Delta_i %*% (inv_Sigma %*% O[, i] + inv_Lambda_plus_Delta %*% mu_i)
  }
}
However, as the dimension of the vectors grows, having to calculate these matrix inverses iteratively over and over and over gets increasingly computationally expensive, to the point where (for the problem I want to solve) it's prohibitive. Is there a more efficient way to do this calculation, either with some already-existing function or package or with a more efficient implementation of matrix inversion? Is a simulation-based solution better for my problem?
I'm reading Deep Learning by Goodfellow et al. and am trying to implement gradient descent as shown in Section 4.5 Example: Linear Least Squares. This is page 92 in the hard copy of the book.
The algorithm can be viewed in detail at https://www.deeplearningbook.org/contents/numerical.html with R implementation of linear least squares on page 94.
I've tried implementing in R, and the algorithm as implemented converges on a vector, but this vector does not seem to minimize the least squares function as required. Adding epsilon to the vector in question frequently produces a "minimum" less than the minimum outputted by my program.
options(digits = 15)
dim_square = 2 ### set dimension of square matrix
# Generate a random matrix A and a random vector b
set.seed(1234)
A = matrix(nrow = dim_square, ncol = dim_square, byrow = T, rlnorm(dim_square ^ 2)/10)
b = rep(rnorm(1), dim_square)
# having fixed A and b, select x randomly
x = rnorm(dim_square) # vector length of dim_square--supposed to be arbitrary
f = function(x, A, b){
  total_vector = A %*% x + b # this is the function that we want to minimize
  total = 0.5 * sum(abs(total_vector) ^ 2) # L2 norm squared
  return(total)
}
f(x,A,b)
# how close do we want to get?
epsilon = 0.1
delta = 0.01
value = (t(A) %*% A) %*% x - t(A) %*% b
L2_norm = (sum(abs(value) ^ 2)) ^ 0.5
steps = vector()
while(L2_norm > delta){
  x = x - epsilon * value
  value = (t(A) %*% A) %*% x - t(A) %*% b
  L2_norm = (sum(abs(value) ^ 2)) ^ 0.5
  print(L2_norm)
}
minimum = f(x, A, b)
minimum
minimum_minus = f(x - 0.5*epsilon, A, b)
minimum_minus # less than the minimum found by gradient descent! Why?
Following page 94 of the pdf at https://www.deeplearningbook.org/contents/numerical.html, I am trying to find the values of the vector x such that f(x) is minimized. However, as demonstrated by minimum and minimum_minus in my code, minimum is not the actual minimum, as it exceeds minimum_minus.
Any idea what the problem might be?
Original Problem
Finding the value of x that minimizes the quantity ||Ax - b|| is equivalent (when A is invertible) to finding the value of x such that Ax - b = 0, or x = A^-1 * b. This is because the L2 norm is the Euclidean norm, more commonly known as the distance formula. By definition, distance cannot be negative, making its minimum identically zero.
This algorithm, as implemented, actually comes quite close to estimating x. However, because of recursive subtraction and rounding, one quickly runs into the problem of underflow, resulting in the massive oscillation shown below:
[Plot: Value of the L2 norm as a function of step size]
[Output: Above algorithm vs. the solve function in R]
Above we have the results of A %*% x followed by A %*% min_x, with x estimated by the implemented algorithm and min_x estimated by the solve function in R.
The problem of underflow, well known to those familiar with numerical analysis, is probably best tackled by the programmers of the lower-level libraries that are best equipped to handle it.
To summarize, the algorithm appears to work as implemented. It is important to note, however, that not every function has a minimum (think of a straight line), and that this algorithm should only be expected to find a local, as opposed to a global, minimum.
I usually have trouble inputting functions in R, but they are always simple functions that I manage to work out. Now, however, I have a very complicated problem at hand that requires a function with unknowns, a summation, and a matrix, and I am clueless where to begin. (This is not my homework question, just me trying to work something out using a different method, hoping it works.)
So I want to input a function:
A = sum_{i=1}^{N} exp( sum_{j=1}^{M} Matrix_{ij} * unknown_j )
and then minimize the function:
B = log(A) - sum_{j=1}^{M} unknown_j * C_j
So my goal is to find the M unknown parameters unknown_j that minimize the function B.
But this is very complicated. You do not have to give me an answer directly. You can use another example to answer my question indirectly. Any help/tips/guidance is appreciated.
Let's see if we can break the problem into smaller things:
Let's name some variables first:
Let Q be a matrix with N rows and M columns
Let x be a (column) vector of length M (for a moment, think it's not an "unknown")
Let C be a (column) vector of length M
Notice that both A and B will be "scalars" (or, in R parlance, 1x1 vectors).
Hint: In R, you can do matrix multiplication using the %*% operator. See Quick-R: Matrix algebra.
Working on function A
Q %*% x is the product inside the sum which is inside the exponential function, so:
A <- function(Q, x) {
  y <- Q %*% x        # This will be a (column) vector of length `N`
  return(sum(exp(y))) # This will be a scalar (more precisely, a 1x1 vector)
}
Not so hard, is it?
Working on function B
B <- function(Q, C, x) {
  y <- sum(x * C) # or, since both x and C are column vectors:
                  # y <- t(x) %*% C
  a <- A(Q, x)
  return(log(a) - y)
}
So, that's how you would input the functions.
As for the optimization, I suggest you take a look at the optimx package; you'll need to supply starting values for the vector x.
I'm looking for a well-optimized function that accepts an n x n distance matrix and returns an n x k matrix with the indices of the k nearest neighbors of the ith datapoint in the ith row.
I find a gazillion different R packages that let you do KNN, but they all seem to include the distance computations along with the sorting algorithm within the same function. In particular, for most routines the main argument is the original data matrix, not a distance matrix. In my case, I'm using a nonstandard distance on mixed variable types, so I need to separate the sorting problem from the distance computations.
This is not exactly a daunting problem -- I obviously could just use the order function inside a loop to get what I want (see my solution below), but this is far from optimal. For example, the sort function with partial = 1:k when k is small (less than 11) goes much faster, but unfortunately returns only sorted values rather than the desired indices.
Try the FastKNN CRAN package (although it is not well documented). It offers a k.nearest.neighbors function to which an arbitrary distance matrix can be given. Below is an example that computes the matrix you need.
library(FastKNN)

# arbitrary data
train <- matrix(sample(c("a","b","c"), 12, replace = TRUE), ncol = 2) # n x 2
n <- dim(train)[1]
distMatrix <- matrix(runif(n^2, 0, 1), ncol = n) # n x n

# matrix of neighbours
k <- 3
nn <- matrix(0, n, k) # n x k
for (i in 1:n)
  nn[i, ] <- k.nearest.neighbors(i, distMatrix, k = k)
Note: you can always check the CRAN packages list (Ctrl+F for 'knn') for related functions:
https://cran.r-project.org/web/packages/available_packages_by_name.html
For the record (I won't mark this as the answer), here is a quick-and-dirty solution. Suppose sd.dist is the special distance matrix. Suppose k.for.nn is the number of nearest neighbors.
n = nrow(sd.dist)
knn.mat = matrix(0, ncol = k.for.nn, nrow = n)
knd.mat = knn.mat
for(i in 1:n){
  knn.mat[i,] = order(sd.dist[i,])[1:k.for.nn]
  knd.mat[i,] = sd.dist[i, knn.mat[i,]]
}
Now knn.mat is the matrix with the indices of the k nearest neighbors in each row, and for convenience knd.mat stores the corresponding distances.