Compute a discrete Fourier Transform matrix in Julia? - julia

In Julia, I would like to randomly generate a discrete Fourier transform matrix of size n by n. I am currently not sure how to do this. Does anyone know a way to do it in Julia?

As said in the comments, you can use the FFTW.jl package for this purpose:
julia> using FFTW
julia> n = 5;
julia> rnd = rand(1:100, n, n);
julia> fft(rnd)
5×5 Matrix{ComplexF64}:
1216.0+0.0im 65.8754+10.3181im 106.125+119.409im 106.125-119.409im 65.8754-10.3181im
160.529-95.3957im 177.376-31.8946im -28.6976+150.325im 52.8237+139.038im 82.2517-165.542im
-91.0288-22.1566im 136.676+28.1im -42.8763-97.2573im -97.7517+4.15021im 8.19756-13.5548im
-91.0288+22.1566im 8.19756+13.5548im -97.7517-4.15021im -42.8763+97.2573im 136.676-28.1im
160.529+95.3957im 82.2517+165.542im 52.8237-139.038im -28.6976-150.325im 177.376+31.8946im
And for a Real datatype, you can use the rfft function:
julia> let n = 5
           rnd = rand(n, n)
           rfft(rnd)
       end
3×5 Matrix{ComplexF64}:
10.54+0.0im 1.15104+0.522166im -0.449373-0.686863im -0.449373+0.686863im 1.15104-0.522166im
-1.2319+0.3485im -0.622914-0.649385im 1.39743-0.733653im 1.66696+0.694317im -1.59092-0.578805im
0.501205+0.962713im 0.056338-0.207403im 0.0156042-0.181913im -1.87067-1.66951im -0.672603-0.969665im
This might raise the question of why the result is a 3×5 matrix:
According to the official doc about the rfft function:
"Multidimensional FFT of a real array A, exploiting the fact that the transform has conjugate symmetry in order to save roughly half the computational time and storage costs compared with fft. If A has size (n_1, ..., n_d), the result has size (div(n_1,2)+1, ..., n_d)."
It's also possible to first create a random n×n matrix with eltype ComplexF64 and perform fft on it; for this, create the rnd variable as rand(ComplexF64, n, n) in the let block above and replace rfft with fft.
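For illustration, a minimal sketch of that complex-input variant (assuming FFTW is already loaded as above; the trailing semicolon just suppresses the printed matrix):
julia> let n = 5
           rnd = rand(ComplexF64, n, n)  # random complex n×n matrix
           fft(rnd)                      # full n×n complex FFT, no conjugate-symmetry shortcut
       end;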

Related

How can I find first principal component (and loadings) fast without using covariance matrix?

I have a matrix $X$ and I would like to find its first principal component and the corresponding loadings. I would like to do this without computing the covariance matrix of $X$. How can I do so?
This is the standard version, which uses the eigendecomposition of the covariance matrix.
using LinearAlgebra: eigen
using Statistics: mean

function find_principal_component(X)
    n = size(X, 1)
    B = X .- mapslices(mean, X, dims=[1])  # Center columns of X
    evalues, V = eigen(B'B / (n - 1))      # Eigendecomposition of covariance matrix
    PC = V[:, argmax(evalues)]             # Grab principal component
    return B * PC, PC                      # Compute loadings and return both
end
Alternatively, one could use the power method, which still uses the covariance matrix:
using LinearAlgebra: norm

function power_method(X, niter=50)
    pc = randn(size(X, 2))
    pc /= norm(pc)
    M = X'X
    for i in 1:niter
        pc = M * pc
        pc /= norm(pc)
    end
    return X * pc, pc
end
I would like something like the power method, but without needing to compute the covariance matrix, which can be quite costly.
Possible solution
I noticed something interesting. Let r_t be the principal component vector at iteration t. The idea of the power method is to start with a random r_t and multiply it by X'X many times to stretch it towards the principal component, in other words r_{t+1} = X'X r_t.
Once we have the principal component r_t, the loadings are simply l_t = X r_t. This means we can write r_{t+1} = X' l_t.
One could therefore start with r_t and l_t initialized randomly and then iterate
r_{t+1} = normalize(X' l_t)
l_{t+1} = X r_{t+1}
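A minimal sketch of this alternating update in Julia (illustrative code only, assuming X is already centered and ignoring convergence checks):
using LinearAlgebra: normalize

function power_method_no_cov(X, niter=50)
    r = normalize(randn(size(X, 2)))   # current estimate of the principal component
    l = X * r                          # corresponding loadings
    for _ in 1:niter
        r = normalize(X' * l)          # r_{t+1} = normalize(X' l_t)
        l = X * r                      # l_{t+1} = X r_{t+1}
    end
    return l, r
end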
In general, you may find singular value decompositions more useful for this.
The definition of the singular value decomposition is
B = U Σ V'
This means that
B'B = V Σ² V'
As a result, your code can avoid the computation of B'B. More importantly, the singular values are always real, so you don't have to worry about whether B'B will be exactly symmetric.
Even better, Arpack.svds allows you to compute just the largest few singular values.
Here is a version of your code that uses SVD instead of the eigendecomposition:
using Statistics: mean
using Arpack: svds

function find_principal_component(X)
    n = size(X, 1)
    # Center columns of X
    B = X .- mapslices(mean, X, dims=[1])
    # Leading singular triplet of the scaled, centered data matrix
    S, _ = svds(B / (n - 1), nsv=1)
    # Grab principal component and compute loadings
    PC = S.V[:, 1]
    return B * PC, PC
end
Running this on a large sparse matrix (100k x 1k, 1M non-zeros) gives this speed:
julia> @time find_principal_component(sprandn(100_000, 1_000, 0.01))
25.529426 seconds (18.45 k allocations: 3.015 GiB, 0.02% gc time)
([0.014242904195824286, 0.10635817357717596, -0.010142643763158442, ...])
and on a large non-sparse example (a 1M x 100 matrix):
julia> @time find_principal_component(randn(1_000_000, 100))
4.922949 seconds (1.31 k allocations: 2.280 GiB, 0.02% gc time)
([-0.06629858174095249, 0.6996443876327108, -1.1783642870384952, ...])
Try using KrylovKit.jl. Specifically, eigsolve(X, 1, :LM) will give you the eigenvalue with the largest magnitude and the associated eigenvector. Docs are at https://jutho.github.io/KrylovKit.jl/stable/man/eig/
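For the PCA use case, here is a rough sketch of how eigsolve could be wired up without forming the covariance matrix explicitly (the centering and the operator closure are illustrative assumptions, not part of the suggestion above):
using KrylovKit: eigsolve
using Statistics: mean

function first_pc_krylov(X)
    B = X .- mean(X, dims=1)                 # center the columns
    n = size(B, 1)
    covop(v) = (B' * (B * v)) ./ (n - 1)     # apply B'B/(n-1) without ever forming it
    vals, vecs, _ = eigsolve(covop, randn(size(B, 2)), 1, :LM)
    PC = vecs[1]
    return B * PC, PC
end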

Extremely Sparse Integer Quadratic Programming

I am working on an optimization problem with a huge number of variables (upwards of hundreds of millions). Each of them should be a 0-1 binary variable.
I can write it in the form (maximize x'Qx) where Q is positive semi-definite, and I am using Julia, so the package COSMO.jl seems like a great fit. However, there is a ton of sparsity in my problem. Q is 0 except on approximately sqrt(|Q|) entries, and for the constraints there are approximately sqrt(|Q|) linear constraints on the variables.
I can describe this system pretty easily using SparseArrays, but it appears the most natural way to input problems into COSMO uses standard arrays. Is there a way I can take advantage of the sparsity in this massive problem?
While there is no sample code in your question, perhaps this could help:
JuMP works with sparse arrays, so perhaps the easiest thing would be to just use them when constructing the goal function:
julia> using JuMP, SparseArrays, COSMO
julia> m = Model(with_optimizer(COSMO.Optimizer));
julia> q = sprand(Bool, 20, 20, 0.05) # for readability I use a binary q
20×20 SparseMatrixCSC{Bool, Int64} with 21 stored entries:
⠀⠀⠀⡔⠀⠀⠀⠀⡀⠀
⠀⠀⠂⠀⠠⠀⠀⠈⠑⠀
⠀⠀⠀⠀⠀⠤⠀⠀⠀⠀
⠀⢠⢀⠄⠆⠀⠂⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠄⠀⠀⠌
julia> @variable(m, x[1:20], Bin);
julia> x'*q*x
x[1]*x[14] + x[14]*x[3] + x[15]*x[8] + x[16]*x[5] + x[18]*x[4] + x[18]*x[13] + x[19]*x[14] + x[20]*x[11]
You can see that the equation gets correctly reduced.
Indeed you could check the performance with a very sparse q having 100M elements:
julia> q = sprand(10000, 10000, 0.000001)
10000×10000 SparseMatrixCSC{Float64, Int64} with 98 stored entries:
...
julia> @variable(m, z[1:10000], Bin);
julia> using BenchmarkTools
julia> @btime $z'*$q*$z
1.276 ms (51105 allocations: 3.95 MiB)
You can see that you are just getting the expected performance when constructing the goal function.
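For completeness, a hedged sketch of how such a goal function could then be attached to the model (continuing with the z and q from the last snippet; Max is taken from the question, and whether the chosen solver ultimately accepts the resulting binary quadratic problem is a separate modelling question):
julia> @objective(m, Max, z' * q * z);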

Construct a bijective function to map arbitrary integer from [1, n] to [1, n] randomly

I want to construct a bijective function f(k, n, seed) from [1,n] to [1,n], where 1<=k<=n and 1<=f(k, n, seed)<=n for each given seed and n. The function should return a value from a random permutation of 1,2,...,n. The randomness is decided by the seed; different seeds may correspond to different permutations. I want the time complexity of f(k, n, seed) to be O(1) for each 1<=k<=n and any given seed.
Does anyone know how I can construct such a function? The randomness is allowed to be pseudo-randomness. n can be very large (e.g. >= 1e8).
No matter how you do it, you will always have to store a list of numbers still available or numbers already used ... A simple possibility would be the following
const avail = [1, 2, 3, ..., n];   // numbers still available for the permutation
let random = new Random(seed);

function f(k, n) {
    let index = random.next(n - k);   // random index into the still-available prefix
    let result = avail[index];
    avail[index] = avail[n - k - 1];  // move the last still-available element into the freed slot
    return result;
}
The assumptions for this are the following
the array avail is 0-indexed
random.next(x) creates an random integer i with 0 <= i < x
the first k to call the function f with is 0
f is called with contiguous k = 0, 1, 2, ..., n-1
The principle works as follows:
avail holds all numbers still available for the permutation. When you take a random index, the element at that index is the next element of the permutation. Then, instead of slicing that element out of the array, which is quite expensive, you just replace the currently selected element with the last still-available element. In the next iteration you (virtually) decrease the size of the avail array by 1 by decreasing the upper limit for the random index by one.
I'm not sure how secure this random permutation is in terms of the distribution of the values; for instance, it may happen that a certain range of numbers is more likely to appear at the beginning of the permutation or at the end of the permutation.
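For reference, a rough Julia sketch of the same bookkeeping, as a closure that hands out the permutation values for k = 1, 2, ... in order (the names and the internal counter are illustrative choices, not a vetted implementation):
using Random

function make_f(n, seed)
    rng = MersenneTwister(seed)
    avail = collect(1:n)          # numbers still available for the permutation
    k = 0                         # how many values have been handed out so far
    return function ()
        k += 1
        idx = rand(rng, 1:n - k + 1)      # pick among the remaining n-k+1 entries
        result = avail[idx]
        avail[idx] = avail[n - k + 1]     # overwrite the slot with the last live entry
        return result
    end
end

# f = make_f(10, 42); [f() for _ in 1:10] is then a pseudo-random permutation of 1:10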
A simple, but not very 'random', approach would be to use the fact that, if a is relatively prime to n (i.e. they have no common factors), then
x -> (a*x + b) % n
is a permutation of {0,..,n-1} to {0,..,n-1}. To find the inverse of this map, you can use the extended Euclidean algorithm to find k and l so that
1 = gcd(a,n) = k*a+l*n
for then the inverse of the map above is
y -> (k*y + c) mod n
where c = -k*b mod n
So you could choose a to be a 'random' number in {0,..n-1} that is relatively prime to n, and b to be any number in {0,..n-1}
Note that you'll need to do this in 64 bit arithmetic to avoid overflow in computing a*x.
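A minimal Julia sketch of this affine-map construction (how a and b are derived from the seed is an illustrative assumption; widemul keeps the product in wide integer arithmetic, in the spirit of the overflow note above):
using Random

function f(k, n, seed)
    rng = MersenneTwister(seed)
    a = rand(rng, 1:n-1)               # multiplier; must be coprime to n
    while gcd(a, n) != 1
        a = rand(rng, 1:n-1)
    end
    b = rand(rng, 0:n-1)               # arbitrary offset
    # work in 0-based arithmetic, widen the product against overflow, then shift back to 1-based
    return Int(mod(widemul(a, k - 1) + b, n)) + 1
end

# For a fixed seed, [f(k, 10, 42) for k in 1:10] is the same permutation of 1:10 every time.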

Linear regression and matrix division in Julia

The well-known formula for OLS is (X'X)^(-1)X'y, where X is n×K and y is n×1.
One way to implement this in Julia is (X'*X)\X'*y.
But I found that X\y gives almost the same output, up to a tiny computational error.
Do they always compute the same thing (as long as n>K)? If so, which one should I use?
When X is square, there is a unique solution and LU-factorization (with pivoting) is a numerically-stable way to calculate this. That is the algorithm that backslash uses in this case.
When X is not square, which is the case in most regression problems, there is in general no exact solution, but there is a unique least squares solution. The QR factorization method for solving Xβ = y is a numerically stable method for generating the least squares solution, and in this case X\y uses the QR factorization and thus gives the OLS solution.
Notice the words numerically stable. While (X'*X)\X'*y will theoretically always give the same result as backslash, in practice backslash (with the correct factorization choice) will be more precise. This is because the factorization algorithms are implemented to be numerically stable. Because of the chance for floating point errors to accumulate when doing (X'*X)\X'*y, it's not recommended that you use this form for any real numerical work.
Instead, (X'*X)\X'*y is somewhat equivalent to an SVD factorization, which is the most numerically stable algorithm but also the most expensive (in fact, it's basically writing out the Moore-Penrose pseudoinverse, which is how an SVD factorization is used to solve a linear system). To do an SVD factorization directly, use svdfact(X) \ y on v0.6 or svd(X) \ y on v0.7. Doing this directly is more stable than (X'*X)\X'*y. Note that qrfact(X) \ y, or qr(X) \ y on v0.7, is the QR version. See the factorizations portion of the documentation for more details on all of the choices.
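As a small usage sketch on current Julia (1.x), with a random tall X purely for illustration (the factorization objects themselves support backslash):
using LinearAlgebra

X = randn(100, 5); y = randn(100)

β_qr  = qr(X) \ y        # least squares via QR, as described above
β_svd = svd(X) \ y       # least squares via SVD; most stable, most expensive
β_ne  = (X'X) \ (X'y)    # normal equations; cheap, but loses accuracy for ill-conditioned X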
Following the documentation, the result of X\y is (the documentation uses the notation \(A, B) rather than X and y):
For rectangular A the result is the minimum-norm least squares solution
This is your case, I guess, as you assume n>k (so your matrix is not square). So you can safely use X\y. Actually, it is better to use it than the standard formula, as you will get a result even if the rank of X is less than min(n,k), whereas the standard formula (X'*X)^(-1)*X'*y will fail or produce a numerically unstable result if X'*X is nearly singular.
If X were square (this is not your case), then the documentation gives a slightly different rule:
For input matrices A and B, the result X is such that A*X == B when A is square
This means that the \ algorithm would produce an error if your matrix were singular, or produce numerically unstable results if the matrix were nearly singular (in practice, the lu function that is called internally for general dense matrices will most often throw a SingularException).
If you want a catch-all solution (for square and non square matrices) then qr(X, Val(true)) \ y can be used.
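On newer Julia versions the pivoted form is spelled with ColumnNorm; a small hedged example of both spellings (assuming LinearAlgebra is loaded):
β = qr(X, Val(true)) \ y       # pivoted QR, the spelling used above
β = qr(X, ColumnNorm()) \ y    # equivalent spelling on Julia 1.7 and later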
Short answer: No, use the first one (the well-known one).
Long answer:
The linear regression model is Xβ = y, and it's easy to derive β = X \ y, which is your second method. However, most of the time (when X is not invertible), this is wrong, since you cannot simply left-multiply by X^-1. The correct way is to solve β = argmin{‖y - Xβ‖^2} instead, which leads to the first method.
To show they are not always the same, simply construct a case where X is not invertible:
julia> X = rand(10, 10)
10×10 Array{Float64,2}:
0.938995 0.32773 0.740556 0.300323 0.98479 0.48808 0.748006 0.798089 0.864154 0.869864
0.973832 0.99791 0.271083 0.841392 0.743448 0.0951434 0.0144092 0.785267 0.690008 0.494994
0.356408 0.312696 0.543927 0.951817 0.720187 0.434455 0.684884 0.72397 0.855516 0.120853
0.849494 0.989129 0.165215 0.76009 0.0206378 0.259737 0.967129 0.733793 0.798215 0.252723
0.364955 0.466796 0.227699 0.662857 0.259522 0.288773 0.691278 0.421251 0.593215 0.542583
0.126439 0.574307 0.577152 0.664301 0.60941 0.742335 0.459951 0.516649 0.732796 0.990509
0.430213 0.763126 0.737171 0.433884 0.85549 0.163837 0.997908 0.586575 0.257428 0.33239
0.28398 0.162054 0.481452 0.903363 0.780502 0.994575 0.131594 0.191499 0.702596 0.0967979
0.42463 0.142 0.705176 0.0481886 0.728082 0.709598 0.630134 0.139151 0.423227 0.942262
0.197805 0.526095 0.562136 0.648896 0.805806 0.168869 0.200355 0.557305 0.69514 0.227137
julia> y = rand(10, 1)
10×1 Array{Float64,2}:
0.7751785556478308
0.24185992335144801
0.5681904264574333
0.9134364924569847
0.20167825754443536
0.5776727022413637
0.05289808385359085
0.5841180308242171
0.2862768657856478
0.45152080383822746
julia> ((X' * X) ^ -1) * X' * y
10×1 Array{Float64,2}:
-0.3768345891121706
0.5900885565174501
-0.6326640292669291
-1.3922334538787071
0.06182039005215956
1.0342060710792016
0.045791973670925995
0.7237081408801955
1.4256831037950832
-0.6750765481219443
julia> X \ y
10×1 Array{Float64,2}:
-0.37683458911228906
0.5900885565176254
-0.6326640292676649
-1.3922334538790346
0.061820390052523294
1.0342060710793235
0.0457919736711274
0.7237081408802206
1.4256831037952566
-0.6750765481220102
julia> X[2, :] = X[1, :]
10-element Array{Float64,1}:
0.9389947787349187
0.3277301697101178
0.7405555185711721
0.30032257202572477
0.9847899425069042
0.48807977638742295
0.7480061513093117
0.79808859136911
0.8641540973071822
0.8698636291189576
julia> ((X' * X) ^ -1) * X' * y
10×1 Array{Float64,2}:
0.7456524759867015
0.06233042922132548
2.5600126098899256
0.3182206475232786
-2.003080524452619
0.272673133766017
-0.8550165639656011
0.40827327221785403
0.2994419115664999
-0.37876151249955264
julia> X \ y
10×1 Array{Float64,2}:
3.852193379477664e15
-2.097948470376586e15
9.077766998701864e15
5.112094484728637e15
-5.798433818338726e15
-2.0446050874148052e15
-3.300267174800096e15
2.990882423309131e14
-4.214829360472345e15
1.60123572911982e15

Vectorizing code to calculate (squared) Mahalanobis Distance

EDIT 2: this post seems to have been moved from CrossValidated to StackOverflow due to it being mostly about programming, which means my fancy MathJax doesn't work anymore. Hopefully this is still readable.
Say I want to calculate the squared Mahalanobis distance between two vectors x and y with covariance matrix S. This is a fairly simple function defined by
M2(x, y; S) = (x - y)^T * S^-1 * (x - y)
With python's numpy package I can do this as
# x, y = numpy.ndarray of shape (n,)
# s_inv = numpy.ndarray of shape (n, n)
diff = x - y
d2 = diff.T.dot(s_inv).dot(diff)
or in R as
diff <- x - y
d2 <- t(diff) %*% s_inv %*% diff
In my case, though, I am given
m by n matrix X
n-dimensional vector mu
n by n covariance matrix S
and want to find the m-dimensional vector d such that
d_i = M2(x_i, mu; S) ( i = 1 .. m )
where x_i is the ith row of X.
This is not difficult to accomplish using a simple loop in python:
d = numpy.zeros((m,))
for i in range(m):
    diff = x[i,:] - mu
    d[i] = diff.T.dot(s_inv).dot(diff)
Of course, the fact that the outer loop happens in python instead of in native code in the numpy library means it's not as fast as it could be. n and m are about 3-4 and several hundred thousand respectively, and I'm doing this somewhat often in an interactive program, so a speedup would be very useful.
Mathematically, the only way I've been able to formulate this using basic matrix operations is
d = diag( X' * S^-1 * X'^T )
where
x'_i = x_i - mu
which is simple to write a vectorized version of, but this is unfortunately outweighed by the inefficiency of calculating a 10-billion-plus element matrix and only taking the diagonal... I believe this operation should be easily expressible using Einstein notation, and thus could hopefully be evaluated quickly with numpy's einsum function, but I haven't even begun to figure out how that black magic works.
So, I would like to know: is there either a nicer way to formulate this operation mathematically (in terms of simple matrix operations), or could someone suggest some nice vectorized (python or R) code that does this efficiently?
BONUS QUESTION, for the brave
I don't actually want to do this once, I want to do it k ~ 100 times. Given:
m by n matrix X
k by n matrix U
Set of n by n covariance matrices each denoted S_j (j = 1..k)
Find the m by k matrix D such that
D_i,j = M(x_i, u_j; S_j)
Where i = 1..m, j = 1..k, x_i is the ith row of X and u_j is the jth row of U.
I.e., vectorize the following code:
# s_inv is (k x n x n) array containing "stacked" inverses
# of covariance matrices
d = numpy.zeros( (m, k) )
for j in range(k):
    for i in range(m):
        diff = x[i, :] - u[j, :]
        d[i, j] = diff.T.dot(s_inv[j, :, :]).dot(diff)
First off, it seems like maybe you're getting S and then inverting it. You shouldn't do that; it's slow and numerically inaccurate. Instead, you should get the Cholesky factor L of S so that S = L L^T; then
M^2(x, y; L L^T)
= (x - y)^T (L L^T)^-1 (x - y)
= (x - y)^T L^-T L^-1 (x - y)
= || L^-1 (x - y) ||^2,
and since L is triangular L^-1 (x - y) can be computed efficiently.
As it turns out, scipy.linalg.solve_triangular will happily do a bunch of these at once if you reshape it properly:
L = np.linalg.cholesky(S)
y = scipy.linalg.solve_triangular(L, (X - mu[np.newaxis]).T, lower=True)
d = np.einsum('ij,ij->j', y, y)
Breaking that down a bit, y[i, j] is the ith component of L^-1 (X_j - \mu). The einsum call then does
d_j = \sum_i y_{ij} y_{ij}
= \sum_i y_{ij}^2
= || y_j ||^2,
like we need.
Unfortunately, solve_triangular won't vectorize across its first argument, so you should probably just loop there. If k is only about 100, that's not going to be a significant issue.
If you are actually given S^-1 rather than S, then you can indeed do this with einsum more directly. Since S is quite small in your case, it's also possible that actually inverting the matrix and then doing this would be faster. As soon as n is a nontrivial size, though, you're throwing away a lot of numerical accuracy by doing this.
To figure out what to do with einsum, write everything in terms of components. I'll go straight to the bonus case, writing S_j^-1 = T_j for notational convenience:
D_{ij} = M^2(x_i, u_j; S_j)
= (x_i - u_j)^T T_j (x_i - u_j)
= \sum_k (x_i - u_j)_k ( T_j (x_i - u_j) )_k
= \sum_k (x_i - u_j)_k \sum_l (T_j)_{k l} (x_i - u_j)_l
= \sum_{k l} (X_{i k} - U_{j k}) (T_j)_{k l} (X_{i l} - U_{j l})
So, if we make arrays X of shape (m, n), U of shape (k, n), and T of shape (k, n, n), then we can write this as
diff = X[np.newaxis, :, :] - U[:, np.newaxis, :]
D = np.einsum('jik,jkl,jil->ij', diff, T, diff)
where diff[j, i, k] = X_[i, k] - U[j, k].
Dougal nailed this one with an excellent and detailed answer, but I thought I'd share a small modification that I found increases efficiency, in case anyone else is trying to implement this. Straight to the point:
Dougal's method was as follows:
def mahalanobis2(X, mu, sigma):
    L = np.linalg.cholesky(sigma)
    y = scipy.linalg.solve_triangular(L, (X - mu[np.newaxis,:]).T, lower=True)
    return np.einsum('ij,ij->j', y, y)
A mathematically equivalent variant I tried is
def mahalanobis2_2(X, mu, sigma):
    # Cholesky decomposition of inverse of covariance matrix
    # (Doing this in either order should be equivalent)
    linv = np.linalg.cholesky(np.linalg.inv(sigma))
    # Just do regular matrix multiplication with this matrix
    y = (X - mu[np.newaxis,:]).dot(linv)
    # Same as above, but note different index at end because the matrix
    # y is transposed here compared to above
    return np.einsum('ij,ij->i', y, y)
Ran both versions head-to-head 20x using identical random inputs and recorded the times (in milliseconds). For X as a 1,000,000 x 3 matrix (mu and sigma 3 and 3x3) I get:
Method 1 (min/max/avg): 30/62/49
Method 2 (min/max/avg): 30/47/37
That's about a 30% speedup for the 2nd version. I'm mostly going to be running this in 3 or 4 dimensions but to see how it scaled I tried X as 1,000,000 x 100 and got:
Method 1 (min/max/avg): 970/1134/1043
Method 2 (min/max/avg): 776/907/837
which is about the same improvement.
I mentioned this in a comment on Dougal's answer but adding here for additional visibility:
The first pair of methods above take a single center point mu and covariance matrix sigma and calculate the squared Mahalanobis distance to each row of X. My bonus question was to do this multiple times with many sets of mu and sigma and output a two-dimensional matrix. The set of methods above can be used to accomplish this with a simple for loop, but Dougal also posted a more clever example using einsum.
I decided to compare these methods with each other by using them to solve the following problem: Given k d-dimensional normal distributions (with centers stored in rows of k by d matrix U and covariance matrices in the last two dimensions of the k by d by d array S), find the density at the n points stored in rows of the n by d matrix X.
The density of a multivariate normal distribution is a function of the squared Mahalanobis distance of the point to the mean. Scipy has an implementation of this as scipy.stats.multivariate_normal.pdf to use as a reference. I ran all three methods against each other 10x using identical random parameters each time, with d=3, k=96, n=5e5. Here are the results, in points/sec:
[Method]: (min/max/avg)
Scipy: 1.18e5/1.29e5/1.22e5
Fancy 1: 1.41e5/1.53e5/1.48e5
Fancy 2: 8.69e4/9.73e4/9.03e4
Fancy 2 (cheating version): 8.61e4/9.88e4/9.04e4
where Fancy 1 is the better of the two methods above and Fancy 2 is Dougal's 2nd solution. Since Fancy 2 needs to calculate the inverses of all the covariance matrices, I also tried a "cheating version" where it was passed these as a parameter, but it looks like that didn't make a difference. I had planned on including the non-vectorized implementation, but that was so slow it would have taken all day.
What we can take away from this is that using Dougal's first method is about 20% faster than however Scipy does it. Unfortunately despite its cleverness the 2nd method is only about 60% as fast as the first. There are probably some other optimizations that can be done but this is already fast enough for me.
I also tested how this scaled with higher dimensionality. With d=100, k=96, n=1e4:
Scipy: 7.81e3/7.91e3/7.86e3
Fancy 1: 1.03e4/1.15e4/1.08e4
Fancy 2: 3.75e3/4.10e3/3.95e3
Fancy 2 (cheating version): 3.58e3/4.09e3/3.85e3
Fancy 1 seems to have an even bigger advantage this time. Also worth noting that Scipy threw a LinAlgError 8/10 times, probably because some of my randomly-generated 100x100 covariance matrices were close to singular (which may mean that the other two methods are not as numerically stable, I did not actually check the results).
