Fast matrix determinant calculation with specific structure

I have a k x k square matrix with diagonal elements x > 0 and all off-diagonal elements y > 0. The values of k, x, y are all subject to change.
Now I need the determinant of this matrix. I don't know of a closed-form formula for it, but is there a way to calculate it faster than the commonly used LU decomposition, which takes O(k^3) time (considering the special structure)?
(I am using R as my coding language, and the built-in det() function in R uses the LU decomposition.)
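Worth noting: this particular structure does admit a closed form. The matrix is (x - y)*I + y*J, where J is the all-ones matrix; J has eigenvalues k (once) and 0 (k - 1 times), so the determinant is (x - y)^(k - 1) * (x + (k - 1)*y), computable in O(1). A minimal R check against det() (fast_det is just an illustrative name):

    # det of the k x k matrix with x on the diagonal and y elsewhere:
    # eigenvalues are x + (k-1)*y (once) and x - y (k-1 times).
    fast_det <- function(k, x, y) (x - y)^(k - 1) * (x + (k - 1) * y)

    k <- 6; x <- 2.5; y <- 0.7
    M <- matrix(y, k, k); diag(M) <- x   # build the structured matrix
    all.equal(det(M), fast_det(k, x, y)) # TRUE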

Related

Diagonalize a matrix to compute matrix power?

I am trying to calculate P^100 where P is my transition matrix. I want to do this by diagonalizing P so that we have P = Q*D*Q^-1.
Of course, if I can get P to be of this form, then I can easily calculate P^100 = Q*D^100*Q^-1 (where * denotes matrix multiplication).
I discovered that if you just do P^5, all you get in return is a matrix where each entry of P has been raised to the 5th power, rather than the fifth matrix power (P %*% P %*% P %*% P %*% P).
I found a question on here that asks how to check if a matrix is diagonalizable but not how to explicitly construct the diagonalization of a matrix. In MATLAB it's super easy but well, I'm using R and not MATLAB.
The eigen() function will compute eigenvalues and eigenvectors for you (the matrix of eigenvectors is Q in your expression, diag() of the eigenvalues is D).
You could also use the %^% operator in the expm package, or functions from other packages described in the answers to this question.
The advantages of using someone else's code are that it's already been tested and debugged, and may use faster or more robust algorithms (e.g., it's often more efficient to compute the matrix power by composing powers of two of the matrix rather than doing the eigenvector computations). The advantage of writing your own method is that you'll understand it better.
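A minimal sketch of the eigen()/diag() route; the small transition matrix P below is a made-up example:

    # Diagonalize P = Q D Q^-1, then P^100 = Q D^100 Q^-1.
    P <- matrix(c(0.9, 0.1,
                  0.2, 0.8), nrow = 2, byrow = TRUE)  # example transition matrix
    e <- eigen(P)
    Q <- e$vectors
    D100 <- diag(e$values^100)      # powering D is elementwise on the diagonal
    P100 <- Q %*% D100 %*% solve(Q)

    # Cross-check with the expm package's matrix-power operator:
    # library(expm); all.equal(P100, P %^% 100)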

optimize matrix of variables in R

I have a square matrix of values, Q, and a same-sized diagonal matrix of variables, W, and I want to take exp(W*Q) (where * here is matrix multiplication, of course). This effectively scales the ith row of Q by the [i,i] element of W. My objective function will be to minimize (c - exp(W*Q)[y,z])^2, where c is some constant I have and [y,z] means I'm choosing the [y,z] element of the matrix for some particular y and z.
I'm trying to use the optim() function in R, but to do so I need to create the diagonal matrix of variables W. Is it possible to do this in R? Or alternatively, is there another function I can use to accomplish this?
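One way to do this is to optimize over the diagonal entries as a plain vector and rebuild W with diag() inside the objective. A minimal sketch with made-up Q, y, z, and constant (c is renamed c_target because c() is a base R function), reading exp() as the elementwise exponential:

    set.seed(1)
    k <- 3
    Q <- matrix(runif(k * k), k, k)        # stand-in for the question's Q
    y <- 2; z <- 3                         # stand-in indices
    c_target <- 0.5                        # stand-in for the constant c
    obj <- function(w) {
      W <- diag(w, nrow = k)               # diagonal matrix of variables
      (c_target - exp(W %*% Q)[y, z])^2    # elementwise exp of the matrix product
    }
    fit <- optim(par = rep(1, k), fn = obj)
    W_hat <- diag(fit$par, nrow = k)       # the fitted diagonal matrix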

Computational complexity of n-dimensional Discrete Fourier Transform?

The computational complexity of n-dimensional Fast Fourier Transform was discussed here and (as the former's duplicate) here.
The computational complexity of a 1-dimensional Discrete Fourier Transform is O(N^2), where N is the data set size.
Could you please tell us what the computational complexity of the n-dimensional Discrete Fourier Transform is, with {N1, N2, ..., Nn} points along the n dimensions?
The FFT itself is also a DFT (with some constraints). I'll assume that you mean the naive summation method.
Re-writing the 1D DFT in integral form (the continuous version):

    f~(k) = Integral f(x) * e^(-2*pi*i*k*x) dx

A particular value of f~(k) is equivalent to a single element in your DFT array. When the integral is discretized (i.e. converted to a finite sum), there are N terms in the sum. This gives O(N) for each element and hence O(N^2) overall.
In case you were wondering, writing it in this form allows a more compact notation for a general n-D DFT:

    f~(k) = Integral f(x) * e^(-2*pi*i*(k . x)) d^n x

where k and x are now n-dimensional vectors and k . x is their dot product. When this is discretized, we can see that for each element there are n nested sums, each over one of the dimensions and of length N, i.e. N^n terms per element. There are N^n elements to compute (one per point of the input "array"), so the complexity is:

    O(N^n * N^n) = O(N^(2n))

With {N1, N2, ..., Nn} points along the dimensions, the same argument gives O((N1*N2*...*Nn)^2).
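For concreteness, here is a naive O(N^2) 1-D DFT sketched in R (the language used elsewhere on this page), checked against the built-in fft(); the sum inside sapply is the length-N sum computed once per output element:

    naive_dft <- function(x) {
      N <- length(x)
      k <- 0:(N - 1)
      # one length-N sum per output element kk: O(N) each, O(N^2) total
      sapply(k, function(kk) sum(x * exp(-2i * pi * kk * k / N)))
    }

    x <- rnorm(8)
    all.equal(naive_dft(x), fft(x))  # TRUE (up to floating-point error)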

Perform sum of vectors in CUDA/thrust

So I'm trying to implement stochastic gradient descent in CUDA, and my idea is to parallelize it similarly to the way described in the paper Optimal Distributed Online Prediction Using Mini-Batches.
That implementation is aimed at MapReduce distributed environments so I'm not sure if it's optimal when using GPUs.
In short the idea is: at each iteration, calculate the error gradients for each data point in a batch (map), take their average by sum/reducing the gradients, and finally perform the gradient step updating the weights according to the average gradient. The next iteration starts with the updated weights.
The thrust library lets me perform a reduction on a vector, for example summing all of its elements.
My question is: How can I sum/reduce an array of vectors in CUDA/thrust?
The input would be an array of vectors and the output would be a vector that is the sum of all the vectors in the array (or, ideally, their average).
Converting my comment into this answer:
Let's say each vector has length m and the array has size n.
An "array of vectors" is then the same as a matrix of size n x m, and the sum you want is the elementwise sum of its n rows, i.e. the vector of its m column sums.
If you change your storage format from this "array of vectors" to a single vector of size m * n in which the n values of each component are stored contiguously (in other words, the transposed m x n matrix), you can use thrust::reduce_by_key to sum each row of that matrix separately; the m row sums are exactly the components of the result.
The sum_rows example shows how to do this.
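Thrust specifics aside, the index bookkeeping can be sanity-checked in R (the language used elsewhere on this page); the keyed row reduction on the GPU computes the same thing as a row sum of the transpose:

    # n = 4 vectors of length m = 3, stored as the rows of an n x m matrix
    n <- 4; m <- 3
    A <- matrix(seq_len(n * m), nrow = n, byrow = TRUE)

    colSums(A)      # elementwise sum of the n vectors (length-m result)
    rowSums(t(A))   # identical: the m row sums of the transposed m x n matrix,
                    # which is what the reduce_by_key pass computes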

Apply a transformation matrix over time

I have an initial frame and a bounding box around some information. I have a transformation matrix T that I want to use to transform this bounding box.
I could easily apply the transformation and draw it in the output frame, but I would like to apply the transformation over a sequence of x frames. Can anyone suggest a way to do this?
Building on @egor-n's comment, you could compute R = T^{1/x} and obtain your bounding box in frame i+1 from the one in frame i by
B_{i+1} = R * B_{i}
with B_{0} your initial bounding box. Depending on the precise form of T, we could discuss how to compute R.
There are methods for affine transforms: decompose the affine transform matrix into a product of translation, rotation, scaling, and shear matrices, and linearly interpolate the parameters of each matrix (for example, the rotation angle for R, and so on). Example
But for a homography matrix there is no single solution, as described here, so one can only find some "good" approximation (see the rather involved math in that article). Possibly, some restrictions on the allowed transforms could simplify the problem.
Here's something a little different you could try. Let M be the matrix representing the final transformation. You could try interpolating between I (the identity matrix, with 1's on the diagonal and 0's elsewhere) and M using the formula
M(t) = exp(t * ln(M))
where t is time from 0 to 1, M(0) = I, M(1) = M, exp is the exponential function for matrices given by the usual infinite series, and ln is the similar natural logarithm function for matrices given by the usual infinite series.
The correctness of the formula depends on the type of transformation represented by M and the type of transformations allowed in intermediate steps. The formula should work for rigid motions. For other types of transformations, various bad things might happen, including divergence of the logarithm series. Other formulas can be used in other cases; let me know if you're using transformations other than rigid motions and I can give some other formulas.
The exponential and logarithm functions may be available in a matrix library. If not, they can be easily implemented as partial sums of infinite series.
The above method should give the same result as some quaternion methods in the case of rotations. The quaternion methods are probably faster when they're available.
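A minimal sketch of this interpolation in R, using expm() and logm() from the expm package, on a made-up rigid motion (rotation plus translation in homogeneous coordinates):

    library(expm)

    # Rigid motion: rotate by 60 degrees, translate by (2, 1), in homogeneous form.
    th <- pi / 3
    M <- rbind(c(cos(th), -sin(th), 2),
               c(sin(th),  cos(th), 1),
               c(0,        0,       1))

    L <- logm(M)                     # matrix logarithm
    M_at <- function(t) expm(t * L)  # M(0) = I, M(1) = M

    all.equal(M_at(1), M)            # TRUE
    half <- M_at(0.5)                # the "halfway" transformation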
UPDATE
I see you mention elsewhere that your transformation is a homography (perspectivity), so the method I suggested above for rigid motions won't work. Instead you could use a different, but related method outlined in ftp://ftp.cs.huji.ac.il/users/aristo/papers/SYGRAPH2005/sig05.pdf. It goes as follows: represent your transformation by a matrix in one higher dimension. Scale the matrix so that its determinant is equal to 1. Call the resulting matrix G. You want to interpolate from the identity matrix I to G, going through perspectivities.
In what follows, let M^T be the transpose of M. Let the function expp be defined by
expp(M) = exp(-M^T) * exp(M+M^T)
You need to find the inverse of that function at G; in other words you need to solve the equation
expp(M) = G
where G is your transformation matrix with determinant 1. Call the result M = logp(G). That equation can be solved by standard numerical techniques, or you can use Matlab or other math software. It's somewhat time-consuming and complicated to do, but you only have to do it once.
Then you calculate the series of transformations by
G(t) = expp(t * logp(G))
where t varies from 0 to 1 in steps of 1/k, where k is the number of frames you want.
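A rough R sketch of the expp/logp machinery, again with the expm package. The inverse is found here by generic numerical minimization, one of the "standard numerical techniques" mentioned above; this is only a sketch, not a robust solver, and convergence is not guaranteed for every G:

    library(expm)

    expp <- function(M) expm(-t(M)) %*% expm(M + t(M))

    # Solve expp(M) = G for M by minimizing the Frobenius norm of the residual.
    logp <- function(G) {
      k <- nrow(G)
      obj <- function(v) norm(expp(matrix(v, k, k)) - G, "F")
      v <- optim(rep(0, k * k), obj, method = "BFGS",
                 control = list(maxit = 10000, reltol = 1e-12))$par
      matrix(v, k, k)
    }

    # Interpolate from I to G (G already scaled to determinant 1, as described).
    interpolate <- function(G, k_frames) {
      Lp <- logp(G)                  # expensive, but done only once
      lapply(seq(0, 1, length.out = k_frames + 1),
             function(t) expp(t * Lp))
    }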
You could parameterize the transform over some number of frames by introducing a variable that sweeps from 0 to 1 across the frames.
Let t be the frame number
Let T be the total number of frames
Let P be the original location and orientation of the object
Let theta be the total rotation angle
and let the total translation be the vector [x,y]'
The transform in 2D becomes:
T(P|t) = R(t)*P + (t*[x,y]')/T

where

    R(t) = [ cos(theta*t/T)   -sin(theta*t/T) ]
           [ sin(theta*t/T)    cos(theta*t/T) ]
So at frame t_n you apply the transform T(P|t_n) to the position of the object at time t_0 = 0; at t = 0 the transform reduces to the identity (no transform).
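A small R sketch of this parameterization, with made-up values for theta, the translation, and the frame count (T is renamed Tn because T is TRUE in R):

    theta <- pi / 2          # total rotation angle
    v     <- c(2, 1)         # total translation [x, y]'
    Tn    <- 10              # total number of frames (T in the formulas above)

    # Position of point P at frame t.
    transform_at <- function(P, t) {
      a <- theta * t / Tn
      R <- rbind(c(cos(a), -sin(a)),
                 c(sin(a),  cos(a)))
      R %*% P + (t * v) / Tn
    }

    P0 <- c(1, 0)                             # original position
    frames <- lapply(0:Tn, function(t) transform_at(P0, t))
    frames[[1]]                               # t = 0: identity, returns P0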
