Apply a transformation matrix over time - math

I have an initial frame and a bounding box around some information. I have a transformation matrix T which I want to use to transform this bounding box.
I could easily apply the transformation and draw it in the output frame, but I would like to apply the transformation over a sequence of x frames. Can anyone suggest a way to do this?
Aly

Building on @egor-n's comment, you could compute R = T^{1/x} and compute your bounding box on frame i+1 from the one at frame i by
B_{i+1} = R * B_{i}
with B_{0} your initial bounding box. How to compute R depends on the precise form of T; a sketch follows below.
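For instance, a minimal sketch with NumPy/SciPy, assuming T is invertible so a matrix x-th root exists (the matrix values, box corners and frame count are made up for illustration):
import numpy as np
from scipy.linalg import fractional_matrix_power

T = np.array([[0.9, -0.1, 5.0],    # hypothetical 3x3 homogeneous transform
              [0.1,  0.9, 3.0],
              [0.0,  0.0, 1.0]])
x = 10                                     # number of frames to spread T over
R = fractional_matrix_power(T, 1.0 / x)    # R = T^{1/x}; may be complex for some T

B = np.array([[0.0, 100.0, 100.0,  0.0],   # bounding-box corners as homogeneous
              [0.0,   0.0,  50.0, 50.0],   # column vectors
              [1.0,   1.0,   1.0,  1.0]])
for i in range(x):
    B = R @ B                              # B_{i+1} = R * B_{i}
# after x steps, B matches T applied to the original box, up to rounding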

There are methods for affine transforms: decompose the affine transform matrix into a product of translation, rotation, scaling and shear matrices, then linearly interpolate the parameters of each factor (for example, the rotation angle for the rotation matrix, and so on). Example
But for a homography matrix there is no single solution, as described here, so one can only find some "good" approximation (see the rather involved math in that article). Possibly some restrictions on the allowed transforms could simplify the problem.
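For the affine case, a rough sketch of the decompose-interpolate-recompose idea (the convention M = rotation * scale * shear used below is one of several possible; T_affine and x are made-up example inputs):
import numpy as np

def decompose(A):                        # A: 3x3 affine matrix
    a, b, tx = A[0]; c, d, ty = A[1]
    sx = np.hypot(a, c)                  # scale along x
    theta = np.arctan2(c, a)             # rotation angle
    shear = (a * b + c * d) / sx**2
    sy = (a * d - b * c) / sx            # scale along y (determinant / sx)
    return np.array([tx, ty, theta, sx, sy, shear])

def compose(p):
    tx, ty, theta, sx, sy, shear = p
    R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
    K = np.array([[sx, sx * shear], [0.0, sy]])    # scale times shear
    A = np.eye(3); A[:2, :2] = R @ K; A[0, 2] = tx; A[1, 2] = ty
    return A

T_affine = np.array([[0.8, -0.6, 5.0], [0.6, 0.8, 3.0], [0.0, 0.0, 1.0]])
x = 10
p0, p1 = decompose(np.eye(3)), decompose(T_affine)
frames = [compose(p0 + (i / x) * (p1 - p0)) for i in range(x + 1)]
Interpolating the parameters rather than the raw matrix entries keeps the intermediate frames valid rotation/scale/shear combinations instead of degenerate blends.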

Here's something a little different you could try. Let M be the matrix representing the final transformation. You could try interpolating between I (the identity matrix, with 1's on the diagonal and 0's elsewhere) and M using the formula
M(t) = exp(t * ln(M))
where t is time from 0 to 1, M(0) = I, M(1) = M, exp is the exponential function for matrices given by the usual infinite series, and ln is the similar natural logarithm function for matrices given by the usual infinite series.
The correctness of the formula depends on the type of transformation represented by M and the type of transformations allowed in intermediate steps. The formula should work for rigid motions. For other types of transformations, various bad things might happen, including divergence of the logarithm series. Other formulas can be used in other cases; let me know if you're using transformations other than rigid motions and I can give some other formulas.
The exponential and logarithm functions may be available in a matrix library. If not, they can be easily implemented as partial sums of infinite series.
The above method should give the same result as some quaternion methods in the case of rotations. The quaternion methods are probably faster when they're available.
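As a sketch with SciPy, which ships matrix exp/log functions (the 90-degree rigid motion below is a made-up example):
import numpy as np
from scipy.linalg import expm, logm

M = np.array([[0.0, -1.0, 4.0],    # hypothetical rigid motion: rotate 90 degrees,
              [1.0,  0.0, 2.0],    # then translate by (4, 2), in homogeneous form
              [0.0,  0.0, 1.0]])
L = logm(M)                        # ln(M) via the matrix logarithm
steps = [expm(t * L) for t in np.linspace(0.0, 1.0, 11)]
# steps[0] ~ identity, steps[-1] ~ M; apply each to your points per frame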
UPDATE
I see you mention elsewhere that your transformation is a homography (perspectivity), so the method I suggested above for rigid motions won't work. Instead you could use a different, but related method outlined in ftp://ftp.cs.huji.ac.il/users/aristo/papers/SYGRAPH2005/sig05.pdf. It goes as follows: represent your transformation by a matrix in one higher dimension. Scale the matrix so that its determinant is equal to 1. Call the resulting matrix G. You want to interpolate from the identity matrix I to G, going through perspectivities.
In what follows, let M^T be the transpose of M. Let the function expp be defined by
expp(M) = exp(-M^T) * exp(M+M^T)
You need to find the inverse of that function at G; in other words you need to solve the equation
expp(M) = G
where G is your transformation matrix with determinant 1. Call the result M = logp(G). That equation can be solved by standard numerical techniques, or you can use Matlab or other math software. It's somewhat time-consuming and complicated to do, but you only have to do it once.
Then you calculate the series of transformations by
G(t) = expp(t * logp(G))
where t varies from 0 to 1 in steps of 1/k, where k is the number of frames you want.
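A sketch of how one might carry this out numerically in Python (the homography H is made up; solving expp(M) = G here uses generic least-squares root finding as a stand-in for the "standard numerical techniques" mentioned above):
import numpy as np
from scipy.linalg import expm
from scipy.optimize import least_squares

def expp(M):
    return expm(-M.T) @ expm(M + M.T)

def logp(G):
    residual = lambda m: (expp(m.reshape(3, 3)) - G).ravel()
    return least_squares(residual, np.zeros(9), xtol=1e-14).x.reshape(3, 3)

H = np.array([[1.2,   0.1, 3.0],   # hypothetical homography
              [0.0,   1.0, 1.0],
              [0.001, 0.0, 1.0]])
G = H / np.cbrt(np.linalg.det(H))  # rescale so det(G) = 1
M = logp(G)                        # slow and done numerically, but only once
k = 10                             # number of frames
frames = [expp(t * M) for t in np.linspace(0.0, 1.0, k + 1)]
# frames[0] ~ I, frames[-1] ~ G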

You could parameterize the transform over some number of frames by introducing a scalar parameter t/T that runs from 0 to 1.
Let t be the frame number
Let T be the total number of frames
Let P be the original location and orientation of the object
Let theta be the total rotation angle
and translation be the vector [x,y]'
The transform in 2D becomes:
T(P|t) = R(t)*P + (t/T)*[x,y]'
where
R(t) = [[cos(theta*t/T), -sin(theta*t/T)],
        [sin(theta*t/T),  cos(theta*t/T)]]
So at frame t you apply the transform T(P|t) to the position of the object at time t_0 = 0; at t = 0 this is the identity (no transform), and at t = T it applies the full rotation and translation.
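A minimal sketch of this parameterization (all the numbers are placeholders; the total frame count is written T_total to avoid clashing with the transform's name):
import numpy as np

T_total = 30                        # total number of frames
theta = np.pi / 2                   # total rotation angle
xy = np.array([40.0, 10.0])         # total translation [x, y]'
P = np.array([12.0, 7.0])           # original position of the object

def transform(P, t):
    a = theta * t / T_total
    R = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
    return R @ P + (t / T_total) * xy

positions = [transform(P, t) for t in range(T_total + 1)]
# positions[0] is P itself (no transform); positions[-1] is fully transformed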

Related

Fast matrix determinant calculation with specific structure

I have a k*k square matrix with diagonal elements x>0 and all other elements y>0. The values of k, x, y are all subject to change.
Now I need the determinant of this matrix. I know there won't be a closed-form formula for it, but is there a way to calculate it faster than the commonly used LU decomposition, which takes O(k^3) time (considering its special structure)?
(I am using R as my coding language, and the built-in det() function in R uses the LU-decomposition.)
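In fact, this particular structure does admit a closed form: the matrix is (x - y)*I + y*J with J the all-ones matrix, so its eigenvalues are x + (k-1)*y (once) and x - y (with multiplicity k-1), giving det = (x - y)^(k-1) * (x + (k-1)*y) in O(1) time. A quick check of that formula, shown here in Python/NumPy with made-up values (the equivalent one-liner works just as well in R):
import numpy as np

k, x, y = 6, 3.5, 1.25                        # example values
M = (x - y) * np.eye(k) + y * np.ones((k, k))
print(np.linalg.det(M))                       # direct computation, O(k^3)
print((x - y)**(k - 1) * (x + (k - 1) * y))   # closed form, O(1); same value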

Is this dct (FFTW.jl) behavior in julia normal?

I'm trying to do some Compressed Sensing exercises in Julia, but I've realized that the discrete cosine transform (using FFTW.jl) of an identity matrix doesn't look like the result from other languages (e.g. Mathematica and Matlab).
For example in Julia
using Plots, FFTW, LinearAlgebra
n = 100
Psi = dct(Matrix(1.0I,n,n))
heatmap(Psi)
results in this matrix (which is essentially an identity matrix with some noise)
But in Matlab
imagesc(dct(eye(100,100),'Type',2))
this is the result (as expected)
Finally in Mathematica
MatrixPlot[N[FourierDCTMatrix[100, 2]], PlotLegends -> Automatic]
returns this
Why does Julia behave so differently?
And is this normal?
Matlab (and I guess Mathematica) does the dct of each column of your matrix, whereas FFTW performs a 2-dimensional dct when the input is two-dimensional. The same happens for fft.
If you want column-wise transformation, you can specify the dimension:
Psi1 = dct(Matrix(1.0I,n,n), 1); # along first dimension
heatmap(Psi1)
Notice that the direction of the y-axis is opposite for Plots.jl relative to Matlab.
(BTW, you can also just write I(n) or 1.0I(n) instead of Matrix(1.0I,n,n))
This is something that sets Julia apart from some other languages: it tends to treat matrices as matrices, not just as a collection of vectors or a bunch of scalars. For example, exp(M) and log(M) for matrices do not operate elementwise, but calculate the matrix exponential and matrix logarithm according to their linear-algebra definitions.
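For comparison, Python draws the same line explicitly: NumPy's exp is elementwise, while the matrix version lives in scipy.linalg:
import numpy as np
from scipy.linalg import expm

M = np.array([[0.0, -1.0], [1.0, 0.0]])
np.exp(M)    # elementwise: exp applied to each entry
expm(M)      # matrix exponential: here a rotation by 1 radian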

Unique orthogonal component of a vector with respect to a set of vectors

I would like to obtain the orthogonal component of a vector v with respect to a set of vectors {u_1, u_2, ... u_N}. I can do this via a Gram-Schmidt style approach:
v_orth = v
for i = 1:N
    v_orth = v_orth - proj(v_orth, u_i)   # proj(a, b) = (dot(a, b) / dot(b, b)) * b
end
return v_orth / norm(v_orth)
However, the output of this algorithm is dependent on the order of the u_i. Is there a method that is independent of the order of the u_i? For example, one that returns the unique v_orth with minimal deviation from v?
Your procedure won't necessarily produce a vector that is orthogonal to all the us.
For example, suppose there are just two us and they aren't orthogonal. After the first step you will have a vector orthogonal to u1, but the second step will (unless by chance the vector is already orthogonal to u2) subtract a multiple of u2 from it, and since u1.u2 != 0 the result will no longer be orthogonal to u1.
You need to first find a set of vectors w_1..w_M, say (e.g. by modified Gram-Schmidt), that are orthogonal and span the same space as the us. Then if you use your procedure with the ws instead of the us you will compute a vector that is orthogonal to all the ws, and hence to all the us. Moreover (rounding errors apart) the result will not depend on the order of the ws.
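A sketch of that two-stage procedure with NumPy, using a QR factorization to get the orthonormal ws (function and example values are made up):
import numpy as np

def orth_component(v, U):
    # U: columns are u_1..u_N (not necessarily orthogonal)
    Q, _ = np.linalg.qr(U)          # orthonormal basis spanning the u_i
    w = v - Q @ (Q.T @ v)           # remove the projection onto that span
    return w / np.linalg.norm(w)

rng = np.random.default_rng(0)
U = rng.standard_normal((5, 3))
v = rng.standard_normal(5)
print(U.T @ orth_component(v, U))   # ~ zeros: orthogonal to every u_i
Because Q spans the same space however the u_i are ordered, the result is order-independent, and it is the unit vector closest in direction to v among all vectors orthogonal to the u_i (assuming v is not entirely inside their span).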

Calculate Normals from Heightmap

I am trying to convert a heightmap into a matrix of normals using central differencing, which will later correspond to the steepness at a given point.
I found several links with correct results but without explaining the math behind them.
  T
L O R
  B
(T, B, L and R are the heights of the neighbours above, below, left and right of the current point O.)
From this link I realised I can just do:
Vec3 normal = Vec3(2*(R-L), 2*(B-T), -4).Normalize();
The thing is that I don't know where the 2* and the -4 come from.
In this explanation of central differencing I see that the difference should be divided by 2, but I still don't know how to connect all of this.
What I really want to know is the linear algebra definition behind this.
I have a heightmap, I want to measure the central differences, and I want to obtain the normal vector to use later to measure the steepness.
PS: the Z-axis is the height.
From vector calculus, the normal of a surface f(x, y, z) = 0 is given by the gradient operator:
n = grad f = (df/dx, df/dy, df/dz)
A height map h(x, y) is a special form of the function f:
f(x, y, z) = h(x, y) - z,   so   n = (dh/dx, dh/dy, -1)
For a discretized height map, assuming that the grid size is 1, the first-order central-difference approximations to the two derivative terms above are:
dh/dx ~ (R - L) / 2,   dh/dy ~ (B - T) / 2
since the x step from L to R is 2, and the same for y. So n ~ ((R - L)/2, (B - T)/2, -1), which is exactly the formula you had, divided through by 4. When this vector is normalized, the factor of 4 is canceled.
(No linear algebra was harmed in the writing of this answer)
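A sketch of the whole pipeline with NumPy (assuming the row index increases toward B and the column index toward R, grid size 1, and zero slope on the borders where a central difference isn't available; the test heightmap is made up):
import numpy as np

def normals_from_heightmap(h):
    hx = np.zeros_like(h); hy = np.zeros_like(h)
    hx[:, 1:-1] = (h[:, 2:] - h[:, :-2]) / 2.0   # dh/dx ~ (R - L) / 2
    hy[1:-1, :] = (h[2:, :] - h[:-2, :]) / 2.0   # dh/dy ~ (B - T) / 2
    n = np.dstack((hx, hy, -np.ones_like(h)))    # (dh/dx, dh/dy, -1) per texel
    return n / np.linalg.norm(n, axis=2, keepdims=True)

h = np.fromfunction(lambda i, j: np.sin(i / 10) + np.cos(j / 10), (64, 64))
N = normals_from_heightmap(h)       # N[i, j] is the unit normal at texel (i, j)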

Decompose complex matrix transformation into a series of simple transformations?

I wonder if it is possible (and if so, how) to represent an arbitrary M3 matrix transformation as a sequence of simpler transformations (such as translate, scale, skew, rotate).
In other words: how to calculate MTranslate, MScale, MRotate and MSkew matrices from MComplex so that the following equation would be true:
MComplex = MTranslate * MScale * MRotate * MSkew (or in another order)
Singular Value Decomposition (see also this blog and this PDF). It turns an arbitrary matrix into a composition of 3 matrices: orthogonal * diagonal * orthogonal. The orthogonal matrices are rotation matrices; the diagonal matrix represents scaling along the primary axes.
The translation throws a monkey wrench into the works, but what you should do is take out the translation part of the matrix so you have a 3x3 matrix, run SVD on that to give you the rotation + scaling, then add the translation part back in. That way you'll have a rotation + scale + rotation + translate composition of 4 matrices. It's probably possible to do this in 3 matrices (rotation + scaling along some set of axes + translation), but I'm not sure exactly how... maybe a QR decomposition (Q = orthogonal = rotation, but I'm not sure whether R is skew-only or has a rotational part).
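A minimal sketch of that recipe in NumPy for a 2D affine matrix (the input values are made up; note that U and Vt may come out as reflections with determinant -1, in which case you can flip the sign of a column/row and the matching singular value to keep pure rotations):
import numpy as np

A = np.array([[2.0, 1.0,  5.0],    # hypothetical affine matrix:
              [0.5, 1.5, -3.0],    # linear part plus translation column
              [0.0, 0.0,  1.0]])
t = A[:2, 2]                       # take out the translation part
U, s, Vt = np.linalg.svd(A[:2, :2])
# A[:2, :2] == U @ diag(s) @ Vt: rotation, axis-aligned scale, rotation
assert np.allclose(A[:2, :2], U @ np.diag(s) @ Vt)
# full transform: translate(t) . rotate(U) . scale(s) . rotate(Vt)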
Yes, but the solution will not be unique. Also you should rather put translation at the end (the order of the rest doesn't matter)
For any given square matrix A there exist infinitely many matrices B and C such that A = B*C. Choose any invertible matrix B (which means that B^-1 exists, i.e. det(B) != 0) and then C = B^-1*A.
So for your solution, first decompose MC into MT and MS*MR*MSk*I, choosing MT to be some invertible translation matrix. Then decompose the rest into MS and MR*MSk*I so that MS is an arbitrary scaling matrix. And so on...
Now if at the end of the fun I is an identity matrix (with 1 on diagonal, 0 elsewhere) you're good. If it is not, start over, but choose different matrices ;-)
In fact, by applying the method above symbolically you can create a set of equations that will yield parametrized formulas for all of these matrices.
How useful these decompositions would be for you, well - that's another story.
If you type this into Mathematica or Maxima they'll compute this for you in no time.
