I just read an article on www.songho.ca which indicates that a projection matrix is defined by:
[ 2n/(r-l)   0           (r+l)/(r-l)    0            ]
[ 0          2n/(t-b)    (t+b)/(t-b)    0            ]
[ 0          0           -(f+n)/(f-n)   -2*n*f/(f-n) ]
[ 0          0           -1             0            ]
where:
n: near
f: far
l: left
r: right
t: top
b: bottom
I have also read on www.geeks3d.com of an alternate definition given by:
[w 0 0 0]
[0 h 0 0]
[0 0 q -1]
[0 0 qn 0]
where:
w  = 2*near / (width * aspect)
h  = 2*near / height
q  = -(far + near) / (far - near)
qn = -2 * far * near / (far - near)
Why are there differences in M[0][2] and M[1][2] (beyond one being the transpose of the other)? Do they generate the same result? Which one can be used in GLSL without a transpose?
The first matrix allows you to arbitrarily position the left, right, top and bottom clip plane positions. The second one always gives you a centred, symmetric frustum, which is kind of limiting. For example, when you're doing stereoscopic rendering, you want to shift the left and right planes slightly.
BTW, which one can be used in GLSL without a transpose?
This has nothing to do with GLSL. You can use either. The transpose you're referring to stems from the way matrices are represented internally in OpenGL and interfaces to the outside world.
Anyway, you should not hardcode matrices into shader source code; supply them through a uniform instead.
Update
OpenGL orders its matrices column major, i.e.
0 4 8 c
1 5 9 d
2 6 a e
3 7 b f
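To see concretely that the two forms agree for a centred frustum, here is a small pure-Python check. The matrices are written row-major, exactly as in the question, and the function names are my own, not from either article:

```python
# Numerical check that the general frustum matrix reduces to the
# symmetric form when l = -r and b = -t (so width = r-l, height = t-b).

def frustum(l, r, b, t, n, f):
    """General projection matrix from songho.ca (row-major)."""
    return [
        [2*n/(r-l), 0,         (r+l)/(r-l),  0],
        [0,         2*n/(t-b), (t+b)/(t-b),  0],
        [0,         0,         -(f+n)/(f-n), -2*n*f/(f-n)],
        [0,         0,         -1,           0],
    ]

def symmetric_perspective(width, height, n, f):
    """Symmetric form: w = 2n/width, h = 2n/height (aspect folded into width)."""
    q  = -(f+n)/(f-n)
    qn = -2*f*n/(f-n)
    return [
        [2*n/width, 0,          0,  0],
        [0,         2*n/height, 0,  0],
        [0,         0,          q,  qn],
        [0,         0,          -1, 0],
    ]

n, f = 1.0, 100.0
r, t = 0.5, 0.25
a = frustum(-r, r, -t, t, n, f)            # centred: l = -r, b = -t
b = symmetric_perspective(2*r, 2*t, n, f)
print(all(abs(a[i][j] - b[i][j]) < 1e-12
          for i in range(4) for j in range(4)))  # True
```

With a centred frustum, (r+l)/(r-l) and (t+b)/(t-b) are both zero, so M[0][2] and M[1][2] vanish and the two matrices are identical. An off-centre frustum (l != -r) makes M[0][2] nonzero, which the symmetric form simply cannot represent.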
I am using the Symbolics.jl package and trying to invert a matrix. However, the output is giving 'true' instead of 1. Here is the code:
using Symbolics
@variables x
mat=[x 0 0
0 x 0
0 0 x]
And the result I get is
inv(mat)= [true / x 0 0
0 true / x 0
0 0 true / x]
Any thought on why this is happening?
First off, this is not wrong, as inv(mat) * mat will still give you the identity matrix. The point is that the identity matrix in Num (the data type representing Symbolics.jl's variables) is the diagonal matrix with true on its diagonal. This can be checked by looking at
Matrix{Num}(I, 3, 3).
The inverse is calculated by solving the system AX = I and the true is created when the dense identity matrix is created.
This is due to the definition of I, namely const I = UniformScaling(true). My guess (and correct me if I'm wrong) is that this is for maximum compatibility. Normally this gets translated to a 1 in integer types, but Num is an exception.
Disclaimer: This question is not the same question as other projection matrix questions.
So Projection Matrices are 4x4 Matrices that are multiplied with 4D vectors to flatten them onto a 2D plane. Like this one:
1 0 0 0
0 1 0 0
0 0 0 0
0 0 1 0
But in the explanation, it says that the x and y coordinates of the vector are divided by Z. But I don't understand how this works because each part of the matrix that is multiplied by Z is 0. A comment in another question on this subject said, "The hardware does this for you." And I didn't quite get what it meant by that. Thank you in advance!
I was confounded by this nomenclature issue, too. Here is a bit better explanation in regards to Vulkan: https://matthewwellings.com/blog/the-new-vulkan-coordinate-system/
After the programmable vertex stage a set of fixed function vertex operations are run. During this process your homogeneous coordinates in clip space are divided by wc
Clearly, calling those matrices projection matrices is very misleading if the actual perspective correction isn't actually done by them. :)
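A tiny plain-Python illustration of that division: the matrix itself only copies z into the w component, and the divide-by-w is a separate fixed-function step after the vertex shader (the helper names below are mine):

```python
# The "projection" matrix from the question never divides anything.
# It moves z into w; the hardware's perspective divide then divides
# x, y, z by that w.

def mat_vec(m, v):
    """Multiply a 4x4 row-major matrix by a 4D column vector."""
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

proj = [
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 1, 0],   # this row copies z into the w component
]

clip = mat_vec(proj, [4.0, 2.0, 2.0, 1.0])  # clip space: [4.0, 2.0, 0.0, 2.0]
ndc = [c / clip[3] for c in clip[:3]]       # the divide the hardware does
print(ndc)                                  # [2.0, 1.0, 0.0] -- x and y divided by z
```

So "the hardware does this for you" means: the matrix sets up w, and the fixed-function stage after the vertex shader performs x/w, y/w, z/w.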
I have two symmetric matrices A, B and a vector X. The dimension of A is n by n, the dimension of B is n by n, the dimension of X is n by 1. Let the element at ith row and jth column of matrix A denoted by A[i,j].
Since A is symmetric, only the upper triangle of A is saved, column by column. The matrix A is saved as an array:
Vector_A = [A[1,1],
A[1,2], A[2,2],
A[1,3], A[2,3], A[3,3],
A[1,4], A[2,4], A[3,4], A[4,4],
...,
A[1,n], A[2,n], ..., A[n,n]]
The matrix B is saved in the same format as matrix A. Now I would like to calculate ABA without transforming Vector_A, Vector_B back to matrix A, B. Since ABA is also symmetric, I would like to save the ABA in the same way as an array. How can I do it in Julia?
I would also like to calculate X'AX without transforming Vector_A back to matrix A where X' denotes transpose(X). How can I do it in Julia?
You need to implement your own data structure that subtypes the AbstractMatrix type.
For example this could be done as:
struct SymmetricM{T} <: AbstractMatrix{T}
data::Vector{T}
end
So we have a symmetric matrix that is using only a vector for its data storage.
Now you need to implement functions on it so that it actually behaves like a matrix and the Julia magic can work.
We start by providing the size of our new matrix datatype.
function Base.size(m::SymmetricM)
    n = ((8*length(m.data)+1)^0.5-1)/2
    nr = round(Int, n)
    @assert n ≈ nr "The vector length must match the number of triangular matrix elements"
    (nr, nr)
end
In this code nr will be calculated every time checkbounds is done on the matrix. In a production implementation you might want to make it a field of SymmetricM; you would sacrifice some flexibility and store 8 bytes more, but would gain speed.
Next we need a function that maps matrix indices to a position in the backing vector. Here is one possible implementation:
function getix(idx)::Int
    row, col = idx
    # assume left/lower triangular
    if col > row
        row, col = col, row
    end
    (row-1)*row ÷ 2 + col
end
Having that, we can now implement the getindex and setindex! functions:
@inline function Base.getindex(m::SymmetricM, idx::Vararg{Int,2})
    @boundscheck checkbounds(m, idx...)
    m.data[getix(idx)]
end

@inline function Base.setindex!(m::SymmetricM{T}, v::T, idx::Vararg{Int,2}) where T
    @boundscheck checkbounds(m, idx...)
    m.data[getix(idx)] = v
end
Now let us test this thing:
julia> m = SymmetricM(collect(1:10))
4×4 SymmetricM{Int64}:
1 2 4 7
2 3 5 8
4 5 6 9
7 8 9 10
You can see that we have provided the elements of only one triangle (be it the lower or upper; they are the same) and we got the full matrix!
This is indeed a fully valid Julia matrix so all matrix algebra should work on it:
julia> m * SymmetricM(collect(10:10:100))
4×4 Array{Int64,2}:
700 840 1010 1290
840 1020 1250 1630
1010 1250 1580 2120
1290 1630 2120 2940
Note that the result of the multiplication is a Matrix rather than a SymmetricM; to get a SymmetricM back, you need to overload the * operator to accept two SymmetricM arguments. For illustrative purposes, let us show custom operator overloading with the minus sign -:
import Base.-
-(m1::SymmetricM, m2::SymmetricM) = SymmetricM(m1.data .- m2.data)
And now you will see that subtraction of SymmetricM returns another SymmetricM:
julia> m-m
4×4 SymmetricM{Int64}:
0 0 0 0
0 0 0 0
0 0 0 0
0 0 0 0
In this way you can build a full triangular matrix algebra system in Julia.
Note, however, that the getix function has an if statement, so access to SymmetricM elements without using the data field directly will be slower than for a regular matrix; you should therefore overload as many operators as your project requires.
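The second part of the question (computing X'AX without rebuilding A) can also be done directly on the packed vector. Below is a sketch in Python of the same index arithmetic; the same idea translates directly to Julia, and the names packed_index and quad_form are mine, not from any library:

```python
# Quadratic form x' A x computed straight from the packed triangle,
# using the triangular-index formula (row-1)*row/2 + col.

def packed_index(row, col):
    """0-based vector index for A[row, col] (1-based indices, A symmetric)."""
    if col > row:
        row, col = col, row          # fold the upper triangle onto the lower
    return (row - 1) * row // 2 + col - 1

def quad_form(vec_a, x):
    """Compute x' A x where A is stored packed, one triangle only."""
    n = len(x)
    total = 0.0
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            total += x[i-1] * vec_a[packed_index(i, j)] * x[j-1]
    return total

# Packed form of [[1, 2], [2, 3]] is [1, 2, 3]:
vec_a = [1.0, 2.0, 3.0]
print(quad_form(vec_a, [1.0, 1.0]))   # 1 + 2 + 2 + 3 = 8.0
```

Because the formula only ever reads vec_a through packed_index, the full n×n matrix is never materialised; the double loop can also be restricted to j <= i with off-diagonal terms doubled if you want to halve the work.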
I'd like to split a sequence into k parts, and optimize the homogeneity of these sub-parts.
Example : 0 0 0 0 0 1 1 2 3 3 3 2 2 3 2 1 0 0 0
Result : 0 0 0 0 0 | 1 1 2 | 3 3 3 2 2 3 2 | 1 0 0 0 when you ask for 4 parts (k = 4)
Here, the algorithm did not try to split in fixed-length parts, but instead tried to make sure elements in the same parts are as homogeneous as possible.
What algorithm should I use? Is there an implementation of it in R?
Maybe you can use the Expectation-Maximization algorithm. Your points would be (value, position).
With the E-M algorithm, clustering these points (worked by hand) yields the desired output, so you can consider using this approach and check whether it really works in all your scenarios. One note: you must specify the number of clusters in advance, but I think that's not a problem for you, given how you have set out your question.
Let me know if this worked ;)
Edit:
This is what you talked about. With k-means you would have to control the delta value, i.e. how much the position is incremented, to bring it to the same scale as the value. But with E-M this doesn't matter.
Edit 2:
OK, I was not correct: you do need to control the delta value. It is not the same if you increment the position by 1 or by 3 (compare the resulting two clusters).
Thus, as you said, this algorithm could decide to cluster points that are not neighbours if their positions are far apart but their values are close. You need to guarantee this does not happen by using a high delta. I think that with a delta of 2 * (max - min) of your sequence's values this wouldn't happen.
Now, your points would have the form (value, delta * position).
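To make the suggestion concrete, here is a small self-contained sketch in Python that embeds each element as (value, delta * position) and clusters with a minimal k-means. The deterministic evenly-spaced initialisation and all names are my own choices for the sake of a reproducible example, not part of any particular library:

```python
# Cluster the question's example sequence into k = 4 groups using
# points of the form (value, delta * position), with the heuristic
# delta = 2 * (max - min) suggested above.

def kmeans(points, k, iters=100):
    """Tiny k-means with centroids seeded at evenly spaced points."""
    step = max(1, len(points) // k)
    centroids = [points[i * step] for i in range(k)]
    assign = [0] * len(points)
    for _ in range(iters):
        # assignment step: nearest centroid by squared Euclidean distance
        for i, p in enumerate(points):
            assign[i] = min(range(k),
                            key=lambda c: sum((a - b) ** 2
                                              for a, b in zip(p, centroids[c])))
        # update step: move each centroid to the mean of its members
        for c in range(k):
            members = [p for i, p in enumerate(points) if assign[i] == c]
            if members:
                centroids[c] = tuple(sum(col) / len(col)
                                     for col in zip(*members))
    return assign

seq = [0, 0, 0, 0, 0, 1, 1, 2, 3, 3, 3, 2, 2, 3, 2, 1, 0, 0, 0]
delta = 2 * (max(seq) - min(seq))      # heuristic scale for the position axis
points = [(v, delta * i) for i, v in enumerate(seq)]
labels = kmeans(points, 4)
print(labels)
```

With delta this large, the position term dominates the distance, which is what pushes the clusters towards contiguous runs of the sequence.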
I have an n-partite (undirected) graph, given as an adjacency matrix, for instance this one here:
a b c d
a 0 1 1 0
b 0 0 0 1
c 0 0 0 1
d 0 0 0 0
I would like to know if there is a set of matrix operations that I can apply to this matrix, which will result in a matrix that "lists" all paths (of length n, i.e. through all the partitions) in this graph. For the above example, there are paths a->b->d and a->c->d. Hence, I would like to get the following matrix as a result:
a b c d
1 1 0 1
1 0 1 1
The first path contains nodes a,b,d and the second one nodes a,c,d. If necessary, the result matrix may have some all-0 lines, as here:
a b c d
1 1 0 1
0 0 0 0
1 0 1 1
0 0 0 0
Thanks!
P.S. I have looked at algorithms for computing the transitive closure, but these usually only tell if there is a path between two nodes, and not directly which nodes are on that path.
One thing you can do is to compute the nth power of your matrix A. The result will tell you how many paths of length n there are from any one vertex to any other.
Now if you're interested in knowing all of the vertices along the path, I don't think that using purely matrix operations is the way to go. Since you have an n-partite graph, I would set up a data structure as follows (bear in mind that space costs will be expensive for all but small values of n):
Each column will have one entry for each of the nodes in our graph. The n-th column will contain a 1 if this node is reachable on the n-th iteration from our designated start vertex or start set, and zero otherwise. Each column entry will also contain a list of back pointers to the vertices in the (n-1)-th column which led to this vertex in the n-th column. (This is like the Viterbi algorithm, except that we have to maintain a list of backpointers for each entry rather than just one.) The complexity of doing this is (m^2)*n, where m is the number of vertices in the graph and n is the length of the desired path.
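A small sketch (plain Python, my own naming) of this table-with-backpointers idea, run on the question's example graph:

```python
# Enumerate all paths of length n edges from a start vertex in a
# layered graph: build a per-step reachability table with backpointer
# lists, then unwind the backpointers to recover full paths.

def all_paths(edges, start, n):
    # layer[t] maps node -> list of predecessors at step t
    layer = [{start: [None]}]
    for _ in range(n):
        nxt = {}
        for u in layer[-1]:
            for v in edges.get(u, []):
                nxt.setdefault(v, []).append(u)   # record the backpointer
        layer.append(nxt)

    # walk the backpointers from column n back to column 0
    def unwind(t, v):
        if t == 0:
            return [[v]]
        return [p + [v] for u in layer[t][v] for p in unwind(t - 1, u)]

    return [p for v in layer[n] for p in unwind(n, v)]

# The question's graph: a -> b, a -> c, b -> d, c -> d
edges = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d']}
print(all_paths(edges, 'a', 2))   # [['a', 'b', 'd'], ['a', 'c', 'd']]
```

Each column of the table costs O(m) entries with up to m backpointers each, which is where the (m^2)*n space and time bound comes from; the exponential cost only appears when you unwind, because there can be exponentially many paths.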
I'm a little bit confused by your top matrix: with an undirected graph, I would expect the adjacency matrix to be symmetric.
No, There is no pure matrix way to generate all paths. Please use pure combinatorial algorithms.
'One thing you can do is to compute the nth power of your matrix A. The result will tell you how many paths of length n there are from any one vertex to any other.'
The power of the matrix generates walks, not paths.