I am using the Symbolics.jl package and trying to invert a matrix. However, the output is giving 'true' instead of 1. Here is the code:
using Symbolics
@variables x
mat = [x 0 0
       0 x 0
       0 0 x]
And the result I get is:
inv(mat) = [true / x  0         0
            0         true / x  0
            0         0         true / x]
Any thoughts on why this is happening?
First off, this is not wrong, as inv(mat) * mat will still give you the identity matrix. The "problem" is that the identity matrix in Num (the data type representing Symbolics.jl's variables) is the diagonal matrix with true on its diagonal. This can be checked by looking at
Matrix{Num}(I, 3, 3).
The inverse is calculated by solving the system AX = I, and the true appears when the dense identity matrix is created.
This is due to the definition of I, namely const I = UniformScaling(true). My guess (and correct me if I'm wrong) is that this is for maximum compatibility: normally the true gets translated to a 1 in integer types, but Num is an exception.
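A minimal sketch of what is described above (assuming only the Symbolics and LinearAlgebra packages):
using Symbolics, LinearAlgebra

@variables x
mat = [x 0 0
       0 x 0
       0 0 x]

inv(mat)              # entries display as true / x
Matrix{Num}(I, 3, 3)  # the Num identity: true on the diagonal, false elsewhere
inv(mat) * mat        # still gives back the identity, so the result is correct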
I have two symmetric matrices A, B and a vector X. The dimension of A is n by n, the dimension of B is n by n, and the dimension of X is n by 1. Let the element at the ith row and jth column of matrix A be denoted by A[i,j].
Since A is symmetric, only the upper triangular part of A is saved, column by column. The matrix A is saved as an array:
Vector_A = [A[1,1],
A[1,2], A[2,2],
A[1,3], A[2,3], A[3,3],
A[1,4], A[2,4], A[3,4], A[4,4],
...,
A[1,n], A[2,n], ..., A[n,n]]
The matrix B is saved in the same format as matrix A. Now I would like to calculate ABA without transforming Vector_A, Vector_B back to matrices A, B. Since ABA is also symmetric, I would like to save ABA in the same way, as an array. How can I do it in Julia?
I would also like to calculate X'AX without transforming Vector_A back to matrix A where X' denotes transpose(X). How can I do it in Julia?
You need to implement your own data structure that inherits from the AbstractMatrix type.
For example this could be done as:
struct SymmetricM{T} <: AbstractMatrix{T}
    data::Vector{T}
end
So we have a symmetric matrix that is using only a vector for its data storage.
Now you need to implement functions so that it actually behaves like a matrix, and you can let the Julia magic work.
We start by providing the size of our new matrix datatype.
function Base.size(m::SymmetricM)
    n = ((8 * length(m.data) + 1)^0.5 - 1) / 2
    nr = round(Int, n)
    @assert n ≈ nr "The vector length must match the number of triangular matrix elements"
    (nr, nr)
end
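For example, a packed vector of length 10 corresponds to a 4×4 matrix, since 10 == 4*(4+1)/2:
size(SymmetricM(collect(1:10)))   # (4, 4)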
In this code nr will be calculated every time checkbounds is done on the matrix. In a production implementation you might want to move it to be a field of SymmetricM; you would sacrifice some flexibility and store 8 more bytes, but you would gain speed.
The next function we need calculates the position in the data vector from the matrix indices. Here is one possible implementation.
function getix(idx)::Int
    row, col = idx
    # assume left/lower triangular indexing; swap so that row >= col
    if col > row
        row, col = col, row
    end
    (row - 1) * row ÷ 2 + col
end
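As a quick sanity check, these index pairs reproduce the Vector_A layout from the question:
getix((1, 1)) == 1                     # A[1,1]
getix((1, 2)) == getix((2, 1)) == 2    # A[1,2] and A[2,1] share one slot
getix((2, 2)) == 3
getix((1, 3)) == 4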
Having that, we can now implement the getindex and setindex! functions:
@inline function Base.getindex(m::SymmetricM, idx::Vararg{Int,2})
    @boundscheck checkbounds(m, idx...)
    m.data[getix(idx)]
end

@inline function Base.setindex!(m::SymmetricM{T}, v::T, idx::Vararg{Int,2}) where T
    @boundscheck checkbounds(m, idx...)
    m.data[getix(idx)] = v
end
Now let us test this thing:
julia> m = SymmetricM(collect(1:10))
4×4 SymmetricM{Int64}:
1 2 4 7
2 3 5 8
4 5 6 9
7 8 9 10
You can see that we have provided the elements of only one triangle (be it the lower or the upper one - they are the same) - and we got the full matrix!
This is indeed a fully valid Julia matrix so all matrix algebra should work on it:
julia> m * SymmetricM(collect(10:10:100))
4×4 Array{Int64,2}:
700 840 1010 1290
840 1020 1250 1630
1010 1250 1580 2120
1290 1630 2120 2940
Note that the result of multiplication is a Matrix rather than SymmetricM - to get a SymmetricM you need to overload the * operator to accept 2 SymmetricM arguments. For illustrative purposes let us show a custom operator overloading with the minus sign -:
import Base.-
-(m1::SymmetricM, m2::SymmetricM) = SymmetricM(m1.data .- m2.data)
And now you will see that subtraction of SymmetricM is going to return another SymmetricM:
julia> m-m
4×4 SymmetricM{Int64}:
0 0 0 0
0 0 0 0
0 0 0 0
0 0 0 0
In this way you can build a full triangular matrix algebra system in Julia.
Note, however, that the getix function has an if statement, so accessing SymmetricM elements without using the data field directly will be slower than for a regular matrix. You should therefore overload as many operators as your project requires.
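Coming back to the original question: once SymmetricM behaves like an AbstractMatrix, the quadratic form X'AX just works, and a product such as ABA can be packed back into the vector layout. The sketch below builds on the definitions above; pack_symmetric is an illustrative helper name, not part of any library:
using LinearAlgebra

# Pack the upper triangle of a dense (symmetric) matrix back into the
# column-by-column vector layout used by SymmetricM.
pack_symmetric(M) = SymmetricM([M[i, j] for j in 1:size(M, 1) for i in 1:j])

A = SymmetricM(collect(1:10))
B = SymmetricM(collect(10:10:100))
X = [1.0, 2.0, 3.0, 4.0]

X' * A * X                        # quadratic form, no dense copy of A is built
ABA = pack_symmetric(A * B * A)   # A*B*A is symmetric, so packing it is safe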
The SymTridiagonal data type in Julia is not letting me assign non-diagonal values to anything other than zero. I get this error: ArgumentError: cannot set off-diagonal entry (2, 1).
I need to assign non-diagonal values because I am trying to implement the ImplicitSymmetricQRStep algorithm which needs to do that in the process.
It is indeed not possible to set the off-diagonal values of a SymTridiagonal matrix - why this decision was taken I cannot say.
I see now two alternatives:
1) In Julia the fields of a structure are not hidden, so it is possible to change the value that way. This is dangerous though, as the internal structure of that matrix might change in future versions without any warnings. Here is an example of how you would do that:
using LinearAlgebra: SymTridiagonal
a = SymTridiagonal([1 2 0; 2 1 2; 0 2 1]) # 1 on diagonal, 2 on off diagonals
a.ev[1] = 4 # a[1, 2] == 4 and a[2, 1] == 4
2) You could also use the Tridiagonal matrix type, which is also in the LinearAlgebra package; this type allows one to set the off-diagonal entries. Then you just have to make sure yourself that you don't violate the symmetry of the matrix, i.e. if you set a[i, j] then you also have to set a[j, i] to the same value.
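A minimal sketch of option 2 (the values here are just an example):
using LinearAlgebra: Tridiagonal

a = Tridiagonal([2, 2], [1, 1, 1], [2, 2])  # sub-diagonal, diagonal, super-diagonal
a[1, 2] = 4
a[2, 1] = 4  # keep the matrix symmetric by hand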
I'm trying to construct the identity matrix in Julia 1.1. After looking at the documentation I found that I could compute a 4x4 Identity matrix as follows:
julia> Id4 = 1*Matrix(I, 4, 4)
4×4 Array{Int64,2}:
1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 1
Is this the most julianic way of coding it or is there a better/shorter way, as it is an often used matrix?
Given using LinearAlgebra, the most julianic way of expressing the identity matrix is:
I
This answer may seem trite, but it is also kind of profound. The whole point of the operator I is that in the vast majority of cases where users want an identity matrix, it is not necessary to actually instantiate that matrix.
Let's say you want a 1000×1000 identity matrix. Why waste time building the entire matrix when you could just use I, noting that sizeof(I) evaluates to 1 (i.e. the size of the object is 1 byte)? All functions in base Julia (including LinearAlgebra) understand what I is, and can use it appropriately without having to waste time building the actual matrix it represents first.
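For example (a small sketch; A here is just an arbitrary random matrix):
using LinearAlgebra

A = rand(3, 3)
A + I        # adds 1 to each diagonal element; no identity matrix is ever built
A * I == A   # true
2I * A       # scales A by 2, again without materialising a matrix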
Now, it may be the case that for some reason you need to specify the type of the elements of your identity matrix. Note:
julia> I
UniformScaling{Bool}
true*I
so in this case, you are using a notional identity matrix with a diagonal of true and off-diagonal of false. This is sufficient in many cases, even if your other matrices are Int or Float64. Internally, Julia will use methods that specialize on the types. However, if you want to specify your identity matrix to contain integers or floats, use:
julia> 1I
UniformScaling{Int64}
1*I
julia> 1.0I
UniformScaling{Float64}
1.0*I
Note that sizeof(1I) evaluates to 8, indicating the notional Int64 type of the members of that matrix.
Also note that you can use e.g. 5I if you want a notional matrix with 5 on the diagonal and 0 elsewhere.
In some cases (and these cases are much rarer than many might think), you may need to actually build the matrix. In this case, you can use e.g.:
Matrix(1I, 3, 3) # Identity matrix of Int type
Matrix(1.0I, 3, 3) # Identity matrix of Float64 type
Matrix(I, 3, 3) # Identity matrix of Bool type
Bogumił has also pointed out in the comments that if you are uncomfortable with implying the type of the output in the first argument of the constructors above, you can also use the (slightly more verbose):
Matrix{Int}(I, 3, 3) # Identity matrix of Int type
Matrix{Float64}(I, 3, 3) # Identity matrix of Float64 type
Matrix{Bool}(I, 3, 3) # Identity matrix of Bool type
and specify the type explicitly.
But really, the only times you would probably need to do this are as follows:
1) When you want to input an identity matrix into a function in a package written in such a way that the input must be a concrete matrix type.
2) When you want to start out with an identity matrix but then mutate it in place into something else via one or several transformations, as sketched below.
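A small sketch of the second case (the particular values are arbitrary):
using LinearAlgebra

M = Matrix{Float64}(I, 3, 3)   # start from a concrete identity matrix
M[1, 2] = 0.5                  # ...then mutate it in place
M[3, 1] = -2.0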
For instance:
> TRUE * 0.5
0.5
> FALSE * 0.5
0
I don't know if the secret here is the * character itself or the way R encodes logical values, but I can't understand why I get these results.
R has a fairly loose type system and rather freely does coercion, hopefully when it is sensible. When coerced to numeric by *, logical values become 0 (FALSE) and 1 (TRUE), so your expression gets evaluated with the usual mathematical convention that any value times 0 equals 0 and any value times 1 equals the value. The one exception to that rule in the numeric domain is that Inf * 0 returns NaN. Character values have no "destination" type when composed with *, so "1"*TRUE throws an error.
I just read an article on www.songho.ca which indicates that a projection matrix is defined by:
[2n/(r-l) 0 (r+l)/(r-l) 0 ]
[0 2n/(t-b) (t+b)/(t-b) 0 ]
[0 0 -(f+n)/(f-n) -2*n*f/(f-n) ]
[0 0 -1 0 ]
where:
n: near
f: far
l: left
r: right
t: top
b: bottom
I have also read on www.geeks3d.com an alternate definition given by:
[w 0 0 0]
[0 h 0 0]
[0 0 q -1]
[0 0 qn 0]
where:
w  = (2*near) / (width * aspect)
h  = (2*near) / height
q  = -(far + near) / (far - near)
qn = -2*(far*near) / (far - near)
Why are there differences in M[0][2] and M[1][2] (aside from one matrix being the transpose of the other)? Do they generate the same result? Which one is possible to use in GLSL without any transpose?
The first matrix lets you place the left, right, top and bottom clip planes arbitrarily. The second one always gives you a centred, symmetric frustum, which is kind of limiting. For example, when you're doing stereoscopic rendering you want to slightly shift the left and right planes.
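To see that the two forms agree for a centred frustum, here is a quick numerical check (Julia is used purely for illustration; the parameter values are arbitrary, and l = -r, b = -t is assumed):
n, f = 0.1, 100.0
r, t = 0.2, 0.15
l, b = -r, -t      # centred (symmetric) frustum

x00 = 2*n/(r-l);   x02 = (r+l)/(r-l)    # x02 == 0 when l == -r
x11 = 2*n/(t-b);   x12 = (t+b)/(t-b)    # x12 == 0 when b == -t
x22 = -(f+n)/(f-n)
x23 = -2*n*f/(f-n)

general = [x00 0   x02 0
           0   x11 x12 0
           0   0   x22 x23
           0   0   -1  0]

w  = 2*n/(r-l)     # the 2*near/width form of the centred definition
h  = 2*n/(t-b)     # 2*near/height
q  = -(f+n)/(f-n)
qn = -2*n*f/(f-n)

centred = [w 0 0  0
           0 h 0  0
           0 0 q  qn
           0 0 -1 0]   # written row-major here; the geeks3d layout is its transpose

general ≈ centred      # true: for a centred frustum the two definitions coincide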
BTW, which one is possible to use in GLSL without any transpose?
This has nothing to do with GLSL. You can use either. The transpose you're referring to stems from the way matrices are represented internally in OpenGL and interfaced to the outside world.
Anyway, you should not hardcode matrices into shader source code, but supply them through a Uniform.
Update
OpenGL orders its matrices column-major, i.e. consecutive memory offsets (0 through f below) fill the matrix column by column:
0 4 8 c
1 5 9 d
2 6 a e
3 7 b f
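Julia happens to share this column-major convention, so (purely as an illustration) the layout above, with a through f standing for 10 through 15, can be reproduced with:
reshape(collect(0:15), 4, 4)   # consecutive values fill the matrix column by column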