I have a math problem: I want to find the blue vector from the picture below (in x, y coordinates).
Both the yellow and green vectors are normalized. The red vector's x and y values can be between 0 and 1.
I managed to find which direction the blue vector is facing:
greenVector = CrossProduct(yellowVector, Vector.z); // get the green vector
float dir = DotProduct(redVector, greenVector);
if (dir < 0) -> return (-greenVector);
else if (dir > 0) -> return (greenVector);
else -> return (Vector.Zero);
But it returns a normalized vector only... I want the length of the vector too.
The dot product of the red vector and the (normalized) green vector gives you the length of the red vector along the green vector's direction. So you already have all the information you need:
return dir * greenVector;
No need for any if statements, and you definitely do not want to reverse the direction of the green vector: if the red vector points 'away' from the green vector, dir is already negative for you. The same goes for the zero vector: if the red vector is orthogonal to the green vector, dir is zero and a zero vector is returned anyway.
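A minimal sketch of this in Python, assuming plain 2D tuple vectors (the names `yellow` and `red` are hypothetical inputs). The 2D equivalent of CrossProduct(v, Vector.z) is the perpendicular (y, -x):

```python
def perpendicular(v):
    """2D equivalent of cross(v, z): (x, y) -> (y, -x)."""
    x, y = v
    return (y, -x)

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

def blue_vector(yellow, red):
    green = perpendicular(yellow)        # still normalized, since yellow is
    d = dot(red, green)                  # signed length of red along green
    return (d * green[0], d * green[1])  # no if statements needed
```

For example, with yellow = (1, 0) the green direction is (0, -1), and red = (0.5, -0.5) projects to (0, -0.5); an orthogonal red vector projects to the zero vector automatically.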
I'm trying to create a function that takes as input a vector of positive integers and returns the elements of the input vector that are cube numbers.
This is my code so far:
positivecube <- function(x){
  b <- round(x^(1/3), digits = 6)
  b %% 1 == 0 &&
    b != 1
  return(x[b])
}
I'm struggling to remove the 1s from the final vector and also to convert the values back to the original cube numbers. Any help would be great, thank you.
You could return the subset of x where the absolute difference between the cube root and the cube root rounded to the nearest integer is very small:
positivecube <- function(x) x[abs(round(x^(1/3)) - x^(1/3)) < 1e-8]
positivecube(1:100)
#> [1] 1 8 27 64
Here is an image:
I have two vectors: os and oe.
The range between them always goes from os (start) to oe (end).
So in this image the range between them is an angle of 270°.
Then I have two vectors to check: oa and ob.
As you can see, the vector oa should be within the range formed by os and oe, and the vector ob should be outside it.
I am wondering if there is a way to do the check using only vector math (such as the cross product and dot product).
I tried to use the cross product with a clockwise/counter-clockwise check, but it seems that when the angle in between is larger than 180°, things get complex.
Any advice will be appreciated, thanks :)
I denote the vector to a point p as op.
Calculate the cross product
c_se = cross(os, oe)
If c_se >= 0 (angle in the 0..180° range), then check whether
cross(os, op) >= 0 AND cross(op, oe) >= 0
If c_se < 0 (angle in the 180..360° range), then check whether
NOT (cross(oe, op) >= 0 AND cross(op, os) >= 0)
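A sketch of this check in Python, assuming 2D tuple vectors, where cross(a, b) is the scalar z-component of the 3D cross product:

```python
def cross(a, b):
    """Z-component of the 2D cross product: positive if b is CCW from a."""
    return a[0] * b[1] - a[1] * b[0]

def in_range(os, oe, op):
    """True if op lies within the CCW angular range from os to oe."""
    if cross(os, oe) >= 0:
        # Range spans at most 180 degrees: op must be CCW of os
        # and CW of oe.
        return cross(os, op) >= 0 and cross(op, oe) >= 0
    # Range spans more than 180 degrees: op is inside unless it falls
    # in the complementary (oe -> os) wedge.
    return not (cross(oe, op) >= 0 and cross(op, os) >= 0)
```

For the 270° example, with os = (1, 0) and oe = (0, -1), a point at (0, 1) is inside the range while (0.7, -0.7) falls in the complementary wedge and is rejected.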
I have two 3D vectors (objects with X, Y and Z float values).
In my diagram below, I would like to determine the length of the green line.
This is the distance along Vector 1 at which Vector 2 lies. Or, put another way, the distance from the origin to the point where a line perpendicular to Vector 1, passing through the endpoint of Vector 2, meets Vector 1.
I am doing this in Unity3D, so I have access to quite a few helper methods that make it easy to get the length of a Vector3 and so on.
The length is obviously
norm(v2) * cos(angle(v1, v2))
and since
cos(angle(v1, v2)) = dot(v1, v2) / (norm(v1) * norm(v2))
the final (unsigned) formula is
abs(dot(v1, v2)) / norm(v1)
One could also say that
e1 = v1/norm(v1)
is the unit vector in the direction of v1, and that the green vector is
dot(e1,v2)*e1
resulting in the same length formula.
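A quick sketch of both forms in Python, assuming plain-tuple vectors (in Unity the same result comes from Vector3.Dot and Vector3.magnitude):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(v):
    return math.sqrt(dot(v, v))

def projected_length(v1, v2):
    """Unsigned length of v2 projected onto the direction of v1."""
    return abs(dot(v1, v2)) / norm(v1)

def projected_vector(v1, v2):
    """The green vector itself: dot(e1, v2) * e1 with e1 = v1 / |v1|."""
    n = norm(v1)
    e1 = tuple(x / n for x in v1)
    d = dot(e1, v2)
    return tuple(d * x for x in e1)
```

For example, projecting v2 = (1, 1, 0) onto v1 = (2, 0, 0) gives a length of 1 and the vector (1, 0, 0).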
This is the projection of Vector 2 onto the direction of Vector 1. The simplest way (I think) to find it is with the scalar product:
D = |V2| * DotProduct(V2, V1) / (|V2| * |V1|) = DotProduct(V2, V1) / |V1|
where |V1| is the length of the V1 vector.
I'm not sure, but I think this is what you wanted:
Vector3 distance = Vector3.Lerp(Vector3.zero, vector_1, vector_2.sqrMagnitude / vector_1.sqrMagnitude);
http://docs.unity3d.com/ScriptReference/Vector3-sqrMagnitude.html
http://docs.unity3d.com/ScriptReference/Vector3.Lerp.html
I have two matrices, A and B, both 100×100. Both matrices contain positive and negative numbers. I want to create a matrix C with the same dimensions whose elements depend on the elements at the same position in A and B.
For example, if I have 22 and 1 at position [1,1] in matrices A and B, I want to assign the value 1 at the same position in matrix C because both A and B have values above 0. Every value in C depends on whether the values in A AND B (at the same position) are above 0 or not. This is how my code looks at the moment:
C<-matrix(0,100,100) #create a matrix with same dimensions and populated with 0
C[A>0 && B>0] = 1
Matrix A satisfies the condition A>0 in places (there are some negative and some positive values), and likewise matrix B satisfies B>0 in places. However, my code does not produce a matrix C with values of 0 and 1, even though I know there are positions where both A and B are above 0. Instead, C always contains only 0 for some reason.
Could anyone let me know what I am doing wrong and how to correct it, or suggest a different way to achieve this? Thanks.
Does C[A>0 & B>0] = 1 work? && returns a single value, but & is vectorized, so it operates on each cell individually.
This may not be the most efficient way to do it, but it works.
C <- matrix(0, 100, 100)
for (i in seq_along(C))
if (A[i] > 0 && B[i] > 0)
C[i] <- 1
When you create a sequence along a matrix using seq_along(), it goes through all the elements in column-major order, thereby avoiding a double for loop. And since the elements of A, B, and C match up positionally, this should give you what you want.
I'm comparing two different linear math libraries for 3D graphics using matrices. Here are two similar Translate functions from the two libraries:
static Matrix4<T> Translate(T x, T y, T z)
{
Matrix4 m;
m.x.x = 1; m.x.y = 0; m.x.z = 0; m.x.w = 0;
m.y.x = 0; m.y.y = 1; m.y.z = 0; m.y.w = 0;
m.z.x = 0; m.z.y = 0; m.z.z = 1; m.z.w = 0;
m.w.x = x; m.w.y = y; m.w.z = z; m.w.w = 1;
return m;
}
(c++ library from SO user prideout)
static inline void mat4x4_translate(mat4x4 T, float x, float y, float z)
{
mat4x4_identity(T);
T[3][0] = x;
T[3][1] = y;
T[3][2] = z;
}
(linmath c library from SO user datenwolf)
I'm new to this stuff, but I know that the order of matrix multiplication depends a lot on whether you are using a column-major or row-major format.
To my eyes, these two use the same format: in both, the first index is treated as the row and the second index as the column. That is, in both, x, y and z are assigned through the same first index. This would imply to me row-major, and thus that matrix multiplication is left-associative (for example, you'd typically do rotate * translate in that order).
I have used the first example many times in a left-associative context and it has worked as expected. While I have not used the second, its author says it is right-associative, yet I'm having trouble seeing the difference between the two formats.
To my eyes, these two are using the same format, in that in both the first index is treated as the row, the second index is the column.
Looks may be deceiving, but in fact the first index in linmath.h is the column. C and C++ specify that in a multidimensional array defined like this
sometype a[n][m];
there are n times m elements of sometype in succession. Whether that is row- or column-major order depends solely on how you interpret the indices. Now OpenGL defines 4×4 matrices to be indexed in the following linear scheme:
0 4 8 c
1 5 9 d
2 6 a e
3 7 b f
If you apply the rules of C++ multidimensional arrays, you'd add the following column/row designation:
----> n
| 0 4 8 c
| 1 5 9 d
V 2 6 a e
m 3 7 b f
Which remaps the linear indices into 2-tuples of
0 -> 0,0
1 -> 0,1
2 -> 0,2
3 -> 0,3
4 -> 1,0
5 -> 1,1
6 -> 1,2
7 -> 1,3
8 -> 2,0
9 -> 2,1
a -> 2,2
b -> 2,3
c -> 3,0
d -> 3,1
e -> 3,2
f -> 3,3
Okay, OpenGL and some math libraries use column-major ordering, fine. But why do it this way and break with the usual mathematical convention that in Mi,j the index i designates the row and j the column? Because it makes things look nicer. You see, a matrix is just a bunch of vectors; vectors that can, and usually do, form a coordinate base system.
Have a look at this picture:
The axes X, Y and Z are essentially vectors. They are defined as
X = (1,0,0)
Y = (0,1,0)
Z = (0,0,1)
Wait, doesn't that look like an identity matrix? Indeed it does, and in fact it is!
However, written like that, the matrix has been formed by stacking row vectors. And the rules of matrix multiplication essentially say that a matrix formed from row vectors transforms row vectors into row vectors by left-associative multiplication, while column-major matrices transform column vectors into column vectors by right-associative multiplication.
Now this is not really a problem, because left-associative can do the same things as right-associative: you just have to swap rows for columns (i.e. transpose everything) and reverse the order of the operands. Left/right and row/column are just notational conventions in which we write things.
And the typical mathematical notation is (for example)
v_clip = P · V · M · v_local
This notation makes it intuitively visible what's going on. Furthermore, in programming the = character usually designates assignment from right to left. Some programming languages are more mathematically influenced, like Pascal or Delphi, and write it :=. Anyway, with row-major ordering we'd have to write it
v_clip = v_local · M · V · P
and to the majority of mathematically minded folks this looks unnatural. Because technically M, V and P are in fact linear operators (yes, they're also matrices and linear transforms), and operators always go between the equality/assignment sign and the variable.
So that's why we use the column-major format: it looks nicer. Technically it could be done using a row-major format as well. And what does this have to do with the memory layout of matrices? Well, when you want to use column-major notation, you want direct access to the base vectors of the transformation matrices, without having to extract them element by element. With numbers stored in column-major format, all it takes to access a certain base vector of a matrix is a simple offset into linear memory.
I can't speak for the other library's code example, but I'd strongly assume that it also treats the first index as the slower-incrementing one, which makes it column-major under the notational conventions of OpenGL. Remember: column major & right associativity == row major & left associativity.
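This equivalence can be illustrated with a small NumPy sketch (NumPy here is only a stand-in for the raw float arrays): the same 16 floats in memory, viewed column-major or row-major, yield transposed matrices, and multiplying a column vector on the right of one equals multiplying a row vector on the left of the other.

```python
import numpy as np

# The same 16 floats in memory: a translation by (2, 3, 4),
# laid out exactly like the Translate functions above.
data = [1, 0, 0, 0,
        0, 1, 0, 0,
        0, 0, 1, 0,
        2, 3, 4, 1]

M_col = np.array(data).reshape(4, 4, order='F')  # column-major view (OpenGL)
M_row = np.array(data).reshape(4, 4, order='C')  # row-major view (D3D-style)

v = np.array([1, 1, 1, 1])

right = M_col @ v   # column vector on the right of the column-major matrix
left = v @ M_row    # row vector on the left of the row-major matrix

assert np.array_equal(right, left)          # both give (3, 4, 5, 1)
assert np.array_equal(M_row, M_col.T)       # the two views are transposes
```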
The fragments posted are not enough to answer the question. They could be row-major matrices stored in row order, or column-major matrices stored in column order.
It may be more obvious if you look at how a vector is treated when multiplied with an appropriate matrix. In a row-major system, you would expect the vector to be treated as a single-row matrix, whereas in a column-major system it would similarly be a single-column matrix. That then dictates how a vector and a matrix may be multiplied: you can only multiply a vector with a matrix as either a single column on the right or a single row on the left.
The GL convention is column-major, so a vector is multiplied to the right.
D3D is row-major, so vectors are rows and are multiplied to the left.
This needs to be taken into account when concatenating transforms, so that they are applied in the correct order.
i.e:
GL:
V' = CAMERA * WORLD * LOCAL * V
D3D:
V' = V * LOCAL * WORLD * CAMERA
However, they choose to store their matrices such that the in-memory representations are actually the same (until we get into shaders, where some representations need to be transposed...).
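The two concatenation orders above can be checked numerically with a small sketch (NumPy as a stand-in, with arbitrary matrices standing in for LOCAL, WORLD and CAMERA): transposing each matrix and reversing the order gives the same result.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three arbitrary 4x4 transforms standing in for LOCAL, WORLD, CAMERA.
LOCAL, WORLD, CAMERA = (rng.random((4, 4)) for _ in range(3))
v = rng.random(4)

# GL convention: column vector, transforms applied right to left.
gl = CAMERA @ WORLD @ LOCAL @ v

# D3D convention: row vector, the transposes applied left to right.
d3d = v @ LOCAL.T @ WORLD.T @ CAMERA.T

assert np.allclose(gl, d3d)
```

This is just the transpose identity (A·B·C·v)ᵀ = vᵀ·Cᵀ·Bᵀ·Aᵀ, which is why the two conventions can share one in-memory layout.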