I have a question that might be obvious, but I'm having trouble resolving it...
I have two coordinate systems. Let's say
Oxyz with x(1,0,0), y(0,1,0), z(0,0,1), and O(0,0,0), and Px'y'z' with P and x', y', z' known.
My goal here is to create the rotation matrix that lets me pass from Oxyz to Px'y'z'.
What I did is :
calculate Xangle between vector Ox and Px' (by using the formula:
Theta = acos( Ox . Px' / (||Ox|| * ||Px'||) ) ), and create the XMatrix using Theta:
|1 0 0 |
|0 cos(Theta) -sin(Theta) |
|0 sin(Theta) cos(Theta) |
I do exactly the same for Yangle and Zangle, then create the two corresponding matrices.
After all of this, I create MFinal = XMatrix * YMatrix * ZMatrix.
Is this right? Does it work in every case?
Thanks for all! :)
Best regards
Your method does not work because you do not measure the angles in the planes orthogonal to the axes. The component of the difference vector that is parallel to the rotation axis will not change under the rotation.
Anyway, there is a much simpler method. Assume that the rotation matrix is R. Then you want the original x-axis to be mapped to the new x-axis:
x' = R x = R (1, 0, 0)^T
Since the base coordinate system is the canonical system, the last expression evaluates to the first column of R and you get:
x' = R[0]
And this is pretty easy to solve.
So all you need to do is put the local axes as column vectors into R. If you want to include a translation, just do the same with the local origin and the fourth column.
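As a minimal numpy sketch of this (the axis and origin values below are made-up placeholders, not anything from the question):

import numpy as np

# Local axes and origin of the target frame, in world coordinates
# (placeholder values; substitute your known x', y', z' and P).
x_axis = np.array([0.0, 1.0, 0.0])
y_axis = np.array([-1.0, 0.0, 0.0])
z_axis = np.array([0.0, 0.0, 1.0])
origin = np.array([2.0, 3.0, 4.0])

M = np.eye(4)
M[:3, 0] = x_axis   # first column of R is the image of (1,0,0)
M[:3, 1] = y_axis   # second column is the image of (0,1,0)
M[:3, 2] = z_axis   # third column is the image of (0,0,1)
M[:3, 3] = origin   # fourth column carries the translation

# sanity check: M maps the canonical x-axis direction onto x'
assert np.allclose(M @ np.array([1.0, 0.0, 0.0, 0.0]), np.append(x_axis, 0.0))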
Related
I have a random 3-d walk where each step goes one unit in the positive x, y, or z direction. After 1000 steps you most likely end up near the coordinate (333, 333, 333). I have a visual of 2000 trials, and I want to plot the ending points in a 3-d histogram; I believe it should look like a 3-d Gaussian. My problem is that if you take the ending x-y coordinates directly and plot them, you get an oval shape. I think that's kind of expected: these ending points lie on the plane formed by the ends of the random walks. (Pictures of the histogram and of a 2-d plot of the x-y ending points were attached.) How do I transform the ending points from my "cone" to accurate x-y coordinates? I think the 2-d plot should look like a circle. Thank you
If you want a graph that looks more like a circle, consider plotting against the ending plane (which you describe) instead of x and y. If you transform your results to use coordinates like an isometric projection:
x' = x - (y + z)/2
y' = y - (x + z)/2
z' = z - (x + y)/2
e.g.:
      z'
      |
      |
     / \
    /   \
  x'     y'
then convert that to a 2d version that pleases you.
x'' = sqrt(3) * (y' - x')/2
y'' = z' - (x' + y')/2
If you want to understand how this was accomplished, think of looking directly down from the vector (k, k, k) toward the origin from infinitely far away, with 'up' pointed toward the z-axis. You would see (1,0,0), (0,1,0), and (0,0,1) forming an equilateral triangle, and simple geometry gives you the coordinates. You can even skip the first xyz -> x'y'z' step; it was meant to make this easier to understand, but it might make it seem more confusing.
You can also search for "isometric projection", or try this online calculator:
https://planetcalc.com/8316/
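As a rough Python sketch of this projection (the walk simulation here is just illustrative stand-in data, not part of the original question):

import numpy as np

rng = np.random.default_rng(0)

# 2000 walks of 1000 steps; each step adds 1 along a randomly chosen axis
axis_choices = rng.integers(0, 3, size=(2000, 1000))
ends = np.stack([(axis_choices == k).sum(axis=1) for k in range(3)], axis=1)
x, y, z = ends[:, 0], ends[:, 1], ends[:, 2]

# the isometric-style change of coordinates from above
xp = x - (y + z) / 2
yp = y - (x + z) / 2
zp = z - (x + y) / 2

# flatten to the 2d view; plotting (xpp, ypp) should look circular
xpp = np.sqrt(3) * (yp - xp) / 2
ypp = zp - (xp + yp) / 2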
I am given 3 values y0, y1, y2. They are supposed to be evenly spaced, say x0 = -0.5, x1 = 0.5, x2 = 1.5, and to be able to draw a spline through all of them, the derivatives at all points are given as dy/dx = 0.
Now the result of rendering two Catmull-Rom splines (done via a GLSL fragment shader, including a nonlinear transformation) looks pretty rigid. Where the curve bends, it does so smoothly, but the bending region is very small; zooming out makes the bends look too sharp.
I wanted to switch to TCB splines (a.k.a. Kochanek-Bartels splines), as those provide a tension parameter, so I hoped I could smooth the look. But I realized that no TCB parameters applied to a zero tangent will do any good.
Any ideas how I could get a smoother looking curve?
The tangent vector for a 2d parametric curve f(t)=(x(t), y(t)) is defined as f'(t)=(dx(t)/dt, dy(t)/dt). When you require your curve to have dy/dx = 0 at some points, it simply means the tangent vector at those points will go horizontally (i.e., dy/dt = 0). It does not necessarily mean the tangent vector itself is a zero vector. So, you should still be able to use TCB spline to do whatever you want to do.
Obviously nobody had a good answer, but as it's my job, I found a solution: The points are evenly spaced, and the idea is to make the transitions smoother. Since the tangents are zero at all given points, the strongest curvature y''(x) most likely occurs close to the points. This means we'd like to stretch these areas around the points.
Currently we use Catmull-Rom splines, sectioned between the points. That makes y(x) => y(t), with t(x) = x - x0.
This t(x) needs to be stretched around the 0 and 1 areas, so the cosine function jumped into my mind:
Replacing t(x) = x - x0 with t(x) = 0.5 * (1.0 - cos(PI * (x - x0))) did the job for me.
Short explanation:
cosine in the range [0,PI] runs smoothly from 1 to -1.
we want to run from 0 to 1, though
so flip it: 1-cos() -> now it runs from 0 to 2
halve that: 0.5*xxx -> now it runs from 0 to 1
Another problem was to find the correct tangents. Normally, when calculating such a spline using matrix-vector math, you simply differentiate your t-vector to get the tangents, so differentiating [t³ t² t 1] yields [3t² 2t 1 0]. But here, t is not simple. Writing u = x - x0, I found the right derivative vector:
| 0.375*PI*sin(PI*u)*(1-cos(PI*u))² |
| 0.500*PI*sin(PI*u)*(1-cos(PI*u)) |
| 0.500*PI*sin(PI*u) |
| 0 |
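As a minimal Python sketch of one such segment (the function name is my own; note that with zero end tangents, the tangent terms drop out and only the two position basis functions of the Hermite form remain):

import numpy as np

def warped_segment(y0, y1, x, x0):
    # Hermite segment between y0 and y1 with zero tangents, using the
    # cosine reparametrization t(x) = 0.5 * (1 - cos(PI * (x - x0))).
    u = x - x0                             # plain local parameter in [0, 1]
    t = 0.5 * (1.0 - np.cos(np.pi * u))    # warped parameter, also in [0, 1]
    h00 = 2*t**3 - 3*t**2 + 1              # basis function weighting y0
    h01 = -2*t**3 + 3*t**2                 # basis function weighting y1
    return h00 * y0 + h01 * y1

With nonzero tangents you would also need the derivative vector above to keep the tangent terms correct.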
I have an object at coordinates (0, 0, 0) and I want to translate this object in any direction a predetermined distance, for example 5.
How do I find the final coordinates? (without iteratively checking)
You can achieve exactly the same thing as T.Kiley said by just picking a totally random vector; it will have a random direction and a random magnitude. You can then normalize that vector and multiply it by 5 (or the desired magnitude).
I believe what you are asking is how do you generate a point that is 5 away from the origin (0,0,0). In general, you can use the parametric equations of a sphere to generate these points by first picking two random numbers in the range [0, 2pi] and [0, pi] respectively, then your point is
x = r * cos(theta) * sin(phi)
y = r * sin(theta) * sin(phi)
z = r * cos(phi)
Where theta is the first random number, phi is the second and r is your distance from the origin.
In Unity, it is even easier as you can use Random.onUnitSphere to give you a point exactly 1 away from the origin. Then just multiply by 5, e.g.:
finalPosition = Random.onUnitSphere * r
Again, where r in your example would be 5.
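Outside Unity, a small Python sketch of the parametric form above (the function name is mine; note the caveat in the comment, which goes beyond the original answer):

import math, random

def random_point_at_distance(r):
    # Point at distance r from the origin via the spherical parametrization.
    # Caveat: uniform theta/phi does NOT give a uniform distribution over the
    # sphere (points cluster at the poles), but every result is exactly r
    # from the origin, which is what the question asks for.
    theta = random.uniform(0.0, 2.0 * math.pi)
    phi = random.uniform(0.0, math.pi)
    x = r * math.cos(theta) * math.sin(phi)
    y = r * math.sin(theta) * math.sin(phi)
    z = r * math.cos(phi)
    return (x, y, z)

print(random_point_at_distance(5.0))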
This is not homework. I am asking to see whether the problem is classical (trivial) or non-trivial. It looks simple on the surface, and I hope it truly is a simple problem.
Have N points (N >= 2) with
coordinates Xn, Yn on the surface of
a 2D solid body.
The solid body has some small rotation (below Pi/180)
combined with small shifts (below 1% of the distance between any 2 of the N points), and possibly some small deformation too (<< 0.001%).
The same N points have new coordinates named XXn, YYn.
Calculate, with the best approximation, the location of the center of rotation as point C with coordinates XXX, YYY.
Thank you
If you know correspondence (i.e. you know which points are the same before and after the transformation), and you choose to allow scaling, then the problem is a set of linear equations. If you have 2 or more points then you can find a least-squares solution with little difficulty.
For initial points (xi,yi) and transformed points (xi',yi') you have equations of the form
xi' = a xi + b yi + c
yi' =-b xi + a yi + d
which you can rearrange into a linear system
A x = y
where
A = | x1 y1 1 0 |
| y1 -x1 0 1 |
| x2 y2 1 0 |
| y2 -x2 0 1 |
| ... |
| xn yn 1 0 |
| yn -xn 0 1 |
x = | a |
| b |
| c |
| d |
y = | x1' |
| y1' |
| x2' |
| y2' |
| ... |
| xn' |
| yn' |
the standard "least-squares" form of which is
A^T A x = A^T y
and has the solution
x = (A^T A)^-1 A^T y
with A^T as the transpose of A and A^-1 as the inverse of A. Normally you would use an SVD or QR decomposition to compute the solution, as they are more stable and less computationally intensive than forming the inverse explicitly.
Once you've found x (and so the four elements of the transformation a, b, c and d) then the various elements of the transformation are given by
scale = sqrt(a*a+b*b)
rotation = atan2(b,a)
translation = (c,d)/scale
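As a small numpy sketch of this solve (the function name and array shapes are my own framing; np.linalg.lstsq does the decomposition work internally):

import numpy as np

def fit_similarity(before, after):
    # Fit x' = a*x + b*y + c ; y' = -b*x + a*y + d in the least-squares
    # sense. before, after: (N, 2) arrays of corresponding points.
    n = len(before)
    A = np.zeros((2 * n, 4))
    A[0::2] = np.column_stack([before[:, 0],  before[:, 1], np.ones(n),  np.zeros(n)])
    A[1::2] = np.column_stack([before[:, 1], -before[:, 0], np.zeros(n), np.ones(n)])
    rhs = after.reshape(-1)                  # [x1', y1', x2', y2', ...]
    (a, b, c, d), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    scale = np.hypot(a, b)                   # sqrt(a*a + b*b)
    rotation = np.arctan2(b, a)
    translation = np.array([c, d]) / scale
    return scale, rotation, translation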
If you don't include scaling then the system is non-linear, and requires an iterative solution (but isn't too difficult to solve). If you do not know correspondence then the problem is substantially harder, for small transformations something like iterated closest point works, for large transformations it's a lot harder.
Edit: I forgot to include the centre of rotation. A rotation theta about an arbitrary point p is a sequence
translate(p) rotate(theta) translate(-p)
if you expand it all out as an affine transformation (essentially what we have above) then the translation terms come to
dx = px - cos(theta)*px + sin(theta)*py
dy = py - sin(theta)*px - cos(theta)*py
we know theta (rotation), dx (c) and dy (d) from the equations above. With a little bit of fiddling we can solve for px and py
px = 0.5*(dx - sin(theta)*dy/(1-cos(theta)))
py = 0.5*(dy + sin(theta)*dx/(1-cos(theta)))
You'll notice that the equations are undefined if theta is zero, because there is no centre of rotation when no rotation is performed.
I think I have all that correct, but I don't have time to double check it all right now.
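For reference, a numpy sketch of the pivot recovery above (this assumes the pure rotation-plus-translation case, so theta, dx, and dy come from a rigid fit):

import numpy as np

def centre_of_rotation(theta, dx, dy):
    # Recover the pivot (px, py) from the fitted rotation angle theta and the
    # translation terms dx (= c) and dy (= d), following the formulas above.
    # Undefined when theta == 0 (no rotation means no centre of rotation).
    s, c = np.sin(theta), np.cos(theta)
    px = 0.5 * (dx - s * dy / (1.0 - c))
    py = 0.5 * (dy + s * dx / (1.0 - c))
    return px, py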
Look up the "Kabsch algorithm". It is a general-purpose algorithm for computing the optimal rotation between N known point pairs. One application is denoising stereo photographs: you rotate a feature in picture A onto picture B, and if it does not land in the target position, the feature is noise.
http://en.wikipedia.org/wiki/Kabsch_algorithm
See "On calculating the finite centre of rotation for rigid planar motion" for a relatively simple solution. I say "relatively simple" because it still uses things like pseudo-inverses and SVD (singular value decomposition). There is also a Wikipedia article on the instant centre of rotation, and another paper: "Estimation of the finite center of rotation in planar movements".
If you can handle stiffer stuff, try "Least Squares Estimation of Transformation Parameters Between Two Point Patterns".
First of all, the problem is non-trivial.
A "simple" solition. It works best when the polygon resembles circle, and points are distributed evenly.
iterate through N
For both old and new dataset, find the 2 farthest points of the point N.
So now you have the triangle before and after the transformation. Use the clockwise direction from the center of each triangle to number its vertices as [0] (=the N-th point in the original dataset), [1], and [2] (the 2 farthest points).
Calculate center of rotation, and deformation (both x and y) of this triangle. If the deformation is more then your 0.001% - drop the data for this triangle, otherwise save it.
Calculate the average for the centers of rotation.
The right solution: define the function Err(Point BEFORE[N], Point AFTER[N], double TFORM[3][3]), where BEFORE holds the constant old data points, AFTER the constant new data points, and TFORM[3][3] is an affine transformation matrix; Err(...) returns a scalar error value, 0.0 when TFORM maps BEFORE exactly onto AFTER, and some value > 0.0 otherwise. Then use any numerical method you want to find the minimum of Err(TFORM): e.g. gradient search.
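A hedged sketch of that Err idea in Python using scipy.optimize (one option among many), here parametrized as a rigid (theta, tx, ty) transform rather than a full TFORM[3][3] matrix:

import numpy as np
from scipy.optimize import minimize

def err(params, before, after):
    # Scalar error for a guess params = (theta, tx, ty): 0.0 when the
    # transform maps `before` exactly onto `after`, larger otherwise.
    theta, tx, ty = params
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    moved = before @ R.T + np.array([tx, ty])
    return np.sum((moved - after) ** 2)

# before, after: (N, 2) arrays of corresponding points
# result = minimize(err, x0=[0.0, 0.0, 0.0], args=(before, after))
# theta, tx, ty = result.x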
Calculate polygon centers O1 and O2. Determine line formulae for O1 with (X0, Y0) and O2 with (XX0, YY0). Find intersection of lines to get C.
If I understand your problem correctly, it could be solved this way:
find the extremities (the farthest points, probably along several axes)
scale either one to match
their rotation should now be trivial (?)
Choose any 2 points on the body, P1 and P2, before and after rotation, and form the vectors between each point's before and after positions. Cross these vectors with a vector normal to the plane of rotation to get two new direction vectors. The center of rotation is equidistant from each point's before and after positions, so it lies on the perpendicular bisector of each chord: the intersection of the lines through the chord midpoints along these two new vectors is the center of the rotation.
findCenter(Point P1before, Point P1after, Point P2before, Point P2after, Vector Vn)
{
    // a point that does not move is itself the centre of rotation
    if P1after = P1before return P1after
    if P2after = P2before return P2after

    Vector V1 = P1after - P1before    // chord of P1
    Vector V2 = P2after - P2before    // chord of P2

    // Vn is the normal of the plane of rotation; it can be messy to create
    // for an arbitrary 3d orientation, but is simple if you know the
    // orientation, e.g. Vn = (0,0,1) for an object in the x,y plane
    Vector VL1 = V1 x Vn              // perpendicular to chord 1
    Vector VL2 = V2 x Vn              // perpendicular to chord 2

    // the centre is equidistant from each point's before and after
    // positions, so each line must pass through the chord midpoint
    Point M1 = (P1before + P1after) / 2
    Point M2 = (P2before + P2after) / 2

    return intersectLines(M1, VL1, M2, VL2, Vn)   // centre of rotation
}
intersectLines(Point P1, Vector V1, Point P2, Vector V2, Vector Vn)
{
    // intersect two coplanar lines given in point/direction form:
    // solve P1 + t*V1 = P2 + s*V2 by crossing with V2 and dotting with Vn
    t = (((P2 - P1) x V2) . Vn) / ((V1 x V2) . Vn)
    return P1 + t * V1
}
From this site: http://www.toymaker.info/Games/html/vertex_shaders.html
We have the following code snippet:
// transformations provided by the app, constant Uniform data
float4x4 matWorldViewProj : WORLDVIEWPROJECTION;

// the format of our vertex data
struct VS_OUTPUT
{
    float4 Pos : POSITION;
};

// Simple Vertex Shader - carry out transformation
VS_OUTPUT VS(float4 Pos : POSITION)
{
    VS_OUTPUT Out = (VS_OUTPUT)0;
    Out.Pos = mul(Pos, matWorldViewProj);
    return Out;
}
My question is: why does the struct VS_OUTPUT have a 4 dimensional vector as its position? Isn't position just x, y and z?
Because you need the w coordinate for the perspective calculation. After you output from the vertex shader, DirectX performs a perspective divide, dividing by w.
Essentially, if you have (32768, -32768, 32768, 65536) as your output vertex position, then after the w divide you get (0.5, -0.5, 0.5, 1). At this point the w can be discarded, as it is no longer needed. This information is then passed through the viewport matrix, which transforms it to usable 2D coordinates.
Edit: If you look at how a matrix multiplication is performed using the projection matrix you can see how the values get placed in the correct places.
Taking the projection matrix specified in D3DXMatrixPerspectiveLH
2*zn/w   0        0              0
0        2*zn/h   0              0
0        0        zf/(zf-zn)     1
0        0        zn*zf/(zn-zf)  0
And applying it to a random (x, y, z, 1) vertex input value (note that for a vertex position, w will always be 1), you get the following:
x' = ((2*zn/w) * x) + (0 * y) + (0 * z) + (0 * w)
y' = (0 * x) + ((2*zn/h) * y) + (0 * z) + (0 * w)
z' = (0 * x) + (0 * y) + ((zf/(zf-zn)) * z) + ((zn*zf/(zn-zf)) * w)
w' = (0 * x) + (0 * y) + (1 * z) + (0 * w)
Instantly you can see that w and z are different. The w coord now just contains the z coordinate passed to the projection matrix. z contains something far more complicated.
So... assume we have an input position of (2, 1, 5, 1), a zn (z-near) of 1, a zf (z-far) of 10, a w (width) of 1, and an h (height) of 1.
Passing these values through we get
x' = ((2 * 1)/1) * 2
y' = ((2 * 1)/1) * 1
z' = (10/(10-1)) * 5 + ((10 * 1)/(1-10)) * 1
w' = 5
expanding that we then get
x' = 4
y' = 2
z' = 4.444...
w' = 5
We then perform final perspective divide and we get
x'' = 0.8
y'' = 0.4
z'' = 0.888...
w'' = 1
And now we have our final coordinate position. This assumes that x and y range from -1 to 1 and z ranges from 0 to 1. As you can see, the vertex is on-screen.
As a bizarre bonus, you can see that if |x'|, |y'|, or |z'| is larger than |w'|, or z' is less than 0, the vertex is off-screen. This info is used for clipping the triangle to the screen.
Anyway, I think that's a pretty comprehensive answer :D
Edit2: Be warned, I am using row-major matrices. Column-major matrices are transposed.
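If it helps, here is a small numpy reproduction of the worked example (row-major, vector on the left, matching the convention above; the variable names are mine):

import numpy as np

zn, zf, w, h = 1.0, 10.0, 1.0, 1.0

# row-major D3DXMatrixPerspectiveLH, as given above
P = np.array([
    [2*zn/w, 0,      0,               0],
    [0,      2*zn/h, 0,               0],
    [0,      0,      zf/(zf - zn),    1],
    [0,      0,      zn*zf/(zn - zf), 0],
])

v = np.array([2.0, 1.0, 5.0, 1.0])  # input position (x, y, z, 1)
clip = v @ P                        # row vector times matrix
ndc = clip / clip[3]                # perspective divide by w'
print(clip)                         # [4.  2.  4.444...  5.]
print(ndc)                          # [0.8  0.4  0.888...  1.]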
Rotation is specified by a 3 x 3 matrix and translation by a vector. You can perform both transforms in a "single" operation by combining them into a single 4 x 3 matrix:
rx1 rx2 rx3 tx1
ry1 ry2 ry3 ty1
rz1 rz2 rz3 tz1
However, as this isn't square, there are various operations that can't be performed on it (inversion, for one). By adding an extra row (that does nothing):
0 0 0 1
all these operations become possible (if not easy).
As Goz explains in his answer, making the "1" a non-identity value turns the matrix into a perspective transformation.
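As a quick illustration of why that padding row matters (the rotation and translation values here are arbitrary; this uses the column-vector convention of the matrix layout above):

import numpy as np

# a made-up rotation about z and a made-up translation, just for illustration
theta = np.radians(30.0)
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c,  -s,  0.0],
              [s,   c,  0.0],
              [0.0, 0.0, 1.0]])
t = np.array([1.0, 2.0, 3.0])

M = np.eye(4)
M[:3, :3] = R   # the 4 x 3 part: rotation...
M[:3, 3] = t    # ...and translation; the extra row [0 0 0 1] makes M square

# now the combined transform can be inverted like any square matrix
M_inv = np.linalg.inv(M)
p = np.array([0.5, -1.0, 2.0, 1.0])
assert np.allclose(M_inv @ (M @ p), p)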
Clipping is an important part of this process, as it helps to visualize what happens to the geometry. The clipping stage essentially discards any point in a primitive that is outside of a 2-unit cube centered around the origin (OK, you have to reconstruct primitives that are partially clipped but that doesn't matter here).
It would be possible to construct a matrix that directly mapped your world space coordinates to such a cube, but gradual movement from the far plane to the near plane would be linear. That is to say that a move of one foot (towards the viewer) when one mile away from the viewer would cause the same increase in size as a move of one foot when several feet from the camera.
However, if we have another coordinate in our vector (w), we can divide the vector component-wise by w, and our primitives won't exhibit the above behavior, but we can still make them end up inside the 2-unit cube above.
For further explanations see http://www.opengl.org/resources/faq/technical/depthbuffer.htm#0060 and http://en.wikipedia.org/wiki/Transformation_matrix#Perspective_projection.
A simple answer would be to say that if you don't tell the pipeline what w is, then you haven't given it enough information about your projection. This can be verified directly, without understanding what the pipeline does with it...
As you probably know, the 4x4 matrix can be split into parts based on what each part does. The 3x3 matrix at the top left is altered when you do rotation or scale operations. The fourth column is altered when you do a translation. If you ever inspect a perspective matrix, you'll see that it alters the bottom row of the matrix. If you then look at how a matrix-vector multiplication is done, you see that the bottom row of the matrix ONLY affects the resultant w component of the vector. So if you don't tell the pipeline about w, it won't have all your information.