I've been trying to figure out the 2D rotation value, as seen from the orthographic "top" view, of a 3D object with XYZ rotation values in Maya. Maybe another way to ask this: I want to figure out the 2D rotation of a 3D object's direction.
Here is a simple image to illustrate my question:
I've tried methods like getting the twist value of an object using quaternions (script pasted below), per this post I found: Component of a quaternion rotation around an axis.
If I set the quaternion's X and Z values to zero, this method works halfway: I get the correct 2D rotation even when the object is rotated in both the X and Y axes, but when it is rotated in all three axes, the result is wrong.
I am pretty new to all the quaternion and vector calculations, so I've been having difficulty trying to wrap my head around it ;)
import math
import maya.api.OpenMaya as om2

def quaternionTwist(q, axisVec):
    axisVec.normalize()
    # Get the plane the axisVec is a normal of
    orthonormal1, orthonormal2 = findOrthonormals(axisVec)
    transformed = rotateByQuaternion(orthonormal1, q)
    # Project transformed vector onto plane
    flattened = transformed - ((transformed * axisVec) * axisVec)
    flattened.normalize()
    # Get angle between original vector and projected transform to get angle around normal
    angle = math.acos(orthonormal1 * flattened)
    return math.degrees(angle)

q = getMQuaternion(obj)
# Zero out X and Z since we are only interested in twist around the Y axis.
q.x = 0
q.z = 0
up = om2.MVector.kYaxisVector
angle = quaternionTwist(q, up)
Can you get the (x,y,z) coordinates of the rotated vector? Once you have them use the (x,y) values to find the angle with atan2(y,x).
I'm not familiar with the framework you're using, but if it does what it seems, I think you're almost there. Just don't zero out the X and Z components of the quaternion before calling quaternionTwist().
The quaternions q1 = (x,y,z,w) and q2 = (0, y, 0, w) don't represent the same rotation about the y-axis, especially since q2 written this way becomes unnormalized, so what you're really comparing is (x,y,z,w) with (0, y/|q2|, 0, w/|q2|) where |q2| = sqrt(y^2 + w^2).
Here is working code for Maya based on John Alexiou's answer:
matrix = dagPath.inclusiveMatrix() #OpenMaya dagPath for an object
axis = om2.MVector.kZaxisVector
v = (axis * matrix).normal()
angle = math.atan2(v.x, v.z) #2D angle on XZ plane
Related
I am trying to programmatically visualise a rotated vector, but I want to clarify my output result.
If a vector p = i = [1,0,0] is rotated by 90 degrees about the x-axis, then the quaternion q is: q = cos(45) + [1,0,0]*sin(45) = 0.707 + 0.707*i.
pn = q * p * q^-1
Now calculate pn: (0.707 + 0.707*i)(i)(0.707 - 0.707*i) = i.
So the rotated vector is pn = [1,0,0], which means p = pn.
Is p = pn correct? If it is, can anyone explain it? Or is this a special property of quaternions?
In the example you provided, you basically rotate a vector around itself (i.e. the axis of rotation is equal to the rotated vector, in this case [1,0,0]). As said in the comments, rotating a vector around itself, leaves it intact, regardless of the rotation angle.
Try your example where the rotated vector is along the y-axis [0,1,0], and the rotation axis is [1,0,0]. Maybe this will help you visualize some basic rotations.
Also, be mindful that a rotation of vector v using a unit quaternion q is given by:
Imaginary{q * [0, v_x, v_y, v_z] * conjugate(q)}
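For example, here is a small numpy sketch of that formula (the helper names are mine, not from any particular library); it confirms that [1,0,0] is unchanged by a 90-degree rotation about the x-axis, while [0,1,0] is carried to [0,0,1]:
import numpy as np

def quat_mul(a, b):
    # Hamilton product; quaternions stored as (w, x, y, z)
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def rotate(v, q):
    # Imaginary part of q * [0, v] * conjugate(q)
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    p = np.concatenate(([0.0], v))
    return quat_mul(quat_mul(q, p), q_conj)[1:]

# 90 degrees about the x-axis: q = cos(45) + sin(45)*i
q = np.array([np.cos(np.pi / 4), np.sin(np.pi / 4), 0.0, 0.0])
print(rotate(np.array([1.0, 0.0, 0.0]), q))  # stays [1, 0, 0]
print(rotate(np.array([0.0, 1.0, 0.0]), q))  # becomes approximately [0, 0, 1]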
I've been scratching my head for some time now over how to do this.
I have two defined vectors in 3D space. Say vector X at (0,0,0) and vector Y at (3,3,3). I will get a random point on the line between those two vectors, and around this point I want to form a circle (some number of points) perpendicular to the line between X and Y, at a given radius.
Hopefully it's clear what I am looking for. I have looked through many similar questions, but just can't figure it out based on those. Thanks for any help.
Edit:
(Couldn't put everything into a comment, so adding it here)
@WillyWonka
Hi, thanks for your reply. I had some moderate success implementing your solution, but ran into some trouble with it. It works most of the time, except for specific scenarios when the Y point is at positions like (20,20,20). If it sits directly on any axis it's fine.
But as soon as it gets into a diagonal, the distance between the perpendicular points and the origin gets smaller for some reason, and at very specific diagonal positions it kind of flips the perpendicular points.
IMAGE
Here is the code for you to look at
public Vector3 X = new Vector3(0,0,0);
public Vector3 Y = new Vector3(0,0,20);

Vector3 A;
Vector3 B;
List<Vector3> points = new List<Vector3>();

void FindPerpendicular(Vector3 x, Vector3 y)
{
    Vector3 direction = (x - y);
    Vector3 normalized = (x - y).normalized;

    float dotProduct1 = Vector3.Dot(normalized, Vector3.left);
    float dotProduct2 = Vector3.Dot(normalized, Vector3.forward);
    float dotProduct3 = Vector3.Dot(normalized, Vector3.up);

    Vector3 dotVector = ((1.0f - Mathf.Abs(dotProduct1)) * Vector3.right) +
                        ((1.0f - Mathf.Abs(dotProduct2)) * Vector3.forward) +
                        ((1.0f - Mathf.Abs(dotProduct3)) * Vector3.up);

    A = Vector3.Cross(normalized, dotVector.normalized);
    B = Vector3.Cross(A, normalized);
}
What you want to do first is to find the two orthogonal basis vectors of the plane perpendicular to the line XY, passing through the point you choose.
You first need to find a vector which is perpendicular to XY. To do this:
Normalize the vector XY first
Dot XY with the X-axis
If the absolute value of this is close to 1 (for numerical stability, let's say > 0.9), then XY must be nearly parallel/anti-parallel to the X-axis, so we choose the Y-axis instead
If not, then we choose the X-axis
For whichever chosen axis, cross it with XY to get one of the basis vectors; cross this with XY again to get the second vector.
Normalize them (not strictly necessary but very useful)
You now have two basis vectors to calculate your circle coordinates, call them A and B. Call the point you chose P.
Then any point on the circle can be parametrically calculated by
Q(r, t) = P + r * (A * cos(t) + B * sin(t))
where t is an angle (between 0 and 2π), and r is the circle's radius.
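As a concrete illustration, here is a minimal numpy sketch of the whole recipe above (the function name and parameters are mine): pick a reference axis that is not nearly parallel to XY, build the basis vectors A and B, then sample points on the circle.
import numpy as np

def circle_points(X, Y, P, radius, n=16):
    # Points on a circle of the given radius around P, in the plane perpendicular to the line XY
    d = Y - X
    d = d / np.linalg.norm(d)
    # Pick a reference axis that is not (nearly) parallel to XY
    ref = np.array([1.0, 0.0, 0.0])
    if abs(np.dot(d, ref)) > 0.9:
        ref = np.array([0.0, 1.0, 0.0])
    A = np.cross(d, ref)
    A /= np.linalg.norm(A)
    B = np.cross(A, d)  # already unit length since A is perpendicular to d
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return P + radius * (np.outer(np.cos(t), A) + np.outer(np.sin(t), B))

pts = circle_points(np.zeros(3), np.array([3.0, 3.0, 3.0]), np.array([1.5, 1.5, 1.5]), radius=2.0)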
How do you find the three Euler angles between two 3D vectors?
When I have one vector and I want to get its rotation, this link can usually be used: Calculate rotations to look at a 3D point?
But how do I do it when calculating the rotation of one vector relative to another?
As others have already pointed out, your question should be revised. Let's call your vectors a and b. I assume that length(a)==length(b) > 0 otherwise I cannot answer the question.
Calculate the cross product of your vectors: v = a x b; v gives the axis of rotation. By computing the dot product, you can get the cosine of the angle you should rotate by, with cos(angle) = dot(a,b)/(length(a)*length(b)), and with acos you can uniquely determine the angle (@Archie, thanks for pointing out my earlier mistake). At this point you have the axis-angle representation of your rotation.
The remaining work is to convert this representation to the representation you are looking for: Euler angles. Conversion Axis-Angle to Euler is a way to do it, as you have found it. You have to handle the degenerate case when v = [ 0, 0, 0], that is, when the angle is either 0 or 180 degrees.
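For what it's worth, here is a short numpy/scipy sketch of that recipe (the function name is mine); it normalizes both inputs so only direction matters, and handles the degenerate parallel/anti-parallel cases.
import numpy as np
from scipy.spatial.transform import Rotation

def euler_between(a, b, seq="xyz"):
    a = np.asarray(a, dtype=float) / np.linalg.norm(a)
    b = np.asarray(b, dtype=float) / np.linalg.norm(b)
    v = np.cross(a, b)                                   # rotation axis (unnormalized)
    angle = np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))  # rotation angle
    if np.isclose(np.linalg.norm(v), 0.0):               # degenerate: a and b are (anti-)parallel
        if np.isclose(angle, 0.0):
            return np.zeros(3)                           # same direction: no rotation needed
        v = np.cross(a, [1.0, 0.0, 0.0])                 # 180 degrees: any axis perpendicular to a works
        if np.isclose(np.linalg.norm(v), 0.0):
            v = np.cross(a, [0.0, 1.0, 0.0])
    rotvec = v / np.linalg.norm(v) * angle               # axis-angle packed as a rotation vector
    return Rotation.from_rotvec(rotvec).as_euler(seq, degrees=True)

print(euler_between([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]))  # [0, 0, 90]: a quarter turn about z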
I personally don't like Euler angles, they screw up the stability of your app and they are not appropriate for interpolation, see also
Strange behavior with android orientation sensor
Interpolating between rotation matrices
First, you would have to subtract vector one from vector two in order to get vector two relative to vector one. With these values you can calculate the Euler angles.
To understand the calculation from vector to Euler intuitively, let's imagine a sphere with a radius of 1 and its origin at its center. A vector represents a point on its surface in 3D coordinates. This point can also be defined by spherical 2D coordinates: latitude and longitude, i.e. pitch and yaw respectively.
In the order "roll <- pitch <- yaw", the calculation can be done as follows:
To calculate the yaw you take the arctangent of the two planar components (x and z), taking the quadrant into account.
yaw = atan2(x, z) *180.0/PI;
Pitch is much the same, but as its plane is rotated along with the yaw, the 'adjacent' lies on two axes. To find its length we have to use the Pythagorean theorem.
float padj = sqrt(pow(x, 2) + pow(z, 2));
pitch = atan2(padj, y) *180.0/PI;
Notes:
Roll cannot be calculated, as a vector has no rotation around its own axis. I usually set it to 0.
The length of your vector is lost and cannot be recovered.
In Euler the order of your axes matters, mix them up and you will get different results.
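Putting the two formulas above together, a small self-contained Python sketch (the function name is mine):
import math

def yaw_pitch_from_vector(x, y, z):
    # Yaw around the vertical axis, from the two planar components
    yaw = math.degrees(math.atan2(x, z))
    # Pitch against the horizontal length of the vector (Pythagorean theorem)
    padj = math.sqrt(x * x + z * z)
    pitch = math.degrees(math.atan2(padj, y))
    return yaw, pitch  # roll is undefined for a single vector

print(yaw_pitch_from_vector(1.0, 0.0, 1.0))  # (45.0, 90.0)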
It took me a lot of time to find this answer, so I would like to share it with you now.
First, you need to find the rotation matrix, and then with scipy you can easily find the angles you want.
There is no short way to do this.
So let's first declare some functions...
import numpy as np
from scipy.spatial.transform import Rotation

def normalize(v):
    return v / np.linalg.norm(v)

def find_additional_vertical_vector(vector):
    ez = np.array([0, 0, 1])
    look_at_vector = normalize(vector)
    up_vector = normalize(ez - np.dot(look_at_vector, ez) * look_at_vector)
    return up_vector

def calc_rotation_matrix(v1_start, v2_start, v1_target, v2_target):
    """
    calculating M the rotation matrix from base U to base V
    M @ U = V
    M = V @ U^-1
    """

    def get_base_matrices():
        u1_start = normalize(v1_start)
        u2_start = normalize(v2_start)
        u3_start = normalize(np.cross(u1_start, u2_start))

        u1_target = normalize(v1_target)
        u2_target = normalize(v2_target)
        u3_target = normalize(np.cross(u1_target, u2_target))

        U = np.hstack([u1_start.reshape(3, 1), u2_start.reshape(3, 1), u3_start.reshape(3, 1)])
        V = np.hstack([u1_target.reshape(3, 1), u2_target.reshape(3, 1), u3_target.reshape(3, 1)])
        return U, V

    def calc_base_transition_matrix():
        return np.dot(V, np.linalg.inv(U))

    if not np.isclose(np.dot(v1_target, v2_target), 0, atol=1e-03):
        raise ValueError("v1_target and v2_target must be vertical")

    U, V = get_base_matrices()
    return calc_base_transition_matrix()

def get_euler_rotation_angles(start_look_at_vector, target_look_at_vector, start_up_vector=None, target_up_vector=None):
    if start_up_vector is None:
        start_up_vector = find_additional_vertical_vector(start_look_at_vector)
    if target_up_vector is None:
        target_up_vector = find_additional_vertical_vector(target_look_at_vector)

    rot_mat = calc_rotation_matrix(start_look_at_vector, start_up_vector, target_look_at_vector, target_up_vector)
    is_equal = np.allclose(rot_mat @ start_look_at_vector, target_look_at_vector, atol=1e-03)
    print(f"rot_mat @ start_look_at_vector1 == target_look_at_vector1 is {is_equal}")

    rotation = Rotation.from_matrix(rot_mat)
    return rotation.as_euler(seq="xyz", degrees=True)
Finding the XYZ Euler rotation angles from one vector to another might give you more than one answer.
Assuming what you are rotating is the look_at_vector of some kind of shape, and you want this shape to stay upright (not upside down) and still look at the target_look_at_vector:
if __name__ == "__main__":
    # Example 1
    start_look_at_vector = normalize(np.random.random(3))
    target_look_at_vector = normalize(np.array([-0.70710688829422, 0.4156269133090973, -0.5720613598823547]))

    phi, theta, psi = get_euler_rotation_angles(start_look_at_vector, target_look_at_vector)
    print(f"phi_x_rotation={phi}, theta_y_rotation={theta}, psi_z_rotation={psi}")
Now, if you want your shape to have a specific roll rotation, my code also supports that!
You just need to give the target_up_vector as a parameter as well.
Just make sure it is perpendicular to the target_look_at_vector that you are giving.
if __name__ == "__main__":
    # Example 2
    # look and up must be vertical
    start_look_at_vector = normalize(np.array([1, 2, 3]))
    start_up_vector = normalize(np.array([1, -3, 2]))
    target_look_at_vector = np.array([0.19283590755300162, 0.6597510192626469, -0.7263217228739983])
    target_up_vector = np.array([-0.13225754322703182, 0.7509361508721898, 0.6469955018014842])

    phi, theta, psi = get_euler_rotation_angles(
        start_look_at_vector, target_look_at_vector, start_up_vector, target_up_vector
    )
    print(f"phi_x_rotation={phi}, theta_y_rotation={theta}, psi_z_rotation={psi}")
Getting a rotation matrix in MATLAB is very easy, e.g.:
A = [1.353553385, 0.200000003, 0.35]
B = [1 2 3]
[q] = vrrotvec(A,B)
Rot_mat = vrrotvec2mat(q)
I have a unit vector in 3D space whose direction I wish to perturb by some angle within the range 0 to theta, with the position of the vector remaining the same. What is a way I can accomplish this?
Thanks.
EDIT: After thinking about the way I posed the question, it seems to be a bit too general. I'll attempt to make it more specific: Assume that the vector originates from the surface of an object (i.e. sphere, circle, box, line, cylinder, cone). If there are different methods to finding the new direction for each of those objects, then providing help for the sphere one is fine.
EDIT 2: I was going to type this in a comment but it was too much.
So I have orig_vector, which I wish to perturb in some direction between 0 and theta. The theta can be thought of as forming a cone around my vector (with theta being the angle between the center and one side of the cone) and I wish to generate a new vector within that cone. I can generate a point lying on the plane that is tangent to my vector and thus create a unit vector in the direction of the point, call it rand_vector. At this point, orig_vector and rand_vector are two unit vectors perpendicular to each other.
I generate my first angle, angle1 between 0 and 2pi and I rotate rand_vector around orig_vector by angle1, forming rand_vector2. I looked up a resource online and it said that the second angle, angle2 should be between 0 and sin(theta) (where theta is the original "cone" angle). Then I rotate rand_vector2 by acos(angle2) around the vector defined by the cross product between rand_vector2 and orig_vector.
When I do this, I don't obtain the desired results. That is, when theta = 0, I still get perturbed vectors, whereas I expect to get orig_vector back. If anyone can explain the reason for the angles and why they are the way they are, I would greatly appreciate it.
EDIT 3: This is the final edit, I promise =). So I fixed my bug and everything that I described above works (it was an implementation bug, not a theory bug). However, my question about the angles still stands: why is angle2 = sin(theta)*rand(), and why is perturbed_vector = rand_vector2.Rotate(rand_vector2.Cross(orig_vector), acos(angle2))? Thanks so much!
Here's the algorithm that I've used for this kind of problem before. It was described in Ray Tracing News.
1) Make a third vector perpendicular to the other two to build an orthogonal basis:
cross_vector = unit( cross( orig_vector, rand_vector ) )
2) Pick two uniform random numbers in [0,1]:
s = rand( 0, 1 )
r = rand( 0, 1 )
3) Let h be the cosine of the cone's angle:
h = cos( theta )
4) Modify uniform sampling on a sphere to pick a random vector in the cone around +Z:
phi = 2 * pi * s
z = h + ( 1 - h ) * r
sinT = sqrt( 1 - z * z )
x = cos( phi ) * sinT
y = sin( phi ) * sinT
5) Change of basis to reorient it around the original vector:
perturbed = rand_vector * x + cross_vector * y + orig_vector * z
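Taken together, steps 1-5 might look like this in Python (a numpy sketch, names are mine); it assumes orig_vector and rand_vector are unit length and perpendicular to each other.
import numpy as np

def sample_in_cone(orig_vector, rand_vector, theta):
    # Random unit vector at most theta radians away from orig_vector
    cross_vector = np.cross(orig_vector, rand_vector)
    cross_vector /= np.linalg.norm(cross_vector)
    s, r = np.random.random(2)        # two uniform random numbers in [0, 1]
    h = np.cos(theta)                 # cosine of the cone's angle
    phi = 2.0 * np.pi * s
    z = h + (1.0 - h) * r             # uniform in [cos(theta), 1]
    sin_t = np.sqrt(1.0 - z * z)
    x = np.cos(phi) * sin_t
    y = np.sin(phi) * sin_t
    # Change of basis back into world coordinates
    return rand_vector * x + cross_vector * y + orig_vector * z

v = sample_in_cone(np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0]), np.radians(10.0))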
If you have another vector to represent an axis of rotation, there are libraries that will take the axis and the angle and give you a rotation matrix, which can then be multiplied by your starting vector to get the result you want.
However, the axis of rotation should be at right angles to your starting vector, to get the amount of rotation you expect. If the axis of rotation does not lie in the plane perpendicular to your vector, the result will be somewhat different than theta.
That being said, if you already have a vector at right angles to the one you want to perturb, and you're not picky about the direction of the perturbation, you can just as easily take a linear combination of your starting vector with the perpendicular one, adjust for magnitude as needed.
I.e., if P and Q are vectors of identical magnitude and perpendicular to each other, and you want to rotate P in the direction of Q, then the vector R given by R = P*cos(theta) + Q*sin(theta) will satisfy the constraints you've given. If P and Q have differing magnitudes, then there will be some scaling involved.
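A quick numpy check of that formula with made-up values:
import numpy as np

theta = np.deg2rad(30.0)
P = np.array([1.0, 0.0, 0.0])
Q = np.array([0.0, 1.0, 0.0])   # same magnitude as P, perpendicular to it

R_vec = P * np.cos(theta) + Q * np.sin(theta)
cos_between = np.dot(P, R_vec) / (np.linalg.norm(P) * np.linalg.norm(R_vec))
print(np.degrees(np.arccos(cos_between)))  # ~30.0: P rotated by theta towards Q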
You may be interested in 3D-coordinate transformations to change your vector angle.
I don't know how many directions you want to change your angle in, but transforming your Cartesian coordinates to spherical coordinates should allow you to make your angle change as you like.
Actually, it is very easy to do that. All you have to do is multiply your vector by the correct rotation matrix. The resulting vector will be your rotated vector. Now, how do you obtain such rotation matrix? That depends on the 3d framework/engine you are using. Any 3d framework must provide functions for obtaining rotation matrices, normally as static methods of the Matrix class.
Good luck.
Like said in other comments you can rotate your vector using a rotation matrix.
The rotation matrix has two angles you rotate your vector around. You can pick them with a random number generator, but just picking both from a flat (uniform) generator is not correct. To ensure that your rotated vector is distributed uniformly, you have to pick one random angle φ from a flat generator and the other one from a generator flat in cos θ; this ensures that your solid angle element d cos(θ) dφ is defined correctly (φ and θ defined as usual for spherical coordinates).
Example: picking a random direction with no restriction on range; random() generates values flat in [0,1]:
angle1 = acos(2*random() - 1)
angle2 = 2*pi*random()
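To turn those two angles into an actual unit direction vector, here is a small numpy sketch (variable names are mine); drawing cos θ uniformly in [-1, 1] covers the full sphere:
import numpy as np

# phi flat in [0, 2*pi), cos(theta) flat in [-1, 1] -> uniform direction on the sphere
cos_theta = 2.0 * np.random.random() - 1.0
sin_theta = np.sqrt(1.0 - cos_theta * cos_theta)
phi = 2.0 * np.pi * np.random.random()

direction = np.array([
    sin_theta * np.cos(phi),
    sin_theta * np.sin(phi),
    cos_theta,
])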
My code in Unity - tested and working:
/*
 * This is used to perturb the given vector 'direction' by changing it by an angle of not more than 'angle'
 * from the base direction. Used to provide errors for player playing algorithms.
 */
Vector3 perturbDirection( Vector3 direction, float angle ) {
    // division by zero protection
    if( Mathf.Approximately( direction.z, 0f )) {
        direction.z = 0.0001f;
    }
    // 1. get some vector orthogonal to direction (solve the dot product of direction and orthogonal = 0, assume x = 1, y = 1, then z = as below)
    Vector3 orthogonal = new Vector3( 1f, 1f, - ( direction.x + direction.y ) / direction.z );
    // 2. get a random vector from the circle in the plane orthogonal to the direction vector; use the full (-180, 180) range to cover the whole cone
    float orthoAngle = UnityEngine.Random.Range( -180f, 180f );
    Quaternion rotateTowardsDirection = Quaternion.AngleAxis( orthoAngle, direction );
    Vector3 randomOrtho = rotateTowardsDirection * orthogonal;
    // 3. rotate direction towards the random orthogonal vector by an angle from our available range
    float perturbAngle = UnityEngine.Random.Range( 0f, angle ); // range (0, angle); full cone coverage thanks to the previous (-180, 180) range
    Quaternion rotateDirection = Quaternion.AngleAxis( perturbAngle, randomOrtho );
    Vector3 perturbedDirection = rotateDirection * direction;
    return perturbedDirection;
}
My problem:
How can I take two 3D points and lock them to a single axis? For instance, so that both their z-axes are 0.
What I'm trying to do:
I have a set of 3D coordinates in a scene, representing a box with a pyramid on it. I also have a camera, represented by another 3D coordinate. I subtract the camera coordinate from the scene coordinate and normalize it, returning a vector that points to the camera. I then do ray-plane intersection with a plane that is behind the camera point.
O + tD
Where O (origin) is the camera position, D is the direction from the scene point to the camera, and t is the parameter value at which the ray intersects the plane.
If that doesn't make sense, here's a crude drawing:
I've searched far and wide, and as far as I can tell, this is called using a "pinhole camera".
The problem is not my camera rotation, I've eliminated that. The trouble is in translating the intersection point to barycentric (uv) coordinates.
The translation on the x-axis looks like this:
uaxis.x = -a_PlaneNormal.y;
uaxis.y = a_PlaneNormal.x;
uaxis.z = a_PlaneNormal.z;
point vaxis = uaxis.CopyCrossProduct(a_PlaneNormal);
point2d.x = intersection.DotProduct(uaxis);
point2d.y = intersection.DotProduct(vaxis);
return point2d;
While the translation on the z-axis looks like this:
uaxis.x = -a_PlaneNormal.z;
uaxis.y = a_PlaneNormal.y;
uaxis.z = a_PlaneNormal.x;
point vaxis = uaxis.CopyCrossProduct(a_PlaneNormal);
point2d.x = intersection.DotProduct(uaxis);
point2d.y = intersection.DotProduct(vaxis);
return point2d;
My question is: how can I turn a ray-plane intersection point into barycentric coordinates on both the x and the z axes?
The usual formula for points (p) on a line, starting at (p0) with vector direction (v) is:
p = p0 + t*v
The criterion for a point (p) on a plane containing (p1) and with normal (n) is:
(p - p1).n = 0
So, plug&chug:
(p0 + t*v - p1).n = (p0-p1).n + t*(v.n) = 0
-> t = (p1-p0).n / v.n
-> p = p0 + ((p1-p0).n / v.n)*v
To check:
(p - p1).n = (p0-p1).n + ((p1-p0).n / v.n)*(v.n)
= (p0-p1).n + (p1-p0).n
= 0
If you want to fix the Z coordinate at a particular value, you need to choose a normal along the Z axis (which will define a plane parallel to XY plane).
Then, you have:
n = (0,0,1)
-> p = p0 + ((p1.z-p0.z)/v.z) * v
-> x and y offsets from p0 = ((p1.z-p0.z)/v.z) * (v.x,v.y)
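As an illustration, here is a short Python sketch of that special case (the names are mine): intersect a ray from the camera with the plane z = z_plane.
import numpy as np

def intersect_z_plane(p0, v, z_plane):
    # Point where the ray p = p0 + t*v meets the plane z = z_plane (normal n = (0, 0, 1))
    if np.isclose(v[2], 0.0):
        raise ValueError("Ray is parallel to the plane")
    t = (z_plane - p0[2]) / v[2]
    return p0 + t * v

p0 = np.array([0.0, 0.0, 5.0])        # camera position
v = np.array([0.2, 0.1, -1.0])        # ray direction
print(intersect_z_plane(p0, v, 0.0))  # [1.0, 0.5, 0.0]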
Finally, if you're trying to build a virtual "camera" for 3D computer graphics, the standard way to do this kind of thing is homogeneous coordinates. Ultimately, working with homogeneous coordinates is simpler (and usually faster) than the kind of ad hoc 3D vector algebra I have written above.