I'm currently working on a game project and need to render a point in front of the current player's vision; the game is written in a custom C++ engine. I have the current position (x, y, z) and the current rotation (pitch, yaw, roll). I need to extend the point forward along the known angle at a set distance.
Edit:
What I used as a solution (it's slightly off, but that's OK for me):
Vec3 LocalPos = {0,0,0};            // current position
Vec3 CurrentLocalAngle = {0,0,0};   // current rotation (pitch, yaw, roll) in degrees
float len = 0.1f;                   // distance to extend forward

// convert degrees to radians
float pitch = CurrentLocalAngle.x * (M_PI / 180);
float yaw = CurrentLocalAngle.y * (M_PI / 180);

float sp = sinf(pitch);
float cp = cosf(pitch);
float sy = sinf(yaw);
float cy = cosf(yaw);

// unit forward vector built from pitch/yaw
Vec3 dir = { cp * cy, cp * sy, -sp };

// step forward along the view direction
LocalPos = { LocalPos.x + dir.x * len, LocalPos.y + dir.y * len, LocalPos.z + dir.z * len };
You can get the forward vector of the player from matrix column 3 if the matrix is column-major. Normalize it, multiply by the distance you want, then add the result to the player position and you will get the point you need.
Convert the angle to a direction vector, or just get the "forward vector" from the player if it's available in the engine you're using (it should be the same thing).
Direction vectors are normalized by nature (they have length 1), so you can just multiply one by the desired distance to get the offset from the reference point (the player's camera, I presume), and then add that offset to the reference point to get the point in the world where this point belongs.
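To make that concrete, here is a minimal Python sketch of the same arithmetic. The pitch/yaw-to-forward convention mirrors the C++ snippet above and is an assumption; your engine's axis conventions (and which matrix column holds the forward vector) may differ:
import math

def forward_from_angles(pitch_deg, yaw_deg):
    # same convention as the C++ snippet above: z up, angles in degrees
    p, y = math.radians(pitch_deg), math.radians(yaw_deg)
    return (math.cos(p) * math.cos(y), math.cos(p) * math.sin(y), -math.sin(p))

def point_in_front(pos, forward, distance):
    # normalize the forward vector, then step `distance` units along it
    length = math.sqrt(sum(c * c for c in forward))
    return tuple(pos[i] + forward[i] / length * distance for i in range(3))

# example: 2 units straight ahead of a player at the origin looking along +x
print(point_in_front((0.0, 0.0, 0.0), forward_from_angles(0.0, 0.0), 2.0))  # (2.0, 0.0, 0.0)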
I have two routes as polylines, given as lists of coordinates such as:
[13.072624,52.333508,13.763972,52.679616]
[52.679616,13.763972,52.333508,13.072624]
I need to compare the direction of the two polylines. To do so, I thought to get the angle between them and return true (going in the same direction) if it is smaller than 45°.
example of route comparison
How can we do so?
What you need to compare against each other are the unit vectors of the line segments. I don't know what language you are working in or how the lists are organized, but the following pseudocode will get you the bearings.
// create end points of the line segments from lists
point0(x0, y0);
point1(x1, y1);
point2(x2, y2);
point3(x3, y3);
// create direction vectors from points
directionA = point1 - point0;
directionB = point3 - point2;
// normalize the direction vectors to have a length of 1
// assuming they are not degenerate
directionA.unitize();
directionB.unitize();
// get the dot product of the two direction vectors
dot = directionA.dot(directionB);
// Because the direction vectors are both unit length
// the dot product will return a value between -1.0 and 1.0
// The arc cosine will give you your angle between 0 and 180
angle = arc_cosine_degree(dot)
Then compare the angle to whatever tolerance you choose for determining "same" direction. Wikipedia has good explanations of each of the concepts I use in the pseudocode (i.e. unit vector, dot product, arc cosine).
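For example, here is that pseudocode in Python, assuming each flat list is ordered [x0, y0, x1, y1] (swap the indices if your coordinates are stored the other way around):
import math

def same_direction(seg_a, seg_b, tolerance_deg=45.0):
    # seg_a and seg_b are flat lists: [x0, y0, x1, y1]
    ax, ay = seg_a[2] - seg_a[0], seg_a[3] - seg_a[1]
    bx, by = seg_b[2] - seg_b[0], seg_b[3] - seg_b[1]
    # normalize both direction vectors to unit length
    la, lb = math.hypot(ax, ay), math.hypot(bx, by)
    ax, ay, bx, by = ax / la, ay / la, bx / lb, by / lb
    # dot product of unit vectors is cos(angle); clamp against rounding error
    dot = max(-1.0, min(1.0, ax * bx + ay * by))
    return math.degrees(math.acos(dot)) < tolerance_deg

print(same_direction([13.072624, 52.333508, 13.763972, 52.679616],
                     [52.679616, 13.763972, 52.333508, 13.072624]))  # False under this ordering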
Good luck! GCB
In JavaFX the rotateProperty of a Node provides me with its rotation in degrees relative to its parent. Is there a way to get the absolute rotation in degrees, either relative to the Scene or to the Screen?
For example, is there a way to calculate the angle from the Transform object of a Node that I can get from getLocalToSceneTransform()?
So, I did the math myself; for my case I can get the rotation in radians either via:
double xx = myNode.getLocalToSceneTransform().getMxx();
double xy = myNode.getLocalToSceneTransform().getMxy();
double angle = Math.atan2(-xy, xx);
or
double yx = myNode.getLocalToSceneTransform().getMyx();
double yy = myNode.getLocalToSceneTransform().getMyy();
double angle = Math.atan2(yx, yy);
In both cases the result can be converted to degrees in the range [0, 360):
angle = Math.toDegrees(angle);
angle = angle < 0 ? angle + 360 : angle;
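For anyone wondering why this works: a pure 2D rotation by θ stores cos θ in Mxx/Myy and ±sin θ in Mxy/Myx, so atan2 recovers θ. A quick standalone Python sanity check of that identity (nothing JavaFX-specific, just the matrix entries; the test angle is arbitrary):
import math

theta = math.radians(200)                     # arbitrary test angle
mxx, mxy = math.cos(theta), -math.sin(theta)  # first row of the 2x2 rotation block
myx, myy = math.sin(theta), math.cos(theta)   # second row

angle = math.degrees(math.atan2(-mxy, mxx))   # equivalently: math.atan2(myx, myy)
angle = angle + 360 if angle < 0 else angle
print(angle)                                  # 200.0 (up to floating-point error)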
The question isn't really well defined, since different Nodes in the Scene graph are potentially rotated about different axes.
The getLocalToSceneTransform() method will return a Transform representing the transformation from the local coordinate system of the node to the coordinate system of the Scene. This is an affine transformation; you can extract a 3x4 matrix representation of it relative to the x-, y-, and z-axes if you like.
This is a big one for any math/3d geometry lovers. Thank you in advance.
Overview
I have a figure created by extruding faces around twisting spline curves in space. I'm trying to place a "loop" (torus) oriented along the spline path at a given segment of the curve, so that it is "aligned" with the spline. By that I mean the torus's width is parallel to the spline path at the given extrusion segment, and its height is perpendicular to the face that is selected (see below for a picture).
Data I know:
I am given one of the faces of the figure. From that I can also glean that face's centroid (center point), the vertices that compose it, the surrounding faces, and the normal vector of the face.
Current (Non-working) solution outcome:
I can correctly create a torus loop around the centroid of the face that is clicked. However, it does not rotate properly to "align" with the face. See how they look a bit "off" below.
Here's a picture with the material around it:
and here's a picture with it in wireframe mode. You can see the extrusion segments pretty clearly.
Current (Non-working) methodology:
I am attempting to do two calculations. First, I'm calculating the angle between two planes (the selected face and the horizontal plane at the origin). Second, I'm calculating the angle between the face and a vertical plane at the origin. With those two angles, I then do two rotations on the torus, an X and a Y rotation, to what I hoped would be the correct orientation. It rotates the torus by a variable amount, but not into the place I want it to be.
Formulas:
In doing the above, I'm using the following to calculate the angle between two planes using their normal vectors:
Dot product of normal vector 1 and normal vector 2 = Magnitude of vector 1 * Magnitude of vector 2 * Cos (theta)
Or:
(n1)(n2) = || n1 || * || n2 || * cos (theta)
Or:
Angle = ArcCos { ( n1 * n2 ) / ( || n1 || * || n2 || ) }
To determine the magnitude of a vector, the formula is:
The square root of the sum of the components squared.
Or:
Sqrt { n1.x^2 + n1.y^2 + n1.z^2 }
Also, I'm using the following for the normal vectors of the "origin" planes:
Normal vector of horizontal plane: (1, 0, 0)
Normal vector of Vertical plane: (0, 1, 0)
I've thought through the above normal vectors a couple times... and I think(?) they are right?
Current Implementation:
Below is the code that I'm currently using to implement it. I have a sinking feeling that I'm taking the wrong approach in trying to calculate the angles between the planes. Any advice, ideas, or suggestions would be much appreciated. Thank you in advance.
Function to calculate the angles:
this.toRadians = function (face, isX)
{
    //Normal of the face
    var n1 = face.normal;

    //Normal of the comparison plane
    if (isX)
        var n2 = new THREE.Vector3(1, 0, 0); // Vector normal for vertical plane. Use for Y rotation.
    else
        var n2 = new THREE.Vector3(0, 1, 0); // Vector normal for horizontal plane. Use for X rotation.

    //Equation to find the cosine of the angle. (n1)(n2) = ||n1|| * ||n2|| (cos theta)
    //Find the dot product of n1 and n2.
    var dotProduct = (n1.x * n2.x) + (n1.y * n2.y) + (n1.z * n2.z);

    //Calculate the magnitude of each vector
    var mag1 = Math.sqrt(Math.pow(n1.x, 2) + Math.pow(n1.y, 2) + Math.pow(n1.z, 2));
    var mag2 = Math.sqrt(Math.pow(n2.x, 2) + Math.pow(n2.y, 2) + Math.pow(n2.z, 2));

    //Calculate the angle between the two planes. Returns value in radians.
    var a = dotProduct / (mag1 * mag2);
    var result = Math.acos(a);
    return result;
}
Function to create and rotate the torus loop:
this.createTorus = function (tubeMeshParams)
{
    var torus = new THREE.TorusGeometry(5, 1.5, segments/10, 50);
    fIndex = this.calculateFaceIndex();

    //run the equation twice to calculate the angles
    var xRadian = this.toRadians(geometry.faces[fIndex], false);
    var yRadian = this.toRadians(geometry.faces[fIndex], true);

    //Rotate the Torus
    torus.applyMatrix(new THREE.Matrix4().makeRotationX(xRadian));
    torus.applyMatrix(new THREE.Matrix4().makeRotationY(yRadian));

    torusLoop = new THREE.Mesh(torus, this.m);
    torusLoop.scale.x = torusLoop.scale.y = torusLoop.scale.z = tubeMeshParams['Scale'];

    //Create the torus around the centroid
    posx = geometry.faces[fIndex].centroid.x;
    posy = geometry.faces[fIndex].centroid.y;
    posz = geometry.faces[fIndex].centroid.z;
    torusLoop.geometry.applyMatrix(new THREE.Matrix4().makeTranslation(posx, posy, posz));

    torusLoop.geometry.computeCentroids();
    torusLoop.geometry.computeFaceNormals();
    torusLoop.geometry.computeVertexNormals();

    return torusLoop;
}
I found I was using an incorrect approach to do this. Instead of trying to calculate each angle and do a RotationX and a RotationY, I should have done a rotation by axis. I was definitely overthinking it.
makeRotationAxis() is a function built into three.js.
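For anyone curious what the axis-angle route looks like in plain math, here is a small Python sketch (not three.js code; the assumption that the torus's un-rotated axis is +z matches THREE.TorusGeometry's default but may differ in other setups). It builds the rotation that takes one unit vector onto another, which should match what makeRotationAxis(axis, angle) constructs from the same axis and angle:
import math

def rotation_from_a_to_b(a, b):
    # Rodrigues' formula: rotation matrix taking unit vector a onto unit vector b
    ax, ay, az = a
    bx, by, bz = b
    # rotation axis = a x b (length sin(angle)); cos(angle) from the dot product
    vx, vy, vz = ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx
    s = math.sqrt(vx * vx + vy * vy + vz * vz)
    c = ax * bx + ay * by + az * bz
    if s < 1e-9:
        # vectors are parallel; the anti-parallel case (c < 0) needs its own
        # choice of perpendicular axis, which this sketch leaves out
        return [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
    x, y, z = vx / s, vy / s, vz / s   # unit rotation axis
    t = 1 - c
    return [[t * x * x + c,     t * x * y - s * z, t * x * z + s * y],
            [t * x * y + s * z, t * y * y + c,     t * y * z - s * x],
            [t * x * z - s * y, t * y * z + s * x, t * z * z + c    ]]

# rotate the torus's default axis (assumed +z) onto the face normal
face_normal = (0.0, 1.0, 0.0)  # example value
print(rotation_from_a_to_b((0.0, 0.0, 1.0), face_normal))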
I have a 3d mesh and I would like to draw each face as a 2d shape.
What I have in mind is this:
for each face
1. access the face normal
2. get a rotation matrix from the normal vector
3. multiply each vertex by the rotation matrix to get the vertices in a '2d-like' plane
4. get 2 coordinates from the transformed vertices
I don't know if this is the best way to do this, so any suggestion is welcome.
At the moment I'm trying to get a rotation matrix from the normal vector. How would I do this?
UPDATE:
Here is a visual explanation of what I need:
At the moment I have quads, but there's no problem converting them into triangles. I want to rotate the vertices of a face so that one of the dimensions gets flattened. I also need to store the original 3d rotation of the face; I imagine that would be the inverse rotation of the face normal.
I think I'm a bit lost in space :)
Here's a basic prototype I did using Processing:
void setup(){
    size(400,400,P3D);
    background(255);
    stroke(0,0,120);
    smooth();
    fill(0,120,0);

    PVector x = new PVector(1,0,0);
    PVector y = new PVector(0,1,0);
    PVector z = new PVector(0,0,1);
    PVector n = new PVector(0.378521084785,0.925412774086,0.0180059205741);//normal
    PVector p0 = new PVector(0.372828125954,-0.178844243288,1.35241031647);
    PVector p1 = new PVector(-1.25476706028,0.505195975304,0.412718296051);
    PVector p2 = new PVector(-0.372828245163,0.178844287992,-1.35241031647);
    PVector p3 = new PVector(1.2547672987,-0.505196034908,-0.412717700005);
    PVector[] face = {p0,p1,p2,p3};
    PVector[] face2d = new PVector[4];

    PVector nr = PVector.add(n,new PVector());//clone normal
    float rx = degrees(acos(n.dot(x)));//angle between normal and x axis
    float ry = degrees(acos(n.dot(y)));//angle between normal and y axis
    float rz = degrees(acos(n.dot(z)));//angle between normal and z axis

    PMatrix3D r = new PMatrix3D();
    //is this ok, or should I drop the builtin function, and add
    //the rotations manually
    r.rotateX(rx);
    r.rotateY(ry);
    r.rotateZ(rz);

    print("original: ");println(face);
    for(int i = 0 ; i < 4; i++){
        PVector rv = new PVector();
        PVector rn = new PVector();
        r.mult(face[i],rv);
        r.mult(nr,rn);
        face2d[i] = PVector.add(face[i],rv);
    }
    print("rotated: ");println(face2d);

    //draw
    float scale = 100.0;
    translate(width * .5,height * .5);//move to centre, Processing has 0,0 = Top,Left
    beginShape(QUADS);
    for(int i = 0 ; i < 4; i++){
        vertex(face2d[i].x * scale,face2d[i].y * scale,face2d[i].z * scale);
    }
    endShape();
    line(0,0,0,nr.x*scale,nr.y*scale,nr.z*scale);

    //what do I do with this ?
    float c = cos(0), s = sin(0);
    float x2 = n.x*n.x,y2 = n.y*n.y,z2 = n.z*n.z;
    PMatrix3D m = new PMatrix3D(x2+(1-x2)*c, n.x*n.y*(1-c)-n.z*s, n.x*n.z*(1-c)+n.y*s, 0,
                                n.x*n.y*(1-c)+n.z*s,y2+(1-y2)*c,n.y*n.z*(1-c)-n.x*s,0,
                                n.x*n.y*(1-c)-n.y*s,n.x*n.z*(1-c)+n.x*s,z2-(1-z2)*c,0,
                                0,0,0,1);
}
Update
Sorry if I'm getting annoying, but I don't seem to get it.
Here's a bit of python using Blender's API:
import Blender
from Blender import *
import math
from math import sin,cos,radians,degrees
def getRotMatrix(n):
    c = cos(0)
    s = sin(0)
    x2 = n.x*n.x
    y2 = n.y*n.y
    z2 = n.z*n.z
    l1 = x2+(1-x2)*c, n.x*n.y*(1-c)+n.z*s, n.x*n.y*(1-c)-n.y*s
    l2 = n.x*n.y*(1-c)-n.z*s, y2+(1-y2)*c, n.x*n.z*(1-c)+n.x*s
    l3 = n.x*n.z*(1-c)+n.y*s, n.y*n.z*(1-c)-n.x*s, z2-(1-z2)*c
    m = Mathutils.Matrix(l1,l2,l3)
    return m

scn = Scene.GetCurrent()
ob = scn.objects.active.getData(mesh=True) #access mesh
out = ob.name+'\n'

#face0
f = ob.faces[0]
n = f.v[0].no
out += 'face: ' + str(f)+'\n'
out += 'normal: ' + str(n)+'\n'

m = getRotMatrix(n)
m.invert()

rvs = []
for v in range(0,len(f.v)):
    out += 'original vertex'+str(v)+': ' + str(f.v[v].co) + '\n'
    rvs.append(m*f.v[v].co)
out += '\n'
for v in range(0,len(rvs)):
    out += 'rotated vertex'+str(v)+': ' + str(rvs[v]) + '\n'

f = open('out.txt','w')
f.write(out)
f.close()
All I do is get the current object, access the first face, get the normal, get the vertices, calculate the rotation matrix, invert it, then multiply it by each vertex.
Finally I write a simple output.
Here's the output for a default plane for which I rotated all the vertices manually by 30 degrees:
Plane.008
face: [MFace (0 3 2 1) 0]
normal: [0.000000, -0.499985, 0.866024](vector)
original vertex0: [1.000000, 0.866025, 0.500000](vector)
original vertex1: [-1.000000, 0.866026, 0.500000](vector)
original vertex2: [-1.000000, -0.866025, -0.500000](vector)
original vertex3: [1.000000, -0.866025, -0.500000](vector)
rotated vertex0: [1.000000, 0.866025, 1.000011](vector)
rotated vertex1: [-1.000000, 0.866026, 1.000012](vector)
rotated vertex2: [-1.000000, -0.866025, -1.000012](vector)
rotated vertex3: [1.000000, -0.866025, -1.000012](vector)
Here's the first face of the famous Suzanne mesh:
Suzanne.001
face: [MFace (46 0 2 44) 0]
normal: [0.987976, -0.010102, 0.154088](vector)
original vertex0: [0.468750, 0.242188, 0.757813](vector)
original vertex1: [0.437500, 0.164063, 0.765625](vector)
original vertex2: [0.500000, 0.093750, 0.687500](vector)
original vertex3: [0.562500, 0.242188, 0.671875](vector)
rotated vertex0: [0.468750, 0.242188, -0.795592](vector)
rotated vertex1: [0.437500, 0.164063, -0.803794](vector)
rotated vertex2: [0.500000, 0.093750, -0.721774](vector)
rotated vertex3: [0.562500, 0.242188, -0.705370](vector)
The vertices from the Plane.008 mesh are altered, but the ones from Suzanne.001's mesh aren't. Shouldn't they be? Should I expect to get zeroes on one axis?
Once I've got the rotation matrix from the normal vector, what is the rotation on x, y, z?
Note: 1. Blender's Matrix supports the * operator. 2. In Blender's coordinate system, Z points up. It looks like a right-handed system, rotated 90 degrees on X.
Thanks
That looks reasonable to me. Here's how to get a rotation matrix from a normal vector: in axis-angle form, the normal is the axis vector and the angle is 0. You probably want the inverse rotation.
Is your mesh triangulated? I'm assuming it is. If so, you can do this without rotation matrices. Let the points of the face be A, B, C. Take any two vertices of the face, say A and B. Define the x axis along the vector AB. A is at (0, 0) and B is at (|AB|, 0). C can be determined from trigonometry using the angle between AC and AB (which you get by using the dot product) and the length |AC|.
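A small Python sketch of that construction (plain tuples, no libraries; it assumes A, B, C are not collinear):
import math

def flatten_triangle(A, B, C):
    # map a 3d triangle to 2d: A -> (0, 0), B -> (|AB|, 0),
    # C -> (|AC| cos(angle), |AC| sin(angle)), angle between AB and AC
    ab = tuple(b - a for a, b in zip(A, B))
    ac = tuple(c - a for a, c in zip(A, C))
    len_ab = math.sqrt(sum(v * v for v in ab))
    len_ac = math.sqrt(sum(v * v for v in ac))
    cos_t = sum(u * v for u, v in zip(ab, ac)) / (len_ab * len_ac)
    cos_t = max(-1.0, min(1.0, cos_t))      # guard against rounding
    sin_t = math.sqrt(1.0 - cos_t * cos_t)  # angle in [0, pi], so sin >= 0
    return (0.0, 0.0), (len_ab, 0.0), (len_ac * cos_t, len_ac * sin_t)

# example: a right triangle in the z = 5 plane keeps its shape
print(flatten_triangle((0, 0, 5), (3, 0, 5), (0, 4, 5)))
# ((0.0, 0.0), (3.0, 0.0), (0.0, 4.0))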
You created the m matrix correctly. This is the rotation that corresponds to your normal vector. You can use the inverse of this matrix to "unrotate" your points. The normal of face2d will be x, i.e. point along the x-axis. So extract your 2d coordinates accordingly. (This assumes your quad is approximately planar.)
I don't know the library you are using (Processing), so I'm just assuming there are methods for m.invert() and an operator for applying a rotation matrix to a point. They may of course be called something else. Luckily, the inverse of a pure rotation matrix is its transpose, and multiplying a matrix by a vector is straightforward to do manually if you need to.
void setup(){
    size(400,400,P3D);
    background(255);
    stroke(0,0,120);
    smooth();
    fill(0,120,0);

    PVector x = new PVector(1,0,0);
    PVector y = new PVector(0,1,0);
    PVector z = new PVector(0,0,1);
    PVector n = new PVector(0.378521084785,0.925412774086,0.0180059205741);//normal
    PVector p0 = new PVector(0.372828125954,-0.178844243288,1.35241031647);
    PVector p1 = new PVector(-1.25476706028,0.505195975304,0.412718296051);
    PVector p2 = new PVector(-0.372828245163,0.178844287992,-1.35241031647);
    PVector p3 = new PVector(1.2547672987,-0.505196034908,-0.412717700005);
    PVector[] face = {p0,p1,p2,p3};
    PVector[] face2d = new PVector[4];

    // the inverse (transpose) of the rotation matrix m built from the normal
    float c = cos(0), s = sin(0);
    float x2 = n.x*n.x,y2 = n.y*n.y,z2 = n.z*n.z;
    PMatrix3D m_inverse =
        new PMatrix3D(x2+(1-x2)*c, n.x*n.y*(1-c)+n.z*s, n.x*n.y*(1-c)-n.y*s, 0,
                      n.x*n.y*(1-c)-n.z*s,y2+(1-y2)*c,n.x*n.z*(1-c)+n.x*s, 0,
                      n.x*n.z*(1-c)+n.y*s,n.y*n.z*(1-c)-n.x*s,z2-(1-z2)*c, 0,
                      0,0,0,1);

    face2d[0] = m_inverse * p0; // Assuming there's an appropriate operator*().
    face2d[1] = m_inverse * p1;
    face2d[2] = m_inverse * p2;
    face2d[3] = m_inverse * p3;

    // print & draw as you did before...
}
For a face v0-v1-v3-v2, the vectors v3-v0 and v3-v2 together with the face normal already form a rotation matrix that would transform the 2d face into the 3d face.
A matrix represents a coordinate system. Each row (or column, depending on notation) corresponds to an axis of that coordinate system expressed in the new coordinate system. A 3d rotation/translation matrix can be represented as:
vx.x vx.y vx.z 0
vy.x vy.y vy.z 0
vz.x vz.y vz.z 0
vp.x vp.y vp.z 1
where vx is the x axis of the coordinate system, vy the y axis, vz the z axis, and vp the origin of the new system.
Assume that v3-v0 is the y axis (2nd row), v3-v2 the x axis (1st row), and the normal the z axis (3rd row). Build a matrix from them, then invert it. You'll get a matrix that will rotate the 3d face into a 2d face.
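A rough Python sketch of that idea. Note the edge vectors have to be normalized, and here the y axis is rebuilt with a cross product so the basis is exactly orthonormal, which makes the inverse simply the transpose (applied below by dotting each vertex with the axes):
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def flatten_face(v0, v1, v2, v3):
    # orthonormal basis from two edges meeting at v3, plus the face normal
    x_axis = normalize(sub(v2, v3))                 # v3 -> v2
    z_axis = normalize(cross(x_axis, sub(v0, v3)))  # face normal (sign depends on winding)
    y_axis = cross(z_axis, x_axis)                  # exactly orthogonal to both
    # coordinates of each vertex in the face basis = dot products with the axes;
    # for a planar face the z coordinate comes out (near) zero
    return [tuple(sum(a * b for a, b in zip(axis, sub(v, v3)))
                  for axis in (x_axis, y_axis, z_axis))
            for v in (v0, v1, v2, v3)]

# example: a planar quad tilted out of the xy plane flattens to z ~ 0 everywhere
print(flatten_face((0, 0, 0), (1, 0, 1), (1, 1, 2), (0, 1, 1)))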
I have a 3d mesh and I would like to draw each face as a 2d shape.
I suspect that UV unwrapping algorithms are closer to what you want to achieve than trying to get a rotation matrix from a 3d face.
That's very easy to achieve. (Note: by "face" I mean "triangle".)
Create a view matrix that represents a camera looking at a face.
Determine the center of the face with bi-linear interpolation.
Determine the normal of the face.
Position the camera some units in opposite normal direction.
Let the camera look at the center of the face.
Set the camera's up vector to point toward any vertex of the face.
Set the aspect ratio to 1.
Compute the view matrix using this data.
Create an orthographic projection matrix.
Set the width and height of the view frustum large enough to contain the whole face (e.g. the length of the longest side of the face).
Compute the projection matrix.
For every vertex v of the face, multiply it by both matrices: v * view * projection.
The result is a projection of your 3d faces into 2d space as if you were looking at them exactly head-on, without any perspective distortion. The final coordinates will be in normalized screen coordinates, where (-1, -1) is the bottom-left corner, (0, 0) is the center, and (1, 1) is the top-right corner.
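Here is a condensed Python sketch of that recipe. It skips the explicit view and projection matrices and simply expresses each vertex in the camera's orthonormal basis (right/up), which for an orthographic camera gives the same 2d layout up to scaling; all the names are illustrative:
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def face_to_2d(a, b, c):
    # camera basis: looking along -normal, up pointing toward vertex a
    center = tuple((a[i] + b[i] + c[i]) / 3.0 for i in range(3))
    normal = normalize(cross(sub(b, a), sub(c, a)))
    up = normalize(sub(a, center))   # in the face plane, so orthogonal to the normal
    right = cross(up, normal)        # completes the orthonormal basis
    # 2d coordinates = components of each vertex along right/up, relative to the center
    return [(dot(sub(v, center), right), dot(sub(v, center), up)) for v in (a, b, c)]

print(face_to_2d((0, 0, 2), (3, 0, 2), (0, 3, 2)))  # a triangle in the z = 2 plane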