Moving an object based on its rotation in three.js

I'm trying to move a cube in three.js based on its rotation, but I'm not sure how to go about it.
As of now I can rotate the cube's z-rotation with the A and D keys, and with the W key I would like it to move forward relative to its rotation.
In 2D I would do something along the lines of:
float angle = GradToRad(obj.rotation);
obj.x = obj.x + cos(angle) * velocity;
obj.y = obj.y + sin(angle) * velocity;
Here's an image of the current implementation.
How can I apply something similar in three.js?

Objects can be considered to be facing their positive-Z axis. So to move an object forward, relative to its own coordinate system, you can use
Object3D.translateZ( distance );
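For example, a minimal sketch of wiring this to the keyboard (the keys map, speed, and turnSpeed are assumptions filled in by your own keydown/keyup handlers; here A/D turn the cube about its y-axis so that translateZ moves it in its facing direction):
var speed = 0.1;      // distance per frame
var turnSpeed = 0.05; // radians per frame

function update() {
    if ( keys[ 'a' ] ) cube.rotation.y += turnSpeed; // turn left
    if ( keys[ 'd' ] ) cube.rotation.y -= turnSpeed; // turn right
    if ( keys[ 'w' ] ) cube.translateZ( speed );     // forward along the cube's local z-axis
}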
three.js r.57

It might be easiest to express both rotation and translation in a single (homogeneous, projective) 4×4 matrix. The Object3D.matrix member in three.js already does that, although you might have to set matrixAutoUpdate to false to use it directly. Then you can use the translate methods to move the object in its own reference frame.
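A sketch of that approach, assuming a plain THREE.Mesh named obj; with matrixAutoUpdate set to false, three.js ignores position/rotation/scale and uses obj.matrix as-is (turnAngle and distance are placeholders for your per-frame input):
obj.matrixAutoUpdate = false;

// turning: post-multiplying applies the rotation in the object's own frame
obj.matrix.multiply( new THREE.Matrix4().makeRotationY( turnAngle ) );

// moving forward: likewise a translation along the object's local z-axis
obj.matrix.multiply( new THREE.Matrix4().makeTranslation( 0, 0, distance ) );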

Your 2D method is exactly how I did it in three.js. For the Y position I'm using a terrain-collision technique (which still needs work).
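For reference, a rough sketch of that 2D method transplanted to three.js's xz ground plane, using the rotation about y as the heading (terrainHeightAt is a hypothetical stand-in for the terrain-collision lookup):
object.position.x += Math.sin( object.rotation.y ) * velocity;
object.position.z += Math.cos( object.rotation.y ) * velocity;
object.position.y = terrainHeightAt( object.position.x, object.position.z );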

How would I get the vector3 rotation needed to rotate towards vector3 coordinates

So the biggest issue with all the answers I've seen is that I cannot use quaternions. I need to rotate a camera to face a Vector3 coordinate position, but I can only use x, y, and z for the rotation. I've looked for a while and can't really figure it out.
I have a raycast hitting a point, and I use that point as the target coordinates the camera needs to face. Using the camera's position, I need to get a Vector3 rotation that I can set the camera to so that it points directly at those coordinates.
So the biggest issue with all the answers I've seen is that I cannot use quaternions.
This is plain wrong. If you can use Lua, you can use quaternions. Simply write your own quaternion implementation in pure Lua (or port an existing one).
I need to rotate a camera to face a Vector3 coordinate position, but I can
only use x, y, and z for the rotation. I've looked for a while and
can't really figure it out.
An X, Y & Z rotation vector means you're using Euler angles (which still leaves open multiple questions concerning orientation and the order in which the rotations are applied).
I have a raycast hitting a point, and I use that point as the target coordinates the camera needs to face. Using the camera's position, I need to get a Vector3 rotation that I can set the camera to so that it points directly at those coordinates.
First you'll have to determine the direction from the camera to the point, using the camera position. You haven't specified which vector library you use, so I'll assume the following:
vector.new creates a new vector from a table;
+ and - on two vectors perform addition / subtraction;
the components can be accessed as .x, .y, .z
local direction = raycast_hit_pos - camera_pos

-- pitch: rotation about the x-axis, from the height versus the distance in the xz-plane
local function horizontal_rotation(direction)
    local xz_dist = math.sqrt(direction.x^2 + direction.z^2)
    return math.atan2(direction.y, xz_dist)
end

-- yaw: rotation about the y-axis
local function vertical_rotation(direction)
    return -math.atan2(direction.x, direction.z)
end

-- gets the rotation in radians for a z-facing object
function get_rotation(direction)
    return vector.new{
        x = horizontal_rotation(direction),
        y = vertical_rotation(direction),
        z = 0
    }
end
Depending on orientation and the meaning of your rotation axes you might have to shuffle x, y and z around a bit, flipping some signs.

Normalized Device Coordinate Metal coming from OpenGL

Alright, so I know there are a lot of questions referring to normalized device coordinates here on SO, but none of them address my particular issue.
So, everything I draw is specified in 2D screen coordinates, where the top left is (0,0) and the bottom right is (screenWidth, screenHeight); then in my vertex shader I do this calculation to get to NDC (basically, I'm rendering UI elements):
float ndcX = (screenX - ScreenHalfWidth) / ScreenHalfWidth;
float ndcY = 1.0 - (screenY / ScreenHalfHeight);
where screenX/screenY are pixel coordinates, for example (600, 700), and ScreenHalfWidth/ScreenHalfHeight are half of the screen width/height.
And the final position that I return from the vertex shader for the rasterization state is:
gl_Position = vec4(ndcX, ndcY, Depth, 1.0);
This works perfectly fine in OpenGL ES.
Now the problem is that when I try it just like this in Metal 2, it doesn't work.
I know Metal's NDC box is 2×2×1 and OpenGL's is 2×2×2, but I thought depth didn't play an important part in this equation, since I am passing it in myself per vertex.
I tried this link and this SO question, but was confused; the links weren't that helpful, since I am trying to avoid matrix calculations in the vertex shader, as I am rendering everything in 2D for now.
So my questions: What is the formula to transform pixel coordinates to NDC in Metal? Is it possible without using an orthographic projection matrix? And why doesn't my equation work for Metal?
It is of course possible without a projection matrix. Matrices are just a useful convenience for applying transformations. But it's important to understand how they work when situations like this arise, since using a general orthographic projection matrix would perform unnecessary operations to arrive at the same results.
Here are the formulae I might use to do this:
float xScale = 2.0f / drawableSize.x;
float yScale = -2.0f / drawableSize.y;
float xBias = -1.0f;
float yBias = 1.0f;
float clipX = position.x * xScale + xBias;
float clipY = position.y * yScale + yBias;
Where drawableSize is the dimension (in pixels) of the renderbuffer, which can be passed in a buffer to the vertex shader. You can also precompute the scale factors and pass those in instead of the screen dimensions, to save some computation on the GPU.
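As a quick sanity check with assumed numbers: on an 800×600 drawable, a vertex at pixel (600, 450) gives clipX = 600 * (2.0 / 800) - 1 = 0.5 and clipY = 450 * (-2.0 / 600) + 1 = -0.5, i.e. the lower-right quadrant of NDC space, as expected for a point in the lower-right of the screen.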

Three.JS Object following a spline path - rotation / tangent issues & constant speed issue

I think my issue is similar to: Orient object's rotation to a spline point tangent in THREE.JS, but I can't access the jsfiddles properly and I struggled with the second part of the explanation.
Basically, I have created this jsfiddle: http://jsfiddle.net/jayfield1979/qGPTT/2/ which demonstrates a simple cube following the path created by a spline using SplineCurve3. Use standard TrackBall mouse interaction to navigate.
Positioning the cube along the path is simple. However I have two questions.
First, I am using spline.getTangent( t ), where t is the position along the path, in order to have the cube rotate (y-axis as up only). I think I am missing something, because even if I extract the .y property of the resulting tangent, the rotations still seem off. Is there some normalizing that needs doing?
Second, the speed varies a lot along the path: obviously a lot more points are stacked up in the tighter curves. I was wondering, is there a way to refactor the path to distribute the points more evenly? I came across the reparametrizeByArcLength function but struggled to find an explanation of how to use it.
Any help or explanation for a bit of a maths dummy, would be gratefully received.
To maintain a constant speed, use .getPointAt( t ) instead of .getPoint( t ). getPointAt() reparameterizes the curve by arc length, so equal steps in t cover equal distances along the path.
To get the box to remain tangent to the curve, you follow the same logic as explained in the answer to Orient object's rotation to a spline point tangent in THREE.JS.
var up = new THREE.Vector3( 0, 1, 0 ); // the box's untransformed facing direction
var axis = new THREE.Vector3();
// counter is the arc-length parameter, in the range [ 0, 1 ]
box.position.copy( spline.getPointAt( counter ) );
var tangent = spline.getTangentAt( counter ).normalize();
// rotate `up` onto the tangent about their common perpendicular
axis.crossVectors( up, tangent ).normalize();
var radians = Math.acos( up.dot( tangent ) );
box.quaternion.setFromAxisAngle( axis, radians );
three.js r.144

Calculating modelview matrix for 2D camera using Eigen

I'm trying to calculate the modelview matrix of my 2D camera, but I can't get the formula right. I use the Affine3f transform class so the matrix is compatible with OpenGL. This is the closest I got by trial and error. The code rotates and scales the camera fine, but if I apply translation and rotation at the same time the camera movement gets messed up: the camera moves in a rotated fashion, which is not what I want. (This is probably due to the fact that I first apply the rotation matrix and then the translation.)
Eigen::Affine3f modelview;
modelview.setIdentity();
modelview.translate(Eigen::Vector3f(camera_offset_x, camera_offset_y, 0.0f));
modelview.scale(Eigen::Vector3f(camera_zoom_x, camera_zoom_y, 0.0f));
modelview.rotate(Eigen::AngleAxisf(camera_angle, Eigen::Vector3f::UnitZ()));
modelview.translate(Eigen::Vector3f(camera_x, camera_y, 0.0f));
[loadmatrix_to_gl]
What I want is for the camera to rotate and scale around the offset position in screen space {(0,0) is the middle of the screen in this case} and then be positioned along the global xy-axes in world space {(0,0) is also initially the middle of the screen} to the final position. How would I do this?
Note that I have set up also an orthographic projection matrix, which may affect this problem.
If you want a 2D image, rendered in the XY plane with OpenGL, to (1) rotate counter-clockwise by a around a point P, (2) scale by S, and then (3) translate so that the pixels at C (in the newly scaled and rotated image) end up at the origin, you would use this transformation (composed into a single matrix right after the list):
translate by -P (this moves the pixels at P to the origin)
rotate by a
translate by P (this moves the origin back to where it was)
scale by S (if you did this earlier, your rotation would be messed up)
translate by -C
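Written as a single matrix acting on column vectors (rightmost factor applied first), that sequence is M = T(-C) * S * T(P) * R(a) * T(-P), where T(v) translates by v, S scales, and R(a) rotates by a.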
If the 2D image were being rendered at the origin, you'd also need to finish by translating some amount along the negative z-axis to be able to see it.
Normally, you'd just do this with OpenGL basics (glTranslatef, glScalef, glRotatef, etc.), and you would apply them in the reverse of the order listed above. Since you want to use glLoadMatrix, you'd do things in the order described with Eigen. It's important to remember that OpenGL expects a column-major matrix (but that is the default for Eigen, so it's probably not a problem).
JCooper did a great job explaining the steps to construct the initial matrix.
However, I eventually solved the problem a bit differently. There were a few additional things and steps that were not obvious to me at the time; see the comments on JCooper's answer. The first is to realize that all matrix operations are relative.
Thus, if you want to position or move the camera along absolute xy-axes, you must first decompose the matrix to extract its absolute position with unchanged axes. Then you translate the matrix by the difference between the old and the new position.
Here is a way to do this with Eigen:
First, take the 2×2 rotation+scale block RS of the Affine2f matrix cmat, i.e. RS = cmat.matrix().topLeftCorner(2,2), and compute its inverse invmat. My original code built the inverse by hand from the determinant D = cmat.linear().determinant(), but Eigen can also compute it directly with RS.inverse().
The absolute camera position P is then given by P = invmat * -C, where C is the translation column cmat.matrix().col(2).head<2>().
Now we can reposition the camera anywhere along the absolute axes while keeping the rotation and scaling the same: V = RS * (T - P), where RS is the same as before, T is the new position vector, and P is the decomposed position vector.
cmat is then simply translated by V to move the camera: cmat.pretranslate(V).

Quaternion rotation matrix unexpectedly has the opposite sense

I have some understanding problem concerning quaternions.
In order to have my world objects rotate the correct way, I need to invert their quaternion rotation when refreshing the object's world matrix.
I create the object rotation with this code:
Rotation = Quaternion.RotationMatrix(
    Matrix.LookAtRH(
        Position,
        Position + new Vector3(_moveDirection.X, 0, _moveDirection.Y),
        Vector3.Up
    )
);
and refresh the object World matrix like this:
Object.World = Matrix.RotationQuaternion(Rotation)
* Matrix.Translation(Position);
This is not working; it makes the object rotate in the opposite direction compared to what it should!
This is the way that makes my object rotate correctly:
Object.World = Matrix.RotationQuaternion(Quaternion.Invert(Rotation))
* Matrix.Translation(Position);
Why do I have to invert the object rotation?
This isn't a quaternion problem so much as a usage and/or documentation issue with the DirectX call you're using. LookAtRH builds a view matrix: the transformation that happens when you move the camera, mapping world coordinates into camera space. If you instead keep the camera fixed and move the world, you're swapping what's moving and what's fixed, and those two coordinate transformations are inverses of each other. That is why taking the inverse works for you.
You don't need to take an explicit inverse, though. Just swap the order of the first two arguments.
