I want an object that rotates on a relative axis (the axis rotates with it) to rotate as if its axis hadn't moved.
In a game (not my own) I have an object that stays in view of the player at all times (it follows the camera). I also want said object to keep the same rotation relative to the camera, so it always appears to have the same orientation to the player. It is simple to make it hold an orientation with modified X or Z values relative to the camera yaw, but once I rotate the object on the Y axis, the X axis has moved with it, so the camera pitch makes the object rotate incorrectly (the roll would be affected too).
I am pretty sure it will require either a matrix or a combination of sin/cos, but I have had no luck in finding an answer.
Note: Rotation is in pitch, yaw, and roll using radians on the x, y, and z-axis respectively.
So the biggest issue with all the answers I've seen is that I cannot use quaternions. I need to rotate a camera to face a vector3 coordinate position, but I can only use x, y, and z for the rotation. I've looked for a while and can't really figure it out.
I have a raycast hitting a point, and I use that point as the target coordinates the camera needs to face. Using the camera's position, I need to get a vector3 rotation I can set the camera to so that it points directly at those coordinates.
So the biggest issue with all the answers I've seen is that I cannot use quaternions.
This is plain wrong. If you can use Lua, you can use quaternions. Simply write your own quaternion implementation in pure Lua (or port an existing one).
I need to rotate a camera to face a vector3 coordinate position but I can only use x, y, and z for the rotation. I've looked for a while and can't really figure it out.
An X, Y & Z rotation vector means you're using Euler angles (which still leaves multiple questions open concerning orientation and the order in which the rotations are applied).
I have a raycast hitting a point, I use the point for the target coordinates I need the camera to face; using the camera's position I need to get a vector3 rotation that I can set the camera to in order for the camera to be pointing directly at the coordinates
First you'll have to determine the direction the point is from the camera using the camera pos. You haven't specified which vector library you use, so I'll assume the following:
vector.new creates a new vector from a table;
+ and - on two vectors perform addition / subtraction;
the components can be accessed as .x, .y, .z
local direction = raycast_hit_pos - camera_pos

-- pitch: rotation about the x-axis, from the y-component vs. the x/z-distance
-- (note: Lua 5.3+ removed math.atan2; use math.atan(y, x) there instead)
local function horizontal_rotation(direction)
    local xz_dist = math.sqrt(direction.x^2 + direction.z^2)
    return math.atan2(direction.y, xz_dist)
end

-- yaw: rotation about the y-axis
local function vertical_rotation(direction)
    return -math.atan2(direction.x, direction.z)
end

-- gets rotation in radians for a z-facing object
function get_rotation(direction)
    return vector.new{
        x = horizontal_rotation(direction),
        y = vertical_rotation(direction),
        z = 0
    }
end
Depending on orientation and the meaning of your rotation axes you might have to shuffle x, y and z around a bit, flipping some signs.
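For reference, here is the same math as a sketch in plain JavaScript (not tied to any engine; same assumed conventions as above: y up, object facing +z, angles in radians, vectors as plain {x, y, z} tables):

```javascript
// Sketch: pitch/yaw from a direction vector, mirroring the Lua functions above.
function getRotation(direction) {
  const xzDist = Math.hypot(direction.x, direction.z); // distance in the ground plane
  return {
    x: Math.atan2(direction.y, xzDist),        // pitch (rotation about x)
    y: -Math.atan2(direction.x, direction.z),  // yaw (rotation about y)
    z: 0,                                      // roll stays fixed
  };
}
```

For example, a target one unit forward and one unit up (direction {x: 0, y: 1, z: 1}) gives a pitch of π/4 and zero yaw.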
Is there a way to convert that data:
Object position which is a 3D point (X, Y, Z),
Camera position which is a 3D point (X, Y, Z),
Camera yaw, pitch, roll (-180:180, -90:90, 0)
Field of view (-45°:45°)
Screen width & height
into the 2D point on the screen (X, Y)?
I'm looking for proper math calculations according to this exact set of data.
It's difficult, but it's possible to do it for yourself.
There are lots of libraries that do this for you, but it is more satisfying if you do it yourself:
This problem is solvable, and I have written my own 3D engine that does this for objects in JavaScript using the HTML5 Canvas. You can see my code here, and solve a 3D maze game I wrote here, to try to understand what I talk about below...
The basic idea is to work in steps. To start, you have to forget about camera angle (yaw, pitch and roll), as these come later, and just imagine you are looking down the y axis. The idea is then to calculate, using trig, the yaw and pitch angles to your object's coordinate. Imagine you are looking through a letterbox: the yaw angle is the angle in degrees left and right of your coordinate (so both positive and negative) from the center/mid line, and the pitch is the angle up and down from it. Taking these angles, you can map them to the x and y 2D coordinate system.
The calculations for the angles are:
yaw   = atan((coord.x - cam.x) / (coord.y - cam.y))
pitch = atan((coord.z - cam.z) / (coord.y - cam.y))
with coord.x, coord.y and coord.z being the coordinates of the object, and likewise for the camera (cam.x, cam.y and cam.z). These calculations assume a Cartesian coordinate system with z up, y forward and x right.
From here, the next step is to map these angles in the 3D world to coordinates you can use in a 2D graphical representation.
To map these angles onto your screen, you need to scale them up as distances from the mid line. This means multiplying them by screen width / fov (keep the angles and the fov in the same units). These distances can be positive or negative (they are angles measured from the mid line), so to actually draw on a canvas you add them to half of the screen dimension.
So your canvas coordinates would be:
x = width / 2 + (yaw * (width / fov))
y = height / 2 + (pitch * (height / fov))
where width and height are the dimensions of your screen, fov is the camera's field of view, and yaw and pitch are the respective angles of the object from the camera (flip the sign of the pitch term if your screen's y axis grows downward).
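These steps can be sketched in a few lines of plain JavaScript. This is a sketch under the stated assumptions (z up, y forward, x right, camera looking straight down +y with no rotation yet); fov is in radians so it matches the output of Math.atan:

```javascript
// Project a 3D point to 2D screen coordinates by the angle-mapping method above.
function projectToScreen(coord, cam, fov, width, height) {
  // horizontal angle (left/right of the mid line) and vertical angle (up/down)
  const hAngle = Math.atan((coord.x - cam.x) / (coord.y - cam.y));
  const vAngle = Math.atan((coord.z - cam.z) / (coord.y - cam.y));
  return {
    x: width  / 2 + hAngle * (width  / fov),
    y: height / 2 + vAngle * (height / fov), // flip this sign if screen y grows downward
  };
}
```

A point straight ahead of the camera lands at the screen center, and a point 45° to the right lands at the right edge when fov is 90° (π/2).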
You have now achieved the first big step which is mapping a 3D coordinate down to 2D. If you have managed to get this all working, I would suggest trying multiple points and connecting them to form shapes. Also try moving your cameras position to see how the perspective changes as you will soon see how realistic it already looks.
In addition, if this worked fine for you, you can move on to having the camera be able to not only change its position in the 3D world but also change its perspective as in yaw, pitch and roll angles. I will not go into this entirely now, but the basic idea is to use 3D world transformation matrices. You can read up about them here but they do get quite complicated, however I can give you the calculations if you get this far.
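Not the full matrix treatment, but as a taste of the idea: you undo the camera's yaw by rotating each world point about the up axis (z in this answer's convention) by minus the camera yaw before computing the angles, and analogously for pitch and roll. A minimal sketch:

```javascript
// Rotate a point about the z (up) axis by -yaw, moving it into the
// camera's yawed frame. Pitch and roll use the same pattern on other axes.
function rotateYaw(p, yaw) {
  const c = Math.cos(-yaw), s = Math.sin(-yaw); // inverse rotation
  return { x: c * p.x - s * p.y, y: s * p.x + c * p.y, z: p.z };
}
```

For instance, with a camera yawed 90° to the right, a point at (1, 0, 0) ends up at roughly (0, -1, 0) in camera space.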
It might help to read (old style) OpenGL specs:
https://www.khronos.org/registry/OpenGL/specs/gl/glspec14.pdf
See section 2.10
Also:
https://www.khronos.org/opengl/wiki/Vertex_Transformation
Might help with more concrete examples.
Also, for "proper math" look up 4x4 matrices, projections, and homogeneous coordinates.
https://en.wikipedia.org/wiki/Homogeneous_coordinates
I have the azimuth, elevation and direction vector of the sun. I want to place a viewpoint on the sun's ray direction at some distance. Can anyone describe, or provide a link to, a resource that will help me understand and implement the required steps?
I used a Cartesian coordinate system to find the direction vector from the azimuth and elevation, and then to find the viewport origin:
x = distance
y = distance * tan(azimuth)
z = distance * tan(elevation)
I want to find that distance value... how?
The azimuthal coordinate system references the NEH (geometric North, East, High/Up) reference frame!
In your linked image it references the -Y axis, which is not right unless you are not rendering the world but doing some nonlinear graph-plot projection, so which one is it?
BTW, here ECEF/WGS84 and NEH you can find out how to compute NEH for WGS84.
As I can see, you have a bad conversion between the coordinates, so just to be clear, this is how it looks:
On the left is the global Earth view and one NEH frame computed for its position (its origin). In the middle is a surface-aligned side view, and on the right a surface-aligned top view. Blue, magenta and green are the input azimuthal coordinates; brown are the x, y, z Cartesian projections (where the coordinate lies on its axis), so:
Dist'= Dist *cos(Elev );
z = Dist *sin(Elev );
x = Dist'*cos(Azimut);
y =-Dist'*sin(Azimut);
if you use different reference frame or axis orientations then change it accordingly ...
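The conversion above as a small sketch in plain JavaScript (angles in radians; axis conventions and the Azimut/Elev naming kept exactly as in the formulas):

```javascript
// Azimuth/elevation + distance -> Cartesian x, y, z in the NEH frame.
function azElToCartesian(dist, azimut, elev) {
  const distH = dist * Math.cos(elev); // Dist': projection onto the ground plane
  return {
    x:  distH * Math.cos(azimut),
    y: -distH * Math.sin(azimut),
    z:  dist  * Math.sin(elev),
  };
}
```

Sanity checks: elevation 0, azimuth 0 gives a point straight along +x; elevation 90° puts the whole distance into z.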
I suspect you use 4x4 homogeneous transform matrices for representing coordinate systems and also to hold your view-port, so look here:
transform matrix anatomy
constructing the view-port
You need X, Y, Z axis vectors and the O origin position. O you already have (at least you think you do), and the Z axis is the ray direction, so you should have that too. Now just compute X, Y as an alignment to something (otherwise the view will rotate around the ray); I use NEH for that, so:
view.Z = Ray.Dir          // ray direction
view.Y = NEH.Z            // NEH up vector
view.X = view.Y x view.Z  // cross product makes view.X perpendicular to Y and Z
view.Y = view.Z x view.X  // just to make all three axes perpendicular to each other
view.O = ground position - (distance * Ray.Dir)
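The axis construction can be sketched with plain cross products (vectors as {x, y, z}; this sketch assumes Ray.Dir and NEH.Z are already unit length and not parallel, otherwise you'd normalize the results):

```javascript
// Standard 3D cross product.
function cross(a, b) {
  return { x: a.y * b.z - a.z * b.y,
           y: a.z * b.x - a.x * b.z,
           z: a.x * b.y - a.y * b.x };
}

// Build the view axes exactly as in the pseudocode above.
function buildViewAxes(rayDir, nehUp) {
  const Z = rayDir;           // view.Z = ray direction
  const X = cross(nehUp, Z);  // view.X perpendicular to up and Z
  const Y = cross(Z, X);      // view.Y completes an orthogonal basis
  return { X, Y, Z };
}
```

For a ray pointing along +y with up = +z, this yields X = (-1, 0, 0) and recovers Y = (0, 0, 1), i.e. the up vector, confirming the basis is orthogonal.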
To make it a valid view_port you have to:
view = inverse(view)*projection_matrix;
You need inverse matrix computation for that
If you want the whole thing, then you also want to add the Sun/Earth position computation; in that case look here:
complete Earth-Sun position by Kepler's equation
The distance
Now that it is clear what is behind this, you just need to set the distance. If you want to set it to the Sun, it will be distance = 1.0 AU (astronomical unit), but that is a huge distance, and if you have perspective, your Earth will be very small. Instead, use some closer distance that matches your view size; look here:
How to position the camera so that the object always has the same size
I'm trying to understand how up vectors and lookAt() work together in three.js. I'm setting the up vector of this axisHelper, so that the Y axis always points at the target geo, which marks the position of the up vector. It works as expected for X and Y, rotating the axes around the Z axis; and when I try to adjust the Z value of the up vector I would expect the axes to rotate around the X axis, but nothing happens.
http://jsfiddle.net/68p5r/4/
[Edit: I've added geo to show the up target position.]
I have a dat.gui interface manipulating the up vector to demonstrate, but the problem exists when I set the vector manually as well.
I suspect the problem is around line 74:
zControl.onChange(function(value) {
axes.up.set(this.object.x, this.object.y, value);
axes.lookAt(new THREE.Vector3(0, 0, 1));
});
When I update the up vector, I instruct the axisHelper to update its orientation onscreen by redoing its lookAt() down its Z axis. Changing the X and Y works as expected, why not the Z?
(This is also the case if I use geo instead of an axisHelper: http://jsfiddle.net/68p5r/5/)
When you call Object.lookAt( vector ), the object is rotated so that its internal z-axis points toward the target vector.
But that is not sufficient to specify the object's orientation, because the object itself can still be "spun" on its z-axis.
So the object is then "spun" so that its internal y-axis is in the plane of its internal z-axis and the up vector.
The target vector and the up vector are, together, sufficient to uniquely specify the object's orientation.
three.js r.63
Tip: An axis in three.js should always have unit length; be sure to call axis.normalize() in your code.
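The geometric idea can be shown without three.js at all: the lookAt target fixes the object's z axis, and the up vector then fixes the spin about it via two cross products. This is a sketch of the geometry only; sign conventions differ between libraries (three.js cameras, for instance, look down their negative z), so don't read it as three.js's exact internals:

```javascript
function sub(a, b) { return { x: a.x - b.x, y: a.y - b.y, z: a.z - b.z }; }
function cross(a, b) {
  return { x: a.y * b.z - a.z * b.y,
           y: a.z * b.x - a.x * b.z,
           z: a.x * b.y - a.y * b.x };
}
function normalize(v) {
  const l = Math.hypot(v.x, v.y, v.z);
  return { x: v.x / l, y: v.y / l, z: v.z / l };
}

// Basis vectors of an object oriented by a lookAt target plus an up vector.
function orientation(position, target, up) {
  const z = normalize(sub(target, position)); // internal z points at the target
  const x = normalize(cross(up, z));          // perpendicular to up and z
  const y = cross(z, x);                      // y ends up in the z/up plane
  return { x, y, z };
}
```

A useful observation: with the object at the origin looking at (0, 0, 5), changing up from (0, 1, 0) to (0, 1, 0.5) produces the exact same basis, because the component of up along the view direction drops out of the cross product. That is precisely why tweaking the up vector's z value in the question's fiddle does nothing.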
I assume your title meant rotate on Z instead of X?
Anyway, the culprit seems to be axes.lookAt(new THREE.Vector3(0, 0, 1)); if you change that to axes.lookAt(new THREE.Vector3(0, 1, 0)) for all methods, then Y stops rotating instead, as expected. You are telling the axis helper to look down a specific axis (in your case Z), hence why the Z value isn't doing anything.
Is there an example of what you're trying to accomplish that might help us?
Maybe someone else can give a more in-depth explanation of what's happening. Hopefully my answer pushes you in the right direction.
Here's how I came to understand the problem:
The lookAt and up vectors determine the orientation of an object like so:
The lookAt vector is applied FIRST, which sets the X and Y rotations, locking the direction the object's Z axis points.
THEN the up vector determines how the object rotates around the Z axis, to set the direction the object's Y axis points -- it won't affect the X and Y rotations at all.
In my example, the axisHelper is looking down its blue Z axis in the direction of the lookAt vector, which is a point in space at (0, 0, -1) -- so the X and Y rotations have already been set. The only thing left to do is figure out how to rotate the axisHelper around its Z axis, which means setting the X and Y points of the up vector -- moving the up vector forward and backward along the Z axis won't change anything.
Here's a fiddle with a demo illustrating this relationship: the blue arrow is the lookAt axis, and the green arrow is the up axis.
https://jsfiddle.net/hrjfgo4b/3
To view my 3D environment, I use the "true" 3D isometric projection (flat square on XZ plane, Y is "always" 0). I used the explanation on wikipedia: http://en.wikipedia.org/wiki/Isometric_projection to come to how to do this transformation:
The projection matrix is an orthographic projection matrix between some minimum and maximum coordinate.
The view matrix is two rotations: one around the Y-axis (n * 45 degrees) and one around the X-axis (arctan(sin(45 degrees))).
The result looks ok, so I think I have done it correctly.
But now I want to be able to pick a coordinate with the mouse. I have successfully implemented this by rendering coordinates to an invisible framebuffer and then reading the pixel under the mouse cursor to get the coordinate. Although this works fine, I would really like to see a mathematical solution, because I will need it to calculate bounding boxes, frustums of the area on the screen, and things like that.
My instincts tell me to:
- go from screen coordinates to 2D projection coordinates (i.e. transform screen coordinates to coordinates between -1 and +1 on both axes, with y inverted)
- untransform the coordinate with the inverse of the view matrix
- yeah... untransform this coordinate with the inverse of the projection matrix, but as my instincts tell me, this won't work, as everything will end up with the same Z coordinate.
And this while all the information is perfectly available in the isometric view (I know that the Y value is always 0). So I should be able to convert the isometric 2D (x, y) coordinate to a calculated 3D (x, 0, z) coordinate without using scans or anything like that.
My math isn't bad, but this is something I can't seem to grasp.
Edit: IMO, every distinct (x, 0, z) coordinate corresponds to a different (x2, y2) coordinate in the isometric view. So I should be able to simply calculate a way from (x2, y2) to (x, 0, z). But how?
Anyone?
There is something called project and unproject to transform screen to world and vice versa...
You seem to miss some core concepts here (it’s been a while since I did this stuff, so minor errors included):
There are 3 kinds of coordinates involved here (there are more, these are the relevant ones): Scene, Projection and Window
Scene (3D) are the coordinates in your world
Projection (3D) are those coordinates after being transformed by camera position and projection
Window (2D) are the coordinates in your window. They are generated from projection by scaling x and y appropriately and discarding z (z is still used for “who’s in front?” calculations)
You cannot transform from window to scene with a matrix, as every point in window corresponds to a whole line in scene. If you want (x, 0, z) coordinates, you can generate this line and intersect it with the y-plane.
If you want to do this by hand, generate two points in projection with the same (x,y) and different (arbitrary) z coordinates and transform them to scene by multiplying with the inverse of your projection transformation. Now intersect the line through those two points with your y-plane and you’re done.
Note that there should be a “static” solution (a single formula) to this problem – if you solve this all on paper, you should get to it.
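The two-point method can be sketched like this. The sketch assumes you have already obtained the two scene-space points by multiplying two projection-space points (same x, y; different z) with the inverse of your projection transformation; only the final plane intersection is shown:

```javascript
// Intersect the picking line through p0 and p1 with the plane y = 0.
// Assumes p0.y !== p1.y, i.e. the line is not parallel to the plane.
function intersectYPlane(p0, p1) {
  const t = p0.y / (p0.y - p1.y); // line parameter where y crosses 0
  return {
    x: p0.x + t * (p1.x - p0.x),
    y: 0,
    z: p0.z + t * (p1.z - p0.z),
  };
}
```

For example, the line from (0, 1, 0) to (2, -1, 4) crosses the y-plane halfway, at (1, 0, 2).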