2d screen velocity to 3d world velocity while camera is rotated - math

I am creating a small demo game where you can rotate around the Earth and launch satellites into space, but I am having some trouble with the calculations.
You can drag the mouse from the platform in a direction; this is the direction you shoot the satellite in. Because the camera is rotated around the planet, up on screen isn't the same as forward in the world. For the direction of the satellite, I need a Vector3 (direction/velocity).
So the data I have is the forward direction of the platform on screen and the mouse drag direction.
For example, when the user drags to (-0.7, 0.7), it means the satellite launch direction should be (0, 0, 1), the global/world forward direction.
So how can I translate that 2D screen position and drag direction into a world-space direction?

PlayCanvas has a very useful function we can make use of; its signature and documentation are as follows:
/**
 * @description Convert a point from 2D canvas pixel space to 3D world space.
 * @param {Number} x x coordinate on PlayCanvas' canvas element.
 * @param {Number} y y coordinate on PlayCanvas' canvas element.
 * @param {Number} z The distance from the camera in world space to create the new point.
 * @param {Number} cw The width of PlayCanvas' canvas element.
 * @param {Number} ch The height of PlayCanvas' canvas element.
 * @param {pc.Vec3} [worldCoord] 3D vector to receive world coordinate result.
 * @returns {pc.Vec3} The world space coordinate.
 */
screenToWorld: function (x, y, z, cw, ch, worldCoord) {
    ...
We can use this function to convert the start and end points of the mouse drag line (A and B respectively in the diagram) to points in 3D world space. After the conversion we must subtract the camera's world position from the two projected points and normalize the resulting vectors.
[The z parameter is irrelevant for this purpose because we are only interested in a direction vector and not an actual point, so just set it to e.g. 1.]
So what does this give us? A plane spanned by these two vectors:
There are three criteria that the velocity direction must satisfy:
1. It must be perpendicular to the surface normal (i.e. tangent to the surface) at the launch site.
2. It must be parallel to the plane we just found.
3. It must have a component in the direction from A to B.
Let:
Screen points A and B project to direction vectors U and V respectively (the normalized vectors computed above).
The surface normal at the launch site (the "up" direction as seen by a person standing there) be N, i.e. the unit vector from the planet's centre to the launch site, obtained from (ψ, φ) = (lat, long) by the usual spherical-to-Cartesian conversion.
Finally, the (un-normalized) velocity direction is simply given by cross(N, cross(U, V)). Note that the order of the cross products matters.
To visualize this:
EDIT:
Small mistake in the second diagram: U×V should be V×U, but the expected result N×(U×V) is still correct.
Note that U×V is not necessarily perpendicular to N. When it is parallel to N, the blue plane "scrapes" the surface, i.e. the green line AB, as rendered on screen, is tangent to the Earth's surface at the launch site.
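Putting the above together, a minimal sketch in PlayCanvas-flavoured JavaScript might look like this. It assumes a camera entity with a camera component (whose screenToWorld wrapper supplies the canvas size itself) and a precomputed launch-site normal; the names launchDirection, launchNormal, dragStart and dragEnd are illustrative, not part of the original answer.
// Illustrative sketch: launch direction from a mouse drag (screen points A and B).
// `cameraEntity` has a pc.CameraComponent; `launchNormal` is N, the unit vector
// from the planet's centre to the launch site; dragStart/dragEnd are {x, y} in pixels.
function launchDirection(cameraEntity, launchNormal, dragStart, dragEnd) {
    var camPos = cameraEntity.getPosition();

    // Project A and B into world space (any positive depth will do, we only want
    // directions), then subtract the camera position and normalize to get U and V.
    var U = cameraEntity.camera.screenToWorld(dragStart.x, dragStart.y, 1, new pc.Vec3());
    var V = cameraEntity.camera.screenToWorld(dragEnd.x, dragEnd.y, 1, new pc.Vec3());
    U.sub(camPos).normalize();
    V.sub(camPos).normalize();

    // cross(U, V) is the normal of the plane spanned by U and V; crossing N with it
    // gives a vector tangent to the surface and parallel to that plane.
    var planeNormal = new pc.Vec3().cross(U, V);
    return new pc.Vec3().cross(launchNormal, planeNormal).normalize();
}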

Related

projecting points onto the sky

Suppose I have a sphere with unit radius, some points defined as latitudes and longitudes on that sphere, and a camera, defined by a vertical and horizontal field-of-view angle, that is always at the centre of the sphere. How can I project these points onto that camera?
A point at direction (x,y,z) at infinity has homogeneous coordinates of (x,y,z,0). So assuming that you use a typical view-projection matrices to describe your camera model, it is as simple as calculating
P * V * ( cos(lon)*cos(lat), sin(lon)*cos(lat), sin(lat), 0 )'
and then proceeding with a perspective divide and rasterization.
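For illustration only (not part of the original answer), that one-liner might look like this in JavaScript with the gl-matrix library, assuming proj and view hold the camera's 4x4 projection and view matrices and the angles are in radians:
import { vec4 } from 'gl-matrix';

// Project a (lat, lon) point on the unit sphere to normalized device coordinates.
function projectToNDC(proj, view, lat, lon) {
    // A direction at infinity: homogeneous coordinates with w = 0.
    var dir = vec4.fromValues(
        Math.cos(lon) * Math.cos(lat),
        Math.sin(lon) * Math.cos(lat),
        Math.sin(lat),
        0);

    var clip = vec4.create();
    vec4.transformMat4(clip, dir, view);   // V * dir
    vec4.transformMat4(clip, clip, proj);  // P * V * dir

    // Perspective divide; the point is on screen only if clip[3] > 0 and the
    // resulting x, y fall inside [-1, 1].
    return [clip[0] / clip[3], clip[1] / clip[3]];
}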

Converting Euler Angles to XYZ coords for VR

Background
I'm running the VTK KiwiViewer source on my mobile device and using it to make VR scenes from point clouds, where the user's phone acts as the VR goggles.
I'm getting attitude from CMDeviceMotion, which provides me with Euler angles for the x, y, and z axes (pitch, roll, and yaw respectively).
I'm trying to get a Google Cardboard experience without leveraging the Cardboard SDK, the reason being that Kiwi already imports all the models I need for testing.
Scenario
Kiwi uses an XYZ coordinate-based system for Camera Position and Focal Point. Here are the three objects you have to work with to position the VR view:
Focal Point: xyz of the point the camera is looking at
Camera Position: xyz where the camera is in 3d space
Camera Up: relative xyz to control the rotation of the camera
For now I'm always putting the Camera Position at 0,0,0. I use sin/cos with Euler Angles * 10 to place the Focal Point 10 units away from the camera. Setting the Camera Position and Focal Point location automatically sets Camera Up to a useable correct value.
Setting the Focal Point
x = -(sin(roll) * cos(pitch)) * 10;
y = cos(roll) * sin(pitch) * 10;
z = sin(yaw);
setCameraFocalPoint(x, y, z);
Question
My current setup works okay but it has some nasty quirks. How can I tweak my conversion to get a more solid VR experience?
You need to find out what convention the Euler angles use (X·Y·Z is common, but your SDK might use another). Then look up the corresponding rotation matrix. Your view direction will be the last column of this matrix (or its negative if you use a right-handed coordinate system), and the up direction will be the second column.
If your SDK allows you to set the view matrix directly, you can use the transposed rotation matrix (and add a fourth row and column of zeroes and m44=1).
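As a sketch of that recipe (JavaScript with gl-matrix; the X·Y·Z rotation order, the 10-unit focal distance and the setCameraFocalPoint call are taken from the question, everything else is an assumption):
import { mat4, vec3 } from 'gl-matrix';

// Build a rotation matrix from the device Euler angles (X*Y*Z order assumed)
// and read the view and up directions off its columns, as described above.
function updateCamera(pitch, roll, yaw) {
    var rot = mat4.create();         // identity
    mat4.rotateX(rot, rot, pitch);   // rotation about x
    mat4.rotateY(rot, rot, roll);    // rotation about y
    mat4.rotateZ(rot, rot, yaw);     // rotation about z

    // gl-matrix is column-major: elements 8..10 form the third (Z) column,
    // elements 4..6 the second (Y) column.
    var forward = vec3.fromValues(rot[8], rot[9], rot[10]);
    var up = vec3.fromValues(rot[4], rot[5], rot[6]);
    // Negate `forward` if your coordinate system is right-handed, as noted above.

    // Camera stays at the origin; place the focal point 10 units along the view direction.
    setCameraFocalPoint(forward[0] * 10, forward[1] * 10, forward[2] * 10);
    // `up` would be the Camera Up vector, if the SDK lets you set it explicitly.
}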

Formula for calculating camera x,y,z position to force 3D point to appear at left side of the screen and rightmost position on the globe

I need a formula to calculate the 3D position and direction (orientation) of a camera in the following situation:
The camera's starting position is looking directly into the centre of the Earth; the green line goes straight up into the sky.
The position that the camera needs to move to looks like this:
Starting position probably shouldn't matter, but the question is:
How do I calculate the camera position and direction given the 3D coordinates of any point on the globe? In the camera's final position, the distance from the Earth is always fixed, and from the desired camera point of view the chosen point should appear at the rightmost point of the globe.
I think what you want for camera position is a point on the intersection of a plane parallel to the tangent plane at the location, but somewhat further from the Center, and a sphere representing the fixed distance the camera should be from the center. The intersection will be a circle, so there are infinitely many camera positions that work.
Camera direction will be 1/2 determined by the location and 1/2 determined by how much earth you want in the picture.
Suppose (0,0,0) is the center of the earth, Re is the radius of the earth, and (a,b,c) is the location on the earth you want to look at. If it's in terms of latitude and longitude you should convert to Cartesian coordinates, which is straightforward. Your camera should be on a plane perpendicular to the vector (a,b,c) at a distance k*Re from the center, where k > 1 is some number you can adjust. The equation for the plane is then ax+by+cz=d where d = k*Re^2. Note that the plane passes through the point (ka,kb,kc) in space, which is what we wanted.
Since you want the camera to be at a certain distance from the center, say h*Re where 1 < k < h, you need to find points on ax+by+cz=d for which x^2+y^2+z^2 = h^2*Re^2. So we need the intersection of the plane and a sphere. It will be easier to manage if we have a coordinate system on the plane, which we get from an orthogonal system which includes (a,b,c). A good candidate for the second vector in the orthogonal system is the projection of the z-axis (the polar axis, I assume) onto the plane. Projecting (0,0,1) onto (a,b,c),
proj_(a,b,c)(0,0,1) = (a,b,c).(0,0,1)/|(a,b,c)|^2 (a,b,c)
= c/Re^2 (a,b,c)
Then the "horizontal component" of (0,0,1) is
u = proj_Plane(0,0,1) = (0,0,1) - c/Re^2 (a,b,c)
= (-ac/Re^2,-bc/Re^2,1-c^2/Re^2)
You can normalize the vector to length 1 if you wish but there's no need. However, you do need to calculate and store the square of the length of the vector, namely
|u|^2 = ((ac)^2 + (bc)^2 + (Re^2-c^2)^2)/Re^4
We could complicate this further by taking the cross product of (0,0,1) and the previous vector to get the third vector in the orthonormal system, then obtain a parametric equation for the intersection of the plane and sphere on which the camera lies, but if you just want the simplest answer we won't do that.
Now we need to solve for t such that
|(ka,kb,kc)+t(-ac/Re^2,-bc/Re^2,1-c^2/Re^2)|^2 = h^2 Re^2
|(ka,kb,kc)|^2 + 2tk (a,b,c).u + t^2 |u|^2 = h^2 Re^2
Since (a,b,c) and u are perpendicular, the middle term drops out, and you have
t^2 = (h^2 Re^2 - k^2 Re^2)/|u|^2.
Substituting that value of t into
(ka,kb,kc)+t(-ac/Re^2,-bc/Re^2,1-c^2/Re^2)
gives the position of the camera in space.
As for direction, you'll have to experiment with that a bit. Some vector that looks like
(a,b,c) + s(-ac/Re^2,-bc/Re^2,1-c^2/Re^2)
should work. It's hard to say a priori because it depends on the camera magnification, width of the view screen, etc. I'm not sure offhand whether you'll need positive or negative values for s. You may also need to rotate the camera viewport, possibly by 90 degrees, I'm not sure.
If this doesn't work out, it's possible I made an error. Let me know how it works out and I'll check.
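For concreteness, here is the algebra above as a small JavaScript function (illustrative only; the function and parameter names are mine, not the answerer's):
// Camera position for looking at the point (a, b, c) on a globe of radius Re.
// k*Re is the distance from the center to the plane of candidate positions,
// h*Re is the camera's distance from the center, with 1 < k < h.
function cameraPosition(a, b, c, Re, k, h) {
    var Re2 = Re * Re;

    // u = component of the polar axis (0,0,1) perpendicular to (a,b,c).
    var u = [-a * c / Re2, -b * c / Re2, 1 - c * c / Re2];
    var uLen2 = u[0] * u[0] + u[1] * u[1] + u[2] * u[2];

    // Solve |(ka,kb,kc) + t*u|^2 = h^2 * Re^2 for t; the cross term vanishes
    // because (a,b,c) and u are perpendicular.
    var t = Math.sqrt((h * h - k * k) * Re2 / uLen2);

    return [k * a + t * u[0], k * b + t * u[1], k * c + t * u[2]];
}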

How to find view point coordinates?

I have the azimuth, elevation and direction vector of the sun. I want to place a view point along the sun-ray direction at some distance. Can anyone describe or provide a link to a resource that will help me understand and implement the required steps?
I used a Cartesian coordinate system to find the direction vector from azimuth and elevation, and then to find the viewport origin (see the image in the question):
x = distance
y = distance * tan(azimuth)
z = distance * tan(elevation)
I want to find that distance value... how?
The azimuthal coordinate system is referenced to the NEH (geometric North, East, High/Up) reference frame!
The image you link to references the -Y axis instead, which is not correct unless you are not rendering the world but doing some nonlinear graph-plot projection, so which one is it?
By the way, here ECEF/WGS84 and NEH you can find out how to compute NEH for WGS84.
As far as I can see your conversion between coordinates is wrong, so just to be clear, this is how it should look:
On the left is the global Earth view and one NEH frame computed for its position (its origin). In the middle is a surface-aligned side view and on the right a surface-aligned top view. Blue, magenta, green are the input azimuthal coordinates; brown are the x, y, z Cartesian projections (where the coordinate lies on its axis), so:
Dist'= Dist *cos(Elev );
z = Dist *sin(Elev );
x = Dist'*cos(Azimut);
y =-Dist'*sin(Azimut);
If you use a different reference frame or axis orientation then change it accordingly...
I suspect you use 4x4 homogeneous transform matrices to represent coordinate systems and also to hold your view-port, so look here:
transform matrix anatomy
constructing the view-port
You need the X, Y, Z axis vectors and the O origin position. O you already have (at least you think), and the Z axis is the ray direction, so you should have that too. Now just compute X and Y as an alignment to something (otherwise the view would be free to rotate around the ray); I use NEH for that, so:
view.Z=Ray.Dir // ray direction
view.Y=NEH.Z // NEH up vector
view.X=view.Y x view.Z // cross product makes the view.X axis perpendicular to Y and Z
view.Y=view.Z x view.X // just to make all three axes perpendicular to each other
view.O=ground position - (distance*Ray.Dir);
To make it a valid view_port you have to:
view = inverse(view)*projection_matrix;
You need inverse matrix computation for that
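A possible JavaScript rendering of that pseudocode, using gl-matrix for the vector and matrix operations (rayDir, nehUp, groundPos, distance and projection are assumed inputs; the multiplication order with the projection matrix depends on your matrix convention):
import { mat4, vec3 } from 'gl-matrix';

// Build a combined view matrix looking along rayDir, aligned to the NEH up vector.
function buildView(rayDir, nehUp, groundPos, distance, projection) {
    var Z = vec3.normalize(vec3.create(), rayDir);   // view.Z = ray direction
    var X = vec3.cross(vec3.create(), nehUp, Z);     // view.X = NEH up x Z
    vec3.normalize(X, X);
    var Y = vec3.cross(vec3.create(), Z, X);         // view.Y: re-orthogonalized up
    var O = vec3.scaleAndAdd(vec3.create(), groundPos, Z, -distance); // view.O = ground - distance*ray

    // Camera-to-world matrix from the basis vectors (column-major layout).
    var camToWorld = mat4.fromValues(
        X[0], X[1], X[2], 0,
        Y[0], Y[1], Y[2], 0,
        Z[0], Z[1], Z[2], 0,
        O[0], O[1], O[2], 1);

    // Invert it to get the view matrix, then combine with the projection.
    var view = mat4.invert(mat4.create(), camToWorld);
    return mat4.multiply(mat4.create(), projection, view);
}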
if you want the whole thing
Then you also want to add the Sun/Earth position computation; in that case look here:
complete Earth-Sun position by Kepler's equation
The distance
Now that it is clear what is going on, you just need to set the distance. If you want to place the view point at the Sun, then it will be distance = 1.0 AU (astronomical unit), but that is a huge distance, and if you have a perspective projection your Earth will be very small. Instead, use some closer distance to match your view size; look here:
How to position the camera so that the object always has the same size

reverse perspective projection

I'm using
worldview_inverse * (projection_inverse * vector)
to transform screen space coordinates into world space coordinates.
I assumed that
(x,y,1,1)
would transform to a point on the far plane, while
(x,y,-1,1)
transforms to a point on the near plane, and connecting the line I can query all objects in the view frustum that intersect the line.
After the transformation I divide the resulting points by their respective .w component.
This works for the far-plane, but the point on the near plane somehow gets transformed to the world space origin.
I think this has to do with the w components of 1 I'm feeding into the inverse projection, because usually it is 1 before projection, not after, and I'm doing the reverse projection. What am I doing wrong?
I know this is only a workaround, but you can deduce the near plane point by only using the far point and the viewing position.
near_point = view_position
+ (far_point - view_position) * (near_distance / far_distance)
As for your real problem: first, don't forget to divide by W! Also, depending on your projection matrix, have you tried (x, y, 0, 1) as opposed to z = -1?
near_point = worldview_inverse * (projection_inverse * vector)
near_point /= near_point.W
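As an illustration of the fix (not the asker's actual code), the unproject with the perspective divide could look like this in JavaScript with gl-matrix; ndcZ is -1 or 0 for the near plane depending on the projection convention, as noted above:
import { mat4, vec4 } from 'gl-matrix';

// Unproject a normalized-device-coordinate point back to world space.
function unproject(x, y, ndcZ, projection, worldview) {
    var invProj = mat4.invert(mat4.create(), projection);
    var invView = mat4.invert(mat4.create(), worldview);

    var p = vec4.fromValues(x, y, ndcZ, 1);
    vec4.transformMat4(p, p, invProj);   // projection_inverse * vector
    vec4.transformMat4(p, p, invView);   // worldview_inverse * (...)

    // The perspective divide: without it the near-plane point collapses toward the origin.
    return [p[0] / p[3], p[1] / p[3], p[2] / p[3]];
}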

Resources