projecting points onto the sky - math

Suppose I have a sphere with unit radius, some points defined as latitudes and longitudes on that sphere, and a camera at the centre of the sphere defined by vertical and horizontal field-of-view angles. How can I project these points onto that camera?

A point at direction (x,y,z) at infinity has homogeneous coordinates (x,y,z,0). So assuming you use typical view and projection matrices to describe your camera model, it is as simple as calculating
P * V * ( cos(lon)*cos(lat), sin(lon)*cos(lat), sin(lat), 0 )'
and then proceeding with a perspective divide and rasterization.
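As a runnable illustration, here is a minimal numpy sketch of the pipeline above, assuming an OpenGL-style camera at the origin looking down -z; `perspective` and `sky_point_to_ndc` are hypothetical helper names:

```python
import numpy as np

def perspective(vfov, hfov, near=0.1, far=100.0):
    """OpenGL-style projection matrix from vertical/horizontal
    field-of-view angles (radians)."""
    t = 1.0 / np.tan(vfov / 2.0)   # vertical scale
    s = 1.0 / np.tan(hfov / 2.0)   # horizontal scale
    return np.array([
        [s, 0.0, 0.0, 0.0],
        [0.0, t, 0.0, 0.0],
        [0.0, 0.0, -(far + near) / (far - near), -2.0 * far * near / (far - near)],
        [0.0, 0.0, -1.0, 0.0],
    ])

def sky_point_to_ndc(lat, lon, view, proj):
    """Project a lat/lon direction on the unit sphere as a point at
    infinity (w = 0), returning normalized device coordinates, or None
    if the direction is not in front of the camera."""
    d = np.array([np.cos(lon) * np.cos(lat),
                  np.sin(lon) * np.cos(lat),
                  np.sin(lat),
                  0.0])                 # w = 0: a direction, not a position
    clip = proj @ view @ d
    if clip[3] <= 0.0:                  # behind (or exactly beside) the camera
        return None
    return clip[:3] / clip[3]           # perspective divide
```

A direction straight down the view axis lands at the centre of the screen (NDC x = y = 0). Note that points at infinity come out at (or just beyond) the far-plane depth, so you may want to disable depth testing or clamp z when rasterizing them.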

Related

Making a Circle using Math nodes

I tried to make a circle using Math nodes in Blender's Shader Editor on a default Plane. A default Plane has dimensions of 2 m x 2 m. I used the standard equation
(x-g)^2 + (y-h)^2 - r^2 = 0
But the circle formed exceeds the Plane when I use the value (1,1) for (g,h), whereas with (0.5,0.5) for (g,h) I get the desired result.
Mathematically, shouldn't the top-right corner of the Plane be (2,2) and the centre of the Plane be (1,1)?
Please help me.
With the shown setup your center is at (0.5, 0.5) and the diameter is 1.
That does get you a circle spanning the 0..1/0..1 coordinate range of a texture.
Try using a 4x4 plane; it will probably give you some insight:
The texture coordinates and the object/vertex coordinates are different.
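To see the difference numerically, here is a small Python sketch (the `inside_circle` helper is mine, standing in for the node tree) evaluating the circle equation over texture coordinates:

```python
# Evaluate the circle field (x-g)^2 + (y-h)^2 - r^2, mimicking what the
# Math nodes compute for each shading point.
def inside_circle(x, y, g, h, r):
    return (x - g) ** 2 + (y - h) ** 2 - r ** 2 <= 0.0

# Texture (UV) coordinates span 0..1 regardless of the plane's 2 m size,
# so a circle centred at (0.5, 0.5) with r = 0.5 exactly fills the plane,
# while a centre of (1, 1) pushes the circle into the top-right corner.
```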

2d screen velocity to 3d world velocity while camera is rotated

I am creating a small demo game where you can rotate around earth and launch satellites into space. But I have some trouble with the calculations.
You can drag the mouse from the platform to a direction. This is the direction you shoot the satellite to. Because the camera is rotated around the planet, up isn't the same as forward. For the direction of the satellite, I need a Vector3 (direction/velocity).
So the data I have are the platform's forward direction on screen and the mouse drag direction.
So when the user drags it to (-0.7, 0.7), it means the satellite launch direction should be (0, 0, 1) - the global/world forward direction.
So how can I translate that 2D screen position and direction into a world direction?
PlayCanvas has a very useful function we could make use of. The implementation begins as follows:
/**
 * @description Convert a point from 2D canvas pixel space to 3D world space.
 * @param {Number} x x coordinate on PlayCanvas' canvas element.
 * @param {Number} y y coordinate on PlayCanvas' canvas element.
 * @param {Number} z The distance from the camera in world space to create the new point.
 * @param {Number} cw The width of PlayCanvas' canvas element.
 * @param {Number} ch The height of PlayCanvas' canvas element.
 * @param {pc.Vec3} [worldCoord] 3D vector to receive world coordinate result.
 * @returns {pc.Vec3} The world space coordinate.
 */
screenToWorld: function (x, y, z, cw, ch, worldCoord) {
...
We can use this function to convert the start and end points (A and B respectively in the diagram) of the mouse drag line into 3D world space. After the conversion we must subtract the camera's world position from the two projected points and normalize the resulting vectors.
[The z parameter is irrelevant for this purpose because we are only interested in a direction vector, not an actual point, so just set it to e.g. 1.]
So what does this give us? A plane spanned by these two vectors:
There are three criteria that the velocity direction must satisfy:
Perpendicular to the surface normal (i.e. tangent to the surface) at the launch site.
Parallel to the plane we just found.
Have a component in the direction from A to B.
Let:
Screen points A and B project to directional vectors U and V respectively.
The surface normal at the launch site (the "up" direction as seen by a person standing there) be N = (cos ψ cos φ, cos ψ sin φ, sin ψ),
where (ψ, φ) = (lat, long).
Finally, the (un-normalized) velocity direction is simply given by cross(N, cross(U, V)). Note that the order of operations matters.
To visualize this:
EDIT:
Small mistake in the second diagram: U×V should be V×U, but the expected result N×(U×V) is still correct.
Note that U×V is not necessarily perpendicular to N. When it is parallel to N, the blue plane "scrapes" the surface, i.e. the green line AB, as rendered on-screen, is tangent to the Earth's surface at the launch site.
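Putting the three criteria together, a minimal numpy sketch might look like this (`launch_direction` is a hypothetical helper; U, V and N are the vectors defined above):

```python
import numpy as np

def launch_direction(U, V, N):
    """Velocity direction meeting the three criteria above: perpendicular
    to the surface normal N, parallel to the plane spanned by U and V,
    and having a component in the direction from A to B."""
    d = np.cross(N, np.cross(U, V))
    n = np.linalg.norm(d)
    return d / n if n > 0.0 else d      # zero when U x V is parallel to N
```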

Formula for calculating camera x,y,z position to force 3D point to appear at left side of the screen and rightmost position on the globe

I'd need a formula to calculate 3D position and direction or orientation of a camera in a following situation:
The camera's starting position is looking directly into the center of the Earth; the green line goes straight up into the sky.
The position that the camera needs to move to looks like this:
Starting position probably shouldn't matter, but the question is:
How to calculate camera position and direction given 3D coordinates of any point on the globe. In the camera final position, the distance from Earth is always fixed. From desired camera point of view, the chosen point should appear at the rightmost point of a globe.
I think what you want for the camera position is a point on the intersection of (a) a plane parallel to the tangent plane at the location, but somewhat further from the center, and (b) a sphere representing the fixed distance the camera should be from the center. The intersection will be a circle, so there are infinitely many camera positions that work.
The camera direction will be half determined by the location and half by how much earth you want in the picture.
Suppose (0,0,0) is the center of the earth, Re is the radius of the earth, and (a,b,c) is the location on the earth you want to look at. If it's given in latitude and longitude, first convert to Cartesian coordinates, which is straightforward. Your camera should be on a plane perpendicular to the vector (a,b,c), at distance k·Re from the center, where k > 1 is some number you can adjust. The equation of the plane is then ax + by + cz = d where d = k·Re^2. Note that the plane passes through the point (ka, kb, kc) in space, which is what we wanted.
Since you want the camera to be at a certain distance from the center, say h·Re where 1 < k < h, you need to find points on ax + by + cz = d for which x^2 + y^2 + z^2 = h^2·Re^2. So we need the intersection of the plane and a sphere. It will be easier to manage if we have a coordinate system on the plane, which we get from an orthogonal system that includes (a,b,c). A good candidate for the second vector in the system is the projection of the z-axis (the polar axis, I assume) onto the plane. Projecting (0,0,1) onto (a,b,c):
proj_(a,b,c)(0,0,1) = [(a,b,c)·(0,0,1) / |(a,b,c)|^2] (a,b,c)
                    = (c/Re^2) (a,b,c)
Then the "horizontal component" of (0,0,1) is
u = proj_Plane(0,0,1) = (0,0,1) - c/Re^2 (a,b,c)
= (-ac/Re^2,-bc/Re^2,1-c^2/Re^2)
You can normalize the vector to length 1 if you wish but there's no need. However, you do need to calculate and store the square of the length of the vector, namely
|u|^2 = ((ac)^2 + (bc)^2 + (Re^2 - c^2)^2)/Re^4
We could complicate this further by taking the cross product of (0,0,1) and the previous vector to get the third vector in the orthonormal system, then obtain a parametric equation for the intersection of the plane and sphere on which the camera lies, but if you just want the simplest answer we won't do that.
Now we need to solve for t such that
|(ka,kb,kc)+t(-ac/Re^2,-bc/Re^2,1-c^2/Re^2)|^2 = h^2 Re^2
|(ka,kb,kc)|^2 + 2tk (a,b,c)·u + t^2 |u|^2 = h^2 Re^2
Since (a,b,c) and u are perpendicular, the middle term drops out, and you have
t^2 = (h^2 Re^2 - k^2 Re^2)/|u|^2.
Substituting that value of t into
(ka,kb,kc)+t(-ac/Re^2,-bc/Re^2,1-c^2/Re^2)
gives the position of the camera in space.
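The construction above can be sketched in Python as follows (the function name and the choice of the positive root for t are mine; either sign of t gives a valid camera position on the circle):

```python
import math

def camera_position(a, b, c, Re, k, h):
    """Camera position per the construction above: start at (ka, kb, kc)
    and slide along u (the projection of the polar axis onto the plane)
    until the distance from the centre is h*Re. Assumes 1 < k < h and
    that (a, b, c) is not a pole (there u would be zero)."""
    u = (-a * c / Re**2, -b * c / Re**2, 1.0 - c**2 / Re**2)
    u_sq = sum(ui * ui for ui in u)
    # t^2 = (h^2 Re^2 - k^2 Re^2) / |u|^2; either root works
    t = math.sqrt((h * h - k * k) * Re * Re / u_sq)
    return (k * a + t * u[0], k * b + t * u[1], k * c + t * u[2])
```

The result always lies both at distance h·Re from the center and on the plane ax + by + cz = k·Re^2, as the derivation requires.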
As for direction, you'll have to experiment with that a bit. Some vector that looks like
(a,b,c) + s(-ac/Re^2,-bc/Re^2,1-c^2/Re^2)
should work. It's hard to say a priori because it depends on the camera magnification, width of the view screen, etc. I'm not sure offhand whether you'll need positive or negative values for s. You may also need to rotate the camera viewport, possibly by 90 degrees, I'm not sure.
If this doesn't work out, it's possible I made an error. Let me know how it works out and I'll check.

How to find view point coordinates?

I have the azimuth, elevation and direction vector of the sun. I want to place a view point along the sun-ray direction at some distance. Can anyone describe, or provide a link to, a resource that will help me understand and implement the required steps?
I used a Cartesian coordinate system to find the direction vector from azimuth and elevation, and then to find the viewport origin:
x = distance
y = distance * tan(azimuth)
z = distance * tan(elevation)
I want to find that distance value... how?
The azimuthal coordinate system is referenced to an NEH (geometric North, East, High (Up)) reference frame!
Your linked image references the -Y axis instead, which is wrong unless you are not rendering the world but doing some nonlinear graph-plot projection - so which one is it?
BTW, here ECEF/WGS84 and NEH you can find out how to compute NEH for WGS84.
As far as I can see, your conversion between the coordinates is wrong, so just to be clear, this is how it looks:
On the left is a global Earth view with one NEH frame computed for its position (its origin). In the middle is a surface-aligned side view, and on the right a surface-aligned top view. Blue, magenta and green are the input azimuthal coordinates; brown are the x, y, z Cartesian projections (where each coordinate lies on its axis), so:
Dist'= Dist *cos(Elev );
z = Dist *sin(Elev );
x = Dist'*cos(Azimut);
y =-Dist'*sin(Azimut);
If you use a different reference frame or axis orientations, then change it accordingly.
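The conversion formulas above can be sketched in plain Python (angles in radians; the function name is hypothetical):

```python
import math

def azel_to_neh(azimuth, elevation, dist):
    """Azimuthal coordinates to Cartesian x, y, z in the NEH frame,
    following the formulas above."""
    d = dist * math.cos(elevation)   # Dist': distance projected onto the ground plane
    z = dist * math.sin(elevation)
    x = d * math.cos(azimuth)
    y = -d * math.sin(azimuth)
    return x, y, z
```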
I suspect you use 4x4 homogeneous transform matrices
for representing coordinate systems and also to hold your view-port, so look here:
transform matrix anatomy
constructing the view-port
You need the X, Y, Z axis vectors and the O origin position. O you already have (at least you think you do), and the Z axis is the ray direction, so you should have that too. Now just compute X and Y as an alignment to something (otherwise the view will rotate around the ray). I use NEH for that, so:
view.Z=Ray.Dir          // ray direction
view.Y=NEH.Z            // NEH up vector
view.X=view.Y x view.Z  // cross product makes view.X perpendicular to Y and Z
view.Y=view.Z x view.X  // just to make all three axes perpendicular to each other
view.O=ground_position - (distance*Ray.Dir);
To make it a valid view_port you have to:
view = inverse(view)*projection_matrix;
You need inverse matrix computation for that
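As a sketch of the axis construction and inversion described above (using numpy's inverse instead of a hand-rolled one, and leaving the projection matrix out):

```python
import numpy as np

def view_matrix(ray_dir, neh_up, ground_pos, distance):
    """Build the camera frame described above (Z along the ray, X/Y
    aligned to the NEH up vector, origin pulled back along the ray) and
    return its inverse, the world-to-camera matrix."""
    Z = ray_dir / np.linalg.norm(ray_dir)
    X = np.cross(neh_up, Z)
    X /= np.linalg.norm(X)                    # fails if the ray is parallel to up
    Y = np.cross(Z, X)                        # makes all three axes perpendicular
    frame = np.eye(4)
    frame[:3, 0], frame[:3, 1], frame[:3, 2] = X, Y, Z
    frame[:3, 3] = ground_pos - distance * Z  # camera origin O on the ray
    return np.linalg.inv(frame)
```

Transforming the ground position by the returned matrix places it straight ahead of the camera at the chosen distance.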
If you want the whole thing, then you also want to add the Sun/Earth position computation; in that case look here:
complete Earth-Sun position by Kepler's equation
The distance
Now that it is clear what is behind all this, you just need to set the distance. If you want to set it to the Sun, it will be distance = 1.0 AU (astronomical unit), but that is a huge distance and with perspective your Earth will be very small; instead use some closer distance to match your view size - look here:
How to position the camera so that the object always has the same size

Projection matrix point to sphere surface

I need to project a 3D object onto a sphere's surface (uhm.. like casting a shadow).
AFAIR this should be possible with a projection matrix.
If the "shadow receiver" was a plane, then my projection matrix would be a 3D to 2D-plane projection, but my receiver in this case is a 3D spherical surface.
So given sphere1(centerpoint, radius), sphere2(othercenter, otherradius) and an eyepoint, how can I compute a matrix that projects all points from sphere2 onto sphere1 (like casting a shadow)?
Do you mean that given a vertex v you want the following projection:
v'= centerpoint + (v - centerpoint) * (radius / |v - centerpoint|)
This is not possible with a projection matrix. You could easily do it in a shader though.
Matrices are commonly used to represent linear operations, like projection onto a plane.
In your case, the resulting vertices aren't derived from the input by a linear function, so this projection is not possible using a matrix.
If sphere1 is sphere((0,0,0),1), that is, the sphere of radius 1 centered at the origin, then you're in effect asking for a way to convert any location (x,y,z) in 3D to a corresponding location (x', y', z') on the unit sphere. This is equivalent to vector renormalization: (x',y',z') = (x,y,z)/sqrt(x^2+y^2+z^2).
If sphere1 is not the unit sphere, but is say sphere((a,b,c),R) you can do mostly the same thing:
(x',y',z') = R*(x-a,y-b,z-c) / sqrt((x-a)^2+(y-b)^2+(z-c)^2) + (a,b,c). This is equivalent to changing coordinates so the first sphere is the unit sphere, solving the problem, then changing coordinates back.
As people have pointed out, these functions are nonlinear, so the projection cannot be called a "matrix." But if you prefer for some reason to start with a projection matrix, you could project first from 3D to a plane, then from a plane to the sphere. I'm not sure if that would be any better though.
Finally, let me point out that linear maps don't produce division-by-zero errors, but if you look closely at the formulas above, you'll see that this map can. Geometrically, that's because it's hard to project the center point of a sphere to its boundary.
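The renormalization above is easy to do per-vertex (e.g. in a shader); a minimal Python sketch, with a hypothetical function name:

```python
import math

def project_to_sphere(v, center, radius):
    """Radial projection v' = center + (v - center) * radius/|v - center|.
    Nonlinear, so it cannot be a matrix; undefined at the centre itself."""
    d = tuple(vi - ci for vi, ci in zip(v, center))
    n = math.sqrt(sum(di * di for di in d))
    if n == 0.0:
        raise ValueError("cannot project the sphere's centre onto its surface")
    return tuple(ci + di * radius / n for ci, di in zip(center, d))
```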