Snap dragged mouse position perpendicular to start edge in 3D - math

I'm drawing a line as I drag my mouse from an edge AB in 3D, using the vector that the mouse's path creates to draw the line CD (see the image). I need to determine whether the line CD is perpendicular to AB and, if not, how to move it to a position CE where it is perpendicular.
(image: Perpendicular Problem)
I can determine the angle of the mouse's path relative to the edge, but I'm struggling to determine where the mouse should be snapped so that it is perpendicular to the edge.
In 2D I could simply determine the two vectors perpendicular to the edge (one on either side) and compare my mouse path to those, but in 3D the possible perpendicular vectors are infinite, so I need to limit the options somehow.
I can create a triangular plane between points C, B and D, but I still don't know the best way to use this to determine a perpendicular vector FG from the edge; I only know that my mouse path is or is not perpendicular.
I might be approaching this all wrong, so any help would be appreciated.
Thanks.
Edit:
I'm not sure if I need to write a new question for this, but it seemed reasonable to carry on this thread.
I can now drag perpendicular lines from any edge of my geometry using MBo's answer below. However, I now have the issue of infinite perpendicular directions for any given edge.
Is there an easy way to limit these to four directions (see image: Perpendicular Dragging; the green dashed line is the mouse path)? I'm showing a cube in the image, but it could be any edge geometry in 3D space.
My current thinking is that the best way is to take the edge that the mouse is dragging from and use any connected edge to create a plane, then use that plane's normal to limit the perpendiculars, as in the sketch below. But if there's a better way, please let me know. Thanks.
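
To make that idea concrete, here is a minimal sketch in plain C# (System.Numerics rather than any particular engine). SnapToPerpendicular and its parameters are illustrative names, and it assumes edgeDir and planeNormal are already normalized; the four candidates are then the plane normal, the in-plane perpendicular, and their negations:

using System;
using System.Numerics;

static class PerpendicularSnap
{
    // edgeDir: normalized direction of the edge being dragged from.
    // planeNormal: normalized normal of the plane formed with a connected edge.
    // mousePath: vector from the drag start point to the current mouse position.
    public static Vector3 SnapToPerpendicular(Vector3 edgeDir, Vector3 planeNormal, Vector3 mousePath)
    {
        Vector3 inPlane = Vector3.Normalize(Vector3.Cross(edgeDir, planeNormal));
        Vector3[] candidates = { planeNormal, -planeNormal, inPlane, -inPlane };

        Vector3 best = candidates[0];
        float bestDot = float.MinValue;
        foreach (Vector3 c in candidates)
        {
            float d = Vector3.Dot(mousePath, c);   // largest dot product = smallest angle
            if (d > bestDot) { bestDot = d; best = c; }
        }
        return best * bestDot;   // the drag projected onto the chosen perpendicular direction
    }
}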

Find the projection P of point D onto the line A-B using vector algebra:
AB = B - A   // component-wise: AB.x = B.x - A.x, and so on
AD = D - A
P = A + AB * (AB.dot.AD) / (AB.dot.AB)
// where (AB.dot.AD) = AB.x * AD.x + AB.y * AD.y + AB.z * AD.z
Now shift D by the difference of C and P:
E = D + (C - P)
Note that when the value t = (AB.dot.AD) / (AB.dot.AB) is not in the range 0..1, the projection point lies outside the segment AB.
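
A minimal C# sketch of the above, using System.Numerics (the class and method names are just illustrative):

using System.Numerics;

static class EdgeSnap
{
    // A, B: the edge endpoints; C: the drag start point on the edge; D: the current mouse point.
    // Returns E, the point D shifted so that the line C-E is perpendicular to A-B.
    public static Vector3 SnapPerpendicular(Vector3 A, Vector3 B, Vector3 C, Vector3 D)
    {
        Vector3 AB = B - A;
        Vector3 AD = D - A;
        float t = Vector3.Dot(AB, AD) / Vector3.Dot(AB, AB);
        Vector3 P = A + AB * t;   // projection of D onto the line A-B
        return D + (C - P);       // shift D by the difference of C and P
    }
}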

Related

Calculating the distance of vectors along an arbitrary normal

I'm trying to create a function for extruding a face along a normal by dragging the mouse. For the purpose of the question, I've simplified things to 2D vectors, so that the view is looking down onto a cube, with the normal being that of the face to extrude.
I can limit the movement of the mouse to the direction of the face normal easily; my question is how to work out the correct distance the mouse has travelled along the normal direction.
I have two vectors (A & B1). A is the starting point and B1 is the current mouse position (see image: Vector Normal Projection). I need to project B1 so that it lies along the face-normal direction from point A, so B1 becomes B2. The same applies to a mouse position of Bx (Bx needs to be projected from A along the normal so that it becomes B2). This means that whether the mouse is at B1, B2 or Bx, they all give the same distance along the normal direction (2 in this case).
I may be approaching the problem incorrectly, so please let me know if there is a better way to tackle this.
Thanks. 
The length of the projection of the vector w = AB1 onto the line AB2, which has normalized (unit-length) direction vector e, is a simple dot product:
L = (w.dot.e)
Perhaps you already have e if you know the angle; in 2D its components are:
e = (cos(fi), sin(fi))
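
As a small sketch in C# (System.Numerics; DistanceAlongNormal is an illustrative name), assuming A is the drag start and e is the unit-length face normal:

using System;
using System.Numerics;

static class NormalDistance
{
    // Signed distance travelled along the unit direction e for a mouse that moved from A to B1.
    public static float DistanceAlongNormal(Vector2 A, Vector2 B1, Vector2 e)
    {
        Vector2 w = B1 - A;          // mouse displacement
        return Vector2.Dot(w, e);    // length of w's projection onto e
    }
}

// If you only know the angle fi, build e first:
// Vector2 e = new Vector2(MathF.Cos(fi), MathF.Sin(fi));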

Unity 3D Vector calculating on one axis crossing other axis

Hey, I have a problem and I can't get this calculation to work in Unity 3D.
I want to manipulate vertices. That's fine, but I want to move a vertex along the X axis to where my mouse is, and that doesn't work properly.
What I do is cast a ray from an origin in a direction, so the ray could be infinitely long.
With this I want to move the vertex of the mesh to the point where the mouse is. I limited the range with ray_z = vertice_z (pseudo), but if you look at the black line, which is the ray, you'll notice it gets longer or shorter when I move or rotate the camera, so the vertex is not at the same position as the mouse.
I can't work out the calculation. How can I calculate the position where Z (the black line) crosses X (the red line)?
Example:
cam(1,0,0) // cam & the start position of the ray
x_axis(10,0,10) // red line cutting black
ray_position(15,0,15) // the end of the ray (where the mouse could be if you look from cam to mouse)
Btw: the viewport is not top-down, I painted it wrong.
If you didn't understand, I can try again ^^.
You're looking for Plane.Raycast, I think.
Let's say your plane has a <0, 0, -1> normal (the x-y plane) and passes through the origin:
Plane p = new Plane(Vector3.back, Vector3.zero);
Then you can find the point where a camera/mouse ray intersects with that plane:
Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
float distanceAlongRay;
p.Raycast(ray, out distanceAlongRay);   // returns false if the ray never hits the plane, e.g. it is parallel to it
Vector3 mouseOnPlane = ray.GetPoint(distanceAlongRay);
mouseOnPlane is the point on the x-y plane where the mouse ray hit. If you're only interested in the x, then use mouseOnPlane.x.

Formula for calculating camera x,y,z position to force 3D point to appear at left side of the screen and rightmost position on the globe

I need a formula to calculate the 3D position and direction/orientation of a camera in the following situation:
The camera's starting position is looking directly into the center of the Earth; the green line goes straight up to the sky.
The position that the camera needs to move to looks like this.
The starting position probably shouldn't matter, but the question is:
How to calculate the camera position and direction given the 3D coordinates of any point on the globe? In the camera's final position, the distance from the Earth is always fixed, and from the camera's point of view the chosen point should appear at the rightmost point of the globe.
I think what you want for the camera position is a point on the intersection of a plane parallel to the tangent plane at the location (but somewhat further from the center) with a sphere representing the fixed distance the camera should be from the center. The intersection is a circle, so there are infinitely many camera positions that work.
The camera direction will be half determined by the location and half by how much Earth you want in the picture.
Suppose (0,0,0) is the center of the Earth, Re is the radius of the Earth, and (a,b,c) is the location on the Earth you want to look at. If it's given in latitude and longitude, you should convert to Cartesian coordinates, which is straightforward. Your camera should be on a plane perpendicular to the vector (a,b,c) at distance k*Re from the center, where k > 1 is a number you can adjust. The equation of the plane is then ax+by+cz = d, where d = k*Re^2. Note that the plane passes through the point (ka,kb,kc) in space, which is what we wanted.
Since you want the camera to be at a fixed distance from the center, say h*Re where 1 < k < h, you need to find points on ax+by+cz = d for which x^2+y^2+z^2 = h^2*Re^2. So we need the intersection of the plane and a sphere. It will be easier to manage if we have a coordinate system on the plane, which we get from an orthogonal system that includes (a,b,c). A good candidate for the second vector is the projection of the z-axis (the polar axis, I assume). Projecting (0,0,1) onto (a,b,c),
proj_(a,b,c)(0,0,1) = ((a,b,c).(0,0,1) / |(a,b,c)|^2) (a,b,c)
                    = (c/Re^2) (a,b,c)
Then the "horizontal component" of (0,0,1) is
u = proj_Plane(0,0,1) = (0,0,1) - (c/Re^2) (a,b,c)
                      = (-ac/Re^2, -bc/Re^2, 1 - c^2/Re^2)
You can normalize the vector to length 1 if you wish, but there's no need. However, you do need to calculate and store the square of its length, namely
|u|^2 = ((ac)^2 + (bc)^2 + (Re^2-c^2)^2)/Re^4
which simplifies to |u|^2 = 1 - c^2/Re^2.
We could complicate this further by taking the cross product of (0,0,1) and the previous vector to get the third vector in the orthonormal system, then obtain a parametric equation for the intersection of the plane and sphere on which the camera lies, but if you just want the simplest answer we won't do that.
Now we need to solve for t such that
|(ka,kb,kc)+t(-ac/Re^2,-bc/Re^2,1-c^2/Re^2)|^2 = h^2 Re^2
k^2 Re^2 + 2tk (a,b,c).u + t^2 |u|^2 = h^2 Re^2
Since (a,b,c) and u are perpendicular, the middle term drops out, and you have
t^2 = (h^2 Re^2 - k^2 Re^2)/|u|^2.
Substituting that value of t into
(ka,kb,kc)+t(-ac/Re^2,-bc/Re^2,1-c^2/Re^2)
gives the position of the camera in space.
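
If it helps, here is the whole computation as a C# sketch (plain System.Numerics; CameraPosition is an illustrative name). It assumes |loc| = Re, and it breaks down at the poles, where u has zero length and a different second axis would be needed:

using System;
using System.Numerics;

static class CameraPlacement
{
    // loc = (a,b,c), a point on the globe with |loc| = Re.
    // k: plane offset factor (k > 1); h: camera distance factor (h > k).
    public static Vector3 CameraPosition(Vector3 loc, float Re, float k, float h)
    {
        float c = loc.Z;
        // u = projection of the polar axis (0,0,1) onto the tangent plane at loc
        Vector3 u = new Vector3(0, 0, 1) - (c / (Re * Re)) * loc;
        float uLenSq = 1f - (c * c) / (Re * Re);              // |u|^2 in simplified form
        float t = MathF.Sqrt((h * h - k * k) * Re * Re / uLenSq);
        return k * loc + t * u;   // use -t for the mirror-image camera position
    }
}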
As for direction, you'll have to experiment with that a bit. Some vector that looks like
(a,b,c) + s(-ac/Re^2,-bc/Re^2,1-c^2/Re^2)
should work. It's hard to say a priori because it depends on the camera magnification, width of the view screen, etc. I'm not sure offhand whether you'll need positive or negative values for s. You may also need to rotate the camera viewport, possibly by 90 degrees, I'm not sure.
If this doesn't work out, it's possible I made an error. Let me know how it works out and I'll check.

How to find view point coordinates?

I have the azimuth, elevation and direction vector of the sun. I want to place a viewpoint along the sun-ray direction at some distance. Can anyone describe, or provide a link to, a resource that will help me understand and implement the required steps?
I used a Cartesian coordinate system to find the direction vector from azimuth and elevation, and then to find the viewport origin (see the image for this question):
x = distance
y = distance * tan(azimuth)
z = distance * tan(elevation)
I want to find that distance value... how?
The azimuthal coordinate system references the NEH (geometric North, East, High/Up) reference frame!
The image you link to references the -Y axis instead, which is not right unless you are not rendering the world but doing some nonlinear graph-plot projection, so which one is it?
By the way, here ECEF/WGS84 and NEH you can find out how to compute NEH for WGS84.
As far as I can see, you have a bad conversion between coordinate systems, so just to be clear, this is how it looks:
On the left is the global Earth view and one NEH frame computed for its position (its origin). In the middle is a surface-aligned side view, and on the right is a surface-aligned top view. Blue, magenta and green are the input azimuthal coordinates; brown are the x, y, z Cartesian projections (where each coordinate lies on its axis), so:
Dist'= Dist *cos(Elev );
z = Dist *sin(Elev );
x = Dist'*cos(Azimut);
y =-Dist'*sin(Azimut);
If you use a different reference frame or axis orientations, then change it accordingly ...
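
A small C# sketch of that conversion (System.Numerics; angles in radians, names illustrative):

using System;
using System.Numerics;

static class AzimuthalToCartesian
{
    // NEH convention as above: x = North, y = East (negated), z = Up; angles in radians.
    public static Vector3 ToCartesian(float dist, float azimuth, float elevation)
    {
        float distH = dist * MathF.Cos(elevation);   // Dist', the horizontal component
        return new Vector3(
            distH * MathF.Cos(azimuth),              // x
           -distH * MathF.Sin(azimuth),              // y
            dist  * MathF.Sin(elevation));           // z
    }
}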
I suspect you use 4x4 homogeneous transform matrices to represent coordinate systems and also to hold your viewport, so look here:
transform matrix anatomy
constructing the view-port
You need the X, Y, Z axis vectors and the O origin position. You already have O (at least you think you do), and the Z axis is the ray direction, so you should have that too. Now just compute X and Y as an alignment to something (otherwise the view will rotate around the ray); I use NEH for that, so:
view.Z = Ray.Dir          // ray direction
view.Y = NEH.Z            // NEH up vector
view.X = view.Y x view.Z  // cross product makes view.X perpendicular to Y and Z
view.Y = view.Z x view.X  // just to make all three axes perpendicular to each other
view.O = ground position - (distance * Ray.Dir);
To make it a valid viewport you have to:
view = inverse(view) * projection_matrix;
You need inverse matrix computation for that.
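
A sketch of the axis construction above in C# (System.Numerics, no engine-specific matrix types; names illustrative). The inverse and projection-matrix steps are left out:

using System.Numerics;

static class ViewBasis
{
    // rayDir: normalized ray direction; up: normalized NEH up vector (NEH.Z above).
    public static (Vector3 X, Vector3 Y, Vector3 Z, Vector3 O) Build(
        Vector3 rayDir, Vector3 up, Vector3 groundPos, float distance)
    {
        Vector3 Z = rayDir;                                   // view.Z = ray direction
        Vector3 X = Vector3.Normalize(Vector3.Cross(up, Z)); // perpendicular to up and Z
        Vector3 Y = Vector3.Cross(Z, X);                      // re-orthogonalized up vector
        Vector3 O = groundPos - distance * rayDir;            // view origin
        return (X, Y, Z, O);
    }
}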
If you want the whole thing, then you will also want to add the Sun/Earth position computation; in that case, look here:
complete Earth-Sun position by Kepler's equation
The distance
Now that it's clear what's behind all this, you just need to set the distance. If you want to place it at the Sun, then distance = 1.0 AU (astronomical unit), but that is a huge distance, and with perspective projection your Earth would be very small. Instead, use some closer distance to match your view size; look here:
How to position the camera so that the object always has the same size

What are barycentric calculations used for?

I've been looking at XNA's Barycentric method and the descriptions I find online are pretty opaque to me. An example would be nice. Just an explanation in English would be great... what is the purpose and how could it be used?
From Wikipedia:
In geometry, the barycentric coordinate system is a coordinate system in which the location of a point is specified as the center of mass, or barycenter, of masses placed at the vertices of a simplex (a triangle, tetrahedron, etc).
They are used, I believe, for raytracing in game development.
When a ray intersects a triangle in a normal mesh, you just record it as either a hit or a miss. But if you want to implement a subsurf modifier (image below), which makes meshes much smoother, you will need the distance from the center of the triangle to the point where the ray hit (which is much easier to work with in barycentric coordinates).
Subsurf modifiers are not that hard to visualize:
The cube is the original shape, and the smooth mesh inside is the "subsurfed" cube, I think with a recursion depth of three or four.
Actually, that might not be correct. Don't take my exact word for it, but I do know that they are used for texture mapping on geometric shapes.
Here's a little set of slides you can look at: http://www8.cs.umu.se/kurser/TDBC07/HT04/handouts/HO-lecture11.pdf
In practice, the barycentric coordinates of a point P with respect to a triangle ABC are just its weights (u,v,w) according to the triangle's vertices, such that P = u*A + v*B + w*C. If the point lies within the triangle, you get u,v,w in [0,1] and u+v+w = 1.
They are used for any task that requires knowing a point's location relative to the vertices of a triangle, e.g. interpolating attributes across a triangle. For example, in raytracing you get a hit point inside a triangle. When you want to know that point's normal or other attributes, you compute its barycentric coordinates within the triangle. Then you can use these weights to sum up the attributes of the triangle's vertices, and you get the interpolated attribute.
To compute a point P's barycentric coordinates (u,v,w) within a triangle ABC you can use:
u = [PBC] / [ABC]
v = [APC] / [ABC]
w = [ABP] / [ABC]
where [ABC] denotes the area of the triangle ABC.
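
For reference, a sketch of these ratios in C# (System.Numerics; names illustrative). Cross-product magnitudes give twice the unsigned areas, so this form is only valid for P inside (or on) the triangle:

using System.Numerics;

static class Barycentric
{
    // Barycentric coordinates (u,v,w) of P with respect to triangle ABC.
    public static (float u, float v, float w) Compute(Vector3 P, Vector3 A, Vector3 B, Vector3 C)
    {
        float abc2 = Vector3.Cross(B - A, C - A).Length();      // 2*[ABC]; the factor 2 cancels in the ratios
        float u = Vector3.Cross(B - P, C - P).Length() / abc2;  // [PBC]/[ABC]
        float v = Vector3.Cross(P - A, C - A).Length() / abc2;  // [APC]/[ABC]
        float w = Vector3.Cross(B - A, P - A).Length() / abc2;  // [ABP]/[ABC]
        return (u, v, w);
    }
}

The interpolation use mentioned above is then, e.g., normalAtP = u*nA + v*nB + w*nC for vertex normals nA, nB, nC.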
