Cone from direction vector - math

I have a normalized direction vector (from a 3D position to a light position) and I would like to rotate this vector by some angle so I can create a "cone".
I'd like to simulate cone tracing by using the direction vector as the center of the cone and creating X samples to get more rays to sample from.
What I would like to know is basically the math behind:
https://docs.unrealengine.com/latest/INT/BlueprintAPI/Math/Random/RandomUnitVectorinCone/index.html
which seems to do exactly what I'm looking for.

1) Make an arbitrary vector P, perpendicular to your direction vector D.
You can take the component with the largest magnitude, exchange it with the middle-magnitude component, negate it, and set the smallest-magnitude component to zero.
For example, if the z-component is the largest and the y-component is the smallest, you may build P like this:
D = (dx, dy, dz)
p = (-dz, 0, dx)
P = Normalize(p) //unit vector
2) Make vector Q perpendicular to both D and P using the cross product:
Q = D x P //unit vector
3) Generate random point in the PQ plane disk
RMax = Tan(Phi) //where Phi is cone angle
Theta = Random(0..2*Pi)
r = RMax * Sqrt(Random(0..1))
V = r * (P * Cos(Theta) + Q * Sin(Theta))
4) The sampled direction is the sum of D and V, normalized: Result = Normalize(D + V)
Note that the distribution of vectors is slightly non-uniform on the sphere segment (it is uniform on the plane disk). There are methods to generate a uniform distribution on the sphere, but some work is needed to apply them to a segment (my first attempt before the edit was wrong).
Edit: modification to make the distribution sphere-uniform (not checked thoroughly):
RMax = Tan(Phi) //where Phi is cone angle
Theta = Random(0..2*Pi)
u = Random(Cos(Phi)..1)
r = RMax * Sqrt(1 - u^2)
V = r * (P * Cos(Theta) + Q * Sin(Theta)) //then form Normalize(D + V) as in step 4
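Putting the steps together, here is a minimal Python sketch of the disk-based method above (function names are mine; it assumes D is already normalized and Phi is the cone half-angle in radians):
import math
import random

def perpendicular(d):
    # Step 1: zero the smallest-magnitude component, swap the other two and negate one
    ax, ay, az = abs(d[0]), abs(d[1]), abs(d[2])
    if ax <= ay and ax <= az:
        p = (0.0, -d[2], d[1])
    elif ay <= ax and ay <= az:
        p = (-d[2], 0.0, d[0])
    else:
        p = (-d[1], d[0], 0.0)
    n = math.sqrt(sum(c * c for c in p))
    return tuple(c / n for c in p)

def random_unit_vector_in_cone(d, phi):
    p = perpendicular(d)
    q = (d[1] * p[2] - d[2] * p[1],      # Step 2: Q = D x P
         d[2] * p[0] - d[0] * p[2],
         d[0] * p[1] - d[1] * p[0])
    r_max = math.tan(phi)                # Step 3: random point in the PQ plane disk
    theta = random.uniform(0.0, 2.0 * math.pi)
    r = r_max * math.sqrt(random.random())
    v = tuple(r * (p[i] * math.cos(theta) + q[i] * math.sin(theta)) for i in range(3))
    w = tuple(d[i] + v[i] for i in range(3))   # Step 4: offset the axis and renormalize
    n = math.sqrt(sum(c * c for c in w))
    return tuple(c / n for c in w)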

Related

How to find points of certain distance on a circle perimeter?

Suppose (x1, y1) is a point on the perimeter of the circle (x - 420)^2 + (y - 540)^2 = 260^2. What are the two points on the circle's perimeter at (Euclidean) distance d from the point (x1, y1)?
Using trig
Assuming you are using a programming language, the answer is given in pseudocode.
Using radians, a distance d along the circle can be expressed as an angle a computed as a = d / r (where r is the radius).
Given an arbitrary point on the circle, (x1 - 420)^2 + (y1 - 540)^2 = 260^2 (note: x1, y1 are assumed known), we can read off the center x = 420, y = 540 and the radius r = 260.
The angular distance corresponding to d is then a = d / 260.
Most languages have the function atan2, which computes the angle of a vector. We can get the angle from the circle center to the arbitrary point as ang = atan2(y1 - 540, x1 - 420) (note: y first, then x).
Thus the absolute angles (ang1, ang2) of the points at distance d along the circle from the arbitrary point (x1, y1) are computed as...
// ? represents values that are known to you (supplied as input)
x = 420
y = 540
r = 260
d = ?
x1 = ?
y1 = ?
ang = atan2(y1 - y, x1 - x)
ang1 = ang + d / r
ang2 = ang - d / r
And the coordinates of the points (px1, py1, px2, py2) are computed as...
px1 = cos(ang1) * r + x
py1 = sin(ang1) * r + y
px2 = cos(ang2) * r + x
py2 = sin(ang2) * r + y
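A minimal Python sketch of the pseudocode above, with the circle from the question filled in (the function name is mine):
import math

def points_at_distance(x1, y1, d, cx=420.0, cy=540.0, r=260.0):
    ang = math.atan2(y1 - cy, x1 - cx)   # angle from the center to the known point (y first, then x)
    a = d / r                            # distance along the circle expressed as an angle
    ang1, ang2 = ang + a, ang - a
    return ((math.cos(ang1) * r + cx, math.sin(ang1) * r + cy),
            (math.cos(ang2) * r + cx, math.sin(ang2) * r + cy))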
Vector algebra
The problem can also be solved using vector algebra and does not require the trig function atan2.
Compute the unit vector representing the angle a = d / r, then, with the circle moved to the origin, rotate the point on the circle by that unit vector in both directions. Translate the points back to the circle's original position for the solution.
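The same idea as a Python sketch of the vector-algebra variant: build the unit vector (cos a, sin a) for a = d / r, rotate the centered point by it in both directions, and translate back (names are mine):
import math

def points_at_distance_vec(x1, y1, d, cx=420.0, cy=540.0, r=260.0):
    a = d / r
    ca, sa = math.cos(a), math.sin(a)    # unit vector representing the angle a
    vx, vy = x1 - cx, y1 - cy            # point with the circle moved to the origin
    p1 = (vx * ca - vy * sa + cx, vx * sa + vy * ca + cy)   # rotate by +a, translate back
    p2 = (vx * ca + vy * sa + cx, -vx * sa + vy * ca + cy)  # rotate by -a, translate back
    return p1, p2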

Given 2 vectors and 2 angles, how to find the 3rd vector

It seems to be a very easy question but I just can't figure it out ...
As shown in the graph below:
Supposing we know :
Vector (X,Y)
Vector (X1,Y1)
Angle a
How can I get the vector (?, ?) in Unity?
Many Thanks in advance.
Subtract X1, Y1 from all coordinates:
XX = X - X1
YY = Y - Y1
Let (DX, DY) be the vector from (XX, YY) to the unknown point.
This vector is perpendicular to (XX, YY), so the scalar product is zero.
Its length is equal to the length of (XX, YY) multiplied by the tangent of the angle.
So equation system is
DX * XX + DY * YY = 0
DX^2 + DY^2 = (XX^2 + YY^2) * Tan^2(Alpha)
Solve this system for the unknowns (DX, DY) (there are two solutions in the general case), then calculate the unknown coordinates as (X + DX, Y + DY).
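A small Python sketch of this solution (names are mine): the first equation forces (DX, DY) to be proportional to (-YY, XX), and the second fixes its length to |(XX, YY)| * Tan(Alpha), which gives the two solutions directly.
import math

def third_vector(x, y, x1, y1, alpha):
    # translate so (X1, Y1) becomes the origin
    xx, yy = x - x1, y - y1
    # (DX, DY) = +-tan(alpha) * (-YY, XX) satisfies both equations
    t = math.tan(alpha)                  # alpha in radians
    dx, dy = -yy * t, xx * t
    return (x + dx, y + dy), (x - dx, y - dy)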
Not totally sure if there is a more efficient method to do this, but it will work.
First you need to find the magnitude of the distance vector between X,Y and X1,Y1. We will call this Dist1.
Dist1 = Vector2.Distance(new Vector2(X,Y), new Vector2(X1,Y1));
Using this distance, we can find the magnitude of the vector for the line going to X?,Y? which we will call DistQ.
DistQ = Dist1 / Mathf.Cos(a * Mathf.Deg2Rad);
You now need to find the angle of this line relative to the overall coordinate plane, which creates a new triangle with X?, Y? and the x-axis.
angle = Mathf.Atan2((Y - Y1), (X - X1)) * Mathf.Rad2Deg - a;
Now we can use more trig with the DistQ hypotenuse and this new angle to find the X?(XF) and Y?(YF) components relative to X1 and Y1, which we will add on to get the final vector components.
XF = DistQ * Mathf.Cos(angle * Mathf.Deg2Rad) + X1;
YF = DistQ * Mathf.Sin(angle * Mathf.Deg2Rad) + Y1;

Distance from origin to plane (shortest)

So I was reading over something on this page (http://gamedeveloperjourney.blogspot.com/2009/04/point-plane-collision-detection.html)
The author mentioned
d = - D3DXVec3Dot(&vP1, &vNormal);
where vP1 is a point on the plane and vNormal is the normal to the plane. I'm curious as to how this gets you the distance from the world origin since the result will always be 0. In addition, just to be clear (since I'm still kind of hazy on the d part of a plane equation), is d in a plane equation the distance from a line through the world origin to the plane's origin?
In the generic case the distance between a point p and a plane can be computed by
<p - p0, normal>
where <a, b> is the dot product operation
<a, b> = ax*bx + ay*by + az*bz
and where p0 is a point on the plane.
When n has unit length, the dot product of a vector with n is the (signed) length of the projection of that vector onto the normal.
The formula you are reporting is just the special case when the point p is the origin. In this case
distance = <origin - p0, normal> = - <p0, normal>
This equality is formally wrong because the dot product is about vectors, not points... but still holds numerically. Writing down the explicit formula you get that
(0 - p0.x)*n.x + (0 - p0.y)*n.y + (0 - p0.z)*n.z
is the same as
- (p0.x*n.x + p0.y*n.y + p0.z*n.z)
Indeed, a nice way to store a plane is to save the normal n and the value k = <p0, n>, where p0 is any point on the plane (the value of k does not depend on which point of the plane you choose).
The result is not always zero. The result will only be zero if the plane goes through the origin. (Here let's assume the plane doesn't go through the origin.)
Basically, you are given a line from the origin to some point on the plane. (I.e. you have a vector from the origin to vP1). The problem with this vector is that most likely it's slanted and going to some far away place on the plane rather than to the closest point on the plane. So, if you simply took the length of vP1 you will get a distance that is too big.
What you need to do is get the projection of vP1 onto some vector that you know is perpendicular to the plane. That of course is vNormal. So take the dot product of vP1 and vNormal, and divide by the length of vNormal and you have the answer. (If they are kind enough to give you a vNormal that already is magnitude one, then no need to divide.)
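As a minimal Python sketch of the projection described above (names are mine; p0 is a point on the plane, n its normal, not necessarily of unit length):
import math

def signed_distance(p, p0, n):
    # <p - p0, n> / |n|; the division is unnecessary when n already has unit length
    dot = sum((p[i] - p0[i]) * n[i] for i in range(3))
    return dot / math.sqrt(sum(c * c for c in n))

# Distance from the origin to the plane: signed_distance((0.0, 0.0, 0.0), p0, n),
# which for a unit normal reduces to -<p0, n>, the formula from the question.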
You can work this out with Lagrange multipliers:
You know that the closest point on the plane must be of the form:
c = p + v
Where c is the closest point and v is a vector along the plane (which is thus orthogonal to n, the normal). You are trying to find the c with the smallest norm (or norm squared), so you are minimizing dot(c,c) subject to v being orthogonal to n (thus dot(v,n) = 0).
Thus, set up Lagrangian:
L = dot(c,c) + lambda * ( dot(v,n) )
L = dot(p+v,p+v) + lambda * ( dot(v,n) )
L = dot(p,p) + 2*dot(p,v) + dot(v,v) + lambda * ( dot(v,n) )
And take the derivative with respect to v (and set to 0) to get:
2 * p + 2 * v + lambda * n = 0
You can solve for lambda in the equation above by taking the dot product of both sides with n to get
2 * dot(p,n) + 2 * dot(v,n) + lambda * dot(n,n) = 0
2 * dot(p,n) + lambda = 0
lambda = - 2 * dot(p,n)
Note again that dot(n,n) = 1 and dot(v,n) = 0 (since v is in the plane and n is orthogonal to it). Then substitute lambda back in to get:
2 * p + 2 * v - 2 * dot(p,n) * n = 0
and solve for v to get:
v = dot(p,n) * n - p
Then plug this back into c = p + v to get:
c = dot(p,n) * n
The length of this vector is |dot(p,n)| and the sign tells you whether the point is in the direction of the normal vector from the origin, or the reverse direction from the origin.
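As a small Python sketch of the result c = dot(p,n) * n (it assumes n has unit length, as in the derivation; names are mine):
def closest_point_to_origin(p, n):
    # c = dot(p, n) * n; |dot(p, n)| is the distance from the origin to the plane
    k = sum(p[i] * n[i] for i in range(3))
    return tuple(k * n[i] for i in range(3))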

Calculate a Vector that lies on a 3D Plane

I have a 3D Plane defined by two 3D Vectors:
P = a Point which lies on the Plane
N = The Plane's surface Normal
And I want to calculate any vector that lies on the plane.
Take any vector v not parallel to N; its cross product with N (w1 = v x N) is a vector parallel to the plane.
You can also take w2 = v - N (v.N)/(N.N), which is the projection of v onto the plane.
A point in the plane can then be given by x = P + a w. In fact, all points in the plane can be expressed as
x = P + a w2 + b ( w2 x N )
So long as the v from which w2 is built is "suitable", i.e. not parallel to N (so that w2 is non-zero).
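A minimal Python sketch of this parametrization (names are mine; n is the normal N, p a point P on the plane):
def plane_basis(n):
    # helper vector v: the coordinate axis least aligned with N, so v is not parallel to N
    k = min(range(3), key=lambda i: abs(n[i]))
    v = [0.0, 0.0, 0.0]
    v[k] = 1.0
    # w2 = v - N (v.N)/(N.N): projection of v onto the plane
    vn = sum(v[i] * n[i] for i in range(3))
    nn = sum(n[i] * n[i] for i in range(3))
    w2 = [v[i] - n[i] * vn / nn for i in range(3)]
    # w2 x N: a second in-plane direction, independent of w2
    w2xn = [w2[1] * n[2] - w2[2] * n[1],
            w2[2] * n[0] - w2[0] * n[2],
            w2[0] * n[1] - w2[1] * n[0]]
    return w2, w2xn

def plane_point(p, n, a, b):
    # x = P + a*w2 + b*(w2 x N)
    w2, w2xn = plane_basis(n)
    return [p[i] + a * w2[i] + b * w2xn[i] for i in range(3)]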
If you want to determine if a point lies in the plane rather than find a point in the plane, you can use
x.N = P.N
for all x in the plane.
If N = (xn, yn, zn) and P = (xp, yp, zp), then the plane's equation is given by:
(x-xp, y-yp, z-zp) * (xn, yn, zn) = 0
where (x, y, z) is any point of the plane and * denotes the inner product.
And I want to calculate any vector that lies on the plane.
If I understand correctly, you need to check whether a point belongs to the plane?
http://en.wikipedia.org/wiki/Plane_%28geometry%29
You must check whether this equation: nx(x − x0) + ny(y − y0) + nz(z − z0) = 0 holds for your point,
where [nx, ny, nz] is the normal vector, [x0, y0, z0] is the given point, and [x, y, z] is the point you are checking.
Edit:
Now I understand your question. You need two linearly independent vectors that form a basis of the plane. So you need to follow Michael Anderson's answer, but add a second vector and use combinations of those vectors. More: http://en.wikipedia.org/wiki/Basis_%28linear_algebra%29
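To make the membership test above concrete, here is a small Python sketch (the tolerance eps and the names are my own additions):
def point_in_plane(pt, p0, n, eps=1e-9):
    # nx*(x - x0) + ny*(y - y0) + nz*(z - z0) == 0, up to a small tolerance
    return abs(sum(n[i] * (pt[i] - p0[i]) for i in range(3))) < eps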

Equation of a helix parametrized by arc length between two points in space

What is the equation of a helix parametrized by arc length (i.e. a function of arc length) between any two points in space? Is there any function for this? How do I implement it using MATLAB or Mathematica?
Just to add to Mitch Wheat's answer: helices are not unique; for a given axis, the degrees of freedom are the distance between turns, the radius, and the phase (P, A, and phi below).
If you generalize to
w = 2*pi/P
r(t) = (A cos (wt-phi)) i + (A sin (wt-phi)) j + (t) k
then one way to analyze the arc length as a function of t (without computing the arc length integral explicitly) is to note that the magnitude of the velocity is constant: the component of velocity parallel to the radius is 0, the component parallel to the axis is 1, and the component perpendicular to both radius and axis is A*w. Therefore the speed is sqrt(1 + A^2*w^2), so the arc length is s = sqrt(1 + A^2*w^2) * t.
You'd need some way of defining the axis, P, A and phi as a function of whatever inputs you are given. Just the endpoints and arclength wouldn't be enough.
To find the arc length parameterization of the helix defined by
r(t) = cos t i + sin t j + t k
Arc Length = s = Integral(a,b){sqrt((dx/dt)^2 + (dy/dt)^2 + (dz/dt)^2) dt}
First find the arc length function
s(t) = Integral(0,t) { sqrt((sin u)^2 + (cos u)^2 + 1) du }
= Integral(0,t) { sqrt(2) du } = sqrt(2) * t
Solving for t gives
t = s / sqrt(2)
Now substitute back to get
r(s) = cos(s / sqrt(2)) i + sin(s / sqrt(2)) j + (s / sqrt(2)) k
I'll leave the last bit to you!
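A minimal Python sketch combining the two answers above: the generalized helix r(t) = (A cos(wt - phi), A sin(wt - phi), t) has constant speed sqrt(1 + A^2*w^2), so substituting t = s / sqrt(1 + A^2*w^2) gives the arc-length parametrization (the function name and defaults are mine):
import math

def helix_by_arc_length(s, A=1.0, w=1.0, phi=0.0):
    # constant speed sqrt(1 + A^2 w^2), so t = s / sqrt(1 + A^2 w^2)
    t = s / math.sqrt(1.0 + (A * w) ** 2)
    return (A * math.cos(w * t - phi),
            A * math.sin(w * t - phi),
            t)

# With A = 1, w = 1, phi = 0 this reduces to r(s) = (cos(s/sqrt(2)), sin(s/sqrt(2)), s/sqrt(2)).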
