Good day. I am using a Quadratic Bezier Curve with the following configurations:
Start Point P1 = (1, 2)
Anchor Point P2 = (1, 8)
End Point P3 = (10, 8)
I know that, given a t, I can solve for x and y using the following equations:
t = 0.5; // given example value
x = (1 - t) * (1 - t) * P1.x + 2 * (1 - t) * t * P2.x + t * t * P3.x;
y = (1 - t) * (1 - t) * P1.y + 2 * (1 - t) * t * P2.y + t * t * P3.y;
where P1.x is the x coordinate of P1, and so on.
What I've tried so far: given an x value, I solve for t using WolframAlpha and then plug that t into the y equation to get my x and y point.
However, I want to automate finding t and then y. I have a formula to get x and y given a t, but I don't have a formula to get t based on x. I'm a bit rusty with my algebra, and expanding the first equation to isolate t doesn't look too easy.
Does anyone have a formula to get t based on x? My google search skills are failing me as of now.
I think it's also worth noting that my Bezier curve faces right.
Any help will be very much appreciated. Thanks.
The problem is that what you want to solve is not a function in general:
for any t there is exactly one (x, y) pair,
but for any x there can be 0, 1, 2, or infinitely many solutions for t.
I would do this iteratively.
You can already evaluate any point p(t) = Bezier(t), so iterate over t to minimize the distance |p(t).x - x|:
for(t=0.0,dt=0.1;t<=1.0;t+=dt)
find all local minima of d = |p(t).x - x|:
when d starts rising again, set dt *= -0.1 and stop once |dt| < 1e-6 or any other threshold. Also stop if t leaves the interval <0,1>. Remember each solution in a list, then restore the original t, dt and reset the local-minimum search variables.
process all local minima:
eliminate any whose distance is bigger than some threshold/accuracy, compute y, and do what you need with the point.
This is much slower than the algebraic approach, but you can use it for any curve, not just a quadratic.
Cubic curves are usually used, and doing this algebraically with them is a nightmare.
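A minimal Python sketch of that idea (a coarse scan for local minima of |p(t).x - x|, each refined by shrinking the bracket; the names and tolerances here are mine, not the answerer's exact code):

import numpy as np

def bezier_x(t, x0, x1, x2):
    # x-coordinate of the quadratic Bezier at parameter t
    return (1 - t)**2 * x0 + 2 * (1 - t) * t * x1 + t**2 * x2

def t_candidates_for_x(x_target, x0, x1, x2, coarse=100, tol=1e-9):
    # all t in [0, 1] where |bezier_x(t) - x_target| has a local minimum
    ts = np.linspace(0.0, 1.0, coarse + 1)
    d = np.abs(bezier_x(ts, x0, x1, x2) - x_target)
    hits = []
    for i in range(1, coarse):
        if d[i] <= d[i - 1] and d[i] <= d[i + 1]:   # coarse local minimum
            lo, hi = ts[i - 1], ts[i + 1]
            while hi - lo > tol:                     # refine by shrinking the bracket
                m1 = lo + (hi - lo) / 3
                m2 = hi - (hi - lo) / 3
                if abs(bezier_x(m1, x0, x1, x2) - x_target) < abs(bezier_x(m2, x0, x1, x2) - x_target):
                    hi = m2
                else:
                    lo = m1
            hits.append(0.5 * (lo + hi))
    return hits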
Look at your Bernstein polynomials B[i]; you have...
x = SUM_i ( B[i](t) * P[i].x )
...where...
B[0](t) = t^2 - 2*t + 1
B[1](t) = -2*t^2 + 2*t
B[2](t) = t^2
...so you can rearrange (assuming I did this right)...
0 = (P[0].x - 2*P[1].x + P[2].x) * t^2 + (-2*P[0].x + 2*P[1].x) * t + P[0].x - x
Now you should just be able to use the quadratic formula to find if the solutions for t exist (i.e., are real, not complex), and what they are.
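As a concrete illustration, here is a small Python sketch of that rearrangement (the function name and tolerance are mine; the commented example uses the control points from the question):

import math

def t_from_x(x, x0, x1, x2):
    # solve (x0 - 2*x1 + x2)*t^2 + (-2*x0 + 2*x1)*t + (x0 - x) = 0 for t in [0, 1]
    a = x0 - 2 * x1 + x2
    b = -2 * x0 + 2 * x1
    c = x0 - x
    if abs(a) < 1e-12:                    # degenerate case: x is linear in t
        return [-c / b] if abs(b) > 1e-12 else []
    disc = b * b - 4 * a * c
    if disc < 0:
        return []                          # no real t: this x is not on the curve
    r = math.sqrt(disc)
    return [t for t in ((-b + r) / (2 * a), (-b - r) / (2 * a)) if 0.0 <= t <= 1.0]

# With the question's points P1=(1,2), P2=(1,8), P3=(10,8):
#   for t in t_from_x(5.0, 1, 1, 10):
#       y = (1 - t)**2 * 2 + 2 * (1 - t) * t * 8 + t**2 * 8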
import numpy as np
import matplotlib.pyplot as plt
#Control points
p0=(1000,2500); p1=(2000,-1500); p2=(5000,3000)
#x-coordinates to fit
xcoord = [1750., 2750., 3950.,4760., 4900.]
# t variable with as few points as needed, considering accuracy. I found 30 is good enough
t = np.linspace(0,1,30)
# calculate coordinates of quadratic Bezier curve
x = (1 - t) * (1 - t) * p0[0] + 2 * (1 - t) * t * p1[0] + t * t * p2[0]
y = (1 - t) * (1 - t) * p0[1] + 2 * (1 - t) * t * p1[1] + t * t * p2[1]
# find the closest points to each x-coordinate. Interpolate y-coordinate
ycoord=[]
for ind in xcoord:
    for jnd in range(len(x) - 1):
        if ind >= x[jnd] and ind <= x[jnd+1]:
            ytemp = (ind-x[jnd])*(y[jnd+1]-y[jnd])/(x[jnd+1]-x[jnd]) + y[jnd]
            ycoord.append(ytemp)
plt.figure()
plt.xlim(0, 6000)
plt.ylim(-2000, 4000)
plt.plot(p0[0],p0[1],'kx', p1[0],p1[1],'kx', p2[0],p2[1],'kx')
plt.plot((p0[0],p1[0]),(p0[1],p1[1]),'k:', (p1[0],p2[0]),(p1[1],p2[1]),'k:')
plt.plot(x,y,'r', x, y, 'k:')
plt.plot(xcoord, ycoord, 'rs')
plt.show()
I have a simple flux model in R. It boils down to two differential equations that model two state variables within the model; we'll call them A and B. They are calculated as simple difference equations of four component fluxes (flux1-flux4), 5 parameters (p1-p5), and a 6th parameter, of_interest, that can take on values between 0 and 1.
parameters<- c(p1=0.028, p2=0.3, p3=0.5, p4=0.0002, p5=0.001, of_interest=0.1)
state <- c(A=28, B=1.4)
model <- function(t, state, parameters){
  with(as.list(c(state, parameters)),{
    #fluxes
    flux1 = (1-of_interest) * p1*(B / (p2 + B))*p3
    flux2 = p4* A #microbial death
    flux3 = of_interest * p1*(B / (p2 + B))*p3
    flux4 = p5* B
    #differential equations of component fluxes
    dAdt<- flux1 - flux2
    dBdt<- flux3 - flux4
    list(c(dAdt,dBdt))
  })
}
I would like to write a function to take the derivative of dAdt with respect to of_interest, set the derived equation to 0, then rearrange and solve for the value of of_interest. This will be the value of the parameter of_interest that maximizes the function dAdt.
So far I have been able to solve the model at steady state, across the possible values of of_interest to demonstrate there should be a maximum.
require(rootSolve)
range <- seq(0, 1, by = 0.01)
out <- numeric(0)
for(i in range){
  of_interest = i
  parameters <- c(p1=0.028, p2=0.3, p3=0.5, p4=0.0002, p5=0.001, of_interest=of_interest)
  state <- c(A=28, B=1.4)
  ST <- stode(y = state, func = model, parms = parameters, pos = TRUE)
  out <- c(out, ST$y[1])
}
Then plotting:
plot(out~range, pch=16,col='purple')
lines(smooth.spline(out~range,spar=0.35), lwd=3,lty=1)
How can I analytically solve for the value of of_interest that maximizes dAdt in R? If an analytical solution is not possible, how can I know, and how can I go about solving this numerically?
Update: I think this problem can be solved with the deSolve package in R, linked here, however I am having trouble implementing it using my particular example.
Your equation in B(t) is just-about separable since you can divide out B(t), from which you can get that
B(t) = C * exp{-p5 * t} * (p2 + B(t)) ^ {of_interest * p1 * p3}
This is an implicit solution for B(t) which we'll solve point-wise.
You can solve for C given your initial value of B. I suppose t = 0 initially? In which case
C = B_0 / (p2 + B_0) ^ {of_interest * p1 * p3}
This also gives a somewhat nicer-looking expression for A(t):
dA(t) / dt = B_0 / (p2 + B_0) * p1 * p3 * (1 - of_interest) *
             exp{-p5 * t} * ((p2 + B(t)) / (p2 + B_0)) ^
             {of_interest * p1 * p3 - 1} - p4 * A(t)
This can be solved by integrating factor (= exp{p4 * t}), via numerical integration of the term involving B(t). We specify the lower limit of the integral as 0 so that we never have to evaluate B outside the range [0, t], which means the integrating constant is simply A_0 and thus:
A(t) = (A_0 + integral_0^t { f(tau; parameters) d tau}) * exp{-p4 * t}
The basic gist is B(t) is driving everything in this system -- the approach will be: solve for the behavior of B(t), then use this to figure out what's going on with A(t), then maximize.
First, the "outer" parameters; we also need nleqslv to get B:
library(nleqslv)
t_min <- 0
t_max <- 10000
t_N <- 10
#we'll only solve the behavior of A & B over t_rng
t_rng <- seq(t_min, t_max, length.out = t_N)
#I'm calling of_interest ttheta
ttheta_min <- 0
ttheta_max <- 1
ttheta_N <- 5
tthetas <- seq(ttheta_min, ttheta_max, length.out = ttheta_N)
B_0 <- 1.4
A_0 <- 28
#No sense storing this as a vector when we'll only ever use it as a list
parameters <- list(p1 = 0.028, p2 = 0.3, p3 = 0.5,
p4 = 0.0002, p5 = 0.001)
From here, the basic outline is:
Given the parameter values (in particular ttheta), solve for BB over t_rng via non-linear equation solving
Given BB and the parameter values, solve for AA over t_rng by numerical integration
Given AA and your expression for dAdt, plug & maximize.
derivs <-
  sapply(tthetas, function(th){
    #append current ttheta
    params <- c(parameters, ttheta = th)
    #declare a function we'll use to solve for B (see above)
    b_slv <- function(b, t)
      with(params, b - B_0 * ((p2 + b)/(p2 + B_0)) ^
             (ttheta * p1 * p3) * exp(-p5 * t))
    #solving point-wise (this is pretty fast)
    # **See below for a note**
    BB <- sapply(t_rng, function(t) nleqslv(B_0, function(b) b_slv(b, t))$x)
    #this is f(tau; params) that I mentioned above;
    # we have to do linear interpolation since the
    # numerical integrator isn't constrained to the grid.
    # **See below for note**
    a_int <- function(t){
      #approximate t to the grid (t_rng)
      # (assumes B is monotonic, which seems to be true)
      # (also, if t ends up negative, just assign t_rng[1])
      t_n <- max(1L, which.max(t_rng - t >= 0) - 1L)
      idx <- t_n:(t_n+1)
      ts <- t_rng[idx]
      #distance-weighted average of the local B values
      B_app <- sum((-1) ^ (0:1) * (t - ts) / diff(ts) * BB[idx])
      #finally, f(tau; params)
      with(params, (1 - ttheta) * p1 * p3 * B_0 / (p2 + B_0) *
             ((p2 + B_app)/(p2 + B_0)) ^ (ttheta * p1 * p3 - 1) *
             exp((p4 - p5) * t))
    }
    #a_int only works on scalars; the numeric integrator
    # requires a version that works on vectors
    a_int_v <- function(t) sapply(t, a_int)
    AA <- exp(-params$p4 * t_rng) *
      sapply(t_rng, function(tt)
        #I found the subdivisions constraint binding in some cases
        # at the default value; no trouble at 1000.
        A_0 + integrate(a_int_v, 0, tt, subdivisions = 1000L)$value)
    #using the explicit version of dAdt given as flux1 - flux2
    max(with(params, (1 - ttheta) * p1 * p3 * BB / (p2 + BB) - p4 * AA))
  })
Finally, simply run `tthetas[which.max(derivs)]` to get the maximizer.
Note:
This code is not optimized for efficiency. There are a few places where there are some potential speed-ups:
Probably faster to run the equation solver recursively, as it'll converge faster with better initial guesses -- using the previous value instead of the initial value is surely better.
It will be faster to simply use Riemann sums to integrate; the tradeoff is in accuracy, but it should be fine if you have a dense enough grid. One beauty of Riemann sums is that you won't have to interpolate at all, and numerically they're simple linear algebra. I ran this with t_N == ttheta_N == 1000L and it ran within a few minutes (a minimal sketch of the idea follows after these notes).
It's probably possible to vectorize a_int directly instead of just sapplying on it, with a concomitant speed-up from a more direct appeal to BLAS.
Loads of other small stuff. Pre-compute ttheta * p1 * p3 since it's re-used so much, etc.
I didn't bother including any of that stuff, though, because you're honestly probably better off porting this to a faster language -- Julia is my own pet favorite, but of course R speaks well with C++, C, Fortran, etc.
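For what it's worth, a minimal numpy sketch of that Riemann/trapezoid idea (the names are mine; f_vals stands for f(tau; params) evaluated on the same grid as BB):

import numpy as np

def cumtrapz0(f_vals, t_grid):
    # cumulative trapezoid rule: integral from 0 to t_grid[k] of f, for every k
    dt = np.diff(t_grid)
    increments = 0.5 * (f_vals[1:] + f_vals[:-1]) * dt
    return np.concatenate(([0.0], np.cumsum(increments)))

# Used in place of integrate():
#   AA = (A_0 + cumtrapz0(f_vals, t_rng)) * np.exp(-p4 * t_rng)
# where f_vals[k] = f(t_rng[k]; params) -- no interpolation needed,
# since everything lives on the same grid.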
Assume I have a spaceship (source), and an asteroid (target) somewhere near it.
I know, in 3D space (XYZ vectors):
My ship's position (sourcePos) and velocity (sourceVel).
The asteroid's position (targetPos) and velocity (targetVel).
(e.g. sourcePos = [30, 20, 10]; sourceVel = [30, 20, 10]; targetPos = [600, 400, 200]; targetVel = [300, 200, 100])
I also know that:
The ship's velocity is constant.
The asteroid's velocity is constant.
My ship's projectile speed (projSpd) is constant.
My ship's projectile trajectory, after being shot, is linear (/straight).
(e.g. projSpd = 2000.00)
How can I calculate the interception coordinates I need to shoot at in order to hit the asteroid?
Notes:
This question is based on this Yahoo - Answers page.
I also searched for similar problems on Google and here on SO, but most of the answers are for 2D space, and, of the few for 3D, neither the explanations nor the pseudo-code explain what is doing what and/or why, so I couldn't really understand them well enough to apply them to my code successfully. Here are some of the pages I visited:
Danik Games Devlog, Blitz3D Forums thread, UnityAnswers, StackOverflow #1, StackOverflow #2
I really can't figure out the maths / execution flow on the linked pages as they are, unless someone dissects them (further) into what is doing what, and why;
provides properly commented pseudo-code for me to follow;
or at least points me to links that actually explain how the equations work, instead of throwing even more random numbers and unfollowable equations at my already-confused psyche.
I find the easiest approach to these kinds of problems is to make sense of them first; a basic high-school level of maths will help too.
Solving this problem is essentially solving 2 equations with 2 variables which are unknown to you:
The vector you want to find for your projectile (V)
The time of impact (t)
The variables you know are:
The target's position (P0)
The target's vector (V0)
The target's speed (s0)
The projectile's origin (P1)
The projectile's speed (s1)
Okay, so the 1st equation is basic. The impact point is the same for both the target and the projectile. It is equal to the starting point of both objects + a certain length along the line of both their vectors. This length is denoted by their respective speeds, and the time of impact. Here's the equation:
P0 + (t * s0 * V0) = P1 + (t * s1 * V)
Notice that there are two missing variables here - V & t, and so we won't be able to solve this equation right now. On to the 2nd equation.
The 2nd equation is also quite intuitive. The point of impact's distance from the origin of the projectile is equal to the speed of the projectile multiplied by the time passed:
We'll take a mathematical expression of the point of impact from the 1st equation:
P0 + (t * s0 * V0) <-- point of impact
The point of origin is P1
The distance between these two must be equal to the speed of the projectile multiplied by the time passed (distance = speed * time).
The formula for distance is: (x0 - x1)^2 + (y0 - y1)^2 = distance^2, and so the equation will look like this:
((P0.x + s0 * t * V0.x) - P1.x)^2 + ((P0.y + s0 * t * V0.y) - P1.y)^2 = (s1 * t)^2
(You can easily expand this for 3 dimensions)
Notice that here you have an equation with only ONE unknown variable: t. We can find t from it, then place it in the previous equation and find the vector V.
Let me save you some pain by opening up this formula for you (if you really want to, you can do this yourself).
a = (V0.x * V0.x) + (V0.y * V0.y) - (s1 * s1)
b = 2 * ((P0.x * V0.x) + (P0.y * V0.y) - (P1.x * V0.x) - (P1.y * V0.y))
c = (P0.x * P0.x) + (P0.y * P0.y) + (P1.x * P1.x) + (P1.y * P1.y) - (2 * P1.x * P0.x) - (2 * P1.y * P0.y)
t1 = (-b + sqrt((b * b) - (4 * a * c))) / (2 * a)
t2 = (-b - sqrt((b * b) - (4 * a * c))) / (2 * a)
Now, notice - we will get 2 values for t here.
One or both may be negative or an invalid number. Obviously, since t denotes time, and time can't be invalid or negative, you'll need to discard these values of t.
It could very well be that both t's are bad (in which case the projectile cannot hit the target, e.g. because the target is faster and moving out of range). It could also be that both t's are valid and positive, in which case you'll want to choose the smaller of the two (since it's preferable to hit the target sooner rather than later).
t = smallestWhichIsntNegativeOrNan(t1, t2)
Now that we've found the time of impact, let's find the direction in which the projectile should fly. Back to our 1st equation:
P0 + (t * s0 * V0) = P1 + (t * s1 * V)
Now, t is no longer a missing variable, so we can solve this quite easily. Just tidy up the equation to isolate V:
V = (P0 - P1 + (t * s0 * V0)) / (t * s1)
V.x = (P0.x - P1.x + (t * s0 * V0.x)) / (t * s1)
V.y = (P0.y - P1.y + (t * s0 * V0.y)) / (t * s1)
And that's it, you're done!
Assign the vector V to the projectile and it will go to where the target will be rather than where it is now.
I really like this problem since it takes math equations we learnt in high school, where everyone said "why are we learning this?? we'll never use it in our lives!!", and gives them a pretty awesome and practical application.
I hope this helps you, or anyone else who's trying to solve this.
If you want the projectile to hit the asteroid, it should be shot at the point interceptionPos that satisfies the equation:
|interceptionPos - sourcePos| / |interceptionPos - targetPos| = projSpd / |targetVel|
where |x| is a length of vector x.
In other words, it would take equal amount of time for the target and the projectile to reach this point.
This problem would be solved by means of geometry and trigonometry, so let's draw it.
A will be asteroid position, S - ship, I - interception point.
Here we have:
AI = |targetVel| * t
SI = projSpd * t
AS = |targetPos - sourcePos|
The directions of the vectors AS and AI are known, so you can easily calculate the cosine of the angle SAI by means of simple vector math (take the definitions from here and here). Then use the Law of cosines with the angle SAI. It yields a quadratic equation in the variable t that is easy to solve (no solutions = your projectile is slower than the asteroid). Just pick the positive solution t; your point to shoot at will be
targetPos + t * targetVel
I hope you can write the code to solve this yourself. If something is unclear, please ask in the comments.
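If a concrete starting point helps, here is a minimal Python/numpy sketch of that law-of-cosines recipe (the function name is mine; like the answer above, it treats the shooter as stationary, i.e. it ignores sourceVel):

import numpy as np

def intercept_point(source_pos, target_pos, target_vel, proj_spd):
    # point to shoot at so a projectile of speed proj_spd meets the target, or None
    source_pos = np.asarray(source_pos, float)
    target_pos = np.asarray(target_pos, float)
    target_vel = np.asarray(target_vel, float)
    AS = source_pos - target_pos               # vector from asteroid (A) to ship (S)
    d = np.linalg.norm(AS)                     # length AS
    s0 = np.linalg.norm(target_vel)            # asteroid speed
    cos_a = np.dot(AS, target_vel) / (d * s0)  # cosine of the angle SAI
    # law of cosines: (proj_spd*t)^2 = d^2 + (s0*t)^2 - 2*d*(s0*t)*cos_a
    a = proj_spd**2 - s0**2
    b = 2.0 * d * s0 * cos_a
    c = -d**2
    if abs(a) < 1e-12:                         # equal speeds: the quadratic degenerates
        ts = [-c / b] if abs(b) > 1e-12 else []
    else:
        disc = b * b - 4.0 * a * c
        if disc < 0:
            return None                        # projectile can never catch the asteroid
        r = np.sqrt(disc)
        ts = [(-b + r) / (2.0 * a), (-b - r) / (2.0 * a)]
    ts = [t for t in ts if t > 0]
    if not ts:
        return None
    t = min(ts)                                # hit as early as possible
    return target_pos + t * target_vel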
I got a solution. Notice that the ship position, and the asteroid line (position and velocity) define a 3D plane where the intercept point lies. In my notation below | [x,y,z] | denotes the magnitude of the vector or Sqrt(x^2+y^2+z^2).
Notice that if the asteroid travels with targetSpd = |[300,200,100]| = 374.17, then reaching the intercept point (still unknown, called hitPos) will require a time t = |hitPos-targetPos|/targetSpd. This is the same time the projectile needs to reach the intercept point, or t = |hitPos - sourcePos|/projSpd. The two equations are used to solve for the time to intercept:
t = |targetPos-sourcePos|/(projSpd - targetSpd)
= |[600,400,200]-[30,20,10]|/(2000 - |[300,200,100]|)
= 710.81 / ( 2000-374.17 ) = 0.4372
Now the location of the interception point is found by
hitPos = targetPos + targetVel * t
= [600,400,200] + [300,200,100] * 0.4372
= [731.18, 487.45, 243.73 ]
Now that I know the hit position, I can calculate the direction of the projectile as
projDir = (hitPos-sourcePos)/|hitPos-sourcePos|
= [701.17, 467.45, 233.73]/874.52 = [0.8018, 0.5345, 0.2673]
Together the projDir and projSpd define the projectile velocity vector.
Credit to Gil Moshayof's answer, as it really was what I worked off of to build this. But they did two dimensions and I did three, so I'll share my Unity code in case it helps anyone along. It's a little long-winded and redundant, but that helps me read it and know what's going on.
Vector3 CalculateIntercept(Vector3 targetLocation, Vector3 targetVelocity, Vector3 interceptorLocation, float interceptorSpeed)
{
Vector3 A = targetLocation;
float Ax = targetLocation.x;
float Ay = targetLocation.y;
float Az = targetLocation.z;
float As = targetVelocity.magnitude;
Vector3 Av = Vector3.Normalize(targetVelocity);
float Avx = Av.x;
float Avy = Av.y;
float Avz = Av.z;
Vector3 B = interceptorLocation;
float Bx = interceptorLocation.x;
float By = interceptorLocation.y;
float Bz = interceptorLocation.z;
float Bs = interceptorSpeed;
float t = 0;
float a = (
Mathf.Pow(As, 2) * Mathf.Pow(Avx, 2) +
Mathf.Pow(As, 2) * Mathf.Pow(Avy, 2) +
Mathf.Pow(As, 2) * Mathf.Pow(Avz, 2) -
Mathf.Pow(Bs, 2)
);
if (a == 0)
{
Debug.Log("Quadratic formula not applicable");
return targetLocation;
}
float b = (
As * Avx * Ax +
As * Avy * Ay +
As * Avz * Az +
As * Avx * Bx +
As * Avy * By +
As * Avz * Bz
);
float c = (
Mathf.Pow(Ax, 2) +
Mathf.Pow(Ay, 2) +
Mathf.Pow(Az, 2) -
Ax * Bx -
Ay * By -
Az * Bz +
Mathf.Pow(Bx, 2) +
Mathf.Pow(By, 2) +
Mathf.Pow(Bz, 2)
);
float t1 = (-b + Mathf.Sqrt(Mathf.Pow(b, 2) - (4 * a * c))) / (2 * a);
float t2 = (-b - Mathf.Sqrt(Mathf.Pow(b, 2) - (4 * a * c))) / (2 * a);
Debug.Log("t1 = " + t1 + "; t2 = " + t2);
if (t1 <= 0 || t1 == Mathf.Infinity || float.IsNaN(t1))
    if (t2 <= 0 || t2 == Mathf.Infinity || float.IsNaN(t2))
        return targetLocation;
    else
        t = t2;
else if (t2 <= 0 || t2 == Mathf.Infinity || float.IsNaN(t2) || t2 > t1)
    t = t1;
else
    t = t2;
Debug.Log("t = " + t);
Debug.Log("Bs = " + Bs);
float Bvx = (Ax - Bx + (t * As + Avx)) / (t * Mathf.Pow(Bs, 2));
float Bvy = (Ay - By + (t * As + Avy)) / (t * Mathf.Pow(Bs, 2));
float Bvz = (Az - Bz + (t * As + Avz)) / (t * Mathf.Pow(Bs, 2));
Vector3 Bv = new Vector3(Bvx, Bvy, Bvz);
Debug.Log("||Bv|| = (Should be 1) " + Bv.magnitude);
return Bv * Bs;
}
I followed the problem formulation as described by Gil Moshayof's answer, but found that there was an error in the simplification of the quadratic formula. When I did the derivation by hand I got a different solution.
The following is what worked for me when finding the intersect in 2D:
std::pair<double, double> find_2D_intersect(Vector3 sourcePos, double projSpd, Vector3 targetPos, double targetSpd, double targetHeading)
{
double P0x = targetPos.x;
double P0y = targetPos.y;
double s0 = targetSpd;
double V0x = std::cos(targetHeading);
double V0y = std::sin(targetHeading);
double P1x = sourcePos.x;
double P1y = sourcePos.y;
double s1 = projSpd;
// quadratic formula
double a = (s0 * s0)*((V0x * V0x) + (V0y * V0y)) - (s1 * s1);
double b = 2 * s0 * ((P0x * V0x) + (P0y * V0y) - (P1x * V0x) - (P1y * V0y));
double c = (P0x * P0x) + (P0y * P0y) + (P1x * P1x) + (P1y * P1y) - (2 * P1x * P0x) - (2 * P1y * P0y);
double t1 = (-b + std::sqrt((b * b) - (4 * a * c))) / (2 * a);
double t2 = (-b - std::sqrt((b * b) - (4 * a * c))) / (2 * a);
double t = choose_best_time(t1, t2);
double intersect_x = P0x + t * s0 * V0x;
double intersect_y = P0y + t * s0 * V0y;
return std::make_pair(intersect_x, intersect_y);
}
I have a square bitmap of a circle and I want to compute the normals of all the pixels in that circle as if it were a sphere of radius 1:
The sphere/circle is centered in the bitmap.
What is the equation for this?
Don't know much about how people program 3D stuff, so I'll just give the pure math and hope it's useful.
Sphere of radius 1, centered on origin, is the set of points satisfying:
x^2 + y^2 + z^2 = 1
We want the 3D coordinates of a point on the sphere where x and y are known. So, just solve for z:
z = ±sqrt(1 - x^2 - y^2).
Now, let us consider a unit vector pointing outward from the sphere. It's a unit sphere, so we can just use the vector from the origin to (x, y, z), which is, of course, <x, y, z>.
Now we want the equation of a plane tangent to the sphere at (x, y, z), but this will be using its own x, y, and z variables, so instead I'll make it tangent to the sphere at (x0, y0, z0). This is simply:
x0*x + y0*y + z0*z = 1
Hope this helps.
(OP):
you mean something like:
const int R = 31, SZ = power_of_two(R*2);
std::vector<vec4_t> p;
for(int y=0; y<SZ; y++) {
    for(int x=0; x<SZ; x++) {
        const float rx = (float)(x-R)/R, ry = (float)(y-R)/R;
        if(rx*rx+ry*ry > 1) { // outside sphere
            p.push_back(vec4_t(0,0,0,0));
        } else {
            vec3_t normal(rx,sqrt(1.-rx*rx-ry*ry),ry);
            p.push_back(vec4_t(normal,1));
        }
    }
}
It does make a nice sphere-like shading if I treat the normals as colours and blit it; is it right?
(TZ)
Sorry, I'm not familiar with those aspects of C++. Haven't used the language very much, nor recently.
This formula is often used for "fake-envmapping" effect.
double x = 2.0 * pixel_x / bitmap_size - 1.0;
double y = 2.0 * pixel_y / bitmap_size - 1.0;
double r2 = x*x + y*y;
if (r2 < 1)
{
    // Inside the circle
    double z = sqrt(1 - r2);
    // ... here the normal is (x, y, z) ...
}
Obviously you're limited to assuming all the points are on one half of the sphere or similar, because of the missing dimension. Past that, it's pretty simple.
The middle of the circle has a normal facing precisely in or out, perpendicular to the plane the circle is drawn on.
Each point on the edge of the circle is facing away from the middle, and thus you can calculate the normal for that.
For any point between the middle and the edge, you use the distance from the middle, and some simple trig (which eludes me at the moment). A lerp is roughly accurate at some points, but not quite what you need, since it's a curve. Simple curve though, and you know the beginning and end values, so figuring them out should only take a simple equation.
I think I get what you're trying to do: generate a grid of depth data for an image. Sort of like ray-tracing a sphere.
In that case, you want a Ray-Sphere Intersection test:
http://www.siggraph.org/education/materials/HyperGraph/raytrace/rtinter1.htm
Your rays will be simple perpendicular rays, based off your U/V coordinates (times two, since your sphere has a diameter of 2). This will give you the front-facing points on the sphere.
From there, calculate normals as below (point - origin, the radius is already 1 unit).
Ripped off from the link above:
You have to combine two equations:
Ray: R(t) = R0 + t * Rd , t > 0 with R0 = [X0, Y0, Z0] and Rd = [Xd, Yd, Zd]
Sphere: S = the set of points [xs, ys, zs], where (xs - xc)^2 + (ys - yc)^2 + (zs - zc)^2 = Sr^2
To do this, calculate your ray (x * pixel / width, y * pixel / width, z: 1), then:
A = Xd^2 + Yd^2 + Zd^2
B = 2 * (Xd * (X0 - Xc) + Yd * (Y0 - Yc) + Zd * (Z0 - Zc))
C = (X0 - Xc)^2 + (Y0 - Yc)^2 + (Z0 - Zc)^2 - Sr^2
Plug into quadratic equation:
t0, t1 = (-B ± (B^2 - 4*C)^(1/2)) / 2
Check discriminant (B^2 - 4*C), and if real root, the intersection is:
Ri = [xi, yi, zi] = [x0 + xd * ti , y0 + yd * ti, z0 + zd * ti]
And the surface normal is:
SN = [(xi - xc)/Sr, (yi - yc)/Sr, (zi - zc)/Sr]
Boiling it all down:
So, since we're talking unit values, and rays that point straight at Z (no x or y component), we can boil down these equations greatly:
Ray:
X0 = 2 * pixelX / width
Y0 = 2 * pixelY / height
Z0 = 0
Xd = 0
Yd = 0
Zd = 1
Sphere:
Xc = 1
Yc = 1
Zc = 1
Factors:
A = 1 (unit ray)
B
= 2 * (0 + 0 + (0 - 1))
= -2 (no x/y component)
C
= (X0 - 1) ^ 2 + (Y0 - 1) ^ 2 + (0 - 1) ^ 2 - 1
= (X0 - 1) ^ 2 + (Y0 - 1) ^ 2
Discriminant
= (-2) ^ 2 - 4 * 1 * C
= 4 - 4 * C
From here:
If discriminant < 0:
Z = ?, Normal = ?
Else:
t = (2 + discriminant^(1/2)) / 2
If t < 0 (hopefully never or always the case)
t = -t
Then:
Z: t
Nx: Xi - 1
Ny: Yi - 1
Nz: t - 1
Boiled farther still:
Intuitively it looks like C (X^2 + Y^2) and the square-root are the most prominent figures here. If I had a better recollection of my math (in particular, transformations on exponents of sums), then I'd bet I could derive this down to what Tom Zych gave you. Since I can't, I'll just leave it as above.
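If a concrete version of that boiled-down result is useful, here is a small Python/numpy sketch (the function name is mine; it is just z = sqrt(1 - x^2 - y^2) applied per pixel, i.e. the same formula Tom Zych gave above):

import numpy as np

def sphere_normal_map(size):
    # size x size array of unit normals for a front-facing unit sphere; zeros outside the disc
    coords = (2.0 * (np.arange(size) + 0.5) / size) - 1.0   # pixel centres mapped to [-1, 1]
    x, y = np.meshgrid(coords, coords)
    r2 = x * x + y * y
    inside = r2 <= 1.0
    z = np.zeros_like(x)
    z[inside] = np.sqrt(1.0 - r2[inside])
    normals = np.dstack([x, y, z])
    normals[~inside] = 0.0                   # outside the circle: no normal
    return normals                           # unit length inside, since x^2 + y^2 + z^2 = 1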
I have two points in space, L1 and L2 that defines two points on a line.
I have three points in space, P1, P2 and P3, that are 3 points on a plane.
So given these inputs, at what point does the line intersect the plane?
E.g., the plane equation A*x + B*y + C*z + D = 0 is:
A = p1.Y * (p2.Z - p3.Z) + p2.Y * (p3.Z - p1.Z) + p3.Y * (p1.Z - p2.Z)
B = p1.Z * (p2.X - p3.X) + p2.Z * (p3.X - p1.X) + p3.Z * (p1.X - p2.X)
C = p1.X * (p2.Y - p3.Y) + p2.X * (p3.Y - p1.Y) + p3.X * (p1.Y - p2.Y)
D = -(p1.X * (p2.Y * p3.Z - p3.Y * p2.Z) + p2.X * (p3.Y * p1.Z - p1.Y * p3.Z) + p3.X * (p1.Y * p2.Z - p2.Y * p1.Z))
But what about the rest?
The simplest (and very generalizable) way to solve this is to say that
L1 + x*(L2 - L1) = P1 + y*(P2 - P1) + z*(P3 - P1)
which gives you 3 equations in 3 variables. Solve for x, y and z, and then substitute back into either of the original equations to get your answer. This can be generalized to do complex things like find the point that is the intersection of two planes in 4 dimensions.
For an alternate approach, the cross product N of (P2-P1) and (P3-P1) is a vector that is at right angles to the plane. This means that the plane can be defined as the set of points P such that the dot product of P and N is the dot product of P1 and N. Solving for x such that (L1 + x*(L2 - L1)) dot N is this constant gives you one equation in one variable that is easy to solve. If you're going to be intersecting a lot of lines with this plane, this approach is definitely worthwhile.
Written out explicitly this gives:
N = cross(P2-P1, P3 - P1)
Answer = L1 + (dot(N, P1 - L1) / dot(N, L2 - L1)) * (L2 - L1)
where
cross([x, y, z], [u, v, w]) = [y*w - z*v, z*u - x*w, x*v - y*u]
dot([x, y, z], [u, v, w]) = x*u + y*v + z*w
Note that that cross product trick only works in 3 dimensions, and only for your specific problem of a plane and a line.
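For reference, the explicit cross-product formula above as a short Python/numpy sketch (the function name is mine):

import numpy as np

def line_plane_intersection(L1, L2, P1, P2, P3):
    # intersection of the infinite line through L1, L2 with the plane through P1, P2, P3
    L1, L2, P1, P2, P3 = (np.asarray(v, float) for v in (L1, L2, P1, P2, P3))
    N = np.cross(P2 - P1, P3 - P1)           # normal to the plane
    denom = np.dot(N, L2 - L1)
    if abs(denom) < 1e-12:
        return None                           # line is parallel to the plane
    x = np.dot(N, P1 - L1) / denom
    return L1 + x * (L2 - L1)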
This is how I ended up doing it in some code. Luckily one code library (XNA) had half of what I needed, and the rest was easy.
var lv = L2-L1;
var ray = new Microsoft.Xna.Framework.Ray(L1,lv);
var plane = new Microsoft.Xna.Framework.Plane(P1, P2, P3);
var t = ray.Intersects(plane); //Distance along line from L1
///Result:
var x = L1.X + t * lv.X;
var y = L1.Y + t * lv.Y;
var z = L1.Z + t * lv.Z;
Of course I would prefer having just the simple equations that take place under the covers of XNA.
I came across a situation doing some advanced collision detection, where I needed to calculate the roots of a quartic function.
I wrote a function that seems to work fine using Ferrari's general solution as seen here: http://en.wikipedia.org/wiki/Quartic_function#Ferrari.27s_solution.
Here's my function:
private function solveQuartic(A:Number, B:Number, C:Number, D:Number, E:Number):Array{
// For parameters: Ax^4 + Bx^3 + Cx^2 + Dx + E
var solution:Array = new Array(4);
// Using Ferrari's formula: http://en.wikipedia.org/wiki/Quartic_function#Ferrari.27s_solution
var Alpha:Number = ((-3 * (B * B)) / (8 * (A * A))) + (C / A);
var Beta:Number = ((B * B * B) / (8 * A * A * A)) - ((B * C) / (2 * A * A)) + (D / A);
var Gamma:Number = ((-3 * B * B * B * B) / (256 * A * A * A * A)) + ((C * B * B) / (16 * A * A * A)) - ((B * D) / (4 * A * A)) + (E / A);
var P:Number = ((-1 * Alpha * Alpha) / 12) - Gamma;
var Q:Number = ((-1 * Alpha * Alpha * Alpha) / 108) + ((Alpha * Gamma) / 3) - ((Beta * Beta) / 8);
var PreRoot1:Number = ((Q * Q) / 4) + ((P * P * P) / 27);
var R:ComplexNumber = ComplexNumber.add(new ComplexNumber((-1 * Q) / 2), ComplexNumber.sqrt(new ComplexNumber(PreRoot1)));
var U:ComplexNumber = ComplexNumber.pow(R, 1/3);
var preY1:Number = (-5 / 6) * Alpha;
var RedundantY:ComplexNumber = ComplexNumber.add(new ComplexNumber(preY1), U);
var Y:ComplexNumber;
if(U.isZero()){
var preY2:ComplexNumber = ComplexNumber.pow(new ComplexNumber(Q), 1/3);
Y = ComplexNumber.subtract(RedundantY, preY2);
} else{
var preY3:ComplexNumber = ComplexNumber.multiply(new ComplexNumber(3), U);
var preY4:ComplexNumber = ComplexNumber.divide(new ComplexNumber(P), preY3);
Y = ComplexNumber.subtract(RedundantY, preY4);
}
var W:ComplexNumber = ComplexNumber.sqrt(ComplexNumber.add(new ComplexNumber(Alpha), ComplexNumber.multiply(new ComplexNumber(2), Y)));
var Two:ComplexNumber = new ComplexNumber(2);
var NegativeOne:ComplexNumber = new ComplexNumber(-1);
var NegativeBOverFourA:ComplexNumber = new ComplexNumber((-1 * B) / (4 * A));
var NegativeW:ComplexNumber = ComplexNumber.multiply(W, NegativeOne);
var ThreeAlphaPlusTwoY:ComplexNumber = ComplexNumber.add(new ComplexNumber(3 * Alpha), ComplexNumber.multiply(new ComplexNumber(2), Y));
var TwoBetaOverW:ComplexNumber = ComplexNumber.divide(new ComplexNumber(2 * Beta), W);
solution["root1"] = ComplexNumber.add(NegativeBOverFourA, ComplexNumber.divide(ComplexNumber.add(W, ComplexNumber.sqrt(ComplexNumber.multiply(NegativeOne, ComplexNumber.add(ThreeAlphaPlusTwoY, TwoBetaOverW)))), Two));
solution["root2"] = ComplexNumber.add(NegativeBOverFourA, ComplexNumber.divide(ComplexNumber.subtract(NegativeW, ComplexNumber.sqrt(ComplexNumber.multiply(NegativeOne, ComplexNumber.subtract(ThreeAlphaPlusTwoY, TwoBetaOverW)))), Two));
solution["root3"] = ComplexNumber.add(NegativeBOverFourA, ComplexNumber.divide(ComplexNumber.subtract(W, ComplexNumber.sqrt(ComplexNumber.multiply(NegativeOne, ComplexNumber.add(ThreeAlphaPlusTwoY, TwoBetaOverW)))), Two));
solution["root4"] = ComplexNumber.add(NegativeBOverFourA, ComplexNumber.divide(ComplexNumber.add(NegativeW, ComplexNumber.sqrt(ComplexNumber.multiply(NegativeOne, ComplexNumber.subtract(ThreeAlphaPlusTwoY, TwoBetaOverW)))), Two));
return solution;
}
The only issue is that I seem to get a few exceptions, most notably when I have two real roots and two imaginary roots.
For example, this equation:
y = 0.9604000000000001x^4 - 5.997600000000001x^3 + 13.951750054511718x^2 - 14.326264455924333x + 5.474214401412618
Returns the roots:
1.7820304835380467 + 0i
1.34041662585388 + 0i
1.3404185025061823 + 0i
1.7820323472855648 + 0i
If I graph that particular equation, I can see that the actual roots are closer to 1.2 and 2.9 (approximately). I can't dismiss the four incorrect roots as random, because they're actually two of the roots for the equation's first derivative:
y = 3.8416x^3 - 17.9928x^2 + 27.9035001x - 14.326264455924333
Keep in mind that I'm not actually looking for the specific roots to the equation I posted. My question is whether there's some sort of special case that I'm not taking into consideration.
Any ideas?
For finding roots of polynomials of degree >= 3, I've always had better results using Jenkins-Traub ( http://en.wikipedia.org/wiki/Jenkins-Traub_algorithm ) than explicit formulas.
I do not know why Ferrari's solution does not work, but I tried to use the standard numerical method (create a companion matrix and compute its eigenvalues), and I obtain the correct solution, i.e., two real roots at 1.2 and 1.9.
This method is not for the faint of heart. After constructing the companion matrix of the polynomial, you run the QR algorithm to find the eigenvalues of that matrix. Those are the zeroes of the polynomial.
I suggest you use an existing implementation of the QR algorithm, since a good deal of it is closer to a kitchen recipe than to algorithmics. But it is, I believe, the most widely used algorithm to compute eigenvalues, and thereby roots of polynomials.
You can see my answer to a related question. I support the view of Olivier: the way to go may just be the companion matrix / eigenvalue approach (very stable, simple, reliable, and fast).
Edit
I guess it does not hurt if I reproduce the answer here, for convenience:
The numerical solution for doing this many times in a reliable, stable manner involves: (1) forming the companion matrix, and (2) finding the eigenvalues of the companion matrix.
You may think this is a harder problem to solve than the original one, but this is how the solution is implemented in most production code (say, Matlab).
For the polynomial:
p(t) = c0 + c1 * t + c2 * t^2 + t^3
the companion matrix is:
[[0 0 -c0],[1 0 -c1],[0 1 -c2]]
Find the eigenvalues of such matrix; they correspond to the roots of the original polynomial.
For doing this very fast, download the eigenvalue subroutines from LAPACK, compile them, and link them to your code. Do this in parallel if you have too many (say, about a million) sets of coefficients. You could use the QR decomposition, or any other stable methodology for computing eigenvalues (see the Wikipedia entry on "matrix eigenvalues").
Notice that the coefficient of t^3 is one; if this is not the case in your polynomials, you will have to divide the whole thing by that coefficient and then proceed.
Good luck.
Edit: Numpy and octave also depend on this methodology for computing the roots of polynomials. See, for instance, this link.
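To make that concrete, here is a small Python/numpy sketch of the companion-matrix approach (this is essentially what numpy.roots does internally; the helper name is mine):

import numpy as np

def roots_via_companion(coeffs):
    # roots of coeffs[0]*x^n + coeffs[1]*x^(n-1) + ... + coeffs[n]
    # as the eigenvalues of the companion matrix
    c = np.asarray(coeffs, float)
    c = c / c[0]                             # make the polynomial monic
    n = len(c) - 1
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)               # ones on the sub-diagonal
    C[:, -1] = -c[1:][::-1]                  # last column: negated coefficients, low order first
    return np.linalg.eigvals(C)

# The quartic from the question:
#   roots_via_companion([0.9604000000000001, -5.997600000000001, 13.951750054511718,
#                        -14.326264455924333, 5.474214401412618])
# gives two real roots near 1.2078 and 1.9146 plus a complex-conjugate pair near
# 1.5612 +/- 0.1654i, matching the Mathematica values quoted in the next answer.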
The other answers are good and sound advice. However, recalling my experience with the implementation of Ferrari's method in Forth, I think your wrong results are probably caused by:
1. a wrong implementation of the necessary and rather tricky sign combinations,
2. not realizing yet that ".. == beta" in floating point should become "abs(.. - beta) < eps",
3. not yet having found out that there are other square roots in the code that may return complex solutions.
For this particular problem my Forth code in diagnostic mode returns:
x1 = 1.5612244897959360787072371026316680470492e+0000 -1.6542769593216835969789894020584464029664e-0001 i
--> -4.2123274051525879873007970023884313331788e-0054 3.4544674220377778501545407451201598284464e-0077 i
x2 = 1.5612244897959360787072371026316680470492e+0000 1.6542769593216835969789894020584464029664e-0001 i
--> -4.2123274051525879873007970023884313331788e-0054 -3.4544674220377778501545407451201598284464e-0077 i
x3 = 1.2078440724224197532447709413299479764843e+0000 0.0000000000000000000000000000000000000000e-0001 i
--> -4.2123274051525879873010733597821943554068e-0054 0.0000000000000000000000000000000000000000e-0001 i
x4 = 1.9146049071693819497220585618954851525216e+0000 -0.0000000000000000000000000000000000000000e-0001 i
--> -4.2123274051525879873013497171759573776348e-0054 0.0000000000000000000000000000000000000000e-0001 i
The text after "-->" follows from backsubstituting the root into the original equation.
For reference, here are Mathematica/Alpha's results to the highest possible precision I managed to set it:
Mathematica:
x1 = 1.20784407242
x2 = 1.91460490717
x3 = 1.56122449 - 0.16542770 i
x4 = 1.56122449 + 0.16542770 i
A good alternative to the methods already mentioned is TOMS Algorithm 326, which is based on the paper "Roots of Low Order Polynomials" by Terence R.F. Nonweiler, CACM (April 1968).
This is an algebraic solution to 3rd- and 4th-order polynomials that is reasonably compact, fast, and quite accurate. It is much simpler than Jenkins-Traub.
Be warned however that the TOMS code doesn't work all that well.
This Iowa Hills Root Solver page has code for a Quartic / Cubic root finder that is a bit more refined. It also has a Jenkins Traub type root finder.