Choosing starting point in GNU Scientific Library multiroot finder - math

I'm using an implementation of the GNU Scientific Library's multiroot finder to solve for the unknowns (x and y) in the following system of non-linear equations:
I'm a bit confused, however, about the "starting point":
Solve(const double *x, int maxIter = 0, double absTol = 0, double relTol = 0)
Find the root starting from the point X; use the number of iterations and tolerances if given, otherwise use default parameter values, which can be defined by the static method SetDefault.
How is the starting point chosen?

For a one-dimensional polynomial problem this is a well-studied domain, real-root isolation. In two dimensions it is less well known.
First note we can transform the equations into polynomials, take the first one
sqrt[(x-x1)^2 + (y-y1)^2] + s (t2 - t1) = sqrt[(x-x2)^2 + (y-y2)^2]
square both sides
[(x-x1)^2 + (y-y1)^2] + 2 sqrt[(x-x1)^2 + (y-y1)^2] [s (t2 - t1)] + [s (t2 - t1)]^2 = [(x-x2)^2 + (y-y2)^2]
rearrange
2 sqrt[(x-x1)^2 + (y-y1)^2] [s (t2 - t1)] = [(x-x2)^2 + (y-y2)^2] - [(x-x1)^2 + (y-y1)^2] - [s (t2 - t1)]^2
square again
4 [(x-x1)^2 + (y-y1)^2] [s (t2 - t1)]^2 = ([(x-x2)^2 + (y-y2)^2] - [(x-x1)^2 + (y-y1)^2] - [s (t2 - t1)]^2)^2
giving us a polynomial (see the note below).
Once we have a set of polynomials, there are some algebraic techniques you can use.
A technique I use is to convert the polynomials into Bernstein polynomials. First, rescale your domain so both variables lie in [0,1]. If m and n are the degrees of the polynomial in x and y respectively, then the polynomial can be expressed as the sum
sum i=0..m sum j=0..n b_ij mCi nCj x^i (1-x)^(m-i) y^j (1-y)^(n-j)
Where the mCi, nCj are binomial coefficients.
These polynomials have a nice convexity property: if the coefficients b_ij are all positive, then the value of the polynomial is positive for all x, y in [0,1], and similarly if the coefficients are all negative.
This allows you to say that "there is no solution in this region".
So a strategy to solve the problem might be:
Pick the largest region the solutions might occur in.
Subdivide this into a set of smaller regions.
For each region, calculate the Bernstein coefficients of each of your equations.
Examine the coefficients of the Bernstein polynomials; if they are all positive (or all negative), reject that region.
You should now have rejected a large part of the domain. Start the multiroot finder from a point in each remaining region. (A small sketch of the rejection test follows below.)
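To make the rejection test concrete, here is a minimal Python sketch (my own illustration, not taken from the answer or the paper below) that converts a bivariate polynomial, given by power-basis coefficients over the box [0,1]x[0,1], into tensor-product Bernstein coefficients and applies the sign test. For the subdivision step you would first remap each sub-box affinely onto [0,1]x[0,1] and recompute the power coefficients.

import numpy as np
from math import comb

def power_to_bernstein(a):
    # a[k, l] is the coefficient of x^k y^l on [0,1]^2; degrees m, n in x and y.
    m, n = a.shape[0] - 1, a.shape[1] - 1
    b = np.zeros_like(a, dtype=float)
    for i in range(m + 1):
        for j in range(n + 1):
            # b_ij = sum_{k<=i, l<=j} C(i,k) C(j,l) / (C(m,k) C(n,l)) * a_kl
            b[i, j] = sum(comb(i, k) * comb(j, l) / (comb(m, k) * comb(n, l)) * a[k, l]
                          for k in range(i + 1) for l in range(j + 1))
    return b

def no_root_in_unit_box(a):
    # If every Bernstein coefficient is strictly positive (or strictly negative),
    # the polynomial cannot vanish anywhere in [0,1]^2, so the region can be rejected.
    b = power_to_bernstein(a)
    return bool(np.all(b > 0) or np.all(b < 0))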
If you want details of how to construct Bernstein polynomials, you can read my paper, A new method for drawing Algebraic Surfaces.
Note: by squaring we actually get more solutions than we want. In the initial problem we want the principal square root, i.e. +sqrt(A); the squared equations also pick up solutions using the other root, -sqrt(A). This actually makes the original problem a bit easier: rather than the four solutions we would get from the intersection of two full hyperbolae, we only have the intersections of one branch of each, so one or two solutions to the problem.
For your problem, there is quite a nice, simple way to get a starting point.
Assume s = 0. Then each equation gives the set of points that are equidistant from two points. This is a line: the perpendicular bisector of the segment joining the two points. Simply find the point of intersection of the three perpendicular bisectors; this is the centre of the circumscribed circle, and it might be enough for the algorithm. Even simpler is to take the average of the three points.
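A minimal Python sketch of that starting-point heuristic (my own illustration; the circumcentre is found by intersecting two perpendicular bisectors, and the centroid is the even simpler fallback):

import numpy as np

def circumcenter(p1, p2, p3):
    # Perpendicular bisector of p1,p2:  (p2 - p1) . c = (|p2|^2 - |p1|^2) / 2, and similarly for p1,p3.
    A = np.array([p2 - p1, p3 - p1], dtype=float)
    rhs = 0.5 * np.array([p2 @ p2 - p1 @ p1, p3 @ p3 - p1 @ p1])
    return np.linalg.solve(A, rhs)       # fails if the three points are collinear

def centroid(p1, p2, p3):
    return (p1 + p2 + p3) / 3.0          # the even simpler starting point

p1, p2, p3 = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([0.0, 3.0])
print(circumcenter(p1, p2, p3))          # [2.  1.5]
print(centroid(p1, p2, p3))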

Related

How to identify the roots of an equation by plotting its real and imaginary parts

This is more of a general maths question (it might even be silly). In high school we learn to identify the roots of an equation via its plot, right?
For example, for the equation
y = x^2 - 1
The blue line would show us the roots: it is where the blue line crosses the x-axis, so at ±1.
Now, if we said that the equation had a real and an imaginary part, so that it is
y = x^2 - 1 + (x^2 - 0.5)i
as given in the Mathematica screenshot, then we have a real part which crosses zero, and an imaginary part which also crosses zero but at a different x. So my question is: is it possible to identify the roots of such an equation by simply looking at the real and imaginary parts of the plot?
Note: part of my confusion is that if I use FindRoot in Mathematica, I get either 0.877659 - 0.142424i or -0.877659 + 0.142424i. So there might be some fundamental property in maths I don't know about which prevents one from identifying the roots of a complex function by separating its real and imaginary parts...
we have a real part which crosses zero, and an imaginary part which also crosses zero but at a different x.
Those are graphs of the real and imaginary parts plotted for real values of x. If they both crossed the horizontal axis at the same point(s), that would mean the equation has real root(s), since both real and imaginary parts would be zero for some real value of x. However, this equation has no real roots, so the crossing points are different.
So my question is: is it possible to identify the roots of such an equation by simply looking at the real and imaginary parts of the plot?
f(x) = x^2 - 1 + i (x^2 - 0.5) is a complex function of a complex variable, which maps a complex variable x = a + i b to the complex value f(x) = Re(f(x)) + i Im(f(x)).
Each of Re(f(x)) and Im(f(x)) is a real function of a complex variable. Such functions can be plotted in 3D by representing x = a + i b as a point in the (a, b) plane, and the value of the function along the third dimension, say c. For example, f(x) has the following graphs for the real and imaginary parts.
The cross-sections of the two surfaces by the horizontal plane c = 0 are pairs of curves where each function is zero, respectively. It follows that the intersections of those curves are the points where Re(f(x)) = Im(f(x)) = 0, which means they are the roots of the equation f(x) = 0.
Since f(x) = 0 is a quadratic equation, it must have two roots, and those two points are in fact ±(0.877659 - 0.142424 i), as can be verified by direct calculation.
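For a quick numerical check of that claim, here is a short sketch (the coefficients come from rewriting f(x) = 0 as (1 + i) x^2 - (1 + 0.5 i) = 0):

import numpy as np

roots = np.roots([1 + 1j, 0, -(1 + 0.5j)])
print(roots)                                             # approx. 0.877659 - 0.142424j and its negative
print(np.allclose((1 + 1j) * roots**2 - (1 + 0.5j), 0))  # True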

Calculating Normals across a sphere with a wave-like vertex shader

I've been trying to get the correct normals for a sphere I'm messing with using a vertex shader. The algorithm can be boiled down simply to
vert.xyz += max(0, sin(time + 0.004*vert.x))*10*normal.xyz
This causes a wave to roll across the sphere.
In order to make my normals correct, I need to transform them as well. I can take the tangent vector at a given x,y,z, get a perpendicular vector (0, -vert.z, vert.y), and then cross the tangent with the perp vector.
I've been having some issue with the math though, and it's become a personal vendetta at this point. I've solved for the derivative hundreds of times but I keep getting it incorrect. How can I get the tangent?
Breaking down the above line, I can make a math function
f(x,y,z) = max(0, sin(time + 0.004*x))*10*Norm(x,y,z) + (x,y,z)
where Norm(..) is Normalize((x,y,z) - CenterOfSphere)
[Image: the sphere after applying f(x,y,z), with unchanged normals]
What is the correct f '(x,y,z)?
I've accounted for the weirdness caused by the max in f(...), so that's not the issue.
Edit: The most successful algorithm I have right now is as follows:
Tangent vector.x = 0.004*10*cos(0.004*vert.x + time)*norm.x + 10*sin(0.004*vert.x + time) + 1
Tangent vector.y = 10*sin(0.004*vert.x + time) + 1
Tangent vector.z = 10*sin(0.004*vert.x + time) + 1
2nd Tangent vector.x = 0
2nd Tangent vector.y = -norm.z
2nd Tangent vector.z = norm.y
Normalize both, and perform Cross(Tangent2, Tangent1). Normalize again, and done (it should be Cross(Tangent1, Tangent2), but this seems to have better results... more hints of an issue in my math!).
This yields this
Getting the tangent/normal from the derivative of the function can sometimes fail if your surface points are nonlinearly distributed, or some mathematical singularity is present, or you simply made a math mistake (which is the case 99.99% of the time). Anyway, you can always use the geometric approach:
1. You can get the tangents easily by
U(x,y,z) = f(x+d,y,z) - f(x,y,z);
V(x,y,z) = f(x,y+d,z) - f(x,y,z);
where d is some small enough step and f(x,y,z) is your current surface-point computation.
I am not sure why you use 3 input variables; I would use just 2. If the shifted point comes out the same as the unshifted one, use = f(x,y,z+d) - f(x,y,z); instead.
At the end, do not forget to normalize U and V to unit vectors (see the numeric sketch after the notes below).
2. Next step
If bullet 1 leads to correct normals, you can then solve U and V algebraically: rewrite U(x,y,z) = f(x+d,y,z) - f(x,y,z); as a full equation by substituting the surface-point equation for f(x,y,z), and simplify.
[notes]
Sometimes a well-selected d can simplify the normalization to multiplying by a constant.
You should also add a visualization of the normals (for debugging purposes) to actually see what is really happening.
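To make the geometric approach concrete, here is a small numeric sketch in Python (my own; it uses the question's wave displacement as the surface function, and the step size d is an arbitrary choice):

import numpy as np

def displace(p, center, time):
    # the question's wave: p += max(0, sin(time + 0.004*p.x)) * 10 * normal
    n = (p - center) / np.linalg.norm(p - center)
    return p + max(0.0, np.sin(time + 0.004 * p[0])) * 10.0 * n

def normal_by_differences(p, center, time, d=1e-3):
    # finite-difference tangents U, V, then N = normalize(U x V)
    base = displace(p, center, time)
    U = displace(p + np.array([d, 0.0, 0.0]), center, time) - base
    V = displace(p + np.array([0.0, d, 0.0]), center, time) - base
    if np.linalg.norm(np.cross(U, V)) < 1e-12:             # degenerate shift: fall back to a z shift
        V = displace(p + np.array([0.0, 0.0, d]), center, time) - base
    N = np.cross(U, V)
    return N / np.linalg.norm(N)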

determine trajectory for object followers - curve of pursuit

I have started to develop unit trajectories for a game server, and for now I'm trying to retrieve the position of a unit at a given time. It is easy when the trajectory is just a straight line, but it is far more complicated when one unit chases another.
I've made a Flash app to illustrate the problem. The black trajectory is for a unit which travels in a single direction; blue chases black, and red chases blue. What I want is to precalculate the whole trajectory for blue and red, to be able to retrieve their positions in constant time.
Is it possible? Thanks for any help!
Here's a paper, "A classic chase problem solved from a physics perspective" by Carl E. Mungan, that solves a particular version in which the chaser starts out perpendicular to the chased object's trajectory. I believe this is an inessential element of the solution, since that perpendicularity disappears along the rest of the trajectory.
It is an autonomous system of differential equations in the sense that time does not appear explicitly in the coefficients or terms of the equations. This supports the idea that the family of solutions given in the paper is general enough to provide for non-perpendicular initial conditions.
The paper provides further links and references, as well as a useful search term, "curves of pursuit".
Let's state a slightly different, slightly more general initial condition than Mungan's. Suppose the chased object ("ship") is initially located at the origin and travels up the positive y-axis (x = 0) with constant speed V. The chasing object ("torpedo") is initially located at (x0, y0) and, while instantaneously reorienting to aim directly at the "ship", also travels at some constant speed v.
The special case where x0 is zero results in a linear pursuit curve, i.e. a head-on collision or a trailing chase accordingly as y0 is positive or negative. Otherwise by reflection in the y-axis one may assume without loss of generality that x0 > 0. Thus rational powers of x-coordinates will be well-defined.
Assume for our immediate purpose that speeds V,v are unequal, so that ratio r = V/v is not 1. The following is a closed-form solution (1) for the "torpedo" curve similar to Mungan's equation (10):
(y/H) = (1/2) [ (x/H)^(1+r)/(1+r) - (x/H)^(1-r)/(1-r) ] + C        (1)
in which the constants H,C can be determined by the initial conditions.
Applying the condition that initially the torpedo moves toward the ship's location at the origin, we take the derivative with respect to x in (1) and cancel a factor 1/H from both sides:
dy/dx = (1/2) [ (x/H)^r - (x/H)^(-r) ]        (2)
Now equate the curve's slope dy/dx at initial point (x0,y0) with that of its line passing through the origin:
(x0/H)^r - (x0/H)^(-r) = 2*y0/x0 = K        (3)
This amounts to a quadratic equation for positive B = (x0/H)^r:
B^2 - K*B - 1 = 0 (4)
namely B = [K + sqrt(K^2 + 4)]/2 (but use the alternative form if K < 0 to avoid cancellation error), which allows H to be determined from our knowledge of x0 and r:
H = x0/(B^(1/r)) (5)
Knowing H makes it a simple matter to determine the additive constant C in (1) by substituting the initial point (x0,y0) there.
The tricky part will be to determine which point on the "torpedo" trajectory corresponds to a given time t > 0. The inverse of that problem is solved fairly simply. Given a point on the trajectory, find the tangent line at that point using derivative formula (2) and deduce time t from the y-intercept b of that line (i.e. from the current "ship" position):
t = b/V (6)
Therefore determining (x(t), y(t)), where the "torpedo" is located at a given time t > 0, is essentially a root-finding exercise. One readily brackets the desired x(t) between two x-coordinates x1 and x2 that correspond to times t1 and t2 with t1 < t < t2. A root-finding method can then refine this interval until the desired accuracy is achieved; once the interval is fairly small, Newton's method provides rapid convergence. A rough sketch of such a procedure follows.
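Putting formulas (1)-(6) together, here is a rough Python sketch (my own, not taken from Mungan's paper) that determines H and C from the initial conditions and inverts t(x) by bisection:

import math

def torpedo_curve(x0, y0, V, v):
    r = V / v                                       # assume r != 1 and x0 > 0
    K = 2.0 * y0 / x0
    if K >= 0:                                      # eq. (4): B = (x0/H)^r, positive root
        B = (K + math.sqrt(K * K + 4.0)) / 2.0
    else:                                           # alternative form avoids cancellation
        B = 2.0 / (math.sqrt(K * K + 4.0) - K)
    H = x0 / B ** (1.0 / r)                         # eq. (5)
    u0 = x0 / H
    C = y0 / H - 0.5 * (u0 ** (1 + r) / (1 + r) - u0 ** (1 - r) / (1 - r))

    def y_of_x(x):                                  # eq. (1)
        u = x / H
        return H * (0.5 * (u ** (1 + r) / (1 + r) - u ** (1 - r) / (1 - r)) + C)

    def slope(x):                                   # eq. (2)
        u = x / H
        return 0.5 * (u ** r - u ** (-r))

    def time_of_x(x):                               # eq. (6): t = b/V, b = tangent's y-intercept
        return (y_of_x(x) - slope(x) * x) / V

    def x_at_time(t):
        # time_of_x increases as x decreases from x0 towards 0, so bisect on x
        x_lo, x_hi = 1e-9 * x0, x0
        for _ in range(80):
            mid = 0.5 * (x_lo + x_hi)
            if time_of_x(mid) > t:
                x_lo = mid
            else:
                x_hi = mid
        return 0.5 * (x_lo + x_hi)

    return y_of_x, time_of_x, x_at_time

Note that this covers a target moving in a straight line; a chaser whose target is itself turning (red chasing blue in the question) would still need a numerical treatment.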
I can set up the problem for you but not solve it.
The black curve moves at a constant speed v0 in a straight line.
The blue curve moves at a constant speed v1, always in the direction of the black one.
For simplicity, choose coordinates so that at time t = 0 the black curve starts at (x = 0, y = 0) and moves in the x direction.
Thus, at time t >= 0, the position of the black curve is (v0 t, 0).
Problem statement
The goal is to find x, y of the blue curve for times t >= 0 given the initial position (x(t=0), y(t=0)). The differential equations of motion are
dx / dt = v1 (v0 t - x) / a(t)
dy / dt = v1 (- y) / a(t)
where a(t) = sqrt((v0 t - x)^2 + (y^2)) is the distance between blue and black at time t.
This is a system of two nonlinear coupled differential equations. It seems likely that there is no complete analytical solution. Wolfram Alpha gives up without trying for the input
D[y[t],t] = -y[t] / sqrt[(t-x[t])^2 + y[t]^2], D[x[t],t] = (t-x[t]) / sqrt[(t-x[t])^2 + y[t]^2]
You could try asking on math.stackexchange. Good luck!
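If a closed-form solution is out of reach, the original goal (looking up a chaser's position at a given time) can still be met by integrating the system above numerically once and storing the samples, which gives constant-time lookup afterwards. A rough fixed-step sketch (my own; a smaller dt or a higher-order integrator improves accuracy):

import math

def precompute_chase(x, y, v0, v1, dt, steps):
    # Integrate dx/dt = v1 (v0 t - x)/a, dy/dt = v1 (-y)/a with a the distance to the target.
    path = [(0.0, x, y)]
    for i in range(steps):
        t = i * dt
        a = math.hypot(v0 * t - x, y)
        if a < 1e-9:                                 # chaser has caught the target
            break
        x += dt * v1 * (v0 * t - x) / a
        y += dt * v1 * (-y) / a
        path.append(((i + 1) * dt, x, y))
    return path                                      # samples (t, x, y); index by round(t/dt) for O(1) lookup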

Scale-agnostic, differentiable, co-planarity measure

I am looking for an (almost everywhere) differentiable function f(p1, p2, p3, p4) that given four points will give me a scale-agnostic measure for co-planarity. It is zero if the four points lie on the same plane and positive otherwise. Scale-agnostic means that, when I uniformly scale all points the planarity measure will return the same.
I came up with something that is quite complex and not easy to optimize. Define u=p2-p1, v=p3-p1, w=p4-p1. Then the planarity measure is:
[(u x v) * w]² / (|u x v|² |w|²)
where x means cross product and '*' means dot product.
The numerator is simply (the square of) the volume of the tetrahedron defined by the four points (up to a constant factor), and the denominator is a normalizing factor that makes this measure simply the squared cosine of an angle. Because angles do not change under uniform scaling, this function satisfies all my requirements.
Does anybody know of something simpler?
Alex.
Edit:
I eventually used an Augmented Lagrangian method to perform optimization, so I don't need it to be scale agnostic. Just using the constraint (u x v) * w = 0 is enough, as the optimization procedure finds the correct Lagrange multiplier to compensate for the scale.
Your method seems OK. I'd do something like this for an efficient implementation:
Take u, v, w as you did.
Normalize them: various tricks exist to evaluate the inverse square root efficiently to whatever precision you want, like this jewel. Most modern processors have built-ins for this operation.
Take f = |det(u, v, w)| ( = |(u x v) . w| ). There are fast direct implementations for 3x3 determinants; see batty's answer to this question.
This amounts to what you do, without the squares. It is still homogeneous and almost everywhere differentiable. Take the square of the determinant if you want something differentiable everywhere.
EDIT: phkahler implicitly suggested using the ratio of the radius of the inscribed sphere to the radius of the circumscribed sphere as a measure of planarity. This is a bounded differentiable function of the points, invariant under scaling. However, it is at least as difficult to compute as what you (and I) suggest; in particular, computing the radius of the circumscribed sphere is very sensitive to round-off errors.
A measure that should be symmetric with respect to point reorderings is:
((u x v).w)^2/(|u||v||w||u-v||u-w||v-w|)
which is proportional to the volume of the tetrahedron squared divided by all 6 edge lengths. It is not simpler than your formula or Alexandre C.'s, but it is not much more complicated. However, it does become unnecessarily singular when any two points coincide.
A better-behaved, order-insensitive formula is:
let a = u x v
b = v x w
c = w x u
(a.w)^2/(|a| + |b| + |c| + |a+b+c|)^3
which is something like the volume of the tetrahedron divided by the surface area, but raised to appropriate powers to make the whole thing scale-insensitive. This is also a bit more complex than your formula, but it works unless all 4 points are collinear.
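For concreteness, here is a small Python sketch of both the question's measure and the order-insensitive variant above (the function names are mine):

import numpy as np

def coplanarity_original(p1, p2, p3, p4):
    # [(u x v) . w]^2 / (|u x v|^2 |w|^2)  -- the question's measure
    u, v, w = p2 - p1, p3 - p1, p4 - p1
    n = np.cross(u, v)
    return np.dot(n, w) ** 2 / (np.dot(n, n) * np.dot(w, w))

def coplanarity_symmetric(p1, p2, p3, p4):
    # ((u x v).w)^2 / (|a| + |b| + |c| + |a+b+c|)^3  -- the order-insensitive variant
    u, v, w = p2 - p1, p3 - p1, p4 - p1
    a, b, c = np.cross(u, v), np.cross(v, w), np.cross(w, u)
    num = np.dot(a, w) ** 2
    den = (np.linalg.norm(a) + np.linalg.norm(b) + np.linalg.norm(c)
           + np.linalg.norm(a + b + c)) ** 3
    return num / den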
How about
|(u x v) * w| / |u|^3
(and you can change |x| to (x)^2 if you think it's simpler).

Vector math, finding coördinates on a planar between 2 vectors

I am trying to generate a 3D tube along a spline. I have the coördinates of the spline (x1,y1,z1 - x2,y2,z2 - etc.), which you can see in the illustration in yellow. At those points I need to generate circles whose vertices will be connected at a later stage. The circles need to be perpendicular to the 'corners' between two line segments of the spline in order to form a correct tube. Note that the number of segments is kept low for illustration purposes.
[apparently I'm not allowed to post images so please view the image at this link]
http://img191.imageshack.us/img191/6863/18720019.jpg
I have got as far as being able to calculate the vertices of each ring at each point of the spline, but they all lie in planes with the same orientation. I need them to be rotated according to their 'legs' (as A & B are to C, for instance).
I've been thinking this over and came up with the following:
two line segments can be seen as 2 vectors (A & B in the illustration)
the corner (C in the illustration) is where a ring of vertices needs to be calculated
I need to find the plane on which all of those vertices will reside
I can then use this plane (its normal vector?) to calculate new vectors from the centre point, which is C
and find their x, y, z using radius * sin and cos
However, I'm really confused about the math part of this. I read about the dot product, but that returns a scalar, which I don't know how to apply in this case.
Can someone point me into the right direction?
[edit]
To give a bit more info on the situation:
I need to construct a buffer of floats, which -in groups of 3- describe vertex positions and will be connected by OpenGL ES, given another buffer with indices to form polygons.
To give shape to the tube, I first created an array of floats, which -in groups of 3- describe control points in 3d space.
Then, along with a variable for segment density, I pass these control points to a function that creates a Catmull-Rom spline and returns it as another array of floats which, again in groups of 3, describe the vertices of the Catmull-Rom spline.
On each of these vertices, I want to create a ring of vertices which also can differ in density (amount of smoothness / vertices per ring).
All former vertices (control points and those that describe the catmull rom spline) are discarded.
Only the vertices that form the tube rings will be passed to OpenGL, which in turn will connect those to form the final tube.
I have got as far as being able to create the Catmull-Rom spline and create rings at the positions of its vertices; however, they all lie in planes with the same orientation instead of following the spline's path.
[/edit]
Thanks!
Suppose you have a parametric curve such as:
xx[t_] := Sin[t];
yy[t_] := Cos[t];
zz[t_] := t;
Which gives:
The tangent vector to our curve is formed by the derivatives in each direction. In our case
Tg[t_]:= {Cos[t], -Sin[t], 1}
The orthogonal plane to that vector comes solving the implicit equation:
Tg[t].{x - xx[t], y - yy[t], z - zz[t]} == 0
In our case this is:
-t + z + Cos[t] (x - Sin[t]) - (y - Cos[t]) Sin[t] == 0
Now we find a circle in that plane, centered at the curve. i.e:
c[{x_, y_, z_, t_}] := (x - xx[t])^2 + (y - yy[t])^2 + (z - zz[t])^2 == r^2
Solving both equations, you get the equation for the circles:
HTH!
Edit
And by drawing a lot of circles, you may get a (not efficient) tube:
Or with a good Graphics 3D library:
Edit
Since you insist :) here is a program to calculate the circle at junctions.
a = {1, 2, 3}; b = {3, 2, 1}; c = {2, 3, 4};
l1 = Line[{a, b}];
l2 = Line[{b, c}];
k = Cross[(b - a), (c - b)] + b; (*Cross Product*)
angle = -ArcCos[(a - b).(c - b)/(Norm[(a - b)] Norm[(c - b)])]/2;
q = RotationMatrix[angle, k - b].(a - b);
circle[t_] := (k - b)/Norm[k - b] Sin[t] + (q)/Norm[q] Cos[t] + b;
Show[{Graphics3D[{
Red, l1,
Blue, l2,
Black, Line[{b, k}],
Green, Line[{b, q + b}]}, Axes -> True],
ParametricPlot3D[circle[t], {t, 0, 2 Pi}]}]
Edit
Here you have the mesh constructed by this method. It is not pretty, IMHO:
I don't know what your language of choice is, but if you speak MatLab there are already a few implementations available. Even if you are using another language, some of the code might be clear enough to inspire a reimplementation.
The key point is that if you don't want your tube to twist when you connect the vertices, you cannot determine the basis locally, but need to propagate it along the curve. The Frenet frame, as proposed by jalexiou, is one option but simpler stuff works fine as well.
I did a simple MatLab implementation called tubeplot.m in my formative years (based on a simple non-Frenet propagation), and googling it, I can see that Anders Sandberg from kth.se has done a (re?)implementation with the same name, available at http://www.nada.kth.se/~asa/Ray/Tubeplot/tubeplot.html.
Edit:
The following is pseudocode for the simple implementation in tubeplot.m. I have found it to be quite robust.
The plan is to propagate two normals a and b along the curve, so
that at each point on the curve a, b and the tangent to the curve
will form an orthogonal basis which is "as close as possible" to the
basis used in the previous point.
Using this basis we can find points on the circumference of the tube.
// *** Input/output ***
// v[0]..v[N-1]: points on your curve as vectors; no neighbouring points should overlap
// nvert:        number of vertices around the tube, integer
// rtube:        radius of the tube, float
// xyz:          (N, nvert)-array with the vertices of the tube as vectors

// *** Initialization ***
// 1: Tangent vectors
for i = 1 to N-2:
    dv[i] = v[i+1] - v[i-1]
dv[0] = v[1] - v[0]
dv[N-1] = v[N-1] - v[N-2]

// 2: An initial value for a (must not be parallel to dv[0]):
idx = <index of smallest component of abs(dv[0])>
a = [0,0,0], a[idx] = 1.0

// *** Loop ***
for i = 0 to N-1:
    b = normalize(cross(a, dv[i]))
    a = normalize(cross(dv[i], b))
    for j = 0 to nvert-1:
        th = j*2*pi/nvert
        xyz[i,j] = v[i] + cos(th)*rtube*a + sin(th)*rtube*b
Implementation details: You can probably speed things up by precalculating the cos and sin values. Also, to get robust behaviour, you should fuse input points closer than, say, 0.1*rtube, or at least check that all the dv vectors are non-zero.
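For reference, here is a direct Python transcription of the pseudocode above (my own sketch, not checked against the original tubeplot.m):

import numpy as np

def tube_vertices(v, nvert, rtube):
    # v: (N, 3) array of points on the curve; returns an (N, nvert, 3) array of ring vertices.
    v = np.asarray(v, dtype=float)
    N = len(v)
    dv = np.empty_like(v)
    dv[1:-1] = v[2:] - v[:-2]                     # central differences for the tangents
    dv[0], dv[-1] = v[1] - v[0], v[-1] - v[-2]

    a = np.zeros(3)
    a[np.argmin(np.abs(dv[0]))] = 1.0             # initial a, guaranteed not parallel to dv[0]

    th = np.arange(nvert) * 2.0 * np.pi / nvert
    xyz = np.empty((N, nvert, 3))
    for i in range(N):
        b = np.cross(a, dv[i]); b /= np.linalg.norm(b)
        a = np.cross(dv[i], b); a /= np.linalg.norm(a)
        xyz[i] = v[i] + rtube * (np.cos(th)[:, None] * a + np.sin(th)[:, None] * b)
    return xyz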
HTH
You need to look at the Frenet formulas in differential geometry. See figure 2.1 for an example with a helix.
Surfaces & Curves
Taking the cross product of the line segment and the up vector will give you a vector at right angles to them both (unless the line segment points exactly up or down), which I'll call horizontal. Taking the cross product of horizontal and the line segment gives you another vector that's at right angles to both the line segment and horizontal (let's call it vertical). You can then get the circle coordinates from lineStart + cos(theta) * horizontal + sin(theta) * vertical for theta in 0 to 2*Pi.
Edit: To get the points for a ring at the mid-point between two segments, use the sum of the two line segment vectors as the direction, i.e. their average.
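A short Python sketch of this construction (my own illustration; the up vector and the averaging at joints are as described above):

import numpy as np

def ring(center, direction, radius, nvert, up=np.array([0.0, 0.0, 1.0])):
    # Build an orthogonal basis around the segment direction and place nvert points on the circle.
    horizontal = np.cross(direction, up)
    horizontal /= np.linalg.norm(horizontal)      # fails if the segment points exactly up or down
    vertical = np.cross(horizontal, direction)
    vertical /= np.linalg.norm(vertical)
    th = np.arange(nvert) * 2.0 * np.pi / nvert
    return center + radius * (np.cos(th)[:, None] * horizontal + np.sin(th)[:, None] * vertical)

# At a joint C between segments A->C and C->B, pass the sum of the two segment
# vectors as the direction:  ring(C, (C - A) + (B - C), radius, nvert)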
