Some help rendering the Mandelbrot set - math

I have been given some work to do with the fractal visualisation of the Mandelbrot set.
I'm not looking for a complete solution (naturally), I'm asking for help with regard to the orbits of complex numbers.
Say I have a Complex number derived from a point on the complex plane. I now need to iterate over its orbit sequence and plot points according to whether the orbit grows without bound (escapes) or not.
How do I compute the orbit of a complex number? Any guidance is much appreciated (links etc.), as are any pointers on the Math functions needed to test the orbit sequence, e.g. Math.pow()
I'm using Java but that's not particularly relevant here.
Thanks again,
Alex

When you display the Mandelbrot set, you simply translate the real and imaginary parts into x and y coordinates, respectively.
So, for example the complex number 4.5 + 0.27i translates into x = 4.5, y = 0.27.
The Mandelbrot set is all points where the iteration Z = Z² + C never reaches a value with |Z| > 2, but in practice you include all points where the value doesn't exceed 2 within a specific number of iterations, for example 1000. To get the colorful renderings that you usually see of the set, you assign different colors to points outside the set depending on how fast they reach the limit.
As these are complex numbers, the equation is actually Zr + Zi·i = (Zr + Zi·i)² + Cr + Ci·i. You split that into two equations, one for the real part and one for the imaginary part, and then it's just plain algebra. C is the coordinate of the point that you want to test, and the initial value of Z is zero.
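A minimal Java sketch of that per-pixel iteration, with the real and imaginary parts carried separately (variable names and the iteration cap are illustrative choices, not taken from the answerer's generator):

static int escapeTime(double cr, double ci, int maxIterations) {
    double zr = 0.0, zi = 0.0;                    // Z starts at zero
    for (int i = 0; i < maxIterations; i++) {
        // Z = Z^2 + C, with the real and imaginary parts handled separately
        double newZr = zr * zr - zi * zi + cr;
        double newZi = 2.0 * zr * zi + ci;
        zr = newZr;
        zi = newZi;
        // |Z| > 2 is equivalent to |Z|^2 > 4, which avoids a sqrt per iteration
        if (zr * zr + zi * zi > 4.0) {
            return i;                             // escaped: colour by iteration count
        }
    }
    return maxIterations;                         // treated as inside the set
}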
Here's an image from my multi-threaded Mandelbrot generator :)

Actually the Mandelbrot set is the set of complex numbers for which the iteration stays bounded.
So the only points in the Mandelbrot set are that big boring colour in the middle, and all of the pretty colours you see are doing nothing more than representing the rate at which points near the boundary (but on the wrong side) spin off to infinity.
In mathspeak,
M = { c in C : |z_k| stays bounded as k -> inf } where z_0 = c, z_(k+1) = z_k^2 + c
i.e. pick any complex number c. To determine whether it is in the set, repeatedly iterate z_(k+1) = z_k^2 + c starting from z_0 = c; z_k will either stay bounded or fly off to infinity. If it stays bounded, c is in the set. Otherwise not.
It is possible to prove that once |z_k| > 2, it is not going to stay bounded. This is a good exercise in optimisation: testing |z_k|^2 > 4 is equivalent, and squaring up will save you the expensive sqrt() function.

Wolfram MathWorld has a nice page on the Mandelbrot set.
A Complex class will be most helpful.
Maybe an example like this will stimulate some thought. I wouldn't recommend using an Applet.
You have to know how to add, subtract, multiply, divide, and raise complex numbers to powers, in addition to functions like sine, cosine, exponential, etc. If you don't know those, I'd start there.
The book that I was taught from was Ruel V. Churchill "Complex Variables".

/d{def}def/u{dup}d[0 -185 u 0 300 u]concat/q 5e-3 d/m{mul}d/z{A u m B u
m}d/r{rlineto}d/X -2 q 1{d/Y -2 q 2{d/A 0 d/B 0 d 64 -1 1{/f exch d/B
A/A z sub X add d B 2 m m Y add d z add 4 gt{exit}if/f 64 d}for f 64 div
setgray X Y moveto 0 q neg u 0 0 q u 0 r r r r fill/Y}for/X}for showpage

Related

define multi-dimensional object of a given volume

This question is asked from a theoretical math perspective.
In the space S = [-1, 1]^d, given a volume V, can I always define an object in S with exactly this volume? Also, can I surround any x in S with an object of volume V?
I would answer yes to both questions, because I only need to find d positive real numbers, to be used as side lengths, whose product is V, and I assume I can construct such a box around any x in S, but I just want to be sure and get a nice explanation.
thanks
Basically yes. Let's say n is ceil(log2(V)), so V = c·2^n with 0.5 < c <= 1;
then one possible solution is the c-th part of an n-dimensional cube of side 2,
so for V = 7 you take 7/8ths of a cube, for V = 9 you take 9/16ths of a 4-dimensional hypercube, etc.
The only assumption here is that "volume" should be interpreted as the n-dimensional measure in n-dimensional space.
If I get it right, your space S has coordinates in the range [-1,+1], which limits the volumes it can contain to:
V = 2^3
HV = 2^d
where V is the standard 3D volume and HV is the d-dimensional hyper-volume that can fit into your space S. So you can construct objects with volumes and hyper-volumes up to these limits.
So if you want to construct an object with volume V or hyper-volume HV, you can create an axis-aligned cube of side a:
a*a*a = V
a = V^(1/3)
a^d = HV
a = HV^(1/d)
provided a <= 2; otherwise your S is too small ...
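As a small illustration of that construction (a sketch under the assumptions above; the method name and the clamping strategy are mine), here is Java code that places an axis-aligned hypercube of a given hyper-volume inside S so that it contains a given point x:

static double[] cubeAround(double[] x, double hyperVolume) {
    int d = x.length;
    double a = Math.pow(hyperVolume, 1.0 / d);    // side length, a = HV^(1/d)
    if (a > 2.0) {
        return null;                              // HV exceeds 2^d, so it cannot fit in S
    }
    double[] lower = new double[d];
    for (int i = 0; i < d; i++) {
        // centre the cube on x[i], then clamp so [lower, lower + a] stays inside [-1, 1]
        double lo = x[i] - a / 2.0;
        lower[i] = Math.max(-1.0, Math.min(lo, 1.0 - a));
    }
    return lower;                                 // the cube [lower, lower + a]^d contains x
}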

Linear regression / line finding for non-function lines

I have a number of points that lie roughly along a line, and I want to find that line. The line is in 2D space and is defined by two points, or by one point and an angle. What would be the algorithm for this?
There's a lot about this on SO, on the internet in general, and also in Numerical Recipes, but all examples seem to focus on the function form of the line (y=ax+b), which is not going to work well for (almost) vertical lines.
I could possibly detect whether the line is more horizontal or more vertical and swap coordinates in the latter case, but maybe there exists some more elegant solution?
I'm using C# at the moment but can probably translate from any code.
I'm sorry I can't provide a reference, but here's how:
Suppose your N (2d) data points are p[] and you want to find a unit vector a and a scalar d to minimise
E = Sum{ i | sqr( a'*p[i] - d ) }/N
(The line is { q | a'*q = d }; with |a| = 1, E is the mean of the squared distances of the data points from the line.)
Some tedious algebra shows that
E = a'*C*a + sqr(d - a'*M)
where M is the mean and C the covariance of the data, ie
M = Sum{ i | p[i] } / N
C = Sum{ i | (p[i]-M)*(p[i]-M)' } / N
E will be minimised by choosing d = a'*M, and a to be an eigenvector of C corresponding to the smaller eigenvalue.
So the algorithm is:
Compute M and C
Find the smaller eigenvalue of C and the corresponding eigenvector a
Compute d = a'*M
(Note that the same thing works in higher dimensions too. For example in 3d we would find the 'best' plane).
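A minimal 2-D Java sketch of that recipe, using the closed form for the eigenvalues of a symmetric 2x2 matrix (the method name and array-based interface are illustrative choices):

static double[] fitLine(double[] xs, double[] ys) {
    int n = xs.length;

    // mean M
    double mx = 0.0, my = 0.0;
    for (int i = 0; i < n; i++) { mx += xs[i]; my += ys[i]; }
    mx /= n; my /= n;

    // covariance C (a symmetric 2x2 matrix)
    double cxx = 0.0, cxy = 0.0, cyy = 0.0;
    for (int i = 0; i < n; i++) {
        double dx = xs[i] - mx, dy = ys[i] - my;
        cxx += dx * dx; cxy += dx * dy; cyy += dy * dy;
    }
    cxx /= n; cxy /= n; cyy /= n;

    // smaller eigenvalue of [[cxx, cxy], [cxy, cyy]]
    double trace = cxx + cyy;
    double det = cxx * cyy - cxy * cxy;
    double lambda = trace / 2.0 - Math.sqrt(trace * trace / 4.0 - det);

    // corresponding eigenvector a (special-case a nearly diagonal covariance)
    double ax, ay;
    if (Math.abs(cxy) > 1e-12) {
        ax = lambda - cyy;
        ay = cxy;
    } else {
        ax = (cxx <= cyy) ? 1.0 : 0.0;
        ay = (cxx <= cyy) ? 0.0 : 1.0;
    }
    double len = Math.hypot(ax, ay);
    ax /= len; ay /= len;

    double d = ax * mx + ay * my;                 // d = a'M
    return new double[] { ax, ay, d };            // the line is { (x, y) : ax*x + ay*y = d }
}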

Tangent formula

I have this formula
B = tan(atan(A) + C)
where A is the input, B is the output and C is a constant. The problem is that the sin, cos and tan functions are computationally expensive, and there is also quite a big loss of precision through the formula when it is calculated with 4-byte floats. I am in the process of optimizing my code, so is there any way to avoid using these functions, even if the total number of calculations is several times higher?
Further background: the numbers A, B and C are the ratios of the x/y coordinates of 3 points on a 2-dimensional plane.
According to Wolfram Alpha, tan(atan(A)+C) can be written as (A+tan(C))/(1-A*tan(C)).
You can easily derive this by hand from the tangent sum formula:
tan(a + b) = (tan a + tan b)/(1 - tan a tan b).
If the implementation of tan in your math library is slow or inaccurate it's possible that faster or more precise implementations exist.
I'll presume that your formula is correct. Mark's comment essentially comes down to the idea that C must have units of an angle for the formula to make sense, but if C is a ratio, then it won't have the proper units. Mark has a valid question.
In the end, you will still need to compute a tangent, but there are things you can do to help a bit.
First, apply a simple trig identity, for the tangent of a sum. This, combined with the fact that tan(atan(A)) = A, reduces your formula to
B = (A + tan(C))/(1 - A*tan(C))
Thus you still need to compute ONE tangent, that of C. (Thus precompute tan(C), once.) Nothing will get you around that.
However, there are ways to compute a tangent more efficiently than as a ratio sin(C)/cos(C). For example, a direct series approximation might be better. Or there is a trick using a series for the versine, which is itself more efficient to compute than the tangent series. And for small angles it can be rapidly convergent. You can assure small angles using range reduction tricks for that versine. Other tricks exist too.
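A minimal sketch of that rewrite in Java, with tan(C) computed once up front (the names and the example value of C are mine; note the expression blows up where A*tan(C) approaches 1, just as the true tangent does):

static final double TAN_C = Math.tan(0.1);        // precomputed once; 0.1 is just an example value of C

static double transform(double a) {
    // B = tan(atan(A) + C) == (A + tan(C)) / (1 - A * tan(C))
    return (a + TAN_C) / (1.0 - a * TAN_C);
}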
atan(A) = atan(x_a/y_a) for some point is the angle between the vector (x_a, y_a) and the Oy axis. Because C is a constant, you can precompute a vector c = (x_c, y_c) of unit length, inclined to Oy at angle C. Then cos(atan(A)+C) can be expressed as the inner product of these vectors divided by the length of a. From the cosine you can get the tangent using the main Pythagorean identity. In the end I got:
B = sqrt((x_a^2 + y_a^2)/(x_a*x_c + y_a*y_c)^2 - 1)
This might be more efficient. Be careful with the signs.
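For illustration, a sketch of that variant in Java (parameter names are mine; it returns the magnitude of B, so the sign still has to be recovered separately, as the answer warns):

static double transformViaDot(double xa, double ya, double xc, double yc) {
    // (xc, yc) is the precomputed unit vector inclined to Oy at angle C
    double dot = xa * xc + ya * yc;               // = |a| * cos(atan(A) + C)
    double lenSq = xa * xa + ya * ya;             // = |a|^2
    return Math.sqrt(lenSq / (dot * dot) - 1.0);  // = |tan(atan(A) + C)|, via tan^2 = 1/cos^2 - 1
}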

determine trajectory for object followers - curve of pursuit

I have started to develop unit trajectories for a game server, and for now I'm trying to retrieve the position of a unit at a given time. It is easy when the trajectory is just a straight line, but it is far more complicated when one unit chases another.
I've made a Flash app to illustrate the problem. The black trajectory is for a unit which travels in a single direction. Blue chases black, and red chases blue. What I want is to precalculate the whole trajectory for blue and red, to be able to retrieve their positions in constant time.
Is it possible? Thanks for any help!!
Here's a paper, "A classic chase problem solved from a physics perspective" by Carl E. Mungan, that solves a particular version in which the chaser is initially perpendicular to the chased object's trajectory. I believe this is an inessential element of the solution, since that perpendicularity disappears along the rest of the trajectory.
It is an autonomous system of differential equations in the sense that time does not appear explicitly in the coefficients or terms of the equations. This supports the idea that the family of solutions given in the paper is general enough to provide for non-perpendicular initial conditions.
The paper provides further links and references, as well as a useful search term, "curves of pursuit".
Let's state a slightly different, slightly more general initial condition than Mungan's. Suppose the chased object ("ship") is initially located at the origin and travels up the positive y-axis (x=0) with constant speed V. The chasing object ("torpedo") is initially located at (x0,y0) and, although instantaneously reorienting itself directly toward the "ship", also travels at some constant speed v.
The special case where x0 is zero results in a linear pursuit curve, i.e. a head-on collision or a trailing chase accordingly as y0 is positive or negative. Otherwise by reflection in the y-axis one may assume without loss of generality that x0 > 0. Thus rational powers of x-coordinates will be well-defined.
Assume for our immediate purpose that speeds V,v are unequal, so that ratio r = V/v is not 1. The following is a closed-form solution (1) for the "torpedo" curve similar to Mungan's equation (10):
(y/H) = (1/2) [ (x/H)^(1+r)/(1+r) - (x/H)^(1-r)/(1-r) ] + C     (1)
in which the constants H,C can be determined by the initial conditions.
Applying the condition that initially the torpedo moves toward the ship's location at the origin, we take the derivative with respect to x in (1) and cancel a factor 1/H from both sides:
dy/dx = (1/2) [ (x/H)^r - (x/H)^(-r) ]     (2)
Now equate the curve's slope dy/dx at initial point (x0,y0) with that of its line passing through the origin:
(x0/H)^r - (x0/H)^(-r) = 2*y0/x0 = K     (3)
This amounts to a quadratic equation for positive B = (x0/H)^r:
B^2 - K*B - 1 = 0 (4)
namely B = [K + sqrt(K^2 + 4)]/2 (but use the equivalent form B = 2/[sqrt(K^2 + 4) - K] if K < 0 to avoid cancellation error), which allows H to be determined from our knowledge of x0 and r:
H = x0/(B^(1/r)) (5)
Knowing H makes it a simple matter to determine the additive constant C in (1) by substituting the initial point (x0,y0) there.
The tricky part will be to determine which point on the "torpedo" trajectory corresponds to a given time t > 0. The inverse of that problem is solved fairly simply. Given a point on the trajectory, find the tangent line at that point using derivative formula (2) and deduce time t from the y-intercept b of that line (i.e. from the current "ship" position):
t = b/V (6)
Therefore determining (x(t),y(t)) where the "torpedo" is located at a given time t > 0 is essentially a root-finding exercise. One readily brackets the desired x(t) between two x-coordinates x1 and x2 that correspond to times t1 and t2 such that t1 < t < t2. A root-finding method can be used to refine this interval until the desired accuracy is achieved. Once a fairly small interval has been refined, Newton's method will provide rapid convergence. We can look at the details of such a procedure in a next installment!
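To make that concrete, here is a Java sketch of the inverse mapping and a simple bisection for x(t). Function names are mine, and it is assumed the caller supplies a valid bracket [xLo, xHi]; since x decreases as time advances, the bracket satisfies timeAt(xLo) >= t >= timeAt(xHi):

// y(x) on the torpedo curve, equation (1)
static double torpedoY(double x, double H, double C, double r) {
    double s = x / H;
    return H * (0.5 * (Math.pow(s, 1.0 + r) / (1.0 + r)
                     - Math.pow(s, 1.0 - r) / (1.0 - r)) + C);
}

// dy/dx, equation (2)
static double torpedoSlope(double x, double H, double r) {
    double s = x / H;
    return 0.5 * (Math.pow(s, r) - Math.pow(s, -r));
}

// time at which the torpedo passes abscissa x: the tangent's y-intercept over V, equation (6)
static double timeAt(double x, double H, double C, double r, double V) {
    double b = torpedoY(x, H, C, r) - torpedoSlope(x, H, r) * x;
    return b / V;
}

// bisection for x(t), given a bracket with timeAt(xLo) >= t >= timeAt(xHi)
static double xAtTime(double t, double xLo, double xHi,
                      double H, double C, double r, double V) {
    for (int i = 0; i < 60; i++) {
        double mid = 0.5 * (xLo + xHi);
        if (timeAt(mid, H, C, r, V) >= t) xLo = mid; else xHi = mid;
    }
    return 0.5 * (xLo + xHi);
}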
I can set up the problem for you but not solve it.
The black curve is moving at a constant velocity v0, and in a straight line.
The blue curve moves at a constant velocity v1 in the direction of black.
For simplicity, choose coordinates so that at time t=0 the black curve starts at (x=0, y=0) and is moving in the direction x.
Thus, at time t >= 0, the position of the black curve is (v0 t, 0).
Problem statement
The goal is to find x, y of the blue curve for times t >= 0 given the initial position (x(t=0), y(t=0)). The differential equations of motion are
dx / dt = v1 (v0 t - x) / a(t)
dy / dt = v1 (- y) / a(t)
where a(t) = sqrt((v0 t - x)^2 + (y^2)) is the distance between blue and black at time t.
This is a system of two nonlinear coupled differential equations. It seems likely that there is no complete analytical solution. Wolfram Alpha gives up without trying for the input
D[y[t],t] = -y[t] / sqrt[(t-x[t])^2 + y[t]^2], D[x[t],t] = (t-x[t]) / sqrt[(t-x[t])^2 + y[t]^2]
You could try asking on math.stackexchange. Good luck!
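Failing a closed form, a numerical fallback is straightforward: step the two equations forward in time and store the samples, which also gives the constant-time lookup the question asks for. A minimal fixed-step Euler sketch (names and step size are mine; a higher-order scheme such as RK4 would be more accurate for the same step size):

static double[][] integrateChase(double v0, double v1, double x0, double y0,
                                 double dt, int steps) {
    double[][] path = new double[steps + 1][2];   // precomputed samples: path[k] = (x, y) at t = k*dt
    double x = x0, y = y0;
    path[0][0] = x;
    path[0][1] = y;
    for (int k = 1; k <= steps; k++) {
        double t = (k - 1) * dt;
        double tx = v0 * t - x;                   // vector from the chaser to the target,
        double ty = -y;                           // which is at (v0 * t, 0)
        double dist = Math.sqrt(tx * tx + ty * ty);
        if (dist > 1e-12) {                       // stop steering once the target is (numerically) caught
            x += dt * v1 * tx / dist;
            y += dt * v1 * ty / dist;
        }
        path[k][0] = x;
        path[k][1] = y;
    }
    return path;                                  // position at time t ~= path[round(t / dt)]
}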

Scale-agnostic, differentiable, co-planarity measure

I am looking for an (almost everywhere) differentiable function f(p1, p2, p3, p4) that given four points will give me a scale-agnostic measure for co-planarity. It is zero if the four points lie on the same plane and positive otherwise. Scale-agnostic means that, when I uniformly scale all points the planarity measure will return the same.
I came up with something that is quite complex and not easy to optimize. Define u=p2-p1, v=p3-p1, w=p4-p1. Then the planarity measure is:
[(u x v) * w]² / (|u x v|² |w|²)
where x means cross product and '*' means dot product.
The numerator is (proportional to) the square of the volume of the tetrahedron defined by the four points, and the denominator is a normalizing factor that makes this measure simply the squared cosine of the angle between u x v and w. Because angles do not change under uniform scaling, this function satisfies all my requirements.
Does anybody know of something simpler?
Alex.
Edit:
I eventually used an Augmented Lagrangian method to perform optimization, so I don't need it to be scale agnostic. Just using the constraint (u x v) * w = 0 is enough, as the optimization procedure finds the correct Lagrange multiplier to compensate for the scale.
Your method seems OK; I'd do something like this for an efficient implementation:
Take u, v, w as you did
Normalize them: various tricks exist to evaluate the inverse square root efficiently with whatever precision you want, like this jewel. Most modern processors have builtins for this operation.
Take f = |det(u, v, w)| ( = |(u x v) . w| ). There are fast direct implementations for 3x3 matrices; see #batty's answer to this question.
This amounts to what you do without the squares. It is still homogeneous and almost everywhere differentiable. Take the square of the determinant if you want something differentiable everywhere.
EDIT: #phkahler implicitly suggested using the ratio of the radius of the inscribed sphere to the radius of the circumscribed sphere as a measure of planarity. This is a bounded differentiable function of the points, invariant under scaling. However, it is at least as difficult to compute as what you (and I) suggest; in particular, computing the radius of the circumscribed sphere is very sensitive to roundoff errors.
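A small Java sketch of that recipe, using plain Math.sqrt for the normalization rather than a fast inverse-square-root trick (helper names are mine; the points are passed as double[3]):

static double coplanarity(double[] p1, double[] p2, double[] p3, double[] p4) {
    double[] u = normalize(sub(p2, p1));
    double[] v = normalize(sub(p3, p1));
    double[] w = normalize(sub(p4, p1));
    // |det(u, v, w)| = |(u x v) . w|, expanded directly for a 3x3 matrix
    double det = u[0] * (v[1] * w[2] - v[2] * w[1])
               - u[1] * (v[0] * w[2] - v[2] * w[0])
               + u[2] * (v[0] * w[1] - v[1] * w[0]);
    return Math.abs(det);                          // zero iff the four points are coplanar
}

static double[] sub(double[] a, double[] b) {
    return new double[] { a[0] - b[0], a[1] - b[1], a[2] - b[2] };
}

static double[] normalize(double[] a) {
    double len = Math.sqrt(a[0] * a[0] + a[1] * a[1] + a[2] * a[2]);
    return new double[] { a[0] / len, a[1] / len, a[2] / len };
}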
A measure that should be symmetric with respect to point reorderings is:
((u x v).w)^2/(|u||v||w||u-v||u-w||v-w|)
which is proportional to the volume of the tetrahedron squared divided by the product of all 6 edge lengths. It is not simpler than your formula or Alexandre C.'s, but it is not much more complicated. However, it does become unnecessarily singular when any two points coincide.
A better-behaved, order-insensitive formula is:
let a = u x v
b = v x w
c = w x u
(a.w)^2/(|a| + |b| + |c| + |a+b+c|)^3
which is something like the volume of the tetrahedron divided by the surface area, but raised to appropriate powers to make the whole thing scale-insensitive. This is also a bit more complex than your formula, but it works unless all 4 points are collinear.
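For comparison, a self-contained Java sketch of this last formula (the function and helper names are mine):

static double coplanarityBounded(double[] p1, double[] p2, double[] p3, double[] p4) {
    double[] u = { p2[0] - p1[0], p2[1] - p1[1], p2[2] - p1[2] };
    double[] v = { p3[0] - p1[0], p3[1] - p1[1], p3[2] - p1[2] };
    double[] w = { p4[0] - p1[0], p4[1] - p1[1], p4[2] - p1[2] };
    double[] a = cross(u, v);
    double[] b = cross(v, w);
    double[] c = cross(w, u);
    double[] s = { a[0] + b[0] + c[0], a[1] + b[1] + c[1], a[2] + b[2] + c[2] };
    double num = a[0] * w[0] + a[1] * w[1] + a[2] * w[2];     // (u x v) . w
    double den = norm(a) + norm(b) + norm(c) + norm(s);       // |a| + |b| + |c| + |a+b+c|
    return (num * num) / (den * den * den);                   // (a.w)^2 / (...)^3
}

static double[] cross(double[] a, double[] b) {
    return new double[] { a[1] * b[2] - a[2] * b[1],
                          a[2] * b[0] - a[0] * b[2],
                          a[0] * b[1] - a[1] * b[0] };
}

static double norm(double[] a) {
    return Math.sqrt(a[0] * a[0] + a[1] * a[1] + a[2] * a[2]);
}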
How about
|(u x v) * w| / |u|^3
(and you can change |x| to (x)^2 if you think it's simpler).
