SICP: Why does this recursion-based sine approximation work?

Here is the question and solution to Structure and Interpretation of Computer Programs' exercise 1.15 (see here). My problem is, I don't know how the combination of these formulae actually works:

sin(x) ≈ x

and

sin(x) = 3 sin(x/3) - 4 sin^3(x/3)

for small x radian values.
I understand the idea that the closer the radian angle gets to zero, the more closely it approximates the sine of that angle. I've seen excellent explanations (MIT OCW, Khan Academy). I also have worked out how the
sin(x) = 3 sin(x/3) - 4 sin^3(x/3)
formula is derived. But how are they being used together to derive an answer to sin(x)? The p function seems to simply be taking the variable angle divided by 3 each recursive pass until angle is down below 0.1. Then, on the way back, we perform p as many times as we had to divide by 3. So it seems
3x - 4x^3
magically becomes the same as
3 sin(x/3) - 4 sin^3(x/3)
through recursive application. How? I'm not very deeply versed in recursion theory. Also, if this is logarithmically getting closer to 0.1, it's not as if we're totaling up lots of small x's a la integration. This seems to be doing something vaguely like the Y-combinator -- which I also don't grasp that well yet.
Also, when we see the recursive steps repeatedly dividing angle by 3, what tells you definitively that this is logarithmic? I mean, it looks like it's taking giant order-of-magnitude leaps at each division, but is there another analytical way to show that this is a logarithmic reduction?

The first thing to point out is that sin(x) = x is not exactly accurate, since x is just an approximation. The correct notation is sin(x) ≈ x. This might seem a little nitpicky, but it's important because it explains the exercise and the definition of sine given in the book.
The way sin(x) ≈ x and sin(x) = 3 sin(x/3) - 4 sin^3(x/3) are combined is in the definition of the sine procedure. The idea is that we would like to return either the approximation x or the second formula (3 sin(x/3) - 4 sin^3(x/3)), depending on the value of x.
If x is "sufficiently small", then we just return x as an approximation for sin(x). But if it's not "sufficiently small", we will use sin(x) = 3 sin(x/3) - 4 sin^3(x/3). This is obviously fine since it's an equality. It might seem unnecessary until you notice that x/3 is smaller than x, and therefore it might be "sufficiently small". This is why the procedure is recursive: we keep doing this until the argument to sine is "sufficiently small".
It seems that the source of your confusion is here:
So it seems 3x - 4x^3 magically becomes the same as 3 sin(x/3) - 4 sin^3(x/3).
This is not the case. It's a bit tricky, since (define (p x) (- (* 3 x) (* 4 (cube x)))) doesn't include any sine, but remember that the x in this definition is just a local variable. If we look at the final line of the definition of the sine procedure, we can see that we are actually calling (p (sine (/ angle 3.0))), so the sine is in the argument of the p call.
The reason why the recursion is logarithmic in terms of the number of steps is that the number of times we will be calling the p procedure is the number of times we have to divide the angle by 3.0 to get a value smaller than 0.1, which can be many divisions if the angle is a big number. So we will have to call p until angle/(3.0^n) < 0.1, which is approximately the n such that 3.0^n > angle, i.e. n ≈ log_3(angle). That's why the number of steps grows logarithmically with the size of the angle.
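To make the recursion concrete, here is a minimal Python sketch of the book's procedure (a direct translation of the Scheme, with a call counter that I added purely to show how many times p runs; it is not part of the exercise):

calls = 0

def cube(x):
    return x * x * x

def p(x):
    # One application of the identity: if x happens to be sin(a/3), this returns sin(a).
    global calls
    calls += 1
    return 3 * x - 4 * cube(x)

def sine(angle):
    # Base case: for |angle| <= 0.1, sin(angle) is approximated by angle itself.
    if abs(angle) <= 0.1:
        return angle
    # Recursive case: sin(angle) = 3*sin(angle/3) - 4*sin(angle/3)^3,
    # i.e. p applied to the sine of a three-times-smaller angle.
    return p(sine(angle / 3.0))

print(sine(12.15), calls)   # p is applied 5 times, because 12.15 / 3^5 < 0.1

Every application of p undoes one of the divisions by 3, which is why the plain polynomial 3x - 4x^3 ends up producing sin(angle): its argument is always a sine value, never the raw angle.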


Big O confusion: log2(N) vs log3(N)

Why is O(log2N) = O(log3N) ?
I don't understand this. Does big O not mean upper bound of something?
Isn't log2N bigger than log3N? When I graph them, log2N is above log3N.
Big O doesn't deal with constant factors, and the difference between Logx(n) and Logy(n) is a constant factor.
To put it a little differently, the base of the logarithm basically just modifies the slope of a line/curve on the graph. Big-O isn't concerned with the slope of the curve on the graph, only with the shape of the curve. If you can get one curve to match another by shifting its slope up or down, then as far as Big-O notation cares, they're the same function and the same curve.
To try to put this in perspective, perhaps a drawing of some of the more common curve shapes would be useful:
As noted above, only the shape of a line matters though, not its slope. In the following figure:
...all the lines are straight, so even though their slopes differ radically, they're still all identical as far as big-O cares--they're all just O(N), regardless of the slope. With logarithms we get roughly the same effect--each line will be curved like the O(log N) line in the previous picture, but changing the base of the logarithm will rotate that curve around the origin, so you'll (again) have the same shape of line, but at different slopes (so, again, as far as big-O cares, they're all identical). So, getting to the original question, if we change bases of logarithms, we get curves that look something like this:
Here it may be a little less obvious that all that's happening is a constant change in the slope, but that's exactly the difference here, just like with the straight lines above.
It is because changing the base of a logarithm is the same as multiplying it by a constant. And big O does not care about constants.
log_a(b) = log_c(b) / log_c(a)
So to get from log3(n) to log2(n) you need to multiply it by 1 / log3(2).
In other words log2(n) = log3(n) / log3(2).
log3(2) is a constant and O(cn) = O(n), thus O (log2(n)) = O (log3(n))
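A quick numeric check (my own sketch in Python, not part of the original answer) makes the constant ratio visible:

import math

for n in (10, 1000, 1000000):
    ratio = math.log2(n) / math.log(n, 3)
    print(n, ratio)   # the ratio is always log2(3), about 1.585, independent of n

Since the two functions differ only by that fixed factor, big O treats them as the same growth rate.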
There are some good answers here already, so please read them too.
To understand why Log2(n) is O(log3(n)) you need to understand two things.
1) What is meant by Big O notation. I suggest reading this: http://en.wikipedia.org/wiki/Big_O_notation If you understand this, you will know that 2n and 16n+5 are both O(N).
2) How logarithms work. The difference between log2(N) and log10(N) is a simple constant ratio, easily calculated if you want it, as per luk32's answer.
Since logs at different bases differ only by a constant ratio, and Big O is indifferent to minor things like constant multiplying factors, you will often find O(logN) actually omits the base, because the choice of any constant base (e.g. 2, 3, 10, e) makes no difference in this context.
It depends on the context in which O notation is used. When you are using it in algorithmic complexity reasoning you are interested in the asymptotic behaviour of a function, ie how it grows/decreases when it tends to (plus or minus) infinity (or another point of accumulation).
Therefore, whereas f(n) = 3n is always less than g(n) = 1000n, they both appear in O(n) since they grow linearly (according to their expressions) asymptotically.
The same reasoning applies to the logarithm case that you posted, since logarithms with different bases differ by a constant factor but share the same asymptotic behaviour.
Changing context, if you were interested in computing the exact performance of an algorithm, given that your estimates are exact and not approximate, you would of course prefer the lower one. In general, though, all computational complexity comparisons are approximations and are thus done via asymptotic reasoning.

How do I efficiently find the maximum value in an array containing values of a smooth function?

I have a function that takes a floating point number and returns a floating point number. It can be assumed that if you were to graph the output of this function it would be 'n' shaped, i.e. there would be a single maximum point, and no other points on the function with a zero slope. We also know that the input value that yields this maximum output will lie between two known points, perhaps 0.0 and 1.0.
I need to efficiently find the input value that yields the maximum output value to some degree of approximation, without doing an exhaustive search.
I'm looking for something similar to Newton's Method which finds the roots of a function, but since my function is opaque I can't get its derivative.
I would like to down-thumb all the other answers so far, for various reasons, but I won't.
An excellent and efficient method for minimizing (or maximizing) smooth functions when derivatives are not available is parabolic interpolation. It is common to write the algorithm so it temporarily switches to the golden-section search (Brent's minimizer) when parabolic interpolation does not progress as fast as golden-section would.
I wrote such an algorithm in C++. Any offers?
UPDATE: There is a C version of the Brent minimizer in GSL. The archives are here: ftp://ftp.club.cc.cmu.edu/gnu/gsl/ Note that it will be covered by some flavor of GNU "copyleft."
As I write this, the latest-and-greatest appears to be gsl-1.14.tar.gz. The minimizer is located in the file gsl-1.14/min/brent.c. It appears to have termination criteria similar to what I implemented. I have not studied how it decides to switch to golden section, but for the OP, that is probably moot.
UPDATE 2: I googled up a public domain Java version, translated from FORTRAN. I cannot vouch for its quality. http://www1.fpl.fs.fed.us/Fmin.java I notice that the hard-coded machine epsilon ("machine precision" in the comments) is 1/2 the value for a typical PC today. Change the value of eps to 2.22045e-16.
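For reference, here is a minimal golden-section search in Python -- my own sketch of the fallback method mentioned above, not the GSL/Brent code (Brent's method layers parabolic interpolation on top of something like this):

import math

def golden_section_max(f, a, b, tol=1e-8):
    # Locate the maximum of a unimodal f on [a, b] by golden-section search.
    # For clarity this re-evaluates f at both interior points each pass;
    # a production version would reuse one of the two evaluations.
    invphi = (math.sqrt(5) - 1) / 2          # 1/phi, about 0.618
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while (b - a) > tol:
        if f(c) > f(d):                      # the maximum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                                # the maximum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2

print(golden_section_max(lambda x: -x * (x - 1.0), 0.0, 1.0))   # ~0.5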
Edit 2: The method described in Jive Dadson's answer is a better way to go about this. I'm leaving my answer up since it's easier to implement, if speed isn't too much of an issue.
Use a form of binary search, combined with numeric derivative approximations.
Given the interval [a, b], let x = (a + b) /2
Let epsilon be something very small.
Is (f(x + epsilon) - f(x)) positive? If yes, the function is still growing at x, so you recursively search the interval [x, b]
Otherwise, search the interval [a, x].
There might be a problem if the max lies between x and x + epsilon, but you might give this a try.
Edit: The advantage of this approach is that it exploits the known properties of the function in question. That is, I assumed that by "n"-shaped you meant increasing-max-decreasing. Here's some Python code I wrote to test the algorithm:
def f(x):
    return -x * (x - 1.0)

def findMax(function, a, b, maxSlope):
    x = (a + b) / 2.0
    e = 0.0001
    slope = (function(x + e) - function(x)) / e
    if abs(slope) < maxSlope:
        return x
    if slope > 0:
        return findMax(function, x, b, maxSlope)
    else:
        return findMax(function, a, x, maxSlope)
Typing findMax(f, 0, 3, 0.01) should return 0.504, as desired.
For optimizing a concave function, which is the type of function you are talking about, without evaluating the derivative I would use the secant method.
Given the two initial values x[0]=0.0 and x[1]=1.0 I would proceed to compute the next approximations as:
def next_x(x, xprev):
    return x - f(x) * (x - xprev) / (f(x) - f(xprev))
and thus compute x[2], x[3], ... until the change in x becomes small enough.
Edit: As Jive explains, this solution is for root finding which is not the question posed. For optimization the proper solution is the Brent minimizer as explained in his answer.
The Levenberg-Marquardt algorithm is a Newton's-method-like optimizer. It has a C/C++ implementation, levmar, that doesn't require you to define the derivative function. Instead it will evaluate the objective function in the current neighborhood to move to the maximum.
BTW: this website appears to have been updated since I last visited it; I hope it's even the same one I remembered. Apparently it now also supports other languages.
Given that it's only a function of a single variable and has one extremum in the interval, you don't really need Newton's method. Some sort of line search algorithm should suffice. This wikipedia article is actually not a bad starting point, if short on details. Note in particular that you could just use the method described under "direct search", starting with the end points of your interval as your two points.
I'm not sure if you'd consider that an "exhaustive search", but it should actually be pretty fast I think for this sort of function (that is, a continuous, smooth function with only one local extremum in the given interval).
You could reduce it to a simple linear fit on the delta's, finding the place where it crosses the x axis. Linear fit can be done very quickly.
Or just take 3 points (left/top/right) and fit the parabola (sketched below).
It depends mostly on the nature of the underlying relation between x and y, I think.
Edit: this is in case you have an array of values, as the question's title states. When you have a function, use something like Newton-Raphson.
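As a sketch of the three-point idea above (my own illustration, not code from that answer): fit a parabola through (x0, y0), (x1, y1), (x2, y2) and take the x of its vertex. This is the same step that successive parabolic interpolation (and Brent's minimizer) repeats.

def parabola_vertex_x(x0, y0, x1, y1, x2, y2):
    # x-coordinate of the vertex of the parabola through the three points
    # (assumes the points are not collinear, otherwise the denominator is zero).
    d1 = (x1 - x0) * (y1 - y2)
    d2 = (x1 - x2) * (y1 - y0)
    return x1 - 0.5 * ((x1 - x0) * d1 - (x1 - x2) * d2) / (d1 - d2)

print(parabola_vertex_x(0.0, 0.0, 0.25, 0.1875, 1.0, 0.0))   # 0.5 for y = -x*(x - 1)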

Function for returning a list of points on a Bezier curve at equal arclength

Someone somewhere has had to solve this problem. I can find many a great website explaining this problem and how to solve it. While I'm sure they are well written and make sense to math whizzes, that isn't me. And while I might understand in a vague sort of way, I do not understand how to turn that math into a function that I can use.
So I beg of you, if you have a function that can do this, in any language, (sure even fortran or heck 6502 assembler) - please help me out.
I'd prefer an analytical to an iterative solution.
EDIT: Meant to specify that it's a cubic Bezier I'm trying to work with.
What you're asking for is the inverse of the arc length function. So, given a curve B, you want a function Linv(len) that returns a t between 0 and 1 such that the arc length of the curve between 0 and t is len.
If you had this function your problem is really easy to solve. Let B(0) be the first point. To find the next point, you'd simply compute B(Linv(w)) where w is the "equal arclength" that you refer to. To get the next point, just evaluate B(Linv(2*w)) and so on, until Linv(n*w) becomes greater than 1.
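There is no closed form for Linv on a cubic Bezier, but a numeric version is easy to sketch. The following Python is my own illustration (names and tolerances are arbitrary): approximate the arc length by summing short chords, then invert it with bisection, and step along the curve in equal arc-length increments.

def bezier_point(p0, p1, p2, p3, t):
    # Evaluate a cubic Bezier (2D points as (x, y) tuples) at parameter t.
    mt = 1.0 - t
    return tuple(mt**3 * a + 3 * mt**2 * t * b + 3 * mt * t**2 * c + t**3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))

def arc_length(curve, t, samples=256):
    # Length of curve(0..t), approximated by summing small chords.
    total, prev = 0.0, curve(0.0)
    for i in range(1, samples + 1):
        pt = curve(t * i / samples)
        total += ((pt[0] - prev[0])**2 + (pt[1] - prev[1])**2) ** 0.5
        prev = pt
    return total

def linv(curve, length, tol=1e-6):
    # Bisection: find t in [0, 1] whose arc length from 0 equals the given length.
    # Arc length is monotone in t, so bisection is safe (if slow: it recomputes
    # arc_length from scratch on every step).
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if arc_length(curve, mid) < length:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Equally spaced points: B(Linv(k*w)) for k = 0, 1, 2, ...
curve = lambda t: bezier_point((0, 0), (0, 1), (1, 1), (1, 0), t)
total = arc_length(curve, 1.0)
w = total / 10.0
points = [curve(linv(curve, k * w)) for k in range(11)]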
I've had to deal with this problem recently. I've come up with, or come across a few solutions, none of which are satisfactory to me (but maybe they will be for you).
Now, this is a bit complicated, so let me just give you the link to the source code first:
http://icedtea.classpath.org/~dlila/webrevs/perfWebrev/webrev/raw_files/new/src/share/classes/sun/java2d/pisces/Dasher.java. What you want is in the LengthIterator class. You shouldn't have to look at any other parts of the file. There are a bunch of methods that are defined in another file. To get to them just cut out everything from /raw_files/ to the end of the URL. This is how you use it. Initialize the object on a curve. Then to get the parameter of a point with arc length L from the beginning of the curve just call next(L) (to get the actual point just evaluate your curve at this parameter, using deCasteljau's algorithm, or zneak's suggestion). Every subsequent call of next(x) moves you a distance of x along the curve compared to your last position. next returns a negative number when you run out of curve.
Explanation of the code: so, I needed a t value such that B(0) to B(t) would have length LEN (where LEN is known). I simply flattened the curve: subdivide the curve recursively until each piece is close enough to a line (you can test for this by comparing the length of the control polygon to the length of the line joining the end points). You can compute the length of such a sub-curve as (controlPolyLength + endPointsSegmentLen)/2. Add all these lengths to an accumulator, and stop the recursion when the accumulator value is >= LEN.
Now, call the last subcurve C and let [t0, t1] be its domain. You know that the t you want satisfies t0 <= t < t1, and you know the length from B(0) to B(t0) - call this value L0t0. So now you need to find a t such that C(0) to C(t) has length LEN-L0t0. This is exactly the problem we started with, but on a smaller scale. We could use recursion, but that would be horribly slow, so instead we just use the fact that C is a very flat curve. We pretend C is a line and compute the point at t using P=C(0)+((LEN-L0t0)/length(C))*(C(1)-C(0)). This point doesn't actually lie on the curve because it is on the line C(0)->C(1), but it's very close to the point we want. So we just solve Bx(t)=Px and By(t)=Py. This is just finding cubic roots, which has a closed-form solution, but I just used Newton's method. Now we have the t we want, and we can just compute C(t), which is the actual point.
I should mention that a few months ago I skimmed through a paper that had another solution to this that found an approximation to the natural parameterization of the curve. The author has posted a link to it here: Equidistant points across Bezier curves

Approximating nonparametric cubic Bezier

What is the best way to approximate a cubic Bezier curve? Ideally I would want a function y(x) which would give the exact y value for any given x, but this would involve solving a cubic equation for every x value, which is too slow for my needs, and there may be numerical stability issues as well with this approach.
Would this be a good solution?
Just solve the cubic.
If you're talking about Bezier plane curves, where x(t) and y(t) are cubic polynomials, then y(x) might be undefined or have multiple values. An extreme degenerate case would be the line x= 1.0, which can be expressed as a cubic Bezier (control point 2 is the same as end point 1; control point 3 is the same as end point 4). In that case, y(x) has no solutions for x != 1.0, and infinite solutions for x == 1.0.
A method of recursive subdivision will work, but I would expect it to be much slower than just solving the cubic. (Unless you're working with some sort of embedded processor with unusually poor floating-point capacity.)
You should have no trouble finding code that solves a cubic and that has already been thoroughly tested and debugged. If you implement your own solution using recursive subdivision, you won't have that advantage.
Finally, yes, there may be numerical stability problems, like when the point you want is near a tangent, but a subdivision method won't make those go away. It will just make them less obvious.
EDIT: responding to your comment, but I need more than 300 characters.
I'm only dealing with bezier curves where y(x) has only one (real) root. Regarding numerical stability, using the formula from http://en.wikipedia.org/wiki/Cubic_equation#Summary, it would appear that there might be problems if u is very small. – jtxx000
The wackypedia article is math with no code. I suspect you can find some cookbook code that's more ready-to-use somewhere, maybe Numerical Recipes or the ACM collected algorithms.
To your specific question, and using the same notation as the article, u is only zero or near zero when p is also zero or near zero. They're related by the equation:
u^6 + q*u^3 == p^3 / 27
Near zero, you can use the approximation:
q*u^3 == p^3 / 27
or p / (3u) == cube root of q
So the computation of x from u should contain something like:
(fabs(u) >= somesmallvalue) ? (p / u / 3.0) : cuberoot (q)
How "near" zero is near? Depends on how much accuracy you need. You could spend some quality time with Maple or Matlab looking at how much error is introduced for what magnitudes of u. Of course, only you know how much accuracy you need.
The article gives 3 formulas for u, for the 3 roots of the cubic. Given the three u values, you can get the 3 corresponding x values. The 3 values for u and x are all complex numbers with an imaginary component. If you're sure that there has to be only one real solution, then you expect one of the roots to have a zero imaginary component, and the other two to be complex conjugates. It looks like you have to compute all three and then pick the real one. (Note that a complex u can correspond to a real x!)
However, there's another numerical stability problem there: floating-point arithmetic being what it is, the imaginary component of the real solution will not be exactly zero, and the imaginary components of the non-real roots can be arbitrarily close to zero. So numeric round-off can result in you picking the wrong root. It would be helpful if there's some sanity check from your application that you could apply there.
If you do pick the right root, one or more iterations of Newton-Raphson can improve its accuracy a lot.
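For what it's worth, here is a small Python sketch of the "just solve the cubic" route -- my own illustration, leaning on numpy.roots instead of hand-coding Cardano's formula, which sidesteps most of the root-picking subtleties discussed above. The control-point layout and tolerances are my assumptions.

import numpy as np

def bezier_y_of_x(px, py, x):
    # px, py: the four control-point x and y coordinates of a cubic Bezier.
    # Assumes the curve is a function of x with exactly one root t in [0, 1].
    x0, x1, x2, x3 = px
    # Coefficients (highest degree first) of the cubic x(t) - x = 0.
    coeffs = [-x0 + 3*x1 - 3*x2 + x3,
              3*x0 - 6*x1 + 3*x2,
              -3*x0 + 3*x1,
              x0 - x]
    roots = np.roots(coeffs)
    # Keep the (numerically) real root that lies in [0, 1].
    t = next(r.real for r in roots
             if abs(r.imag) < 1e-9 and -1e-9 <= r.real <= 1 + 1e-9)
    y0, y1, y2, y3 = py
    mt = 1.0 - t
    return mt**3*y0 + 3*mt**2*t*y1 + 3*mt*t**2*y2 + t**3*y3

print(bezier_y_of_x((0.0, 0.3, 0.7, 1.0), (0.0, 0.0, 1.0, 1.0), 0.5))   # 0.5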
Yes, the de Casteljau algorithm would work for you. However, I don't know if it will be faster than solving the cubic equation by Cardano's method.

As a programmer how would you explain imaginary numbers?

As a programmer I think it is my job to be good at math, but I am having trouble getting my head around imaginary numbers. I have tried google and wikipedia with no luck, so I am hoping a programmer can explain it to me: give me an example of a number squared that is <= 0, some example usage, etc...
I guess this blog entry is one good explanation:
The key word is rotation (as opposed to direction for negative numbers, which are just as strange as imaginary numbers when you think about them: less than nothing?)
Like negative numbers modeling flipping, imaginary numbers can model anything that rotates between two dimensions “X” and “Y”. Or anything with a cyclic, circular relationship
Problem: not only am I a programmer, I am a mathematician.
Solution: plow ahead anyway.
There's nothing really magical to complex numbers. The idea behind their inception is that there's something wrong with real numbers. If you've got the polynomial x^2 + 4, it is never zero, whereas x^2 - 2 is zero twice. So mathematicians got really angry and wanted there to always be zeroes for polynomials of degree at least one (they wanted an "algebraically closed" field), and created some arbitrary number j such that j = sqrt(-1). All the rules sort of fall into place from there (though they are more accurately reorganized differently -- specifically, you formally can't actually say "hey, this number is the square root of negative one"). If there's that number j, you can get multiples of j. And you can add real numbers to j, so then you've got complex numbers. The operations with complex numbers are similar to operations with binomials (deliberately so).
The real problem with complexes isn't in all this, but in the fact that you can't define a system whereby you can get the ordinary rules for less-than and greater-than. So really, you get to where you don't define it at all. It doesn't make sense in a two-dimensional space. So in all honesty, I can't actually answer "give me an example of a number squared that is <= 0", though "j" makes sense if you treat its square as a real number instead of a complex number.
As for uses, well, I personally used them most when working with fractals. The idea behind the mandelbrot fractal is that it's a way of graphing z = z^2 + c and its divergence along the real-imaginary axes.
You might also ask why do negative numbers exist? They exist because you want to represent solutions to certain equations like: x + 5 = 0. The same thing applies for imaginary numbers, you want to compactly represent solutions to equations of the form: x^2 + 1 = 0.
Here's one way I've seen them being used in practice. In EE you are often dealing with functions that are sine waves, or that can be decomposed into sine waves. (See for example Fourier Series).
Therefore, you will often see solutions to equations of the form:
f(t) = A*cos(wt)
Furthermore, often you want to represent functions that are shifted by some phase from this function. A 90 degree phase shift will give you a sin function.
g(t) = B*sin(wt)
You can get any arbitrary phase shift by combining these two functions (called inphase and quadrature components).
h(t) = A*cos(wt) + i*B*sin(wt)
The key here is that in a linear system: if f(t) and g(t) solve an equation, h(t) will also solve the same equation. So, now we have a generic solution to the equation h(t).
The nice thing about h(t) is that it can be written compactly as
h(t) = C*exp(i*(wt + theta))
Using the fact that exp(iw) = cos(w)+i*sin(w).
There is really nothing extraordinarily deep about any of this. It is merely exploiting a mathematical identity to compactly represent a common solution to a wide variety of equations.
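A quick Python illustration of that identity (my own sketch, using the standard cmath module; the particular numbers are arbitrary):

import cmath, math

w, t = 2 * math.pi * 50, 0.003        # angular frequency and a sample time
A, theta = 2.0, math.pi / 3           # amplitude and phase shift

# Phasor form: one complex exponential carries amplitude and phase together.
h = A * cmath.exp(1j * (w * t + theta))

# Its real part is the ordinary shifted cosine A*cos(wt + theta).
print(h.real, A * math.cos(w * t + theta))   # the two printed values agree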
Well, for the programmer:
class complex {
public:
    double real;
    double imaginary;

    complex(double a_real) : real(a_real), imaginary(0.0) { }
    complex(double a_real, double a_imaginary) : real(a_real), imaginary(a_imaginary) { }

    complex operator+(const complex &other) {
        return complex(
            real + other.real,
            imaginary + other.imaginary);
    }

    complex operator*(const complex &other) {
        return complex(
            real*other.real - imaginary*other.imaginary,
            real*other.imaginary + imaginary*other.real);
    }

    bool operator==(const complex &other) {
        return (real == other.real) && (imaginary == other.imaginary);
    }
};
That's basically all there is. Complex numbers are just pairs of real numbers, for which special overloads of +, * and == get defined. And these operations really just get defined like this. Then it turns out that these pairs of numbers with these operations fit in nicely with the rest of mathematics, so they get a special name.
They are not so much numbers like in "counting", but more like in "can be manipulated with +, -, *, ... and don't cause problems when mixed with 'conventional' numbers". They are important because they fill the holes left by real numbers, like that there's no number that has a square of -1. Now you have complex(0, 1) * complex(0, 1) == -1.0 which is a helpful notation, since you don't have to treat negative numbers specially anymore in these cases. (And, as it turns out, basically all other special cases are not needed anymore, when you use complex numbers)
If the question is "Do imaginary numbers exist?" or "How do imaginary numbers exist?" then it is not a question for a programmer. It might not even be a question for a mathematician, but rather a metaphysician or philosopher of mathematics, although a mathematician may feel the need to justify their existence in the field. It's useful to begin with a discussion of how numbers exist at all (quite a few mathematicians who have approached this question are Platonists, fyi). Some insist that imaginary numbers (as the early Whitehead did) are a practical convenience. But then, if imaginary numbers are merely a practical convenience, what does that say about mathematics? You can't just explain away imaginary numbers as a mere practical tool or a pair of real numbers without having to account for both pairs and the general consequences of them being "practical". Others insist in the existence of imaginary numbers, arguing that their non-existence would undermine physical theories that make heavy use of them (QM is knee-deep in complex Hilbert spaces). The problem is beyond the scope of this website, I believe.
If your question is much more down to earth e.g. how does one express imaginary numbers in software, then the answer above (a pair of reals, along with defined operations of them) is it.
I don't want to turn this site into math overflow, but for those who are interested: Check out "An Imaginary Tale: The Story of sqrt(-1)" by Paul J. Nahin. It talks about all the history and various applications of imaginary numbers in a fun and exciting way. That book is what made me decide to pursue a degree in mathematics when I read it 7 years ago (and I was thinking art). Great read!!
The main point is that you add numbers which you define to be solutions to quadratic equations like x^2 = -1. Name one solution to that equation i; the computation rules for i then follow from that equation.
This is similar to defining negative numbers as the solution of equations like 2 + x = 1 when you only knew positive numbers, or fractions as solutions to equations like 2x = 1 when you only knew integers.
It might be easiest to stop trying to understand how a number can be a square root of a negative number, and just carry on with the assumption that it is.
So (using the i as the square root of -1):
(3+5i)*(2-i)
= (3+5i)*2 + (3+5i)*(-i)
= 6 + 10i -3i - 5i * i
= 6 + (10 -3)*i - 5 * (-1)
= 6 + 7i + 5
= 11 + 7i
works according to the standard rules of maths (remembering that i squared equals -1 on line four).
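You can check the arithmetic with Python's built-in complex literals, where j plays the role of i:

print((3 + 5j) * (2 - 1j))   # (11+7j), matching the hand calculation above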
An imaginary number is a real number multiplied by the imaginary unit i. i is defined as:
i == sqrt(-1)
So:
i * i == -1
Using this definition you can obtain the square root of a negative number like this:
sqrt(-3)
== sqrt(3 * -1)
== sqrt(3 * i * i) // Replace '-1' with 'i squared'
== sqrt(3) * i // Square root of 'i squared' is 'i' so move it out of sqrt()
And your final answer is the real number sqrt(3) multiplied by the imaginary unit i.
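Python's cmath module gives the same result (a quick check of my own, not part of the answer):

import cmath, math

print(cmath.sqrt(-3))   # 1.7320508075688772j
print(math.sqrt(3))     # 1.7320508075688772, so sqrt(-3) == sqrt(3) * i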
A short answer: Real numbers are one-dimensional, imaginary numbers add a second dimension to the equation and some weird stuff happens if you multiply...
If you're interested in finding a simple application and if you're familiar with matrices,
it's sometimes useful to use complex numbers to transform a perfectly real matrix into a triangular one in the complex space, and it makes computation on it a bit easier.
The result is of course perfectly real.
Great answers so far (really like Devin's!)
One more point:
One of the first uses of complex numbers (although they were not called that way at the time) was as an intermediate step in solving equations of the 3rd degree.
link
Again, this is purely an instrument that is used to answer real problems with real numbers having physical meaning.
In electrical engineering, the impedance Z of an inductor is jwL, where w = 2*pi*f (f is the frequency) and j = sqrt(-1) indicates that it leads by 90 degrees, while for a capacitor Z = 1/(jwC) = -j/(wC), i.e. a magnitude of 1/(wC) at -90 degrees, so it lags a simple resistor by 90 degrees.
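A small Python sketch of those impedances (the component values are made up for illustration):

import cmath, math

f = 50.0                        # Hz
w = 2 * math.pi * f
R, L, C = 10.0, 0.1, 1e-4       # ohms, henries, farads (arbitrary example values)

Z_R = complex(R)                # resistor: 0 degrees
Z_L = 1j * w * L                # inductor: +90 degrees (leads)
Z_C = 1 / (1j * w * C)          # capacitor: -90 degrees (lags)

for name, Z in (("R", Z_R), ("L", Z_L), ("C", Z_C)):
    print(name, abs(Z), math.degrees(cmath.phase(Z)))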
