Related
This is a continuation of the two questions posted here:
Declaring a functional recursive sequence in Matlab
Nesting a specific recursion in Pari-GP
To make a long story short, I've constructed a family of functions which solve the tetration functional equation. I've proven these things are holomorphic. And now it's time to make the graphs, or at least, to write somewhat passable code to evaluate these things. I've managed to get to about 13 significant digits in my precision, but if I try to get more, I encounter a specific error. That error is really nothing more than an overflow error. But it's a peculiar overflow error; Pari-GP doesn't seem to like nesting the logarithm.
My particular mathematical function is approximated by taking something large (think of the order e^e^e^e^e^e^e) to produce something small (of the order e^(-n)). The math inherently requires samples of large values to produce these small values. And strangely, as the numerical approximation improves (to about 13 significant digits or so), we also get closer to overflowing, because we need such large values to get those 13 significant digits. I am a god-awful programmer, and I'm wondering if there could be some workaround I'm not seeing.
/*
This function constructs the approximate Abel function
The variable z is the main variable we care about; values of z where real(z) > 3 almost surely produce overflow errors
The variable l is the multiplier of the approximate Abel function
The variable n is the depth of iteration required
n can be set to 100, but about 15 already produces enough accuracy
The functional equation this satisfies is exp(beta_function(z,l,n))/(1+exp(-l*z)) = beta_function(z+1,l,n); this program approaches the solution as n tends to infinity
*/
beta_function(z,l,n) =
{
    my(out = 0);
    for(i=0,n-1,
        out = exp(out)/(exp(l*(n-i-z)) + 1));
    out;
}
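(Purely for illustration, here is a rough Python transliteration of beta_function using mpmath for arbitrary precision; mpmath and the particular check below are my own additions and are not part of the original GP session.)

import mpmath

mpmath.mp.dps = 30  # working precision, in decimal digits

def beta_function(z, l, n):
    out = mpmath.mpf(0)
    for i in range(n):
        out = mpmath.exp(out) / (mpmath.exp(l * (n - i - z)) + 1)
    return out

# Rough check of the functional equation exp(beta(z))/(1+exp(-l*z)) = beta(z+1):
z, l, n = mpmath.mpf(1), mpmath.log(2), 100
print(mpmath.exp(beta_function(z, l, n)) / (1 + mpmath.exp(-l * z)))
print(beta_function(z + 1, l, n))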
/*
This function is the error term between the approximate Abel function and the actual Abel function
The variable z is the main variable we care about
The variable l is the multiplier
The variable n is the depth of iteration inherited from beta_function
The variable k is the new depth of iteration for this function
n can still be set to about 100, but 15 or 20 is more practical.
Setting the variable k above 10 will usually produce overflow errors unless the complex arguments of l and z are large.
Precision of about 10 digits is achieved at k = 5 or 6 for real z; for complex z, less precision is achieved. k should be set to large values for complex z and l with large imaginary arguments.
*/
tau_K(z,l,n,k)={
    if(k == 1,
        -log(1+exp(-l*z)),
        log(1 + tau_K(z+1,l,n,k-1)/beta_function(z+1,l,n)) - log(1+exp(-l*z))
    )
}
/*
This is the actual Abel function
The variable z is the main variable we care about
The variable l is the multiplier
The variable n is the depth of iteration inherited from beta_function
The variable k is the depth of iteration inherited from tau_K
The functional equation this satisfies is exp(Abl_L(z,l,n,k)) = Abl_L(z+1,l,n,k); this function approaches that solution as n and k tend to infinity
*/
Abl_L(z,l,n,k) ={
    beta_function(z,l,n) + tau_K(z,l,n,k);
}
This is the code for approximating the functions I've proven are holomorphic; but sadly, my code is just horrible. Attached is some expected output, where you can see the functional equation being satisfied to about 10 - 13 significant digits.
Abl_L(1,log(2),100,5)
%52 = 0.1520155156321416705967746811
exp(Abl_L(0,log(2),100,5))
%53 = 0.1520155156321485241351294757
Abl_L(1+I,0.3 + 0.3*I,100,14)
%59 = 0.3353395055605129001249035662 + 1.113155080425616717814647305*I
exp(Abl_L(0+I,0.3 + 0.3*I,100,14))
%61 = 0.3353395055605136611147422467 + 1.113155080425614418399986325*I
Abl_L(0.5+5*I, 0.2+3*I,100,60)
%68 = -0.2622549204469267170737985296 + 1.453935357725113433325798650*I
exp(Abl_L(-0.5+5*I, 0.2+3*I,100,60))
%69 = -0.2622549205108654273925182635 + 1.453935357685525635276573253*I
Now, you'll notice I have to change the k value for different inputs. When the arguments z,l are further away from the real axis, we can make k very large (and we have to in order to get good accuracy), but it'll still overflow eventually; typically, once we've achieved about 13-15 significant digits is when the functions start to blow up. You'll note that setting k = 60 means we're taking 60 nested logarithms. This already sounds like a bad idea, lol. Mathematically though, the value Abl_L(z,l,infinity,infinity) is precisely the function I want. I know that must be odd; nested infinite for-loops sounds like nonsense, lol.
I'm wondering if anyone can think of a way to avoid these overflow errors and obtain a higher degree of accuracy. In a perfect world, this object most definitely converges, and this code is flawless (albeit, it may be a little slow); but we'd probably need to increase the stacksize indefinitely. In theory this is perfectly fine; but in reality, it's more than impractical. Is there any way, as a programmer, one can work around this?
The only other option I have at this point is to try and create a bruteforce algorithm to discover the Taylor series of this function; but I'm having less than no luck at doing this. The process is very unique, and trying to solve this problem using Taylor series kind of takes us back to square one. Unless, someone here can think of a fancy way of recovering Taylor series from this expression.
I'm open to all suggestions, any comments, honestly. I'm at my wits' end; and I'm wondering if this is just one of those things where the only solution is to increase the stacksize indefinitely (which will absolutely work). It's not just that I'm dealing with large numbers. It's that I need larger and larger values to compute a small value. For that reason, I wonder if there's some kind of quick workaround I'm not seeing. The error Pari-GP spits out is always with tau_K, so I'm wondering if this has been coded suboptimally, and whether I should add something to it to reduce stack usage as it iterates; or whether that's even possible. Again, I'm a horrible programmer. I need someone to explain this to me like I'm in kindergarten.
Any help, comments, questions for clarification, are more than welcome. I'm like a dog chasing his tail at this point; wondering why he can't take 1000 logarithms, lol.
Regards.
EDIT:
I thought I'd add that I can produce arbitrary precision, but we have to keep the argument z way off in the left half plane. If we set the variables n, k = -real(z), then we can produce arbitrary accuracy by making n as large as we want. Here's some output to explain this, where I've used \p 200; we pretty much have equality at this level (minus some digits).
Abl_L(-1000,1+I,1000,1000)
%16 = -0.29532276871494189936534470547577975723321944770194434340228137221059739121428422475938130544369331383702421911689967920679087535009910425871326862226131457477211238400580694414163545689138863426335946 + 1.5986481048938885384507658431034702033660039263036525275298731995537068062017849201570422126715147679264813047746465919488794895784667843154275008585688490133825421586142532469402244721785671947462053*I
exp(Abl_L(-1001,1+I,1000,1000))
%17 = -0.29532276871494189936534470547577975723321944770194434340228137221059739121428422475938130544369331383702421911689967920679087535009910425871326862226131457477211238400580694414163545689138863426335945 + 1.5986481048938885384507658431034702033660039263036525275298731995537068062017849201570422126715147679264813047746465919488794895784667843154275008585688490133825421586142532469402244721785671947462053*I
Abl_L(-900 + 2*I, log(2) + 3*I,900,900)
%18 = 0.20353875452777667678084511743583613390002687634123569448354843781494362200997943624836883436552749978073278597542986537166527005507457802227019178454911106220050245899257485038491446550396897420145640 - 5.0331931122239257925629364016676903584393129868620886431850253696250415005420068629776255235599535892051199267683839967636562292529054669236477082528566454129529102224074017515566663538666679347982267*I
exp(Abl_L(-901+2*I,log(2) + 3*I,900,900))
%19 = 0.20353875452777667678084511743583613390002687634123569448354843781494362200997943624836883436552749978073278597542986537166527005507457802227019178454911106220050245980468697844651953381258310669530583 - 5.0331931122239257925629364016676903584393129868620886431850253696250415005420068629776255235599535892051199267683839967636562292529054669236477082528566454129529102221938340371793896394856865112060084*I
Abl_L(-967 -200*I,12 + 5*I,600,600)
%20 = -0.27654907399026253909314469851908124578844308887705076177457491260312326399816915518145788812138543930757803667195961206089367474489771076618495231437711085298551748942104123736438439579713006923910623 - 1.6112686617153127854042520499848670075221756090591592745779176831161238110695974282839335636124974589920150876805977093815716044137123254329208112200116893459086654166069454464903158662028146092983832*I
exp(Abl_L(-968 -200*I,12 + 5*I,600,600))
%21 = -0.27654907399026253909314469851908124578844308887705076177457491260312326399816915518145788812138543930757803667195961206089367474489771076618495231437711085298551748942104123731995533634133194224880928 - 1.6112686617153127854042520499848670075221756090591592745779176831161238110695974282839335636124974589920150876805977093815716044137123254329208112200116893459086654166069454464833417170799085356582884*I
The trouble is, we can't just apply exp over and over to go forward and expect to keep the same precision. The problem is exp, which displays so much chaotic behaviour as you iterate it in the complex plane that this seems doomed to fail.
Well, I answered my own question. #user207421 posted a comment, and I'm not sure if it meant what I thought it meant, but I think it got me to where I want to be. I sort of assumed that exp wouldn't inherit the precision of its argument, but apparently it does. So all I needed was to define,
Abl_L(z,l,n,k) ={
    if(real(z) <= -max(n,k),
        beta_function(z,l,n) + tau_K(z,l,n,k),
        exp(Abl_L(z-1,l,n,k)));
}
Everything works perfectly fine from here; of course, for what I need it for. So, I answered my own question, and it was pretty simple. I just needed an if statement.
Thanks anyway, to anyone who read this.
How do I find the answer for: n! = Θ( )?
Even Big O is enough. All clues I found are complex math ideas.
What would be the correct approach to tackle this problem? A recursion tree seems like too much work.
The goal is to compare n! and n^(log n).
Θ(n!) is a perfectly fine, valid complexity, so n! = Θ(n!).
As Niklas pointed out, this is actually true for every function, although, for something like
6x² + 15x + 2, you could write Θ(6x² + 15x + 2), but it would generally be preferred to simply write Θ(x²) instead.
If you want to compare the two functions, simply plotting them on WolframAlpha might be considered sufficient to see that n! grows faster.
To mathematically determine the result, we can take the log of both, giving us log(n!) and log(n^(log n)) = log n * log n = (log n)^2.
Then, since log(n!) = Θ(n log n), and n log n > (log n)^2 for any large n, we could derive that Θ(n!) grows faster.
The derivation is perhaps non-trivial, and I'm slightly unsure whether it's actually possible, but we're a bit beyond the scope of Stack Overflow already (try the Mathematics site if you want more details).
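If it helps to see this numerically, here is a small Python check; math.lgamma gives log(n!) without overflow, and this is only an illustration, not a proof.

import math

for n in [10, 100, 1000, 10**6]:
    log_fact = math.lgamma(n + 1)       # log(n!)
    log_n_pow_logn = math.log(n) ** 2   # log(n^(log n)) = (log n)^2
    print(n, log_fact, log_n_pow_logn)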
If you want some sort of "closed form" expressions, you can get n! = Ω((sqrt(n/2))^n) and n! = O(n^n). Not sure those are more useful.
To derive them, see that (n/2)^(n/2) < n! < n^n.
To compare against n^(log n), use limit rules; you may also want to use n = e^(log n).
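As a quick numerical sanity check of those bounds (and of n! against n^(log n)), something like the following works; it is only an illustration, not a proof.

import math

for n in [10, 20, 50]:
    lower = (n / 2) ** (n / 2)
    fact = math.factorial(n)
    upper = n ** n
    print(n, lower < fact < upper, fact > n ** math.log(n))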
I had a test about asymptotic notations and there was a question:
Consider the following:
O(o(f(n))) = o(f(n))
Write in words the meaning of the statement, using conventions from asymptotic notation.
Is the statement true or false? Justify.
I got it wrong (don't exactly remember what I wrote), but I think it was something like:
For any function g(n) = o(f(n)), there
is a function h(n) = o(f(n)) so that
h(n) = O(f(n)).
Is it correct?
And for (2), I'm not totally sure. Can someone help me with this one too?
Thanks in advance.
I think they were trying to ask a question about the relationship between Big O and little o asymptotic notation.
A) Big O bounding of a Little O bounded function reduces to/implies the Little O bound of that function.
B) True. Big O is a less "strict" bound in that it stipulates that there is an M and an n0 such that f(n) <= M * g(n) for n >= n0, whereas Little O stipulates that for all positive M, there is an n0 such that f(n) is upper-bounded by M * g(n) for n >= n0.
Thus the "an M" of Big O is a subset of the "all M" of little O, and so O(o(f(n))) is equivalent to o(f(n)).
For the actual math and not my weak ascii, see the wikipedia page
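To pin the containment down a little more formally (a sketch, assuming the functions involved are eventually positive; the constant names are mine):

h in O(g) means: there exist M > 0 and n1 such that h(n) <= M * g(n) for all n >= n1.
g in o(f) means: for every c > 0 there is an n2 such that g(n) <= c * f(n) for all n >= n2.

Given any eps > 0, apply the little-o definition with c = eps / M; then for n >= max(n1, n2) we get h(n) <= M * g(n) <= eps * f(n), so h in o(f). Conversely, every g in o(f) lies in O(g), which is contained in O(o(f(n))); the two inclusions together give O(o(f(n))) = o(f(n)).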
Meaning in plain English:
The upper bound of a function that is strictly greater than f(n) is strictly greater than f(n)
Your statement could have been written as: for any function g(n) = o(f(n)), there exists h(n) = O(g(n)), which implies h(n) is also o(f(n)); hence O(g(n)) = o(f(n)), and therefore O(o(f(n))) = o(f(n)).
Yes the statement is correct.
(Of course the above statement assumes all the correct constants, and the use of "strictly greater" is for readability and understanding: it should be "a strict upper bound".)
Sorry if this seems like a bit of an aside, but I think it's a dodgy question (as Alexandre C has alluded to) as it's a pretty big abuse of notation.
The way big-O notation is commonly taught (especially in a computer science class) is as if O(f(n)) is a function. This should set off some alarm bells, as the statements "n = O(n)" and "2n = O(n)" are both true, but "n = 2n" is not. If we want to say "f(n) is big-O of g(n)" we technically shouldn't be saying "f(n) = O(g(n))", rather we should say "f(n) is an element of O(g(n))". The former is just a convenient shorthand.
So back to the actual question, O(o(f(n))) doesn't really mean a whole lot (or at least I've never seen a formal definition of big-O of a set of functions). But I guess the logical way to interpret it would be as per enjay's answer, with g(n) = o(f(n)).
I have a function that takes a floating point number and returns a floating point number. It can be assumed that if you were to graph the output of this function it would be 'n' shaped, i.e. there would be a single maximum point, and no other points on the function with a zero slope. We also know that the input value that yields this maximum output will lie between two known points, perhaps 0.0 and 1.0.
I need to efficiently find the input value that yields the maximum output value to some degree of approximation, without doing an exhaustive search.
I'm looking for something similar to Newton's Method which finds the roots of a function, but since my function is opaque I can't get its derivative.
I would like to down-thumb all the other answers so far, for various reasons, but I won't.
An excellent and efficient method for minimizing (or maximizing) smooth functions when derivatives are not available is parabolic interpolation. It is common to write the algorithm so it temporarily switches to golden-section search when parabolic interpolation does not progress as fast as golden-section would (that combination is essentially Brent's minimizer).
I wrote such an algorithm in C++. Any offers?
UPDATE: There is a C version of the Brent minimizer in GSL. The archives are here: ftp://ftp.club.cc.cmu.edu/gnu/gsl/ Note that it will be covered by some flavor of GNU "copyleft."
As I write this, the latest-and-greatest appears to be gsl-1.14.tar.gz. The minimizer is located in the file gsl-1.14/min/brent.c. It appears to have termination criteria similar to what I implemented. I have not studied how it decides to switch to golden section, but for the OP, that is probably moot.
UPDATE 2: I googled up a public domain java version, translated from FORTRAN. I cannot vouch for its quality. http://www1.fpl.fs.fed.us/Fmin.java I notice that the hard-coded machine efficiency ("machine precision" in the comments) is 1/2 the value for a typical PC today. Change the value of eps to 2.22045e-16.
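For a rough idea of the golden-section part, here is a minimal Python sketch (the parabolic-interpolation speedup that Brent's method adds is omitted; the test function and interval are just stand-ins):

import math

def golden_section_maximize(f, a, b, tol=1e-8):
    # Shrinks [a, b] around the maximizer of a unimodal f; no derivatives needed.
    invphi = (math.sqrt(5) - 1) / 2   # ~0.618
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while b - a > tol:
        if f(c) > f(d):               # maximum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                         # maximum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2

print(golden_section_maximize(lambda x: -x * (x - 1.0), 0.0, 1.0))  # ~0.5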
Edit 2: The method described in Jive Dadson's answer is a better way to go about this. I'm leaving my answer up since it's easier to implement, if speed isn't too much of an issue.
Use a form of binary search, combined with numeric derivative approximations.
Given the interval [a, b], let x = (a + b) / 2.
Let epsilon be something very small.
Is (f(x + epsilon) - f(x)) positive? If yes, the function is still growing at x, so you recursively search the interval [x, b]
Otherwise, search the interval [a, x].
There might be a problem if the max lies between x and x + epsilon, but you might give this a try.
Edit: The advantage to this approach is that it exploits the known properties of the function in question. That is, I assumed that by "n"-shaped you meant increasing, then a maximum, then decreasing. Here's some Python code I wrote to test the algorithm:
def f(x):
    return -x * (x - 1.0)

def findMax(function, a, b, maxSlope):
    x = (a + b) / 2.0
    e = 0.0001
    slope = (function(x + e) - function(x)) / e
    if abs(slope) < maxSlope:
        return x
    if slope > 0:
        return findMax(function, x, b, maxSlope)
    else:
        return findMax(function, a, x, maxSlope)
Typing findMax(f, 0, 3, 0.01) should return 0.504, as desired.
For optimizing a concave function, which is the type of function you are talking about, without evaluating the derivative I would use the secant method.
Given the two initial values x[0]=0.0 and x[1]=1.0 I would proceed to compute the next approximations as:
def next_x(x, xprev):
    return x - f(x) * (x - xprev) / (f(x) - f(xprev))
and thus compute x[2], x[3], ... until the change in x becomes small enough.
Edit: As Jive explains, this solution is for root finding which is not the question posed. For optimization the proper solution is the Brent minimizer as explained in his answer.
The Levenberg-Marquardt algorithm is a Newton-like optimizer. It has a C/C++ implementation, levmar, that doesn't require you to define the derivative function. Instead it will evaluate the objective function in the current neighborhood to move to the maximum.
BTW: this website appears to have been updated since I last visited it; hope it's even the same one I remembered. Apparently it now also supports other languages.
Given that it's only a function of a single variable and has one extremum in the interval, you don't really need Newton's method. Some sort of line search algorithm should suffice. This wikipedia article is actually not a bad starting point, if short on details. Note in particular that you could just use the method described under "direct search", starting with the end points of your interval as your two points.
I'm not sure if you'd consider that an "exhaustive search", but it should actually be pretty fast I think for this sort of function (that is, a continuous, smooth function with only one local extremum in the given interval).
You could reduce it to a simple linear fit on the deltas, finding the place where it crosses the x axis. Linear fit can be done very quickly.
Or just take 3 points (left/top/right) and fit a parabola through them; a small sketch of that is below.
It depends mostly on the nature of the underlying relation between x and y, I think.
Edit: this is in case you have an array of values, as the question's title states. If you have a function, use Newton-Raphson.
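A minimal sketch of that "three points, fit a parabola" idea: the closed-form vertex of the parabola through three points (the test function below is just a stand-in):

def parabola_vertex_x(x1, y1, x2, y2, x3, y3):
    # x-coordinate of the vertex of the parabola through the three points
    num = (x2 - x1) ** 2 * (y2 - y3) - (x2 - x3) ** 2 * (y2 - y1)
    den = (x2 - x1) * (y2 - y3) - (x2 - x3) * (y2 - y1)
    return x2 - 0.5 * num / den

f = lambda x: -(x - 0.5) ** 2   # peaks at x = 0.5
print(parabola_vertex_x(0.0, f(0.0), 0.4, f(0.4), 1.0, f(1.0)))  # 0.5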
As a programmer I think it is my job to be good at math, but I am having trouble getting my head round imaginary numbers. I have tried Google and Wikipedia with no luck, so I am hoping a programmer can explain it to me: give me an example of a number squared that is <= 0, some example usage, etc...
I guess this blog entry is one good explanation:
The key word is rotation (as opposed to direction for negative numbers, which are as strange as imaginary numbers when you think about them: less than nothing?)
Like negative numbers modeling flipping, imaginary numbers can model anything that rotates between two dimensions “X” and “Y”. Or anything with a cyclic, circular relationship
Problem: not only am I a programmer, I am a mathematician.
Solution: plow ahead anyway.
There's nothing really magical about complex numbers. The idea behind their inception is that there's something wrong with real numbers. If you've got an expression like x^2 + 4, this is never zero, whereas x^2 - 2 is zero twice. So mathematicians got really angry and wanted there always to be zeroes for polynomials of degree at least one (they wanted an "algebraically closed" field), and created some arbitrary number j such that j = sqrt(-1). All the rules sort of fall into place from there (though they are more accurately organized differently; specifically, you formally can't actually say "hey, this number is the square root of negative one"). If there's that number j, you can get multiples of j. And you can add real numbers to j, so then you've got complex numbers. The operations with complex numbers are similar to operations with binomials (deliberately so).
The real problem with complexes isn't in all this, but in the fact that you can't define a system whereby you can get the ordinary rules for less-than and greater-than. So really, you get to where you don't define it at all. It doesn't make sense in a two-dimensional space. So in all honesty, I can't actually answer "give me an example of a number squared that is <= 0", though "j" makes sense if you treat its square as a real number instead of a complex number.
As for uses, well, I personally used them most when working with fractals. The idea behind the Mandelbrot fractal is that it's a way of graphing the iteration z = z^2 + c and its divergence along the real-imaginary axes.
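For instance, the core of that iteration is only a few lines with Python's built-in complex type (the escape test and sample points here are the usual ones, chosen by me purely for illustration):

def escapes(c, max_iter=100):
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:       # once |z| > 2 the orbit is guaranteed to diverge
            return True
    return False

print(escapes(0 + 0j))   # 0 is in the Mandelbrot set -> False
print(escapes(1 + 0j))   # 1 escapes quickly -> True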
You might also ask why do negative numbers exist? They exist because you want to represent solutions to certain equations like: x + 5 = 0. The same thing applies for imaginary numbers, you want to compactly represent solutions to equations of the form: x^2 + 1 = 0.
Here's one way I've seen them being used in practice. In EE you are often dealing with functions that are sine waves, or that can be decomposed into sine waves. (See for example Fourier Series).
Therefore, you will often see solutions to equations of the form:
f(t) = A*cos(wt)
Furthermore, often you want to represent functions that are shifted by some phase from this function. A 90 degree phase shift will give you a sin function.
g(t) = B*sin(wt)
You can get any arbitrary phase shift by combining these two functions (called inphase and quadrature components).
h(t) = A*cos(wt) + i*B*sin(wt)
The key here is that in a linear system: if f(t) and g(t) solve an equation, h(t) will also solve the same equation. So, now we have a generic solution to the equation h(t).
The nice thing about h(t) is that it can be written compactly as
h(t) = C*exp(i*(wt + theta))
Using the fact that exp(iw) = cos(w)+i*sin(w).
There is really nothing extraordinarily deep about any of this. It is merely exploiting a mathematical identity to compactly represent a common solution to a wide variety of equations.
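A small numerical illustration of that phasor idea, using Python's cmath (the particular values of A, theta, w and t are made up for the demo):

import cmath, math

A, theta, w, t = 2.0, math.pi / 3, 5.0, 0.7
phasor = A * cmath.exp(1j * (w * t + theta))
print(phasor.real)                   # the physical signal A*cos(w*t + theta)
print(A * math.cos(w * t + theta))   # same value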
Well, for the programmer:
class complex {
public:
    double real;
    double imaginary;

    // A real number is just a complex number with zero imaginary part.
    complex(double a_real) : real(a_real), imaginary(0.0) { }
    complex(double a_real, double a_imaginary) : real(a_real), imaginary(a_imaginary) { }

    // Addition works component-wise.
    complex operator+(const complex &other) const {
        return complex(
            real + other.real,
            imaginary + other.imaginary);
    }

    // Multiplication follows from i*i == -1.
    complex operator*(const complex &other) const {
        return complex(
            real*other.real - imaginary*other.imaginary,
            real*other.imaginary + imaginary*other.real);
    }

    bool operator==(const complex &other) const {
        return (real == other.real) && (imaginary == other.imaginary);
    }
};
That's basically all there is. Complex numbers are just pairs of real numbers, for which special overloads of +, * and == get defined. And these operations really just get defined like this. Then it turns out that these pairs of numbers with these operations fit in nicely with the rest of mathematics, so they get a special name.
They are not so much numbers like in "counting", but more like in "can be manipulated with +, -, *, ... and don't cause problems when mixed with 'conventional' numbers". They are important because they fill the holes left by real numbers, like the fact that there's no real number whose square is -1. Now you have complex(0, 1) * complex(0, 1) == -1.0, which is a helpful notation, since you don't have to treat negative numbers specially anymore in these cases. (And, as it turns out, basically all other special cases are not needed anymore when you use complex numbers.)
If the question is "Do imaginary numbers exist?" or "How do imaginary numbers exist?" then it is not a question for a programmer. It might not even be a question for a mathematician, but rather a metaphysician or philosopher of mathematics, although a mathematician may feel the need to justify their existence in the field. It's useful to begin with a discussion of how numbers exist at all (quite a few mathematicians who have approached this question are Platonists, fyi). Some, as the early Whitehead did, insist that imaginary numbers are a practical convenience. But then, if imaginary numbers are merely a practical convenience, what does that say about mathematics? You can't just explain away imaginary numbers as a mere practical tool or a pair of real numbers without having to account for both pairs and the general consequences of them being "practical". Others insist on the existence of imaginary numbers, arguing that their non-existence would undermine physical theories that make heavy use of them (QM is knee-deep in complex Hilbert spaces). The problem is beyond the scope of this website, I believe.
If your question is much more down to earth e.g. how does one express imaginary numbers in software, then the answer above (a pair of reals, along with defined operations of them) is it.
I don't want to turn this site into math overflow, but for those who are interested: Check out "An Imaginary Tale: The Story of sqrt(-1)" by Paul J. Nahin. It talks about all the history and various applications of imaginary numbers in a fun and exciting way. That book is what made me decide to pursue a degree in mathematics when I read it 7 years ago (and I was thinking art). Great read!!
The main point is that you add numbers which you define to be solutions to quadratic equations like x^2 = -1. Name one solution of that equation i; the computation rules for i then follow from that equation.
This is similar to defining negative numbers as the solution of equations like 2 + x = 1 when you only knew positive numbers, or fractions as solutions to equations like 2x = 1 when you only knew integers.
It might be easiest to stop trying to understand how a number can be a square root of a negative number, and just carry on with the assumption that it is.
So (using the i as the square root of -1):
(3+5i)*(2-i)
= (3+5i)*2 + (3+5i)*(-i)
= 6 + 10i -3i - 5i * i
= 6 + (10 -3)*i - 5 * (-1)
= 6 + 7i + 5
= 11 + 7i
works according to the standard rules of maths (remembering that i squared equals -1 on line four).
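For what it's worth, Python's built-in complex type (where 1j plays the role of i) gives the same result:

print((3 + 5j) * (2 - 1j))   # (11+7j)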
An imaginary number is a real number multiplied by the imaginary unit i. i is defined as:
i == sqrt(-1)
So:
i * i == -1
Using this definition you can obtain the square root of a negative number like this:
sqrt(-3)
== sqrt(3 * -1)
== sqrt(3 * i * i) // Replace '-1' with 'i squared'
== sqrt(3) * i // Square root of 'i squared' is 'i' so move it out of sqrt()
And your final answer is the real number sqrt(3) multiplied by the imaginary unit i.
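The same computation in Python's cmath module, if you want to check it (math.sqrt would raise an error for a negative argument; cmath returns a complex result):

import cmath
print(cmath.sqrt(-3))   # approximately 1.7320508075688772j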
A short answer: Real numbers are one-dimensional, imaginary numbers add a second dimension to the equation and some weird stuff happens if you multiply...
If you're interested in finding a simple application and if you're familiar with matrices, it's sometimes useful to use complex numbers to transform a perfectly real matrix into a triangular one in the complex space, which makes computation on it a bit easier.
The result is of course perfectly real.
Great answers so far (really like Devin's!)
One more point:
One of the first uses of complex numbers (although they were not called that way at the time) was as an intermediate step in solving equations of the 3rd degree.
link
Again, this is purely an instrument that is used to answer real problems with real numbers having physical meaning.
In electrical engineering, the impedance Z of an inductor is jwL, where w = 2*pi*f (f is the frequency) and j (i.e. sqrt(-1)) means it leads by 90 degrees, while for a capacitor Z = 1/(jwC) = -j/(wC), which has magnitude 1/(wC) at -90 degrees, so that it lags a simple resistor by 90 degrees.
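A quick numeric sketch of those impedances with complex numbers in Python; the component values L, C and the frequency f below are arbitrary examples, not taken from the answer above:

import cmath, math

f = 50.0              # Hz
w = 2 * math.pi * f
L = 10e-3             # 10 mH inductor
C = 100e-6            # 100 uF capacitor

Z_L = 1j * w * L
Z_C = 1 / (1j * w * C)

print(Z_L, math.degrees(cmath.phase(Z_L)))   # phase +90 degrees
print(Z_C, math.degrees(cmath.phase(Z_C)))   # phase -90 degrees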