I have written a program in C where I allocate memory to store an n-by-n matrix and then feed it to a linear algebra subroutine. I'm having trouble understanding how to identify the time complexity of these operations from a plot. In particular, I'm interested in identifying how CPU time scales as a function of n, where n is the matrix size.
To do so, I created an array of sizes n = 2, 4, 8, ..., 512 and computed the CPU time for both operations. I repeated this 10000 times for each n and then took the mean. I therefore end up with a second array of times that I can match against my array of n values.
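(For illustration, here is a minimal sketch of the measurement loop described above, written in Python with NumPy's LAPACK-backed solver standing in for the C call to dgesv; the sizes, repeat count, and names are placeholders rather than the actual benchmark code.)

import time
import numpy as np

sizes = [2 ** k for k in range(1, 10)]   # n = 2, 4, 8, ..., 512
repeats = 100                            # placeholder; the real benchmark uses 10000
mean_times = []
for n in sizes:
    total = 0.0
    for _ in range(repeats):
        a = np.random.rand(n, n)
        b = np.random.rand(n)
        start = time.process_time()      # CPU time, not wall-clock time
        np.linalg.solve(a, b)            # stands in for the dgesv call
        total += time.process_time() - start
    mean_times.append(total / repeats)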
It was suggested that I use a double logarithmic plot, and I read here and here that, on such a plot, "powers shows up as a straight line" (2). This is the resulting figure (dgesv is the linear algebra subroutine I used).
Now, I'm guessing that my time complexity is O(log n), since I get straight lines for both of my operations (I am not considering the red line). I've seen the differences in shape between, say, linear complexity, logarithmic complexity, etc., but I still have doubts about what I can say regarding the time complexity of dgesv, for instance. I'm sure there's a way to read this that I simply don't know, so I'd be glad if someone could help me understand how to look at this plot properly.
PS: if there's a more appropriate community for this question, please let me know so I can move it there and avoid adding clutter here. Thanks, everyone.
Take your yellow line: it appears to go from (0.9, -2.6) to (2.7, 1.6), giving it a slope of (1.6 - (-2.6)) / (2.7 - 0.9) ≈ 2.3. As you're plotting log(t) versus log(n), this means that:
log(t) = 2.3 log(n) + c
or, exponentiating both sides:
t = exp(2.3 log(n) + c) = c' n^2.3
The exponent of about 2.3 is probably an underestimate, as your dgesv likely has a cost of about (2/3) n^3 floating-point operations (though sub-cubic complexity is theoretically possible).
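If you prefer to read the exponent off numerically instead of eyeballing the plot, you can fit a straight line to log(n) versus log(t). A minimal sketch in Python (the timing data here is synthetic, generated to follow t ~ n^3 so the fit returns an exponent near 3; replace it with your measured mean times):

import numpy as np

ns = np.array([2, 4, 8, 16, 32, 64, 128, 256, 512], dtype=float)
times = 1e-9 * ns ** 3                       # synthetic stand-in for the measured mean CPU times
slope, intercept = np.polyfit(np.log(ns), np.log(times), 1)
print("estimated exponent:", slope)          # close to 3 for an O(n^3) routine like dgesv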
Related
This is a continuation of the two questions posted here:
Declaring a functional recursive sequence in Matlab
Nesting a specific recursion in Pari-GP
To make a long story short, I've constructed a family of functions which solve the tetration functional equation, and I've proven these functions are holomorphic. Now it's time to make the graphs, or at least write somewhat passable code to evaluate them. I've managed to get about 13 significant digits of precision, but if I try for more, I hit a specific error. That error is really nothing more than an overflow error, but it's a peculiar one: Pari-GP doesn't seem to like nesting the logarithm.
My particular mathematical function is approximated by taking something very large (think of the order e^e^e^e^e^e^e) to produce something small (of the order e^(-n)). The math inherently requires samples of large values to produce these small values. And strangely, as we get closer to the numerical value (at about 13 significant digits or so), we also get closer to overflowing, because we need such large intermediate values to get those 13 significant digits. I am a god-awful programmer, and I'm wondering if there is some workaround I'm not seeing.
/*
This function constructs the approximate Abel function
The variable z is the main variable we care about; values of z where real(z) > 3 almost surely produce overflow errors
The variable l is the multiplier of the approximate Abel function
The variable n is the depth of iteration required
n can be set to 100, but a value of about 15 already produces enough accuracy
The functional equation this satisfies is exp(beta_function(z,l,n))/(1+exp(-l*z)) = beta_function(z+1,l,n); this program approaches the solution as n tends to infinity
*/
beta_function(z,l,n) =
{
my(out = 0);
for(i=0,n-1,
out = exp(out)/(exp(l*(n-i-z)) +1));
out;
}
/*
This function is the error term between the approximate Abel function and the actual Abel function
The variable z is the main variable we care about
The variable l is the multiplier
The variable n is the depth of iteration inherited from beta_function
The variable k is the new depth of iteration for this function
n can still be set to about 100, but 15 or 20 is more practical.
Setting the variable k above 10 will usually produce overflow errors unless the complex arguments of l and z are large.
Precision of about 10 digits is reached at k = 5 or 6 for real z; for complex z, less precision is obtained. k should be set to larger values for complex z and l with large imaginary arguments.
*/
tau_K(z,l,n,k)={
if(k == 1,
-log(1+exp(-l*z)),
log(1 + tau_K(z+1,l,n,k-1)/beta_function(z+1,l,n)) - log(1+exp(-l*z))
)
}
/*
This is the actual Abel function
The variable z is the main variable we care about
The variable l is the multiplier
The variable n is the depth of iteration inherited from beta_function
The variable k is the depth of iteration inherited from tau_K
The functional equation this satisfies is exp(Abl_L(z,l,n,k)) = Abl_L(z+1,l,n,k); this function approaches that solution as n and k tend to infinity
*/
Abl_L(z,l,n,k) ={
beta_function(z,l,n) + tau_K(z,l,n,k);
}
This is the code for approximating the functions I've proven to be holomorphic; sadly, my code is just horrible. Attached below is some expected output, where you can see the functional equation being satisfied to about 10-13 significant digits.
Abl_L(1,log(2),100,5)
%52 = 0.1520155156321416705967746811
exp(Abl_L(0,log(2),100,5))
%53 = 0.1520155156321485241351294757
Abl_L(1+I,0.3 + 0.3*I,100,14)
%59 = 0.3353395055605129001249035662 + 1.113155080425616717814647305*I
exp(Abl_L(0+I,0.3 + 0.3*I,100,14))
%61 = 0.3353395055605136611147422467 + 1.113155080425614418399986325*I
Abl_L(0.5+5*I, 0.2+3*I,100,60)
%68 = -0.2622549204469267170737985296 + 1.453935357725113433325798650*I
exp(Abl_L(-0.5+5*I, 0.2+3*I,100,60))
%69 = -0.2622549205108654273925182635 + 1.453935357685525635276573253*I
Now, you'll notice I have to change the value of k for different arguments. When z and l are further from the real axis, we can make k very large (and we have to, in order to get good accuracy), but it will still overflow eventually; typically the functions start to blow up once we've achieved about 13-15 significant digits. You'll note that setting k = 60 means we're taking 60 nested logarithms, which already sounds like a bad idea. Mathematically, though, the value Abl_L(z,l,infinity,infinity) is precisely the function I want. I know that must sound odd; nested infinite for-loops sound like nonsense.
I'm wondering if anyone can think of a way to avoid these overflow errors and obtain a higher degree of accuracy. In a perfect world, this object most definitely converges and this code is flawless (albeit a little slow); but we'd probably need to increase the stack size indefinitely. In theory that is perfectly fine; in practice it's more than impractical. Is there any way, as a programmer, to work around this?
The only other option I have at this point is to try to create a brute-force algorithm to discover the Taylor series of this function, but I'm having no luck with that. The process is very unusual, and trying to solve the problem through Taylor series more or less takes us back to square one, unless someone here can think of a clever way of recovering the Taylor series from this expression.
I'm open to all suggestions and any comments, honestly. I'm at my wits' end, and I'm wondering if this is just one of those things where the only solution is to increase the stack size indefinitely (which would certainly work). It's not just that I'm dealing with large numbers; it's that I need larger and larger values to compute a small value. For that reason, I wonder if there's some kind of quick workaround I'm not seeing. The error Pari-GP spits out is always in tau_K, so I'm wondering whether it has been coded suboptimally and whether I should add something to reduce the stack usage as it iterates, if that's even possible. Again, I'm a horrible programmer; I need someone to explain this to me like I'm in kindergarten.
Any help, comments, questions for clarification, are more than welcome. I'm like a dog chasing his tail at this point; wondering why he can't take 1000 logarithms, lol.
Regards.
EDIT:
I thought I'd add that I can produce arbitrary precision, but we have to keep the argument z far off in the left half plane. If we take the variables n, k = -real(z), then we can produce arbitrary accuracy by making n as large as we want. Here's some output to illustrate this, where I've used \p 200; at this precision we pretty much have equality (up to the last few digits).
Abl_L(-1000,1+I,1000,1000)
%16 = -0.29532276871494189936534470547577975723321944770194434340228137221059739121428422475938130544369331383702421911689967920679087535009910425871326862226131457477211238400580694414163545689138863426335946 + 1.5986481048938885384507658431034702033660039263036525275298731995537068062017849201570422126715147679264813047746465919488794895784667843154275008585688490133825421586142532469402244721785671947462053*I
exp(Abl_L(-1001,1+I,1000,1000))
%17 = -0.29532276871494189936534470547577975723321944770194434340228137221059739121428422475938130544369331383702421911689967920679087535009910425871326862226131457477211238400580694414163545689138863426335945 + 1.5986481048938885384507658431034702033660039263036525275298731995537068062017849201570422126715147679264813047746465919488794895784667843154275008585688490133825421586142532469402244721785671947462053*I
Abl_L(-900 + 2*I, log(2) + 3*I,900,900)
%18 = 0.20353875452777667678084511743583613390002687634123569448354843781494362200997943624836883436552749978073278597542986537166527005507457802227019178454911106220050245899257485038491446550396897420145640 - 5.0331931122239257925629364016676903584393129868620886431850253696250415005420068629776255235599535892051199267683839967636562292529054669236477082528566454129529102224074017515566663538666679347982267*I
exp(Abl_L(-901+2*I,log(2) + 3*I,900,900))
%19 = 0.20353875452777667678084511743583613390002687634123569448354843781494362200997943624836883436552749978073278597542986537166527005507457802227019178454911106220050245980468697844651953381258310669530583 - 5.0331931122239257925629364016676903584393129868620886431850253696250415005420068629776255235599535892051199267683839967636562292529054669236477082528566454129529102221938340371793896394856865112060084*I
Abl_L(-967 -200*I,12 + 5*I,600,600)
%20 = -0.27654907399026253909314469851908124578844308887705076177457491260312326399816915518145788812138543930757803667195961206089367474489771076618495231437711085298551748942104123736438439579713006923910623 - 1.6112686617153127854042520499848670075221756090591592745779176831161238110695974282839335636124974589920150876805977093815716044137123254329208112200116893459086654166069454464903158662028146092983832*I
exp(Abl_L(-968 -200*I,12 + 5*I,600,600))
%21 = -0.27654907399026253909314469851908124578844308887705076177457491260312326399816915518145788812138543930757803667195961206089367474489771076618495231437711085298551748942104123731995533634133194224880928 - 1.6112686617153127854042520499848670075221756090591592745779176831161238110695974282839335636124974589920150876805977093815716044137123254329208112200116893459086654166069454464833417170799085356582884*I
The trouble is, we can't just apply exp over and over to move forward and expect to keep the same precision. exp displays so much chaotic behaviour as you iterate it in the complex plane that this approach seems doomed to fail.
Well, I answered my own question. @user207421 posted a comment, and I'm not sure if it meant what I thought it meant, but I think it got me where I wanted to go. I had sort of assumed that exp wouldn't inherit the precision of its argument, but apparently it does. So all I needed was to define,
Abl_L(z,l,n,k) ={
if(real(z) <= -max(n,k),
beta_function(z,l,n) + tau_K(z,l,n,k),
exp(Abl_L(z-1,l,n,k)));
}
Everything works perfectly fine from here, at least for what I need it for. So I answered my own question, and the fix was pretty simple: I just needed an if statement.
Thanks anyway, to anyone who read this.
One of my homework problems has me deriving the Big-Oh complexity of the function:
c^x + x(log(x))^2 + (10x)^c (where c is a constant > 1)
I know that of these three terms, c^x grows the fastest, and that leads me to believe that the complexity is simply O(c^x). However, I was skeptical we'd be given a question that easy to solve, so I graphed just c^x (using c = 4) against the whole expression. As expected, the whole expression grew faster. But even after putting a large constant in front of c^x (1000*c^x), the full expression still seemed to grow faster in the long run. Am I relying too much on the graphs, or is my logic actually wrong?
Thanks!
The complexity IS O(c^x). The reason c^x takes precedence is that it is an exponential, and exponentials grow extremely fast as x gets bigger. When you look at the graphs, the function as a whole will of course be larger than c^x alone, but that difference becomes irrelevant as x grows larger.
Example: assume the constant c = 10, and let's explore what happens as x gets larger.
10^5 = 100,000 (one hundred thousand)
10^10 = 10,000,000,000 (ten billion)
10^100 = 1 followed by 100 zeros (a googol)
You get the point: exponentials grow so ridiculously fast that, as x grows, the difference between c^x and the rest of the function becomes completely insignificant. Big-O analysis is about finding the single fastest-growing term, because that term dominates the growth of the whole function; this is why.
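To see this numerically rather than from a graph, you can print the ratio of the whole expression to its exponential term as x grows; a small sketch (using c = 4, as in the question):

import math

c = 4.0
for x in (10, 15, 20, 50):
    # math.log is the natural log; any fixed base only changes a constant factor
    whole = c ** x + x * math.log(x) ** 2 + (10 * x) ** c
    print(x, whole / c ** x)   # the ratio drops towards 1 as x grows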
NOTE: sorry if this answer was too late to help with your homework.
I want to very roughly simulate friction on particles from a top-down point of view. The particles should tend to come to a halt when they are going slow and experience less friction (relative to their velocity) the faster they are going. It should look something like this...
At the moment, friction (a force applied to the particles every frame) = -(velocity*constant1 - velocity^2*constant2) * deltatime
Can someone suggest a better way of doing this?
Usual friction
Friction is usually caused by speed and, in simple models, is modeled by something like, say, -c v^2.
That means that if your particles have a force attracting them (e.g. gravity), then at some point friction evens it out and your particles reach a maximum constant velocity.
With the friction formula in your example, the friction eventually decreases and even becomes a positive force for big enough velocities, thus pushing the particles in the direction of their motion, which is rather strange. In any case, friction forces that do not increase monotonically with speed are suspect.
The behaviour you describe by saying "experience less friction relative to their velocity the faster they are going" can be expressed mathematically by saying that you want a function that diverges more slowly than the identity. Formally, you are looking for a function f(x) such that f(x)/x converges towards 0 as x goes to infinity.
Let us leave aside functions that converge to a finite value (possibly 0), as they don't seem intuitive: the faster you go, the more friction you should have.
Functions you can use
Functions that diverge more slowly than x but still diverge are typically the powers of x with exponents in ]0,1[ (0 and 1 excluded). A good example is the exponent 0.5, i.e. the square root of x.
Furthermore, another good fit is the log function, which diverges very slowly.
Then you can take any linear combination of the above, and even powers of the log.
Behaviour for small velocities
Now, on your graph, the red line is always below the grey one, which isn't the case for a function x^a as described above. You probably also want something that is 0 at x = 0, since even with your model it doesn't make much sense for a particle to turn around and go backwards once it reaches speed 0. To comply with both requirements, you again have several options.
Piecewise functions. Trivial, and easy to compute. Say you use x^0.83 on the interval [1, +inf[, and on [0, 1[ something that is 0 at x = 0 and equal to 1^0.83 = 1 at x = 1; typically something as simple as x itself.
Shifting the function, typically by 1 (that is enough for all the cases we look at). For the log this is straightforward: since it diverges towards -inf at 0 and is 0 at 1, for your use you should never take log(x) but rather log(1+x). Powers already behave the way we want ("below the red line") for x > 1; to keep the property of zero friction at x = 0, you need to shift them in a fashion like f(x) = (1+x)^a - 1 with a in ]0,1[ (see the sketch below).
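As a concrete illustration of that last option, here is a minimal per-frame sketch in Python (the constants A and K are made up; tune them to your game):

import math

A = 0.8    # exponent in ]0,1[; smaller values make friction grow more slowly with speed
K = 2.0    # overall friction strength (made-up constant)

def friction_force(vx, vy):
    # Force opposing the velocity, with magnitude K * ((1 + speed)^A - 1),
    # which is zero at rest and grows sublinearly with speed.
    speed = math.hypot(vx, vy)
    if speed == 0.0:
        return 0.0, 0.0
    magnitude = K * ((1.0 + speed) ** A - 1.0)
    return -magnitude * vx / speed, -magnitude * vy / speed

# per-frame update (dt and the starting velocity are placeholders)
vx, vy, dt = 5.0, 0.0, 1.0 / 60.0
fx, fy = friction_force(vx, vy)
vx, vy = vx + fx * dt, vy + fy * dt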
Customizing to your taste
Now that you have all this, I would recommend that you plot a few of these functions and pick the one that seems most appropriate. You can choose how slowly the friction increases by picking either the log (slowest) or a power function (the bigger the power, the faster the increase in friction relative to velocity). You can also stretch the curve so it increases faster or slower while keeping the same "curvature", by dilating it, i.e. replacing x with x/a or a*x. For example, with x^0.8, you can get this function: http://fooplot.com/plot/b7a0whkcdz
Only you know what kinds of forces apply to the particles (which is what you should compare the output values f(x) against) and what velocities are typical for them (i.e. over which range of x your function should give acceptable values), so I cannot help you with that.
Then run some experiments with your game to adjust the parameters, and voilà! You're done.
Some basic physics equations will help us.
Assuming we only deal with basic movement, with no external forces, it goes like this (the following calculations are only there to clarify the final equation; you can skip this part):
Y-axis forces: normal force (N) up, gravity (mg) down -> N = mg
X-axis forces: friction force opposing the motion, nothing on the other side -> -F(friction) = ma, with F(friction) = N * μ(COF) -> F(friction) = mg * μ -> -mg * μ = ma
-> a = -μ * g ; v = v0 + at -> v = v0 - t(μ * g).
v - the velocity of the object at time t, i.e. v(t) [measured in m/s].
v0 - the starting velocity of the object [measured in m/s].
t - time passed since the start of the movement [measured in s].
μ - coefficient of friction, a constant representing how rough the surface is (0 means no friction at all) [dimensionless; COF has no units].
g - acceleration due to gravity [measured in m/s^2]; on Earth it is 9.81 m/s^2, but you can use 10.
Assuming μ = 0.25 (the COF of wood) and g = 10, the equation is:
V(t) = v0 - 2.5t
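Per frame, this constant-deceleration model just shrinks the speed by μ·g·dt and clamps it at zero so the particle actually stops. A minimal sketch in Python (constants as above):

MU = 0.25   # coefficient of friction (wood, as above)
G = 10.0    # m/s^2, rounded as suggested above

def apply_friction(vx, vy, dt):
    # Implements v(t) = v0 - mu*g*t one time step at a time, clamped at zero.
    speed = (vx * vx + vy * vy) ** 0.5
    if speed == 0.0:
        return 0.0, 0.0
    new_speed = max(0.0, speed - MU * G * dt)
    scale = new_speed / speed
    return vx * scale, vy * scale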
Why is O(log2 N) = O(log3 N)?
I don't understand this. Does big O not mean upper bound of something?
Isn't log2 N bigger than log3 N? When I graph them, log2 N is above log3 N.
Big O doesn't deal with constant factors, and the difference between log_x(n) and log_y(n) is a constant factor.
To put it a little differently, the base of the logarithm basically just modifies the slope of the line/curve on the graph. Big-O isn't concerned with the slope of the curve on the graph, only with the shape of the curve. If you can get one curve to match another just by changing its slope, then as far as Big-O notation is concerned, they're the same function and the same curve.
To try to put this in perspective, perhaps a drawing of some of the more common curve shapes would be useful:
As noted above, only the shape of a line matters though, not its slope. In the following figure:
...all the lines are straight, so even though their slopes differ radically, they're all identical as far as big-O cares; they're all just O(N), regardless of the slope. With logarithms we get roughly the same effect: each line is curved like the O(log N) line in the previous picture, but changing the base of the logarithm scales that curve up or down, so you (again) have the same shape of line, just at different slopes (and, again, as far as big-O cares, they're all identical). So, getting to the original question, if we change the bases of the logarithms, we get curves that look something like this:
Here it may be a little less obvious that all that's happening is a constant change in the slope, but that's exactly the difference here, just like with the straight lines above.
It is because changing the base of a logarithm amounts to multiplying it by a constant, and big O does not care about constant factors.
log_a(b) = log_c(b) / log_c(a)
So to get from log2(n) to log3(n) you need to multiply it by log3(2) (equivalently, divide by log2(3)).
In other words log2(n) = log3(n) / log3(2).
log3(2) is a constant and O(c*n) = O(n); thus O(log2(n)) = O(log3(n)).
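You can check that this ratio really is a constant with a couple of lines of Python:

import math

for n in (10, 1000, 10 ** 6, 10 ** 9):
    print(n, math.log2(n) / math.log(n, 3))   # always log(3)/log(2), about 1.585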
There are some good answers here already, so please read them too.
To understand why log2(n) is O(log3(n)), you need to understand two things.
1) What is meant by Big-O notation. I suggest reading this: http://en.wikipedia.org/wiki/Big_O_notation If you understand this, you will know that 2n and 16n+5 are both O(n).
2) How logarithms work. The difference between log2(N) and log10(N) is a simple constant ratio, easily calculated if you want it, as per luk32's answer.
Since logs in different bases differ only by a constant ratio, and Big O is indifferent to minor things like constant multiplying factors, you will often find that O(log N) actually omits the base, because the choice of any constant base (e.g. 2, 3, 10, e) makes no difference in this context.
It depends on the context in which O notation is used. When you use it in algorithmic complexity reasoning, you are interested in the asymptotic behaviour of a function, i.e. how it grows or decreases as its argument tends to infinity (or some other accumulation point).
Therefore, although f(n) = 3n is always less than g(n) = 1000n, they both belong to O(n), since both grow linearly (asymptotically, according to their expressions).
The same reasoning applies to the logarithm case you posted, since logarithms with different bases differ by a constant factor but share the same asymptotic behaviour.
In a different context, if you were interested in computing the exact performance of an algorithm, with exact rather than approximate estimates, you would of course prefer the lower one. In general, though, computational complexity comparisons are approximations, and are therefore done via asymptotic reasoning.
I have been researching the log-sum-exp problem. I have a list of numbers stored as logarithms which I would like to sum and store as a logarithm.
The naive algorithm is
def naive(listOfLogs):
return math.log10(sum(10**x for x in listOfLogs))
Many websites, including:
logsumexp implementation in C?
and
http://machineintelligence.tumblr.com/post/4998477107/
recommend using
def recommend(listOfLogs):
maxLog = max(listOfLogs)
return maxLog + math.log10(sum(10**(x-maxLog) for x in listOfLogs))
aka
def recommend(listOfLogs):
maxLog = max(listOfLogs)
return maxLog + naive((x-maxLog) for x in listOfLogs)
What I don't understand is: if the recommended algorithm is better, shouldn't we call it recursively?
Would that provide even more benefit?
def recursive(listOfLogs):
maxLog = max(listOfLogs)
return maxLog + recursive((x-maxLog) for x in listOfLogs)
While I'm asking: are there other tricks to make this calculation more numerically stable?
Some background for others: when you're computing an expression of the following type directly
ln( exp(x_1) + exp(x_2) + ... )
you can run into two kinds of problems:
exp(x_i) can overflow (x_i is too big), resulting in numbers that you can't add together
exp(x_i) can underflow (x_i is too small), resulting in a bunch of zeroes
If all the values are big, or all are small, we can divide by some exp(const) and add const to the outside of the ln to get the same value. Thus if we can pick the right const, we can shift the values into some range to prevent overflow/underflow.
The OP's question is, why do we pick max(x_i) for this const instead of any other value? Why don't we recursively do this calculation, picking the max out of each subset and computing the logarithm repeatedly?
The answer: because it doesn't matter.
The reason? Let's say x_1 = 10 is big, and x_2 = -10 is small. (These numbers aren't even very large in magnitude, right?) The expression
ln( exp(10) + exp(-10) )
will give you a value very close to 10. If you don't believe me, go try it. In fact, in general, ln( exp(x_1) + exp(x_2) + ... ) will be very close to max(x_i) if some particular x_i is much bigger than all the others. (As an aside, this functional form, asymptotically, actually lets you mathematically pick the maximum from a set of numbers.)
Hence, the reason we pick the max instead of any other value is because the smaller values will hardly affect the result. If they underflow, they would have been too small to affect the sum anyway, because it would be dominated by the largest number and anything close to it. In computing terms, the contribution of the small numbers will be less than an ulp after computing the ln. So there's no reason to waste time computing the expression for the smaller values recursively if they will be lost in your final result anyway.
If you wanted to be really persnickety about implementing this, you'd divide by exp(max(x_i) - some_constant) or so to 'center' the resulting values around 1, avoiding both overflow and underflow, and that might give you a few extra digits of precision in the result. But avoiding overflow is much more important than avoiding underflow, because the former determines the result and the latter doesn't, so it's much simpler just to do it this way.
It's not really any better to do it recursively. The problem is just that you want to make sure your finite-precision arithmetic doesn't swamp the answer in noise. By dealing with the max on its own, you ensure that any junk is kept small in the final answer, because the most significant component of it is guaranteed to get through.
Apologies for the waffly explanation. Try it with some numbers yourself (a sensible list to start with might be [1E-5,1E25,1E-5]) and see what happens to get a feel for it.
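To make that experiment concrete, here is the comparison in Python, with base-10 logarithms as in the question's code and the list suggested above:

import math

def naive(list_of_logs):
    return math.log10(sum(10 ** x for x in list_of_logs))

def shifted(list_of_logs):
    max_log = max(list_of_logs)
    return max_log + math.log10(sum(10 ** (x - max_log) for x in list_of_logs))

logs = [1e-5, 1e25, 1e-5]     # the list suggested above: one huge log value, two tiny ones
print(shifted(logs))          # 1e+25: the small terms vanish, but the result is fine
try:
    print(naive(logs))
except OverflowError:         # 10 ** 1e25 is far beyond the largest double
    print("the naive version overflows")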
As you have defined it, your recursive function will never terminate. That's because ((x - maxLog) for x in listOfLogs) still has the same number of elements as listOfLogs.
I don't think that this is easily fixable either, without significantly impacting either the performance or the precision (compared to the non-recursive version).