Can someone tell me the exact complexity of this recursion?
This is the formula for the question below (solved in a recursive brute-force way):
There are n stairs and a person standing at the bottom wants to reach the top. The person can take at most k steps at a time (i.e. 1, 2, 3, ..., up to k steps). Count how many ways the person can climb the stairs.
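The code itself is not shown, but the recursion described would look roughly like the OCaml sketch below (only an assumed reconstruction; the function name is mine):

(* Assumed reconstruction of the brute-force recursion: the number of ways
   to climb n stairs when up to k steps may be taken at a time. *)
let rec count_ways k n =
  if n = 0 then 1            (* reached the top: one complete way *)
  else if n < 0 then 0       (* overshot the top: not a valid way *)
  else
    (* try every step size from 1 to k and add up the possibilities *)
    let rec try_steps i acc =
      if i > k then acc
      else try_steps (i + 1) (acc + count_ways k (n - i))
    in
    try_steps 1 0

With this version, count_ways 2 4 returns 5 and count_ways 3 4 returns 7, the usual Fibonacci/Tribonacci-style counts.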
I have to write a function series : int -> int -> result list list, where the first int is the number of games and the second int is the number of points to earn.
I already thought about a brute-force solution that creates all permutations and filters the list, but I think that would be a very dirty solution in OCaml with many lines of code, and I can't find another way to solve this problem.
The following type is given:
type result = Win (* 3 points *)
| Draw (* 1 point *)
| Loss (* 0 points *)
So if I call
series 3 4
the solution should be:
[[Win ;Draw ;Loss]; [Win ;Loss ;Draw]; [Draw ;Win ;Loss];
[Draw ;Loss ;Win]; [Loss ;Win ;Draw]; [Loss ;Draw ;Win]]
Maybe someone can give me a hint or a code example of how to start.
Consider calls of the form series n (n / 2), and consider only the series in which every game was a Draw or a Loss. Such a series needs exactly n/2 Draws among the n games, so there are C(n, n/2) of them, which by Stirling's approximation is proportional to 2^n/sqrt(n).
This doesn't count any series in which somebody wins a game, so in general the actual result lists will be even longer than this.
I conclude that the number of possible answers is gigantic, and hence that your actual cases are going to be small.
If your actual cases are small, there might be no problem with using a brute-force approach.
Contrary to your claim, brute-force code is usually quite short and easy to understand.
You can easily write a function to list all possible sequences of length n drawn from Win, Draw, Loss. You can then filter them for the correct point total. Asymptotically this is probably only a little worse than the fastest algorithm, because of the near-exponential growth described above.
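As a sketch of that brute-force approach, reusing the result type from the question (the helper names all_series and points are mine):

type result = Win | Draw | Loss

let points = function Win -> 3 | Draw -> 1 | Loss -> 0

(* All sequences of length n over Win, Draw, Loss: 3^n of them. *)
let rec all_series n =
  if n = 0 then [ [] ]
  else
    List.concat
      (List.map
         (fun r -> List.map (fun rest -> r :: rest) (all_series (n - 1)))
         [ Win; Draw; Loss ])

(* Keep only the series whose point total is exactly p. *)
let series n p =
  List.filter
    (fun s -> List.fold_left (fun acc r -> acc + points r) 0 s = p)
    (all_series n)

Calling series 3 4 with this version produces exactly the six permutations listed in the question.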
A simple recursive solution would go along these lines (a sketch follows the list):
if there are 0 games to play and 0 points to earn, then there is exactly one (empty) solution;
if there are 0 games to play and 1 or more points to earn, there is no solution;
otherwise, p points must be earned in g games: any solution for p points in g-1 games can be extended to a solution by adding a Loss in front of it. If p >= 1, you can similarly add a Draw to any solution for p-1 points in g-1 games, and if p >= 3, there may also be solutions starting with a Win.
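Here is one possible OCaml rendering of that outline, as a sketch (it follows the three cases above; any names are mine):

type result = Win | Draw | Loss

(* series g p: all ways to earn exactly p points in g games. *)
let rec series g p =
  if g = 0 then (if p = 0 then [ [] ] else [])
  else
    (* a Loss earns nothing, so it can always be prepended *)
    let losses = List.map (fun s -> Loss :: s) (series (g - 1) p) in
    let draws =
      if p >= 1 then List.map (fun s -> Draw :: s) (series (g - 1) (p - 1)) else []
    in
    let wins =
      if p >= 3 then List.map (fun s -> Win :: s) (series (g - 1) (p - 3)) else []
    in
    wins @ draws @ losses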
I have written a program in C where I allocate memory to store an n-by-n matrix and then feed it to a linear algebra subroutine. I'm having a lot of trouble understanding how to identify the time complexity of these operations from a plot. In particular, I'm interested in identifying how CPU time scales as a function of n, where n is my matrix size.
To do so, I created an array of n = 2, 4, 8, ..., 512 and computed the CPU time for both operations. I repeated this process 10000 times for each n and took the mean. I therefore end up with a second array of timings that I can match against my array of n values.
It was suggested that I use a double logarithmic (log-log) plot, and I read here and here that, plotted this way, "powers show up as a straight line". This is the resulting figure (dgesv is the linear algebra subroutine I used).
Now, I'm guessing that my time complexity is O(log n), since I get straight lines for both of my operations (I am not taking the red line into consideration). I have seen the difference in shape between, say, linear complexity and logarithmic complexity, but I still have doubts about what I can say regarding the time complexity of dgesv, for instance. I'm sure there's a way of reading this that I don't know at all, so I'd be glad if someone could help me understand how to look at this plot properly.
PS: if there's a more appropriate community for this question, please let me know and I'll move it there to avoid adding noise here. Thanks, everyone.
Take your yellow line: it appears to go from (0.9, -2.6) to (2.7, 1.6), giving it a slope roughly equal to 2.5. Since you're plotting log(t) versus log(n), this means that:
log(t) = 2.5 log(n) + c
or, exponentiating both sides:
t = exp(2.5 log(n) + c) = c' n^2.5
The power of 2.5 may be an underestimate, as your dgesv likely has a cost of about (2/3)n^3 (though O(n^2.5) is theoretically possible).
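If it helps, the exponent can also be estimated directly from two measurements rather than read off the plot. A minimal sketch, with purely illustrative numbers (not taken from your data):

(* Estimate the exponent p in t ~ c * n^p from two (n, t) measurements,
   i.e. the slope of the log-log line. *)
let exponent (n1, t1) (n2, t2) =
  (log t2 -. log t1) /. (log n2 -. log n1)

(* Illustrative values only: hypothetical times for n = 64 and n = 512. *)
let () =
  Printf.printf "estimated exponent: %.2f\n" (exponent (64., 0.004) (512., 2.0))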
I have come across two dynamic programming problems. One of the problems is
What is the number of possible ways to climb a staircase with n steps, given that I can hop either 1, 2, or 3 steps at a time?
The dynamic programming approach for solving this problem goes as follows.
If C(n) is the number of ways of climbing the staircase, then
C(n) = C(n-1) + C(n-2) + C(n-3).
This is because, if we reach stair n-1, we can hop to n with a 1-step hop; or
if we reach stair n-2, we can hop to n with a 2-step hop; or
if we reach stair n-3, we can hop to n with a 3-step hop.
Just as I thought I understood the above approach, I came across the coin change problem, which is:
What is the number of ways of representing n cents, given an infinite supply of 25-cent coins (quarters), 10-cent coins (dimes), 5-cent coins (nickels) and 1-cent coins (pennies)?
It turns out the solution to this problem is not similar to the one above and is a bit more complex. That is,
C(n) = C(n-1) + C(n-5) + C(n-10) + C(n-25) is not true. I am still trying to understand the approach for solving this problem, but my question is: how is the coin change problem different from the much simpler stair-climbing problem?
In the steps problem, the order matters: (1,2) is not the same as (2,1). With the coin problem, only the number of each type of coin used matters.
Scott's solution is absolutely correct, and he mentions the crux of the difference between the two problems. Here's a little more color on the two problems to help intuitively understand the difference.
For Dynamic Programming problems that involve recursion, the trick is to get the subproblem right. Once the subproblem is correct, it is just a matter of building on top of that.
The staircase problem deals with sequences, so the subproblem is easier to see intuitively. For the coin-change problem we are dealing with counts, so the subproblem is whether or not to use a particular denomination: we compute one part of the solution using that denomination and another part without using it. That is a slightly more difficult insight to see, but once you see it, you can recursively compute the rest.
So here's one way to think about the two problems:
Staircase Sequence
Introduce one new step: the Nth step has been added. How do we compute S[N]?
S[N] = S[N-1] + S[N-2] + S[N-3]
Coin Change
Introduce a new small coin denomination. Let's say a coin of denomination 'm' has newly been introduced.
How do we now compute C[N], knowing C[N, with all coins except m]?
All the ways to reach N without coin m still hold. But the new coin denomination m fundamentally changes the ways to get to N, so to compute C[N, using m] we have to recursively account for C[N-m, using m], C[N-2m, using m], and so on.
C[N, with m] = C[N, without m] + C[N-m, with m]
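To make the contrast concrete, here is a small sketch of the two recurrences in OCaml (plain recursion without memoization, just to show the structure; the function names are mine):

(* Staircase: order matters, so every step size is tried at every position. *)
let rec stairs n =
  if n = 0 then 1
  else if n < 0 then 0
  else stairs (n - 1) + stairs (n - 2) + stairs (n - 3)

(* Coin change: order does not matter, so recurse over the list of
   denominations instead: either never use the first coin (drop it), or use
   it at least once (subtract it and keep the same list).  This is exactly
   C[N, with m] = C[N, without m] + C[N - m, with m]. *)
let rec change n coins =
  if n = 0 then 1
  else if n < 0 then 0
  else
    match coins with
    | [] -> 0
    | c :: rest -> change n rest + change (n - c) coins

For example, change 10 [25; 10; 5; 1] returns 4 (a dime; two nickels; a nickel and five pennies; ten pennies), whereas a stairs-style recursion over the same amounts would count every ordering separately.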
Hope that helps.
I'm having difficulty determining the big O of simple recursive methods. I can't wrap my head around what happens when a method is called multiple times. I would be more specific about my areas of confusion, but at the moment I'm trying to answer some homework questions, and since I don't want to cheat, I ask that anyone responding to this post come up with a simple recursive method of their own and provide a simple explanation of its big O. (Preferably in Java... a language I'm learning.)
Thank you.
You can define the order recursively as well. For instance, say you have a function f, and calculating f(n) takes k steps. Now you want to calculate f(n+1). Let's say f(n+1) calls f(n) once; then f(n+1) takes k plus some constant number of steps. Each invocation adds a constant number of extra steps, so this method is O(n).
Now look at another example. Let's say you implement Fibonacci naively by adding the two previous results:
fib(n) = { if n <= 1 return n; return fib(n-1) + fib(n-2) }
Now let's say you can calculate fib(n-2) and fib(n-1) each in about k steps. To calculate fib(n) you then need k + k = 2*k steps, and to calculate fib(n+1) you again need roughly twice as many steps as one level lower. The work roughly doubles each time n grows by one, so this seems to be O(2^n).
Admittedly, this is not very formal, but hopefully this way you can get a bit of a feel.
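The same two behaviours in a tiny OCaml sketch (my own examples, not from the answer above):

(* One recursive call per step, constant work each time: O(n). *)
let rec sum_to n = if n = 0 then 0 else n + sum_to (n - 1)

(* Two recursive calls per step: the number of calls roughly doubles
   each time n grows by one, so the running time is exponential. *)
let rec fib n = if n <= 1 then n else fib (n - 1) + fib (n - 2)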
You might want to refer to the master theorem for finding the big O of recursive methods. Here is the wikipedia article: http://en.wikipedia.org/wiki/Master_theorem
You want to think of a recursive problem like a tree. Then consider each level of the tree and the amount of work it does. Problems will generally fall into three categories: root heavy (the first level dominates the rest of the tree), balanced (each level does an equal amount of work), and leaf heavy (the last level dominates the rest of the tree).
Taking merge sort as an example:
define mergeSort(list toSort):
    if(length of toSort <= 1):
        return toSort
    list left  = toSort from [0, length of toSort/2)
    list right = toSort from [length of toSort/2, length of toSort)
    return merge(mergeSort(left), mergeSort(right))
You can see that each call of mergeSort in turn makes 2 more mergeSort calls on halves of the original list. We know that the merge procedure takes time proportional to the number of values being merged.
The recurrence relation is then T(n) = 2*T(n/2) + O(n). The 2 comes from the two recursive calls and the n/2 from each call receiving only half the elements. However, across each level of the tree there are still n elements in total to merge, so the work per level is O(n).
We know the work is evenly distributed (O(n) at each depth) and the tree is log_2(n) deep, so the big O of the recursive function is O(n*log(n)).
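For completeness, a runnable OCaml rendering of that pseudocode (just a sketch; the original answer is deliberately language-agnostic):

(* Merge two already-sorted lists; O(length of the output). *)
let rec merge left right =
  match left, right with
  | [], l | l, [] -> l
  | x :: xs, y :: ys ->
      if x <= y then x :: merge xs right else y :: merge left ys

(* Split a list into its first k elements and the rest. *)
let rec split_at k lst =
  if k = 0 then ([], lst)
  else match lst with
    | [] -> ([], [])
    | x :: xs -> let l, r = split_at (k - 1) xs in (x :: l, r)

(* T(n) = 2*T(n/2) + O(n), i.e. O(n log n). *)
let rec merge_sort to_sort =
  match to_sort with
  | [] | [ _ ] -> to_sort
  | _ ->
      let left, right = split_at (List.length to_sort / 2) to_sort in
      merge (merge_sort left) (merge_sort right)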
I'm sure most are familiar with the closest pair problem, but is there another algorithm, or a way to modify the current CP algorithm, to get the next closest pair?
An easy one, in O(n log(n)):
1. find the closest pair (p1, p2), in O(n log(n));
2. compute all pairs containing p1 or p2 (but not the pair (p1, p2) itself) and keep the closest one, call it E, in O(n);
3. remove p1 and p2 from the point set used in (1);
4. find the closest pair among the remaining points, compare it to E, and keep the closer of the two, again in O(n log(n)).
You now have the second closest pair.
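A sketch of those steps in OCaml. The closest_pair argument stands in for whatever O(n log(n)) closest-pair routine you already have; it is assumed here, not implemented:

type point = float * float

let dist (x1, y1) (x2, y2) = sqrt ((x1 -. x2) ** 2. +. (y1 -. y2) ** 2.)

(* The closer of a list of candidate pairs, if any. *)
let min_pair pairs =
  match pairs with
  | [] -> None
  | p :: rest ->
      Some
        (List.fold_left
           (fun (a, b) (c, d) -> if dist c d < dist a b then (c, d) else (a, b))
           p rest)

(* Second-closest pair, following the numbered steps above.  [closest_pair]
   is assumed to be supplied by the caller (e.g. the O(n log n)
   divide-and-conquer algorithm); it is not implemented here. *)
let second_closest_pair closest_pair points =
  let p1, p2 = closest_pair points in                              (* step 1 *)
  let touching =                                                   (* step 2 *)
    List.concat
      (List.map
         (fun q -> if q = p1 || q = p2 then [] else [ (p1, q); (p2, q) ])
         points)
  in
  let rest = List.filter (fun q -> q <> p1 && q <> p2) points in   (* step 3 *)
  let remaining =                                                  (* step 4 *)
    if List.length rest >= 2 then [ closest_pair rest ] else []
  in
  match min_pair (touching @ remaining) with                       (* step 5 *)
  | Some pair -> pair
  | None -> invalid_arg "need at least three points"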
If only a constant number of minimal distances (next pairs) is needed, it is possible to modify steps 3-5 from the Wikipedia article, redefining d_Lmin, d_Rmin and d_LRmin as constant-length lists of minimum distances. That uses c*log(n) memory.
If next is called fewer than O(n) times, then reformulating the CP problem to return the smallest distance larger than a given d amounts to the same thing as a next method. It can be implemented with the same approach as the CP algorithm. I don't see a theoretical guarantee that step 4 can still be done in linear time, but there is an advantage in not having to check points in boxes that are too small.
If next is called more than O(n) times, then it is best to calculate all the distances and sort them (if memory is not a problem).
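A sketch of that last option (precompute every pairwise distance once and sort; successive next queries then just walk the sorted list):

let dist (x1, y1) (x2, y2) = sqrt ((x1 -. x2) ** 2. +. (y1 -. y2) ** 2.)

(* All pairwise distances, sorted ascending: O(n^2) memory, O(n^2 log n) time. *)
let sorted_pairs points =
  let rec pairs = function
    | [] -> []
    | p :: rest -> List.map (fun q -> (p, q)) rest @ pairs rest
  in
  List.sort (fun (a, b) (c, d) -> compare (dist a b) (dist c d)) (pairs points)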