I was wondering if my thought process here is sound. After digging for a bit into the time complexity of general code, say:
Alg(x):
statements...
functions...
...
Can we say that T(Alg(x)) = T1(statements) + T2(functions) + T3(...), then break each term apart and keep going in depth until we can go no further? (If that statement is correct, I can see how this eventually runs into the Halting Problem.)
From (1) (assuming it holds), for any non-recursive algorithm, can we then say T(Alg(x)) is computed like this:
Alg(x):
block 1 -> {
    for (i = 0; i < N; i++) {
        for (j = 0; j < M; j++) {
            statements
        }
    }
}
block 2 -> {
    for j -> k   // T(K)
    call_1()     // T(N_1)
    call_2()     // T(N_2)
}
T(Alg(x)) = T1(b1) + T2(b2)
where T1(b1) = T(N) * (T(M) * T(s)) in general, where N and M are the loop bounds and s stands for the inner statements, which may themselves be broken down further.
As for T2(b2), we have T(K) + T(N_1) + T(N_2).
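To make that accounting concrete, here is a minimal sketch in Python (my own illustration; alg, call_1, and call_2 are made-up names standing in for the blocks above):

def call_1():   # stand-in helper with its own cost T(N_1)
    return 0

def call_2():   # stand-in helper with its own cost T(N_2)
    return 0

def alg(N, M, K):
    total = 0
    # block 1: nested loops -> T1(b1) = N * (M * O(1))
    for i in range(N):
        for j in range(M):
            total += 1          # the O(1) inner "statements"
    # block 2: one loop plus two calls -> T2(b2) = T(K) + T(N_1) + T(N_2)
    for j in range(K):          # T(K)
        total += 1
    total += call_1()           # T(N_1)
    total += call_2()           # T(N_2)
    return total                # T(Alg) = T1(b1) + T2(b2)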
I have the following problem to solve: given a number N and 1 <= k <= N, count the number of ways to write N as a sum of integers from 1 to k. Terms may repeat (e.g., if N=3 and k=2, (1,1,1) is a valid sum), but permutations must not be counted separately (e.g., if N=3 and k=2, count (1,2) and (2,1) as a single solution). I have implemented the recursive Python code below, but I'd like to find a better solution (maybe with dynamic programming?). It seems similar to the triple step problem, but with the extra constraint of not counting permutations.
def find_num_sums_aux(n, min_k, max_k):
    # base case: an empty sum adds up to 0 in exactly one way
    if n == 0:
        return 1
    count = 0
    # the lower bound min_k keeps terms non-decreasing, so we evaluate
    # only ordered solutions and prevent permutations
    for i in range(min_k, max_k + 1):
        if n - i >= 0:
            count += find_num_sums_aux(n - i, i, max_k)
    return count

def find_num_sums(n, k):
    return find_num_sums_aux(n, 1, k)
This is a standard dynamic programming problem (a counting variant of subset sum / coin change).
Let's define the function f(i, j) as the number of ways you can get the sum j using numbers from (1...i); the answer to your problem is then f(k, n).
For each number x in the range (1...i), x might be part of the sum j or might not, so we need to count both possibilities.
Note: f(i, 0) = 1 for any i, which means there is exactly one way to get the sum 0: take no numbers at all from (1...i).
Here is the code written in C++:
#include <iostream>
using namespace std;

int main() {
    int n = 10;
    int k = 7;
    int f[8][11]; // sized for k = 7, n = 10
    // initialize the table with zeroes
    for (int i = 0; i <= k; i++)
        for (int j = 0; j <= n; j++)
            f[i][j] = 0;
    f[0][0] = 1;
    for (int i = 1; i <= k; i++) {
        for (int j = 0; j <= n; j++) {
            if (j == 0)
                f[i][j] = 1;
            else {
                f[i][j] = f[i - 1][j]; // without adding i to the sum j
                if (j - i >= 0)
                    f[i][j] = f[i][j] + f[i - 1][j - i]; // adding i to the sum j
            }
        }
    }
    cout << f[k][n] << endl; // print f(k, n)
    return 0;
}
Update
To handle the case where elements can repeat (e.g., (1,1,1) gives the sum 3), you just need to allow picking the same element multiple times, by changing the following line of code:
f[i][j] = f[i][j] + f[i - 1][j - i]; // adding i to the sum j
To this:
f[i][j] = f[i][j] + f[i][j - i]; // staying in row i lets us reuse the number i
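Putting the update together, here is a complete version of the same table in Python (my own sketch following the C++ above; with repetition allowed, this is the classic coin-change counting recurrence):

def count_sums(n, k):
    # f[i][j] = number of ways to write j as a sum of numbers from 1..i,
    # where each number may be used any number of times
    f = [[0] * (n + 1) for _ in range(k + 1)]
    for i in range(k + 1):
        f[i][0] = 1                     # one way to make 0: take nothing
    for i in range(1, k + 1):
        for j in range(1, n + 1):
            f[i][j] = f[i - 1][j]       # sums that never use i
            if j - i >= 0:
                f[i][j] += f[i][j - i]  # sums that use at least one i
    return f[k][n]

print(count_sums(3, 2))  # 2: (1,1,1) and (1,2)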
Given an integer n, find 2^n. Here are two methods I know:
Method 1
int a = 1;
for (int i = 0; i < n; ++i)
    a = a << 1;
Method 2
int a = (int) Math.pow(2, n); // Math.pow returns a double, so a cast is needed
Given how fast bitshifts are, I was wondering which method would be faster. Also, how does Math.pow() work and why do people generally say it is slow?
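For context, the generic fast technique for integer powers is exponentiation by squaring, which needs only O(log n) multiplications; here is a sketch in Python (my own illustration, not a claim about how Math.pow is implemented). Math.pow works on doubles and is typically computed with exp/log-based floating-point routines, which carry more overhead and rounding concerns than integer arithmetic, which is the usual reason it's called slow for integer work. For 2^n specifically, a single shift 1 << n is the direct answer (watch for overflow once n reaches the width of the integer type).

def ipow(base, exp):
    # exponentiation by squaring: O(log exp) multiplications
    result = 1
    while exp > 0:
        if exp & 1:      # low bit set: fold the current base power into the result
            result *= base
        base *= base     # square the base, halve the exponent
        exp >>= 1
    return result

assert ipow(2, 10) == (1 << 10) == 1024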
I know that the Fibonacci algorithm can be programmed without recursion, like this:
int fibo(int n){
    if(n <= 1){
        return n;
    }
    int fibo = 1;
    int fiboPrev = 1;
    for(int i = 2; i < n; ++i){
        int temp = fibo;
        fibo += fiboPrev;
        fiboPrev = temp;
    }
    return fibo;
}
and also that the recursive Fibonacci has a complexity of roughly O(2^n). From what I see, the non-recursive algorithm is O(n), so it seems way more efficient. Is my calculation correct, or is there any hidden complexity in the non-recursive solution?
Evaluate the complexity of the implementation on its own. In this case, the complexity with respect to the input n is determined by the for loop, which runs a number of iterations directly proportional to n, and each iteration does a constant amount of work. Therefore, the complexity is O(n): linear. There is no hidden complexity.
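For contrast, the O(2^n) blow-up in the naive recursive version comes purely from recomputing the same subproblems; memoizing them (a quick Python sketch below) also makes the recursion O(n), so the loop's advantage is a constant factor, not asymptotic:

from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # each distinct n is computed once, so this is O(n) time and space
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # 55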
What is the Big-O time complexity ( O ) of the following recursive code?
public static int abc(int n) {
    if (n <= 2) {
        return n;
    }
    int sum = 0;
    for (int j = 1; j < n; j *= 2) {
        sum += j;
    }
    for (int k = n; k > 1; k /= 2) {
        sum += k;
    }
    return abc(n - 1) + sum;
}
My answer is O(n log(n)). Is it correct?
From where I'm sitting, I think the runtime is O(n log n). Here's why.
You are making about n recursive calls (each call reduces n by 1 until n <= 2), and the work done in each call depends on n:
the two loops each run about log2(n) iterations (one doubles j up to n, the other halves k down to 1), so each call performs roughly 2*log(n) additions.
That gives the recurrence T(n) = T(n-1) + O(log n), which sums to O(log 3 + log 4 + ... + log n) = O(n log n). In the worst case n is extremely large, but the growth rate doesn't change. The best case is n <= 2, where the function returns immediately and no looping occurs.
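As a sanity check, here is a quick sketch in Python (my own, mirroring the Java logic above) that counts the inner-loop iterations across all recursive calls and compares the total against n*log2(n):

import math

def abc_ops(n):
    # total inner-loop iterations performed by abc(n) across all recursive calls
    ops = 0
    while n > 2:          # one pass per call: abc(n), abc(n-1), ..., abc(3)
        j = 1
        while j < n:      # doubling loop: about log2(n) iterations
            ops += 1
            j *= 2
        k = n
        while k > 1:      # halving loop: about log2(n) iterations
            ops += 1
            k //= 2
        n -= 1
    return ops

for n in (10, 100, 1000):
    print(n, abc_ops(n), round(n * math.log2(n)))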