I have the following pseudocode for incrementing a binary counter:
Increment(B)
    i = 0
    while B[i] = 1
        flip B[i] to zero
        increment i by 1
    B[i] = 1
I have been told that the runtime is O(log n), but I can't see why - the loop looks like it might visit all the bits.
What am I missing?
If you have a binary counter representing the number n, then it contains a total of Θ(log n) bits (since the i-th bit represents the value 2^i, the values grow exponentially). If you let b denote the number of bits, it is easy to see that the runtime of the above algorithm is O(b), since each bit is visited at most once per call. And since b = Θ(log n), the time complexity ends up being O(log n).
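For concreteness, here's a minimal C++ sketch of the increment above, storing the counter as a little-endian bit vector (that representation, and growing the vector on overflow, are my assumptions; the pseudocode doesn't say how B is stored):

#include <iostream>
#include <vector>

// A minimal sketch of Increment(B), storing the counter as a little-endian
// bit vector (B[0] is the least significant bit). Each loop iteration touches
// one bit, so a single increment costs O(b) where b = B.size().
void increment(std::vector<int>& B) {
    std::size_t i = 0;
    while (i < B.size() && B[i] == 1) {  // flip the trailing run of 1s to 0
        B[i] = 0;
        ++i;
    }
    if (i == B.size())
        B.push_back(1);                  // the counter grew by one bit
    else
        B[i] = 1;                        // set the first 0 bit to 1
}

int main() {
    std::vector<int> B{1, 1, 0, 1};      // 1011 read LSB-first, i.e. the value 11
    increment(B);
    for (int bit : B) std::cout << bit;  // prints 0011 LSB-first, i.e. the value 12
    std::cout << '\n';
}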
Hope this helps!
int recursiveFunc(int n) {
    if (n == 1) return 0;
    for (int i = 2; i < n; i++)                        // scan for the smallest factor of n
        if (n % i == 0) return i + recursiveFunc(n / i);
    return n;                                          // no factor found: n is prime
}
I know Complexity = length of tree from root node to leaf node * number of leaf nodes, but I'm having a hard time turning that into an equation.
This one is tricky, because the runtime is highly dependent on what number you provide in as input in a way that most recursive functions are not.
For starters, notice that the way that this recursion works, it takes in a number and then either
returns without making any further calls if the number is prime, or
finds the smallest proper factor d of the number and recursively calls itself on the number divided by d.
This means that in one case the function, called on a number n, will do Θ(n) work and make no further calls (which happens if n is prime), and in the other case it will do Θ(d) work and then make a recursive call on the number n / d, which happens if n is composite and d is its smallest proper factor.
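For example, recursiveFunc(21) scans i = 2, 3, finds the factor 3, and recurses on 7; recursiveFunc(7) scans i = 2 through 6, finds no factor, and returns 7, so the overall result is 3 + 7 = 10 after roughly 2 + 5 loop iterations.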
One useful fact we'll use to analyze this function is that the smallest proper factor d of a composite number n is never greater than √n. If it were, then we would have n = df for some other factor f, and since d is the smallest proper factor, we'd have f ≥ d, so df > √n · √n = n, which is impossible.
With that in mind, we can argue that the worst-case runtime of this function is O(n), and that it occurs when n is prime. Here's how to see this. Think about the most work the function can do if it ends up making a recursive call. In that case it does at most Θ(√n) work (assuming the smallest factor is as large as possible), then recursively calls itself on a number of size at most n / 2 (the absolute largest value the argument of the recursive call can take). Under the pessimistic assumption that we always do the maximum possible work, we get the recurrence
T(n) = T(n / 2) + √n
This solves, by the Master Theorem, to Θ(√n) (here a = 1, b = 2, and f(n) = √n, so log_b a = 0 and the √n term dominates), which is less work than we'd do if the input were prime.
But what happens if, instead, we do the maximum amount of work possible for some number of iterations, and then end up with a prime number and stop? In that case, using the iteration method, we'd see that the work done would be
√n + √(n / 2) + √(n / 4) + ... + n / 2^k,
which would happen if we stopped after k iterations. Notice that this expression is maximized when k is as small as possible, since the final n / 2^k term dominates; the smallest possible k corresponds to stopping as soon as possible, which is exactly what happens when n itself is prime.
So in this sense, the worst-case runtime of this function is Θ(n), which happens for n being a prime number, with composite numbers terminating much faster than this.
So how fast can this function be? Well, imagine, for example, that we have a number of the form p^k, where p is some prime number. In that case, this function will do Θ(p) work to discover p as a prime factor, then recursively call itself on the number p^(k-1). If you think about what this will look like, the function will end up doing Θ(p) work Θ(k) times, for a total runtime of Θ(pk). And since n = p^k, we'd have k = log_p n, so the runtime would be Θ(p log_p n). That's minimized at either p = 2 or p = 3, and in either case gives us a runtime of Θ(log n).
I strongly suspect that's the best case here, though I'm not entirely sure. But what this does mean is that
the worst-case runtime is definitely Θ(n), occurring at prime numbers, and
the best-case runtime is O(log n), which I'm fairly certain is a tight bound, but I'm not 100% sure how to prove it.
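If it helps to see this empirically, here is an instrumented copy of the function (my own test harness, not part of the question) that counts loop iterations for a power of two versus a prime:

#include <iostream>

// Instrumented copy of recursiveFunc: 'ops' counts loop iterations as a rough
// proxy for the work done.
static long long ops = 0;

int recursiveFunc(int n) {
    if (n == 1) return 0;
    for (int i = 2; i < n; i++) {
        ++ops;
        if (n % i == 0) return i + recursiveFunc(n / i);
    }
    return n;
}

int main() {
    const int tests[] = {1 << 20 /* 2^20 */, 1000003 /* prime */};
    for (int n : tests) {
        ops = 0;
        recursiveFunc(n);
        std::cout << "n = " << n << ": " << ops << " loop iterations\n";
    }
    // The power of two finishes after about 20 cheap calls, while the prime
    // does roughly n iterations in a single call.
}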
Given an array of size n (1 <= n <= 200000) with each entry a[i] (1 <= a[i] <= 100), count the number of subsequences that form an arithmetic progression.
A subsequence is a sequence obtained by removing any number of elements from the original sequence.
For example, the sequence A,B,D is a subsequence of A,B,C,D,E,F obtained after removal of elements C, E and F. The relation of one sequence being the subsequence of another is a preorder.
I have written an O(n^2) solution using DP, but here n^2 is about 4*10^10, so it won't get accepted.
Here is what I did.
Pseudocode:
for every element A[i]:
    for every element A[k] such that k < i:
        diff = A[i] - A[k] + 100    (the +100 shifts negative differences to valid indices)
        dp[i][diff] += dp[k][diff] + 1;
for every element A[i]:
    for every difference d:
        ans = ans + dp[i][d];
return ans;
This is giving correct output but TLE for 3 big cases.
P.S. Please suggest a better solution!
Is the divide-and-conquer DP optimization required here? If yes, please tell me how to build the solution.
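For reference, here is a runnable C++ rendering of the O(n^2) DP sketched in the pseudocode above (my reading of it; the sample input and 0-based indexing are mine):

#include <cstdint>
#include <iostream>
#include <vector>

// dp[i][d] counts the AP subsequences of length >= 2 that end at index i with
// common difference d - 100; the +100 shift keeps negative differences inside
// the table. Whether length-1 subsequences should also be counted depends on
// the exact problem statement.
int main() {
    std::vector<int> a = {1, 2, 3};   // sample input
    int n = (int)a.size();
    std::vector<std::vector<int64_t>> dp(n, std::vector<int64_t>(201, 0));

    int64_t ans = 0;
    for (int i = 0; i < n; ++i) {
        for (int k = 0; k < i; ++k) {
            int diff = a[i] - a[k] + 100;
            dp[i][diff] += dp[k][diff] + 1;   // extend every AP ending at k, plus the new pair (k, i)
        }
        for (int d = 0; d <= 200; ++d) ans += dp[i][d];
    }
    std::cout << ans << "\n";   // for {1,2,3}: (1,2), (1,3), (2,3) and (1,2,3) -> 4
}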
I am trying to write a program to check whether a number N can be expressed as the sum of two cubes, i.e. N = a^3 + b^3.
This is my code, which does O(N^(1/3)) work per test case:
#include <iostream>
#include <math.h>
#define ll unsigned long long
using namespace std;

int main()
{
    ios_base::sync_with_stdio(false);
    bool flag = false;
    ll t, N;
    cin >> t;
    while (t--)
    {
        cin >> N;
        flag = false;
        for (ll i = 1; i <= (ll)cbrtl(N / 2); i++)   // i must be ll so i*i*i doesn't overflow an int
        {
            // fractional part of the cube root is zero => N - i^3 is a perfect cube (floating-point check)
            if (!(cbrtl(N - i * i * i) - (ll)cbrtl(N - i * i * i)))
            {
                flag = true;
                break;
            }
        }
        if (flag) cout << "Yes\n"; else cout << "No\n";
    }
    return 0;
}
As the time limit is 2 s, this program is giving TLE. Can anyone suggest a faster approach?
I also posted this on Stack Exchange, so sorry if you consider it a duplicate, but I really don't know whether these are the same or different boards (Exchange and Overflow); my profile appears different here.
==========================
There is a faster algorithm to check whether a given integer is a sum (or difference) of two cubes n = a^3 + b^3.
I don't know if this algorithm is already known (probably yes, but I can't find it in books or on the internet). I discovered it and use it to check integers up to n < 10^18.
This process uses a single trick
4(a^3+b^3)/(a+b) = (a+b)^2 + 3(a-b)^2
We don't know in advance what "a" and "b" will be, and hence what "(a+b)" will be, but we do know that "(a+b)" must divide (a^3+b^3). So if you have a fast prime-factorization routine, you can quickly enumerate the divisors of (a^3+b^3) and then check whether
(4(a^3+b^3)/divisor - divisor^2)/3 = square
When (and if) you find a square, you have divisor = (a+b) and sqrt(square) = (a-b), so you can recover a and b.
If no square is found, the number is not a sum of two cubes.
We know divisor <= (4(a^3+b^3))^(1/3), and this limit speeds up the task, because when you are assembling the divisors of (a^3+b^3) you can immediately discard those greater than the limit.
Now some comparisons with other algorithms: for n = 10^18, brute force would have to test all numbers up to 10^6 to know the answer. On the other hand, to build all the divisors of a number near 10^18 you need primes up to 10^9.
The maximum number of distinct primes whose product fits into 10^9 is 9 (2*3*5*7*11*13*17*19*23 ≈ 2.2*10^8), so in the worst case there are 2^9 - 1 different combinations of primes (which assemble the divisors) to check, many of them discarded because of the limit.
To compute prime factors I use a table of the first 60,000,000 primes, which works very well in this range.
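Here is a rough C++ sketch of the check described above (my own rendering, not Miguel's code); it finds divisors by plain trial division up to (4n)^(1/3), so it shows only the square test, not the prime-table speedup:

#include <cstdint>
#include <cmath>
#include <iostream>

// For each divisor d of n with d^3 <= 4n, test whether (4n/d - d^2)/3 is a
// perfect square; if so, d = a+b and the square root is a-b.
bool sumOfTwoCubes(uint64_t n, int64_t& a, int64_t& b) {
    for (uint64_t d = 1; d * d * d <= 4 * n; ++d) {
        if (n % d != 0) continue;            // d must divide n = a^3 + b^3
        uint64_t q = 4 * n / d;              // 4n/(a+b) = (a+b)^2 + 3(a-b)^2 when d = a+b
        uint64_t r = q - d * d;
        if (r % 3 != 0) continue;
        uint64_t s2 = r / 3;
        uint64_t s = (uint64_t)std::llround(std::sqrt((double)s2));
        while (s > 0 && s * s > s2) --s;     // correct any floating-point error
        while ((s + 1) * (s + 1) <= s2) ++s;
        if (s * s != s2) continue;           // (4n/d - d^2)/3 must be a perfect square
        if ((d + s) % 2 != 0) continue;      // a and b must come out integral
        a = (int64_t)((d + s) / 2);          // d = a + b, s = a - b
        b = ((int64_t)d - (int64_t)s) / 2;   // b <= 0 would mean a difference of cubes instead
        if (b >= 1) return true;
    }
    return false;
}

int main() {
    int64_t a, b;
    uint64_t n = 1729;                       // taxicab number: 1^3 + 12^3 = 9^3 + 10^3
    if (sumOfTwoCubes(n, a, b))
        std::cout << n << " = " << a << "^3 + " << b << "^3\n";
    else
        std::cout << n << " is not a sum of two positive cubes\n";
}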
Miguel Velilla
To find all the pairs of integers x and y whose cubes sum to n: set x to the largest integer no greater than the cube root of n, and set y to 0. Then repeatedly add 1 to y if the sum of the cubes is less than n, subtract 1 from x if the sum of the cubes is greater than n, and output the pair otherwise, stopping when x and y cross. If you only want to know whether or not such a pair exists, you can stop as soon as you find one.
Let us know if you have trouble coding this algorithm.
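A minimal C++ sketch of this two-pointer search, assuming n fits in 64 bits (the function name and test values are mine):

#include <cstdint>
#include <cmath>
#include <iostream>

// x starts at the largest integer with x^3 <= n, y starts at 0; they move
// toward each other, so the loop runs O(n^(1/3)) times in total.
bool hasCubePair(uint64_t n) {
    uint64_t x = (uint64_t)std::cbrtl((long double)n);
    while ((x + 1) * (x + 1) * (x + 1) <= n) ++x;    // fix any cbrt rounding error
    while (x > 0 && x * x * x > n) --x;
    uint64_t y = 0;
    while (y <= x) {
        uint64_t sum = x * x * x + y * y * y;
        if (sum < n) ++y;                            // too small: raise the low end
        else if (sum > n) --x;                       // too big: lower the high end
        else {
            std::cout << n << " = " << x << "^3 + " << y << "^3\n";
            return true;                             // stop at the first pair found
        }
    }
    return false;
}

int main() {
    std::cout << (hasCubePair(1729) ? "Yes" : "No") << '\n';   // 12^3 + 1^3
    std::cout << (hasCubePair(5)    ? "Yes" : "No") << '\n';
}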
I have a pretty easy question (I think). As much as I've tried, I cannot find an answer to it.
I am creating a function for which I want the user to enter two numbers. The first is the number of terms of a certain infinite series to add together. The second is the number of digits to which the user would like the truncated sum to be accurate.
Say the terms of the sequence are a_i. How much precision n would be required in MPFR to ensure that the result of adding these a_i, from i = 0 up to the number of terms the user entered, is accurate to the number of digits the user asked for?
By the way, I'm adding the a_i in a naive way.
Any help will be much appreciated.
Thanks,
Rick
You can convert between decimal digits of precision, d, and binary digits of precision, b, with logarithms
b = d × log(10) / log(2)
A little rearranging shows why
b × log(2) = d × log(10)
log(2^b) = log(10^d)
2^b = 10^d
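For example, d = 15 decimal digits gives b = 15 × log(10) / log(2) ≈ 49.8, so you would need at least 50 bits before accounting for rounding in the summation.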
Each term of the series (and each addition) will introduce a rounding error at the least significant digit so, assuming each of the t terms involves n (two argument) arithmetic operations, you will want to add an extra
log(t * (n+2))/log(2)
bits.
You'll need to round the number of bits of precision up to be sure that you have enough room for your decimal digits of precision
b = ceil((d*log(10.0) + log(t*(n+2)))/log(2.0));
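As a sketch of how that might be wired into MPFR (the series 1/k^2, the operation count n = 2, and the variable names are placeholder assumptions, not part of the question):

#include <cmath>
#include <cstdio>
#include <mpfr.h>

// Pick a working precision from the requested decimal digits d, the number of
// terms t, and the operations per term n, then sum a toy series naively.
// Compile with: g++ file.cpp -lmpfr -lgmp
int main() {
    long d = 15;     // decimal digits the user asked for
    long t = 1000;   // number of terms the user asked for
    long n = 2;      // arithmetic operations per term (assumed for this toy series)

    long b = (long)std::ceil((d * std::log(10.0) + std::log((double)(t * (n + 2)))) / std::log(2.0));
    std::printf("working precision: %ld bits\n", b);

    mpfr_t sum, term;
    mpfr_init2(sum, b);
    mpfr_init2(term, b);
    mpfr_set_ui(sum, 0, MPFR_RNDN);

    for (long k = 1; k <= t; ++k) {              // naive left-to-right summation
        mpfr_set_ui(term, 1, MPFR_RNDN);
        mpfr_div_ui(term, term, (unsigned long)(k * k), MPFR_RNDN);
        mpfr_add(sum, sum, term, MPFR_RNDN);
    }

    mpfr_printf("sum of first %ld terms of 1/k^2: %.20Rg\n", t, sum);
    mpfr_clears(sum, term, (mpfr_ptr)0);
    return 0;
}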
Finally, you should be aware that the terms may introduce cancellation errors, in which case this simple calculation will dramatically underestimate the required number of bits, even assuming I've got it right in the first place ;-)
If x is an n-bit integer, what is the size (in bits) of x^2?
I think the answer is O(n); is that correct? The way I thought about it is that adding a number to itself that number of times means there will be n operations, therefore O(n). Is my understanding correct?
Let's suppose x has n bits. This means x = Θ(2^n). Therefore, x^2 = Θ(2^n · 2^n) = Θ(2^(2n)), so the number now has about twice as many bits as before. This means that if there were n bits to begin with, there are now about 2n = Θ(n) bits.
While the answer you gave of O(n) is correct, your reasoning is invalid. Note that the question isn't asking how long it takes to compute x^2, but rather how many bits x^2 contains. The time needed to compute x^2 is a different question.
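If you'd like to see this numerically, here's a quick C++ check (a toy illustration, not part of the original answer):

#include <cstdint>
#include <iostream>

// Quick numeric check that squaring roughly doubles the bit length.
int bitLength(uint64_t v) {
    int bits = 0;
    while (v > 0) { v >>= 1; ++bits; }
    return bits;
}

int main() {
    const uint64_t xs[] = {5, 100, 65535, 1000003};
    for (uint64_t x : xs)
        std::cout << x << ": " << bitLength(x) << " bits, x^2: "
                  << bitLength(x * x) << " bits\n";
}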
Hope this helps!