Given an array of size n (1 <= n <= 200000) with each entry a[i] (1 <= a[i] <= 100), count the number of subsequences that form an arithmetic progression.
A subsequence is obtained by deleting any number of elements from the original sequence.
For example, the sequence A,B,D is a subsequence of A,B,C,D,E,F obtained after removal of elements C, E and F. The relation of one sequence being the subsequence of another is a preorder.
I have written an O(n^2) solution using DP, but n^2 is on the order of 10^10 operations, so it won't be accepted.
Here is what I did.
Pseudocode:
for every element A[i]:
    for every element A[k] such that k < i:
        diff = A[i] - A[k] + 100;      // add 100 so negative differences map to non-negative indices
        dp[i][diff] += dp[k][diff] + 1;

for every element A[i]:
    for every difference d:
        ans = ans + dp[i][d];

return ans;
This gives the correct output but TLEs on the 3 big cases.
P.S. Please suggest a better solution!
Is the divide-and-conquer DP optimization required here? If so, how do I build the solution?
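For reference, here is a direct C++ rendering of that O(n^2) DP, just to make the recurrence and the +100 shift concrete. It is of course still far too slow for n = 2*10^5, the dp table alone is roughly 160 MB, and the modulus is my own assumption since the statement above doesn't mention one.

// Direct C++ version of the O(n^2) DP above (too slow for the full limits).
// dp[i][d] = number of AP subsequences of length >= 2 ending at index i whose
// common difference is d - 100 (1 <= a[i] <= 100 gives differences in [-99, 99]).
#include <cstdio>
#include <vector>
using namespace std;

int main() {
    int n;
    if (scanf("%d", &n) != 1) return 0;
    vector<int> a(n);
    for (int i = 0; i < n; ++i) scanf("%d", &a[i]);

    const int MOD = 1000000007;                     // assumed modulus, not from the statement
    vector<vector<int>> dp(n, vector<int>(200, 0)); // ~160 MB at n = 2*10^5
    long long ans = 0;
    for (int i = 0; i < n; ++i) {
        for (int k = 0; k < i; ++k) {
            int diff = a[i] - a[k] + 100;           // shifted into [1, 199]
            dp[i][diff] = (int)((dp[i][diff] + dp[k][diff] + 1LL) % MOD);
        }
        for (int d = 0; d < 200; ++d) ans = (ans + dp[i][d]) % MOD;
    }
    printf("%lld\n", ans);
    return 0;
}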
Given an array A[] with n elements, the task is to find S mod (10^9+7), in which S is the smallest perfect square which is divisible by all the elements A[i] (1<=i<=n) of the given array.
The problem is very easy if the values of A[i] and n are small, but I don't know what to do when A[i] can be up to 10^7 and n up to 10^5. Can anybody help?
The smallest integer X which is a multiple of all the A_i is called the least common multiple of the A_i. It's also true that every common multiple of the A_i is divisible by X. So S is divisible by X, or equivalently S is a multiple of X.
The LCM can be computed fairly efficiently by the algorithms mentioned in the Wikipedia article, but remember our final goal is S, a perfect square, not X. Also, the size of X (and S) is likely to be enormous given the constraints in your problem.
Thus I think the correct approach is to use a modified Sieve of Eratosthenes (or just obtain from some online source a list of primes up to 3163) to completely factor all the A_i simultaneously into their prime power factorizations. Since the A_i < 10^7 you need only include primes <= 10^3.5. Now, with each A_i factored into its prime power factorization, use the prime factorization method to find the LCM, but still retain this in prime power format; in other words, don't yet multiply everything together. Next, scan through each of the powers and add 1 to any odd powers. Now you have the prime power factorization of S. Iterate through these prime powers, multiplying each one into the product and taking the product mod (10^9+7) at each step.
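A sketch of one way to implement those steps (my own illustration, using a smallest-prime-factor sieve rather than the simultaneous factoring described above, and assuming a single test case of n followed by the A[i]; the two big arrays take roughly 80 MB):

// Sketch: smallest perfect square divisible by all A[i], mod 10^9+7.
// 1) sieve smallest prime factors up to 10^7, 2) factor every A[i] and keep the
// maximum exponent of each prime (the LCM in prime-power form), 3) round odd
// exponents up to even, 4) multiply out modulo 10^9+7.
#include <cstdio>
#include <vector>
using namespace std;

const long long MOD = 1000000007LL;
const int MAXV = 10000000;                 // A[i] <= 10^7

long long power_mod(long long base, long long exp, long long mod) {
    long long result = 1 % mod;
    base %= mod;
    while (exp > 0) {
        if (exp & 1) result = result * base % mod;
        base = base * base % mod;
        exp >>= 1;
    }
    return result;
}

int main() {
    vector<int> spf(MAXV + 1, 0);          // smallest prime factor of every value
    for (int i = 2; i <= MAXV; ++i)
        if (spf[i] == 0)
            for (long long j = i; j <= MAXV; j += i)
                if (spf[j] == 0) spf[j] = (int)i;

    int n;
    scanf("%d", &n);
    vector<int> maxExp(MAXV + 1, 0);       // maximum exponent of each prime over all A[i]
    for (int i = 0; i < n; ++i) {
        int a;
        scanf("%d", &a);
        while (a > 1) {                    // factor a via smallest prime factors
            int p = spf[a], e = 0;
            while (a % p == 0) { a /= p; ++e; }
            if (e > maxExp[p]) maxExp[p] = e;
        }
    }

    long long ans = 1;
    for (int p = 2; p <= MAXV; ++p) {
        if (maxExp[p] == 0) continue;
        int e = maxExp[p];
        if (e % 2 == 1) ++e;               // bump odd exponents so the result is a perfect square
        ans = ans * power_mod(p, e, MOD) % MOD;
    }
    printf("%lld\n", ans);
    return 0;
}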
This problem gives you a positive integer which is less than or equal to 100000 (10^5). You have to find out the following things for the number:
i. Is the number a prime number? If it is, then print YES.
ii. If the number is not a prime number, can we express it as a sum of unique prime numbers? If it is possible, then print YES. Here unique means you can use any prime number only once.
If both of the above conditions fail for the number, then print NO. For more clarification please see the input/output section and the explanations.
Input
At first you are given an integer T (T <= 100), which is the number of test cases. For each case you will be given a positive integer X which is less than or equal to 100000.
Output
For every test case, print only YES or NO.
Sample
Input:
3
7
6
10

Output:
YES
NO
YES
Case – 1 Explanation: 7 is a prime number.
Case – 2 Explanation: 6 is not a prime number. 6 can be expressed as 6 = 3 + 3 or 6 = 2 + 2 + 2, but you can’t use any prime number more than once, and there is no way to express 6 as a sum of two or three unique primes.
Case – 3 Explanation: 10 is not a prime number, but 10 can be expressed as 10 = 3 + 7 or 10 = 2 + 3 + 5. In these two expressions, every prime number is used only once.
Without employing any mathematical tricks (not sure if any exist...you'd think as a mathematician I'd have more insight here), you will have to iterate over every possible summation. Hence, you'll definitely need to iterate over every possible prime, so I'd recommend the first step being to find all the primes at most 10^5. A basic [Sieve of Eratosthenes](https://en.wikipedia.org/wiki/Sieve_of_Eratosthenes) will probably be good enough, though faster sieves exist nowadays. I know your question is language agnostic, but you could consider the following as vectorized pseudocode for such a sieve.
import numpy as np

def sieve(n):
    index = np.ones(n + 1, dtype=bool)
    index[:2] = False
    for i in range(2, int(np.sqrt(n)) + 1):
        if index[i]:
            index[i**2::i] = False
    return np.where(index)[0]
There are some other easy optimizations, but for simplicity this assumes that we have a boolean array index where entry i records whether i is prime. We start with every number marked prime, mark 0 and 1 as not prime, and then for every prime we find we mark every multiple of it as not prime. The np.where() at the end just returns the indices where index is True.
From there, we can consider a recursive algorithm for actually solving your problem. Note that you might need a fairly large number of distinct primes. The number 26 is the sum of 4 distinct primes (3 + 5 + 7 + 11); it is also the sum of 3 and 23. Since the checks are more expensive for 4 primes than for 2, I think it's reasonable to start by checking the smallest possible number of addends.
In this case, the way we're going to do that is to define an auxiliary function to find whether a number is the sum of precisely k primes and then sequentially test that auxiliary function for k from 1 to whatever the maximum possible number of addends is.
primes = sieve(10**5)

def sum_of_k_primes(x, k, excludes=()):
    if k == 1:
        if x not in excludes and x in primes:
            return (x,) + excludes
        else:
            return ()
    for p in (p for p in primes if p not in excludes):
        if x - p < 2:
            break
        temp = sum_of_k_primes(x - p, k - 1, (p,) + excludes)
        if temp:
            return temp
    return ()
Running through this, first we check the case where k is 1 (this being the base case for our recursion). That's the same as asking if x is prime and isn't in one of the primes we've already found (the tuple excludes, since you need uniqueness). If k is at least 2, the rest of the code executes instead. We check all the primes we might care about, stopping early if we'd get an impossible result (no primes in our list are less than 2). We recursively call the same function for smaller k, and if we succeed we propagate that result up the call stack.
Note that we're actually returning the smallest possible tuple of unique prime addends. This is empty if you want your answer to be "NO" as specified, but otherwise it allows you to easily come up with an explanation for why you answered "YES".
partial = np.cumsum(primes)

def max_primes(x):
    return np.argmax(partial > x)

def sum_of_primes(x):
    for k in range(1, max_primes(x) + 1):
        temp = sum_of_k_primes(x, k)
        if temp:
            return temp
    return ()
For the rest of the code, we store the partial sums of all the primes up to a given point (e.g. with primes 2, 3, 5 the partial sums would be 2, 5, 10). This gives us an easy way to check what the maximum possible number of addends is. The function just sequentially checks if x is prime, if it is a sum of 2 primes, 3 primes, etc....
As some example output, we have
>>> sum_of_primes(1001)
(991, 7, 3)
>>> sum_of_primes(26)
(23, 3)
>>> sum_of_primes(27)
(19, 5, 3)
>>> sum_of_primes(6)
()
At a first glance, I thought caching some intermediate values might help, but I'm not convinced that the auxiliary function would ever be called with the same arguments twice. There might be a way to use dynamic programming to do roughly the same thing but in a table with a minimum number of computations to prevent any duplicated efforts with the recursion. I'd have to think more about it.
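One concrete shape such a table could take (my own sketch, in C++ rather than the Python above, and not part of the answer's code): a standard 0/1-knapsack-style boolean array over the primes, filled once and then queried per test case. Scanning the sums downward for each prime is what guarantees that no prime is reused.

// Sketch of the DP idea: reach[s] = true if s is a sum of distinct primes.
// Each prime is processed once; the downward scan over s gives the usual
// 0/1-knapsack guarantee that a prime is used at most once.
#include <cstdio>
#include <vector>
using namespace std;

int main() {
    const int LIMIT = 100000;                      // X <= 10^5
    vector<bool> isPrime(LIMIT + 1, true);
    isPrime[0] = isPrime[1] = false;
    for (int i = 2; i * i <= LIMIT; ++i)
        if (isPrime[i])
            for (int j = i * i; j <= LIMIT; j += i) isPrime[j] = false;

    vector<bool> reach(LIMIT + 1, false);
    reach[0] = true;                                // empty sum
    for (int p = 2; p <= LIMIT; ++p) {
        if (!isPrime[p]) continue;
        for (int s = LIMIT; s >= p; --s)            // downward: p used at most once
            if (reach[s - p]) reach[s] = true;
    }

    int t;
    scanf("%d", &t);
    while (t--) {
        int x;
        scanf("%d", &x);
        // prime => YES; otherwise YES iff x is a sum of distinct primes
        puts(isPrime[x] || reach[x] ? "YES" : "NO");
    }
    return 0;
}

(For speed, the table can also be built with a std::bitset and reach |= reach << p, but the plain loops are enough to show the idea. Note that this only answers YES/NO and does not recover the actual addends the way the recursive version above does.)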
As far as the exact output your teacher is expecting and the language this needs to be coded in, that'll be up to you. Hopefully this helps on the algorithmic side of things a little.
int recursiveFunc(int n) {
    if (n == 1) return 0;
    for (int i = 2; i < n; i++)
        if (n % i == 0) return i + recursiveFunc(n / i);
    return n;
}
I know that complexity = (length of the tree from the root node to a leaf node) * (number of leaf nodes), but I'm having a hard time turning that into an equation.
This one is tricky, because the runtime is highly dependent on what number you provide in as input in a way that most recursive functions are not.
For starters, notice that the way that this recursion works, it takes in a number and then either
returns without making any further calls if the number is prime, or
finds the smallest proper factor of the number and recursively calls itself on the number divided by that factor.
This means that in one case the function, called on a number n, will do Θ(n) work and make no further calls (which happens if n is prime), and in the other case it will do Θ(d) work, where d is the smallest proper factor of n, and then make a recursive call on n / d (the largest proper divisor of n), which happens if n is composite.
One useful fact we'll use to analyze this function is that given a composite number n, the smallest factor d of n is never any greater than √n. If it were, then we would have that n = df for some other factor f, and since d is the smallest proper divisor, we'd have that f ≥ d, so df > √n · √n = n, which would be impossible.
With that in mind, we can argue that the worst-case runtime of this function is O(n), and in fact that happens when n is prime. Here's how to see this. Imagine the worst-case amount of time this function can take if it ends up making a recursive call. In that case, the function will do at most Θ(√n) work (let's assume our smallest divisor is as large as possible), then recursively makes a call on a number whose size is at most n / 2 (which is the absolute largest number we could get as part of the recursive call). In that case, we'd get this recurrence relation under the pessimistic assumption that we do the maximum work possible:
T(n) = T(n / 2) + √n
This solves, by the Master Theorem, to Θ(√n), which is less work than what we'd do if we had a prime number as an input.
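To see why, unroll the recurrence: the work at each level is the square root of a number at most half as large, so the terms form a rapidly shrinking geometric series dominated by its first term:

T(n) = √n + √(n/2) + √(n/4) + ... = √n (1 + 1/√2 + 1/2 + ...) = Θ(√n).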
But what happens if, instead, we do the maximum amount of work possible for some number of iterations, and then end up with a prime number and stop? In that case, using the iteration method, we'd see that the work done would be
n^(1/2) + n^(1/4) + ... + n / 2^k,
which would happen if we stopped after k iterations. In this case, notice that this expression is maximized when we pick k to be as small as possible - which would correspond to stopping as soon as possible, which happens if we pick a prime number for n.
So in this sense, the worst-case runtime of this function is Θ(n), which happens for n being a prime number, with composite numbers terminating much faster than this.
So how fast can this function be? Well, imagine, for example, that we have a number of the form p^k, where p is some prime number. In that case, this function will do Θ(p) work to discover p as a prime factor, then recursively call itself on the number p^(k-1). If you think about what this will look like, this function will end up doing Θ(p) work Θ(k) times for a total runtime of Θ(p · k). And since n = p^k, we'd have k = log_p n, so the runtime would be Θ(p log_p n). That's minimized at either p = 2 or p = 3, and in either case gives us a runtime of Θ(log n) in this case.
I strongly suspect that's the best case here, though I'm not entirely sure. But what this does mean is that
the worst-case runtime is definitely Θ(n), occurring at prime numbers, and
the best-case runtime is O(log n), which I'm fairly certain is a tight bound, but I'm not 100% sure how to prove it.
Original problem:
Let N be a positive integer (actually, N <= 2000) and let P be the set of all partitions of N, i.e. representations N = x_1 + x_2 + ... + x_k with x_1 >= x_2 >= ... >= x_k >= 1. Let A be the number of partitions in P whose largest part is less than twice the smallest part (x_1 < 2*x_k). Find A.
Input: N. Output: A - the number of such partitions.
What have I tried:
I think that this problem can be solved by a dynamic-programming algorithm. Let p(n,a,b) be the function that returns the number of partitions of n using only the numbers a..b. Then we can compute A with code like:
int Ans = 2;                        // the 1+1+...+1=N & N=N partitions
for(int a = 2; a <= N/2; a += 1){   // a - from 2 to N/2
    int b = a*2-1;
    Ans += p[N][a][b];              // add all partitions using a..b to Answer
    if(a < (a-1)*2-1){              // if a < previous b [ (a-1)*2-1 ]
        Ans -= p[N][a][(a-1)*2-1];  // then we counted the number of partitions
    }                               // using numbers a..prev_b twice
}
Next I tried to find a dynamic-programming algorithm computing p(n,a,b) for any integers a <= b <= n. This paper (.pdf) provides a recurrence for p(n,a,b) (given as an image in the original post and not reproduced here) in terms of an indicator I(n <= b), where I(n <= b) = 1 if n <= b and 0 otherwise.
Question(s):
How should I implement the algorithm from the paper? I'm new to DP problems, and as far as I can see this problem has 3 dimensions (n, a and b), which is quite tricky for me.
How does that algorithm actually work? I know how the algorithms for computing p(n,0,b) or p(n,a,n) work, but a little explanation for the general p(n,a,b) would be very helpful.
Does the original problem have a simpler solution? I'm quite sure there's another, cleaner solution, but I haven't found it.
I calculated all of A(1)-A(600) in 23 seconds with a memoization approach (top-down dynamic programming). The 3D table requires 1.7 GB of memory.
For reference: A(50) = 278, A(200) = 465202, A(600) = 38860513616.
N = 2000 requires too large a table for a 32-bit environment, and a map-based approach was too slow.
I can make a 2D table of reasonable size, but that approach requires zeroing the table at every iteration of the external loop - slow again.
A(1000) = 107292471486730 in 131 seconds. I think big-integer arithmetic might be needed for larger values to avoid Int64 overflow.
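For concreteness, here is one way such a memoized p(n, a, b) might look in C++. It uses the standard recurrence p(n,a,b) = p(n-a,a,b) + p(n,a+1,b) ("use a part equal to a" versus "use no part equal to a"), which is not necessarily the exact formula from the linked paper, and it inherits the memory problem described above: the 3D table is already about 1.7 GB around N = 600. Note also that the overlap subtraction has to fire already at a = 3, where the overlapping range [3, 3] is non-empty.

// Sketch: partitions of n into parts from [a, b], memoized top-down.
// Recurrence: either use at least one part equal to a, or use no part equal to a.
#include <cstdio>
#include <vector>
using namespace std;

vector<vector<vector<long long>>> memo;   // memo[n][a][b], -1 = not computed yet

long long p(int n, int a, int b) {
    if (n == 0) return 1;                 // the empty partition
    if (a > b || n < a) return 0;         // no usable part fits
    long long &res = memo[n][a][b];
    if (res != -1) return res;
    return res = p(n - a, a, b) + p(n, a + 1, b);
}

int main() {
    int N;
    scanf("%d", &N);
    // For N near 2000 this allocation is far too large, as discussed above.
    memo.assign(N + 1, vector<vector<long long>>(N + 2, vector<long long>(N + 2, -1)));

    long long ans = (N == 1) ? 1 : 2;     // 1+1+...+1 = N and N = N (they coincide for N = 1)
    for (int a = 2; a <= N / 2; ++a) {
        int b = 2 * a - 1;                // count partitions using only parts a..2a-1
        ans += p(N, a, b);
        if (a >= 3)                       // overlap [a, 2(a-1)-1] with the previous range
            ans -= p(N, a, 2 * (a - 1) - 1);   // is non-empty for every a >= 3
    }
    printf("%lld\n", ans);
    return 0;
}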
I am trying to write a program to check whether a number N can be expressed as the sum of two cubes, i.e. N = a^3 + b^3.
This is my code; the loop below runs about N^(1/3) times per test case:
#include <iostream>
#include <cmath>
#define ll unsigned long long
using namespace std;

int main()
{
    ios_base::sync_with_stdio(false);
    bool flag = false;
    ll t, N;
    cin >> t;
    while (t--)
    {
        cin >> N;
        flag = false;
        for (ll i = 1; i <= (ll)cbrtl(N / 2); i++)   // i must be ll: i*i*i overflows int for large N
        {
            if (!(cbrtl(N - i * i * i) - (ll)cbrtl(N - i * i * i))) { flag = true; break; }
        }
        if (flag) cout << "Yes\n"; else cout << "No\n";
    }
    return 0;
}
As the time limit is 2 s, this program is getting TLE. Can anyone suggest a faster approach?
I also posted this on Stack Exchange, so sorry if you consider it a duplicate, but I really don't know whether these are the same or different boards (Exchange and Overflow). My profile appears different here.
==========================
There is a faster algorithm to check whether a given integer is a sum (or difference) of two cubes, n = a^3 + b^3.
I don't know whether this algorithm is already known (probably yes, but I can't find it in books or on the internet). I discovered it and use it to check integers up to n < 10^18.
This process uses a single trick:
4(a^3+b^3)/(a+b) = (a+b)^2 + 3(a-b)^2
We don't know in advance what "a" and "b" would be, and so we also don't know "(a+b)", but we do know that "(a+b)" must divide (a^3+b^3), so if you have a fast prime-factorization routine you can quickly enumerate the divisors of (a^3+b^3) and then check whether
(4(a^3+b^3)/divisor - divisor^2)/3 = square
When (and if) you find a square, you have divisor = (a+b) and sqrt(square) = (a-b), so you have a and b.
If no square is found, the number is not a sum of two cubes.
We know divisor <= (4(a^3+b^3))^(1/3), and this limit speeds things up, because when you are assembling the divisors of (a^3+b^3) you can immediately discard those greater than the limit.
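To make the algebra concrete, here is a small C++ sketch of the check (my own illustration, not the poster's code). For simplicity it tries every candidate d up to the cube-root limit instead of assembling divisors from a prime factorization, so it shows only the algebraic test, not the speed advantage; it also assumes n is at most about 10^18 so that 4n fits in an unsigned 64-bit integer, and it looks for a, b >= 1.

// For each divisor d of n with d <= (4n)^(1/3):
//   4n/d - d^2 must equal 3*(a-b)^2, and then a+b = d, |a-b| = the square root.
#include <cstdio>
#include <cmath>
typedef unsigned long long ull;

bool sumOfTwoCubes(ull n, ull &a, ull &b) {
    ull limit = (ull)cbrtl(4.0L * (long double)n) + 2;   // d = a+b satisfies d^3 <= 4n
    for (ull d = 1; d <= limit; ++d) {
        if (n % d != 0) continue;                  // d must divide a^3 + b^3
        ull q = 4 * n / d;                         // q = (a+b)^2 + 3(a-b)^2 when d = a+b
        if (q < d * d || (q - d * d) % 3 != 0) continue;
        ull s2 = (q - d * d) / 3;                  // candidate (a-b)^2
        ull s = (ull)sqrtl((long double)s2);
        while (s > 0 && s * s > s2) --s;           // correct floating-point rounding
        while ((s + 1) * (s + 1) <= s2) ++s;
        if (s * s != s2) continue;                 // must be a perfect square
        if (s >= d || (d + s) % 2 != 0) continue;  // need positive integers a, b
        a = (d + s) / 2;                           // a + b = d, a - b = s
        b = (d - s) / 2;
        return true;
    }
    return false;
}

int main() {
    ull n, a, b;
    while (scanf("%llu", &n) == 1) {
        if (sumOfTwoCubes(n, a, b)) printf("%llu = %llu^3 + %llu^3\n", n, a, b);
        else printf("%llu is not a sum of two positive cubes\n", n);
    }
    return 0;
}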
Now some comparisons with other algorithms - for n = 10^18, using brute force you would have to test all numbers below 10^6 to know the answer. On the other hand, to build all the divisors of a number near 10^18 you may need primes up to 10^9.
The maximum quantity of different primes you could fit into 10^9 is 9 (2*3*5*7*11*13*17*19*23 is about 2.2*10^8, and multiplying by 29 as well already exceeds 6*10^9), so we have at most 2^9-1 different combinations of primes (which assemble the divisors) to check in the worst case, many of them discarded because of the limit.
To compute prime factors I use a table of the first 60,000,000 primes, which works very well in this range.
Miguel Velilla
To find all the pairs of integers x and y whose cubes sum to n, set x to the largest integer whose cube is at most n and set y to 0, then repeatedly: add 1 to y if the sum of the cubes is less than n, subtract 1 from x if the sum of the cubes is greater than n, and output the pair otherwise, stopping when x and y cross. If you only want to know whether or not such a pair exists, you can stop as soon as you find one.
Let us know if you have trouble coding this algorithm.
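A minimal C++ sketch of this two-pointer search, assuming n fits in an unsigned 64-bit integer and the Yes/No output format from the question above (start y at 1 instead of 0 if both cubes must be positive):

// Two-pointer search: x starts at the largest cube root, y at 0, and they walk
// toward each other; y = 0 also counts n that are perfect cubes (b = 0).
#include <cstdio>
#include <cmath>
typedef unsigned long long ull;

bool isSumOfTwoCubes(ull n) {
    ull x = (ull)cbrtl((long double)n) + 1;
    while (x * x * x > n) --x;            // largest x with x^3 <= n
    ull y = 0;
    while (y <= x) {
        ull sum = x * x * x + y * y * y;
        if (sum == n) return true;        // found n = x^3 + y^3
        if (sum < n) ++y;                 // sum too small: grow the smaller cube
        else --x;                         // sum too big: shrink the larger cube
    }
    return false;
}

int main() {
    int t;
    if (scanf("%d", &t) != 1) return 0;
    while (t--) {
        ull n;
        scanf("%llu", &n);
        puts(isSumOfTwoCubes(n) ? "Yes" : "No");
    }
    return 0;
}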