I was doing a ZTM course where the instructor said that the Quick Select algorithm can have O(1) space complexity even in the worst case by using tail recursion, since it only makes one recursive call.
His code was:
const quickSelect = function (nums, left, right, indexToFind) {
  const partitionIndex = getPartition(nums, left, right);
  if (partitionIndex === indexToFind) {
    return nums[partitionIndex];
  } else if (indexToFind < partitionIndex) {
    return quickSelect(nums, left, partitionIndex - 1, indexToFind);
  } else {
    return quickSelect(nums, partitionIndex + 1, right, indexToFind);
  }
};
And here is the optimized version of Quick Sort from a GFG article; it looks like quickSelect in that it makes only one recursive call each time.
Code:
def quickSort(arr, low, high):
    while low < high:
        # pi is the partitioning index; arr[pi] is now in its final place
        pi = partition(arr, low, high)
        # If the left part is smaller, recurse on the left part
        # and handle the right part iteratively
        if pi - low < high - pi:
            quickSort(arr, low, pi - 1)
            low = pi + 1
        # Otherwise recurse on the right part
        # and handle the left part iteratively
        else:
            quickSort(arr, pi + 1, high)
            high = pi - 1
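(The snippet assumes a partition helper that the excerpt does not show; a minimal Lomuto-style sketch, added here only for completeness, could be:)

def partition(arr, low, high):
    # Lomuto partition: move everything <= the pivot (arr[high]) to the left
    pivot = arr[high]
    i = low - 1
    for j in range(low, high):
        if arr[j] <= pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1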
Why can't this be O(1) space complexity then, using tail recursion?
Here is an iterative version of quick select using the Lomuto partition scheme. I don't know whether Python's optimization handles tail recursion.
def quickselect(a, k):
    lo = 0
    hi = len(a) - 1
    while lo < hi:
        p = a[hi]  # Lomuto partition
        i = lo
        for j in range(lo, hi):
            if a[j] < p:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]
        if k == i:   # if pivot == kth element, return it
            return a[k]
        if k < i:    # loop on the partition containing the kth element
            hi = i - 1
        else:
            lo = i + 1
    return a[k]  # sorted up to the kth element, return it
Here is an iterative version of quick select using the Hoare partition scheme. It has to loop until lo == hi, since the pivot and values equal to the pivot can end up anywhere after a partition step. However, it is faster than the Lomuto partition scheme (at least in Python).
def quickselect(a, k):
    lo = 0
    hi = len(a) - 1
    while lo < hi:
        p = a[(lo + hi) // 2]  # Hoare partition
        i = lo - 1
        j = hi + 1
        while True:
            while True:
                i += 1
                if a[i] >= p:
                    break
            while True:
                j -= 1
                if a[j] <= p:
                    break
            if i >= j:
                break
            a[i], a[j] = a[j], a[i]
        if k <= j:  # loop on the partition containing the kth element
            hi = j
        else:
            lo = j + 1
    return a[k]  # sorted up to the kth element, return it
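For a quick sanity check of either version (an illustrative call of my own, not part of the original answer):

a = [7, 3, 9, 1, 5]
print(quickselect(a, 2))  # 5, the 3rd-smallest element (0-based k = 2)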
Related
I have a Computer Science midterm tomorrow and I need help determining the complexity of these recursive functions. I know how to solve simple cases, but I am still trying to learn how to solve these harder cases. Any help would be much appreciated and would greatly help in my studies. Thank you!
function F(n)
if n == 0
return 1
else
return F(n-1) * n
function UniqueElements(A[0..n-1])
for i=0 to i <= n-2 do
for j=i+1 to j <= n-1 do
if A[i] == A[j]
return false
return true
function BinRec(n)
if n == 1
return 1
else
return BinRec(floor(n/2)) + 1
For hands-on learning, you can plug the functions into a program and test their worst-case performance.
When trying to calculate O by hand, here are some things to remember:
Additive and multiplicative constants (the +, -, *, and / offsets) can be ignored, so a loop from 1 to n+5 or from 1 to 5n is considered equivalent to one from 1 to n.
Also, only the highest-order term counts: for O(2^n + n^2 + n), 2^n grows the fastest, so it is equivalent to O(2^n).
With recursive functions, look at how many recursive calls the function body makes (the split count, or branching factor) and how deep the recursion goes (the depth, often proportional to the input size). If the split count is 1, the total number of calls is simply the depth; otherwise it grows roughly as split_count^depth.
With loops, each nested loop multiplies with the one it's in, and sequential loops add, so (1-n){(1-n){}} (1-n){} is (n * n) + n => n^2 + n, and since only the highest growth rate counts, that is n^2.
PRACTICE! You will need to practice to get the hang of the gotchas of growth rates and how control flow interacts. (So do online practice quizzes.)
function F(n){
    count++
    if (n == 0)
        return 1
    else
        return F(n-1) * n
}
function UniqueElements(A){
    for (var i = 0; i <= A.length-2; i++){
        for (var j = i+1; j <= A.length-1; j++){
            count++  // count each comparison, so the instrumentation matches F and BinRec
            if (A[i] == A[j]){
                return false
            }
        }
    }
    return true
}
function BinRec(n) {
    count++
    if (n == 1)
        return 1
    else
        return BinRec(Math.floor(n/2)) + 1
}
count = 0;
console.log(F(10));
console.log(count);
count = 0;
console.log(UniqueElements([1,2,3,5]));
console.log(count);
count = 0;
console.log(BinRec(40));
console.log(count);
This is a contest problem (ACM ICPC South America 2015); it was the hardest in the problem set.
Summary: Given integers N and K, count the number of sequences a of length N consisting of integers 1 ≤ a_i ≤ K, subject to the condition that for any x in the sequence there has to be a pair of indices i, j satisfying i < j, a_i = x − 1 and a_j = x, i.e. the last x is preceded by an x − 1 at some point.
Example: for N = 1000 and K = 100 the solution should be congruent to 265428620 modulo (10^9 + 7). Other examples and details can be found in the problem description.
I tried everything in my knowledge, but I need pointers to know how to do it. I even printed some lists with brute force to find the pattern, but I didn't succeed.
I'm looking for an algorithm, or formula that allows me to get to the right solution for this problem. It can be any language.
EDIT:
I solved the problem using a formula I found on the internet (from someone who explained this problem). However, just because I programmed it doesn't mean I understand it, so the question remains open. My code is here (the online judge returns Accepted):
#include <bits/stdc++.h>
using namespace std;
typedef long long int ll;
ll mod = 1e9+7;
ll memo[5001][5001];
ll dp(int n, int k){
// K can't be greater than N
k = min(n, k);
// if N or K is 1, it means there's only one possible list
if(n <= 1 || k <= 1) return 1;
if(memo[n][k] != -1) return memo[n][k];
ll ans1 = (n-k) * dp(n-1, k-1);
ll ans2 = k * dp(n-1, k);
memo[n][k] = ((ans1 % mod) + (ans2 % mod)) % mod;
return memo[n][k];
}
int main(){
int n, q;
for(int i=0; i<5001; i++)
fill(memo[i], memo[i]+5001, -1);
while(scanf("%d %d", &n, &q) == 2){
for(int i=0; i<q; i++){
int k;
scanf("%d", &k);
printf("%s%lld", i==0? "" : " ", dp(n, k));
}
printf("\n");
}
return 0;
}
The most important part is the recursive call; in particular, these lines:
ll ans1 = (n-k) * dp(n-1, k-1);
ll ans2 = k * dp(n-1, k);
memo[n][k] = ((ans1 % mod) + (ans2 % mod)) % mod;
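For what it's worth, here is a direct Python transcription of that same memoized recurrence (a sketch on my part, not the contest submission; the expected value 265428620 is the one quoted in the problem summary above):

import sys
from functools import lru_cache

MOD = 10**9 + 7
sys.setrecursionlimit(20000)

@lru_cache(maxsize=None)
def dp(n, k):
    k = min(n, k)            # K can't be greater than N
    if n <= 1 or k <= 1:     # if N or K is 1, there is only one possible list
        return 1
    return ((n - k) * dp(n - 1, k - 1) + k * dp(n - 1, k)) % MOD

print(dp(1000, 100))  # 265428620, the value given in the problem statement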
Here I show the brute-force algorithm for the problem in Python. It works for small inputs, but for very large ones it takes too much time. For N=1000 and K=5 it is already infeasible (it would need more than 100 years to compute; in C it would also be infeasible, as C is only about 100 times faster than Python). So the problem effectively forces you to find a shortcut.
import itertools

def checkArr(a, K):
    for i in range(2, min(K + 1, max(a) + 1)):
        if i - 1 not in a:
            return False
        if i not in a:
            return False
        if a.index(i - 1) > len(a) - 1 - a[::-1].index(i):
            return False
    return True

def num_sorted(N, K):
    result = 0
    for a in itertools.product(range(1, K + 1), repeat=N):
        if checkArr(a, K):
            result += 1
    return result

num_sorted(3, 10)
It returns 6 as expected.
How do you calculate the complexity of this recursive algorithm?
int findMax(int a[ ], int l, int r)
{
if (r - l == 1)
return a[ l ];
int m = ( l + r ) / 2;
int u = findMax(a, l, m);
int v = findMax(a, m, r);
if (u > v)
return u;
else
return v;
}
From the Master Theorem:
T(n) = a * T(n/b) + f(n)
Where:
a is number of sub-problems
f(n) is the cost of the work done outside the recursion; f(n) = O(n^c)
n/b size of the sub-problem
The idea behind this function is that you repeat the operation on the first half of items (T(n/2)) and on the second half of items (T(n/2)). You get the results and compare them (O(1)) so you have:
T(n) = 2 * T(n/2) + O(1)
So f(n) = O(1), and in terms of n that is O(n^0), which gives us c = 0. So a = 2, b = 2, and c = 0. From the Master Theorem (as correctly pointed out in the comments) we end up in the case where c < log_b a, since log_2 2 = 1 > 0. In this case the complexity of the whole recursive call is O(n^(log_b a)) = O(n).
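If you want to sanity-check that O(n) result empirically, here is a small Python sketch (my own instrumentation, not from the original question) that counts the calls:

def find_max(a, l, r, counter):
    # Same divide-and-conquer structure, on the half-open interval [l, r)
    counter[0] += 1
    if r - l == 1:
        return a[l]
    m = (l + r) // 2
    return max(find_max(a, l, m, counter), find_max(a, m, r, counter))

counter = [0]
find_max(list(range(16)), 0, 16, counter)
print(counter[0])  # 31 calls for n = 16, i.e. 2n - 1, which is O(n)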
What is the fastest method to calculate this? I saw some people using matrices, and when I searched on the internet they talked about eigenvalues and eigenvectors (I have no idea about this stuff)... There was a question which reduced to a recursive equation:
f(n) = (2*f(n-1)) + 2, and f(1) = 1,
where n could be up to 10^9.
I already tried using DP, storing up to 1,000,000 values, and using the common fast exponentiation method; it all timed out.
I'm generally weak at these modulo questions, which require computing large values.
f(n) = (2*f(n-1)) + 2 with f(1)=1
is equivalent to
(f(n)+2) = 2 * (f(n-1)+2)
= ...
= 2^(n-1) * (f(1)+2) = 3 * 2^(n-1)
so that finally
f(n) = 3 * 2^(n-1) - 2
where you can then apply fast modular power methods.
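As a concrete illustration, here is a minimal Python sketch of that closed form, assuming the answer is wanted modulo 10^9 + 7 (the modulus is not stated in the question itself):

MOD = 10**9 + 7

def f(n):
    # f(n) = 3 * 2^(n-1) - 2, using Python's built-in modular pow
    return (3 * pow(2, n - 1, MOD) - 2) % MOD

print(f(1), f(2), f(3))  # 1 4 10, matching the recurrence f(n) = 2*f(n-1) + 2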
Modular exponentiation by the square-and-multiply method:
function powerMod(b, e, m)
x := 1
while e > 0
if e%2 == 1
x, e := (x*b)%m, e-1
else b, e := (b*b)%m, e//2
return x
C code for calculating 2^n modulo 1e9+7 (using long long so that res * res does not overflow):
const int mod = 1e9+7;
// Here the base is assumed to be 2
long long cal_pow(long long x){
    long long res;
    if (x == 0) res = 1;
    else if (x == 1) res = 2;
    else {
        res = cal_pow(x/2);
        if (x % 2 == 0)
            res = (res * res) % mod;
        else
            res = (((res * res) % mod) * 2) % mod;
    }
    return res;
}
I have 2 tables of values and want to scale the first one so that it matches the 2nd one as well as possible. Both have the same length. If both are drawn as graphs in a diagram, they should be as close to each other as possible. But I do not want quadratic weights, just simple linear (absolute) weights.
My problem is that I have no idea how to actually compute the best scaling factor because of the Abs function.
Some pseudocode:
//given:
float[] table1= ...;
float[] table2= ...;
//wanted:
float factor= ???; // I have no idea how to compute this
float remainingDifference=0;
for(int i=0; i<length; i++)
{
float scaledValue=table1[i] * factor;
//Sum up the differences. I use the Abs function because negative differences are differences too.
remainingDifference += Abs(scaledValue - table2[i]);
}
I want to compute the scaling factor so that the remainingDifference is minimal.
Simple linear weights are hard, like you said.
a_n = first sequence
b_n = second sequence
c = scaling factor
Your residual function is (sums are from i=1 to N, the number of points):
SUM( |a_i - c*b_i| )
Taking the derivative with respect to c yields:
d/dc SUM( |a_i - c*b_i| )
= -SUM( b_i * (a_i - c*b_i)/|a_i - c*b_i| )
Setting to 0 and solving for c is hard. I don't think there's an analytic way of doing that. You may want to try https://math.stackexchange.com/ to see if they have any bright ideas.
However if you work with quadratic weights, it becomes significantly simpler:
d/dc SUM( (a_i - c*b_i)^2 )
= SUM( 2*(a_i - c*b_i) * (-b_i) )
= -2 * SUM( b_i*(a_i - c*b_i) ) = 0
=> SUM(a_i*b_i) - c*SUM(b_i^2) = 0
=> c = SUM(a_i*b_i) / SUM(b_i^2)
I strongly suggest the latter approach if you can.
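A minimal Python sketch of that quadratic-weights closed form (scale_factor_l2 is just a name I'm using here; it follows the question's convention of scaling table1 to match table2):

def scale_factor_l2(table1, table2):
    # Least-squares factor c minimizing sum((c*table1[i] - table2[i])^2),
    # i.e. c = SUM(a_i*b_i) / SUM(b_i^2) with a = table2 and b = table1
    num = sum(x * y for x, y in zip(table1, table2))
    den = sum(x * x for x in table1)
    return num / den

print(scale_factor_l2([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]))  # 2.0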
I would suggest trying some sort of variant on Newton-Raphson.
Construct a function Diff(k) that measures the difference in area between your two graphs between fixed markers A and B.
Mathematically, I guess it would be integral ( x = A to B ){ |f(x) - k * g(x)| }dx.
Realistically you could just subtract the values: if you range from x = -10 to 10, and you have a data point for f(i) and g(i) on each integer i in [-10, 10] (i.e. 21 data points), then you just take sum( i = -10 to 10 ){ |f(i) - k * g(i)| }.
Basically you would expect this function to look like a parabola: there will be an optimum k, and deviating slightly from it in either direction will increase the overall area difference, and the bigger the deviation, the bigger the gap. So this should be a pretty smooth function (if you have a lot of data points).
So you want to minimise Diff(k), which means finding where the derivative d/dk Diff(k) = 0.
So just do Newton-Raphson on this new function Diff'(k).
Kick it off at k = 1 and it should home in on a solution pretty fast.
That's probably going to give you close to optimal computation time.
If you want something simpler, just start with some k1 and k2 whose Diff'(k) values are on either side of 0.
Say Diff'(1.5) = -3 and Diff'(2.9) = 7.
Then you would pick a k about 3/10 of the way (10 = 7 - (-3)) between 1.5 and 2.9, and depending on whether that yields a positive or negative value, use it as the new k1 or k2; rinse and repeat.
In case anyone stumbles upon this in the future, here is some code (C++).
The trick is to first sort the samples by the scaling factor that would give the best fit for each pair of samples on its own, then start at both ends and iterate towards the factor that results in the minimum absolute deviation (L1 norm).
Everything except the sort has a linear run time, so the overall runtime is O(n*log n).
/*
* Find x so that the sum over std::abs(pA[i]-pB[i]*x) from i=0 to (n-1) is minimal
* Then return x
*/
float linearFit(const float* pA, const float* pB, int n)
{
/*
* Algebraic solution is not possible for the general case
* => iterative algorithm
*/
if (n < 0)
throw "linearFit has invalid argument: expected n >= 0";
if (n == 0)
return 0;//If there is nothing to fit, any factor is a perfect fit (sum is always 0)
if (n == 1)
return pA[0] / pB[0];//return x so that pA[0] = pB[0]*x
//If you don't like this , use a std::vector :P
std::unique_ptr<float[]> targetValues_(new float[n]);
std::unique_ptr<int[]> indices_(new int[n]);
//Get proper pointers:
float* targetValues = targetValues_.get();//The value for x that would cause pA[i] = pB[i]*x
int* indices = indices_.get(); //Indices of useful (not nan and not infinity) target values
//The code above guarantees n > 1, so it is safe to get these pointers:
int m = 0;//Number of useful target values
for (int i = 0; i < n; i++)
{
float a = pA[i];
float b = pB[i];
float targetValue = a / b;
targetValues[i] = targetValue;
if (std::isfinite(targetValue))
{
indices[m++] = i;
}
}
if (m <= 0)
return 0;
if (m == 1)
return targetValues[indices[0]];//If there is only one target value, then it has to be the best one.
//sort the indices by target value
std::sort(indices, indices + m, [&](int ia, int ib){
return targetValues[ia] < targetValues[ib];
});
//Start from the extremes and meet at the optimal solution somewhere in the middle:
int l = 0;
int r = m - 1;
// m >= 2 is guaranteed => l < r
float penaltyFactorL = std::abs(pB[indices[l]]);
float penaltyFactorR = std::abs(pB[indices[r]]);
while (l < r)
{
if (l == r - 1 && penaltyFactorL == penaltyFactorR)
{
break;
}
if (penaltyFactorL < penaltyFactorR)
{
l++;
if (l < r)
{
penaltyFactorL += std::abs(pB[indices[l]]);
}
}
else
{
r--;
if (l < r)
{
penaltyFactorR += std::abs(pB[indices[r]]);
}
}
}
//return the best target value
if (l == r)
return targetValues[indices[l]];
else
return (targetValues[indices[l]] + targetValues[indices[r]])*0.5;
}
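For comparison, here is a compact Python sketch of the same weighted-median idea (linear_fit is my own name, and this is a sketch rather than a line-for-line translation of the C++ above): the L1-optimal factor is a weighted median of the per-sample ratios pA[i]/pB[i], weighted by |pB[i]|.

import math

def linear_fit(a, b):
    # Pair each sample with its best-fit ratio a[i]/b[i] and weight |b[i]|,
    # skipping ratios that are not finite
    ratios = sorted((x / y, abs(y)) for x, y in zip(a, b)
                    if y != 0 and math.isfinite(x / y))
    if not ratios:
        return 0.0
    half = sum(w for _, w in ratios) / 2
    acc = 0.0
    for r, w in ratios:
        acc += w
        if acc >= half:  # lower weighted median minimizes the absolute deviation
            return r

print(linear_fit([2.0, 4.0, 6.0], [1.0, 2.0, 3.0]))  # 2.0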