Examples of recursive definitions in real systems/programs - recursion

I'm working on a solver for recursive constraints. It works like this: given a sequence that is recursively defined, for example a(n) = a(n-1)*2 + 2, the solver finds the exact formula representing a(n) in terms of n. You can look at this example in Wolfram Alpha for a better understanding.
In programming, the code that computes the sequence looks like this:
a[0] = 1;
for (i = 1; i <= n; i = i + 1)
    a[i] = 2*a[i-1] + 2;
Then, using our solver, I can represent a[n] in terms of n and add this feature to the constraint.
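For this particular example, the exact formula the solver should produce is a(n) = 3*2^n - 2 (solving a(n) = 2*a(n-1) + 2 with a(0) = 1). A minimal C check of the loop against that closed form (my own sketch, not part of the solver):

#include <assert.h>
#include <stdio.h>

int main(void) {
    long long a = 1;                          /* a[0] = 1 */
    for (int i = 1; i <= 40; i++) {
        a = 2*a + 2;                          /* the recurrence a(i) = 2*a(i-1) + 2 */
        long long closed = 3*(1LL << i) - 2;  /* closed form 3*2^i - 2 */
        assert(a == closed);
    }
    printf("closed form matches for n = 0..40\n");
    return 0;
}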
However, I'm having trouble finding real-world examples that include recursive sequences or linear recurrence relations like this. Do you know of any piece of code that could serve as my test bench?

Related

Integral basis nf.zk versus nfbasis in PARI/GP

I have been using the database at lmfdb.org to find the integral basis of a number field. Now I want to use PARI/GP to multiply algebraic integers. However, I have encountered a problem: PARI/GP uses the integral basis "nf.zk" in its computations, which is apparently not always the same as "nfbasis(f)", the integral basis that lmfdb.org provides.
For example, we have the following code from PARI/GP:
? f = x^3 - x^2 + 2*x + 8
nf = nfinit(f)
nf.zk
%1 = [1, x, 1/2*x^2 - 1/2*x + 1]
? nfbasis(f)
%2 = [1, x, 1/2*x^2 - 1/2*x]
Now, my questions are:
Why are nf.zk and nfbasis(f) different?
Why does PARI/GP use nf.zk instead of nfbasis(f)?
Lastly, can I tell PARI/GP to use nfbasis(f) instead of nf.zk?
When we take the trouble to initialize an nf structure with nfinit, we perform precomputations to speed up later work. Here, nfinit first computes the integer basis by calling nfbasis, which returns the (canonical) HNF basis, then LLL-reduces it with respect to the T2 norm. The LLL-reduced basis is usually different from the HNF one, but it usually has smaller elements.
This LLL reduction can be expensive (in particular when the degree is large) but it ensures that time complexities are bounded in terms of the field discriminant instead of the size of the input polynomial.
I believe all polynomials defining number fields in the lmfdb were run through polredabs which ensures their coefficients are small (in terms of the field discriminant), but the HNF integer basis may still be much larger than the LLL one. Additionally, if an algebraic integer has small T2 norm, its expression in terms of the LLL-reduced basis is guaranteed to have small coefficients, whereas it can have much larger coefficients on the HNF basis.
In pari-2.14 (which is not released yet but available via git or through nightly snapshots on the PARI/GP website), you can use nfinit(, 4), which removes the LLL reduction step. This speeds up the initialization, but usually slows down every operation involving the resulting nf.
? f = x^3 - x^2 + 2*x + 8
? nfinit(f,4).zk
%2 = [1, x, 1/2*x^2 - 1/2*x]

Time Complexity of recursive Power Set function

I am having trouble simplifying the time complexity of this recursive algorithm for finding the power set of a given input set, and I'm not entirely sure that what I have so far is correct, either.
It's described at the bottom of the page in this link: http://www.ecst.csuchico.edu/~akeuneke/foo/csci356/notes/ch1/solutions/recursionSol.html
By considering each step taken by the function for an arbitrarily chosen input set of size 4 and then translating that to an input set of size n, I came to the result that the time complexity in terms of Big-O notation for this algorithm is: 2^n * n * n
Is this correct? And is there a specific way to approach finding the time-complexity of recursive functions?
The run-time is actually O(n*2^n). The simple explanation is that this is an asymptotically optimal algorithm insofar as the total work it does is dominated by creating the subsets which feature directly in the final output, and the total length of that output is O(n*2^n). We can also analyze an annotated implementation of the pseudocode (in JavaScript) to show this complexity more rigorously:
function powerSet(S) {
    if (S.length == 0) return [[]];                       // O(1)
    let e = S.pop();                                      // O(1)
    let pSetWithoutE = powerSet(S);                       // T(n-1)
    let pSet = pSetWithoutE;                              // O(1)
    pSet.push(...pSetWithoutE.map(set => set.concat(e))); // O(2*|T(n-1)| + ||T(n-1)||)
    return pSet;                                          // O(1)
}
// print example:
console.log('{');
for (let subset of powerSet([1,2,3])) console.log(`\t{`, subset.join(', '), `}`);
console.log('}');
Where T(n-1) represents the run-time of the recursive call on n-1 elements, |T(n-1)| represents the number of subsets in the power-set returned by the recursive call, and ||T(n-1)|| represents the total number of elements across all subsets returned by the recursive call.
The line whose complexity is expressed in these terms corresponds to the second bullet point of step 2 of the pseudocode: returning the union of the power set without element e, and that same power set with every subset s unioned with {e}:
(1) U (2), where (2) = { s U {e} : s in (1) }
This union is implemented with push and concat operations. The push unions (1) with (2) in |T(n-1)| time, since |T(n-1)| new subsets are added to the power set. The map of concat operations generates (2) by appending e to every subset of pSetWithoutE in |T(n-1)| + ||T(n-1)|| time; this is because pSetWithoutE contains ||T(n-1)|| elements across its |T(n-1)| subsets (by definition), and each of those subsets grows by one element.
We can then represent the run-time on input size n in these terms as:
T(n) = T(n-1) + 2|T(n-1)| + ||T(n-1)|| + 1; T(0) = 1
It can be proven via induction that:
|T(n)| = 2^n
||T(n)|| = n*2^(n-1)
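(Sketch of the induction, for completeness: each call doubles the number of subsets, so |T(n)| = 2*|T(n-1)| with |T(0)| = 1, giving 2^n; and the total element count satisfies ||T(n)|| = 2*||T(n-1)|| + |T(n-1)| with ||T(0)|| = 0, since every old element appears twice and |T(n-1)| copies of e are added, giving n*2^(n-1).)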
which yields:
T(n) = T(n-1) + 2*2^(n-1) + (n-1)*2^(n-2) + 1; T(0) = 1
Solving this recurrence analytically (by summing the increments 2^k + (k-1)*2^(k-2) + 1 for k = 1 to n), you get:
T(n) = n + 2^n + (n/2)*2^n = O(n*2^n)
which matches the expected complexity for an optimal power-set generation algorithm. The solution of the recurrence relation can also be understood intuitively:
Each of n iterations does O(1) work outside of generating new subsets of the power-set, hence the n term in the final expression.
In terms of the work done in generating every subset of the power-set, each subset is pushed once after it is generated through concat. There are 2^n subsets pushed, producing the 2^n term. Each of these subsets has an average length of n/2, giving a combined length of (n/2)*2^n, which corresponds to the complexity of all concat operations. Hence, the total time is given by n + 2^n + (n/2)*2^n.

How to solve the mathematical expectation problem in HackerRank 20/20 Hack February 2014

So this problem was given in the HackerRank 20/20 Hack February contest:
Let's consider a random permutation p_1, p_2, ..., p_N of the numbers 1, 2, ..., N and calculate the value F = (X_2 + ... + X_(N-1))^K, where X_i equals 1 if one of the following two conditions holds: p_(i-1) < p_i > p_(i+1) or p_(i-1) > p_i < p_(i+1), and X_i equals 0 otherwise. What is the expected value of F?
Constraints: 1000 <= N <= 10^9, 1 <= K <= 5
I thought it was an Eulerian-number-related problem. Now that the contest is over, I can see the solutions, but I don't understand any of them. Are there any tricks?
So, a few words about my "solution" ;)
What I basically did:
1) write a brute-force solver (obviously only for N << 20)
-> this solver won't handle the large values of N given in the constraints
2) analyze the brute-force output for these (too-small) inputs
-> observe that for K=1, the output follows a straight line
-> for K=2, a quadratic function
-> for K=3, a cubic function, and so on
3) find the parameters of each function (K = 1 to 5) using a solver or, as I did it, Wolfram Alpha ;)
-> additionally, I "normalized" each parameter so that only one division is needed afterwards
4) use any programming language / big-integer class to solve the actual inputs in O(1)
I'm pretty sure one can come up with these parameters in a cleverer way, but for me, during the contest, this solution was easy and fast enough without having to think too much about the "why" ;) (A sketch of step 1 follows below.)
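For illustration, a brute-force solver along the lines of step 1 might look like this in C (a sketch under my own assumptions: N and K hard-coded, floating-point averaging instead of exact fractions):

/* Enumerate all N! permutations for small N, compute
   F = (X_2 + ... + X_(N-1))^K for each, and average to get E[F]. */
#include <stdio.h>

#define N 8   /* small N only: all N! permutations are enumerated */
#define K 1

static int p[N];
static double total = 0.0;
static long long count = 0;

/* F for the current permutation (positions are 1-indexed in the problem). */
static double F(void) {
    long long s = 0;
    for (int i = 1; i + 1 < N; i++)   /* interior positions: X_2 .. X_(N-1) */
        if ((p[i-1] < p[i] && p[i] > p[i+1]) ||
            (p[i-1] > p[i] && p[i] < p[i+1]))
            s++;
    double f = 1.0;
    for (int k = 0; k < K; k++) f *= (double)s;   /* s^K */
    return f;
}

/* Generate all permutations of p[k..N-1] by swapping. */
static void permute(int k) {
    if (k == N) { total += F(); count++; return; }
    for (int i = k; i < N; i++) {
        int t = p[k]; p[k] = p[i]; p[i] = t;
        permute(k + 1);
        t = p[k]; p[k] = p[i]; p[i] = t;   /* undo the swap */
    }
}

int main(void) {
    for (int i = 0; i < N; i++) p[i] = i + 1;
    permute(0);
    /* For K = 1, E[X_i] = 2/3 for each interior position, so the output
       should lie on the straight line 2*(N-2)/3 observed in step 2. */
    printf("N=%d K=%d  E[F] = %.6f (2*(N-2)/3 = %.6f)\n",
           N, K, total / count, 2.0 * (N - 2) / 3.0);
    return 0;
}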

Fast Fourier Transform Pseudocode?

The purpose of the following code is to convert a polynomial from coefficient representation into value representation by dividing it into its odd and even powers and then recursing on the smaller polynomials.
function FFT(A, w)
Input: Coefficient representation of a polynomial A(x) of degree ≤ n-1, where n is a power of 2, and w, an nth root of unity
Output: Value representation A(w^0), ..., A(w^(n-1))
if w = 1: return A(1)
express A(x) in the form A_e(x^2) + x*A_o(x^2) /* where A_e holds the even powers and A_o the odd */
call FFT(A_e, w^2) to evaluate A_e at even powers of w
call FFT(A_o, w^2) to evaluate A_o at even powers of w
for j = 0 to n-1:
    compute A(w^j) = A_e(w^(2j)) + w^j * A_o(w^(2j))
return A(w^0), ..., A(w^(n-1))
What is the for loop being used for?
Why is the pseudocode only adding the smaller polynomials? Doesn't it need to subtract them too (to calculate A(-x))? Isn't that what the algorithm is completely based on: adding and subtracting the smaller polynomials to reduce the number of points by half?*
Why are powers of "w" being evaluated as opposed to "x"?
I am not too sure if this belongs here, since the question is quite mathematical. If you feel this question is off-topic, I would appreciate it if you moved it to a site where you feel it would be more appropriate, rather than just closing it.
*Pseudocode was taken from Algorithms by S. Dasgupta, page 71.
The for loop is the combine step, not the recursion: after the two recursive calls have evaluated A_e and A_o at the (n/2)th roots of unity, the loop assembles the n output values A(w^0), ..., A(w^(n-1)) from those half-size results.
The subtraction you are looking for is there, just implicit. Since w^(n/2) = -1, we have w^(j+n/2) = -w^j; so when the loop reaches j >= n/2, the term w^j * A_o(w^(2j)) flips sign automatically, and the formula becomes A(w^(j+n/2)) = A_e(w^(2j)) - w^j * A_o(w^(2j)). Evaluating the polynomial at a point and at its negation using the same two half-size values is exactly what halves the work.
Powers of w are used rather than arbitrary values of x because the algorithm evaluates A at the n complex nth roots of unity. That choice is what makes the even/odd split collapse: the squares w^(2j) run over only n/2 distinct values, the (n/2)th roots of unity.
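To make the combine step concrete, here is a minimal recursive FFT sketch in C (my own illustration following the pseudocode, not code from the book); splitting the loop at n/2 makes the implicit subtraction explicit:

#include <complex.h>
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* Evaluate the polynomial with coefficients a[0..n-1] at the n nth roots
   of unity, in place. n must be a power of 2. */
static void fft(double complex *a, int n) {
    if (n == 1) return;                          /* base case: A(1) = a[0] */
    double complex *e = malloc(n/2 * sizeof *e); /* even-power coefficients */
    double complex *o = malloc(n/2 * sizeof *o); /* odd-power coefficients */
    for (int i = 0; i < n/2; i++) {
        e[i] = a[2*i];
        o[i] = a[2*i + 1];
    }
    fft(e, n/2);                                 /* A_e at (n/2)th roots of unity */
    fft(o, n/2);                                 /* A_o at (n/2)th roots of unity */
    const double PI = acos(-1.0);
    for (int j = 0; j < n/2; j++) {
        double complex w = cexp(2.0*PI*I*j / n); /* w^j */
        a[j]       = e[j] + w*o[j];              /* A(w^j) */
        a[j + n/2] = e[j] - w*o[j];              /* A(w^(j+n/2)), since w^(j+n/2) = -w^j */
    }
    free(e);
    free(o);
}

int main(void) {
    double complex a[4] = {1, 2, 3, 4};          /* A(x) = 1 + 2x + 3x^2 + 4x^3 */
    fft(a, 4);
    for (int j = 0; j < 4; j++)
        printf("A(w^%d) = %.2f%+.2fi\n", j, creal(a[j]), cimag(a[j]));
    return 0;
}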

Proving the running time of a recursive power function

I am trying to perform asymptotic analysis of the following recursive function, which computes a power of a number efficiently. I am having trouble determining the recurrence because there are different equations for when the exponent is even and when it is odd, and I am unsure how to handle this. I understand that the running time is Θ(log n), so any advice on how to arrive at that result would be appreciated.
Recursive-Power(x, n):
    if n == 1
        return x
    if n is even
        y = Recursive-Power(x, n/2)
        return y*y
    else
        y = Recursive-Power(x, (n-1)/2)
        return y*y*x
Whether n is even or odd, each call does a constant amount of work and recurses on an exponent about half the size, so in either case the following recurrence holds:
T(n) = T(floor(n/2)) + Θ(1)
where floor(x) is the largest integer not greater than x.
Since the floor has no influence on the asymptotic result, the recurrence is informally written as:
T(n) = T(n/2) + Θ(1)
You have guessed the asymptotic bound correctly: n can be halved only about log2(n) times before reaching the base case, and each level contributes Θ(1), which gives Θ(log n). A formal proof via the substitution method or the Master theorem is left as an exercise for you.
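For reference, a direct C translation of the pseudocode (a sketch; it assumes n >= 1 and ignores integer overflow), which makes the halving, and hence the Θ(log n) recursion depth, concrete:

#include <stdio.h>

long long recursive_power(long long x, unsigned n) {
    if (n == 1)
        return x;                                  /* base case */
    if (n % 2 == 0) {                              /* even: x^n = (x^(n/2))^2 */
        long long y = recursive_power(x, n / 2);
        return y * y;
    } else {                                       /* odd: x^n = (x^((n-1)/2))^2 * x */
        long long y = recursive_power(x, (n - 1) / 2);
        return y * y * x;
    }
}

int main(void) {
    printf("%lld\n", recursive_power(3, 10));      /* prints 59049 */
    return 0;
}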
