Solve sigma example - math

I have a sigma example:
And I don't have any idea how to solve it. Can you help me with the code, please?
(Code in Pascal, Java or C++.)

Expanding the inner term (m - n)^3 gives m^3 - 3m^2n + 3mn^2 - n^3; multiplied by the m^2 factor, this yields a double summation of m^5, -3m^4n, 3m^3n^2 and -m^2n^3 terms. These summations are separable, meaning that each is the product of a sum over m of a power of m and a sum over n of a power of n.
You can evaluate these sums by means of the Faulhaber formulas up to degree five, which are polynomial expressions. Evaluate them by Horner's method.
int F1(int n) { return (n + 1) * n / 2; }
int F2(int n) { return ((2 * n + 3) * n + 1) * n / 6; }
int F3(int n) { return ((n + 2) * n + 1) * n * n / 4; }
...
int S= F5(20) * 30 - 3 * F4(20) * F1(30) + 3 * F3(20) * F2(30) - F2(20) * F3(30);
Using the obvious method of summation, the inner loop will evaluate 30 cubes of a difference, for a total of 60 additions and 60 multiplications, and the outer loop will repeat this 20 times, with extra multiplications and additions, for a total of 1220 + and 1240 *.
Compare this to the above method, which performs 18 +, 30 * and 7 divisions in total (independent of the values of m and n).
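A minimal C++ sketch of the whole computation, assuming (as the expansion above suggests) that the sum in question is sum over m = 1..20 and n = 1..30 of m^2 * (m - n)^3. The bodies of F4 and F5 use the standard Faulhaber formulas sum k^4 = n(n+1)(2n+1)(3n^2+3n-1)/30 and sum k^5 = n^2(n+1)^2(2n^2+2n-1)/12, written in Horner form like F1..F3; long long keeps the intermediate products safe, and a brute-force double loop is included only as a cross-check.

#include <cstdio>

long long F1(long long n) { return (n + 1) * n / 2; }
long long F2(long long n) { return ((2 * n + 3) * n + 1) * n / 6; }
long long F3(long long n) { return ((n + 2) * n + 1) * n * n / 4; }
long long F4(long long n) { return (((6 * n + 15) * n + 10) * n * n - 1) * n / 30; }
long long F5(long long n) { return (((2 * n + 6) * n + 5) * n * n - 1) * n * n / 12; }

int main() {
    // Closed-form evaluation via the separated sums.
    long long S = F5(20) * 30 - 3 * F4(20) * F1(30)
                + 3 * F3(20) * F2(30) - F2(20) * F3(30);

    // Brute-force evaluation of the (assumed) double sum, as a cross-check.
    long long check = 0;
    for (long long m = 1; m <= 20; ++m)
        for (long long n = 1; n <= 30; ++n)
            check += m * m * (m - n) * (m - n) * (m - n);

    std::printf("%lld %lld\n", S, check);  // the two values should match
    return 0;
}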

Related

Which one is faster? O(2^n) or O(n!)

I'm studying algorithm complexity and I am trying to figure out this one question that runs in my mind: is O(n!) faster than O(2^n), or is it the other way around?
O(2^n) is 2 * 2 * 2 * ... where O(n!) is 1 * 2 * 3 * 4 * ...
O(n!) will quickly grow much larger - so O(2^n) is faster.
For example: 2^10 = 1024 and 10! = 3628800
You can try working with Stirling's approximation for n!
https://en.wikipedia.org/wiki/Stirling%27s_approximation
n! = (n / e)^n * sqrt(2 * Pi * n) * (1 + o(1))
Now, let's compare O(n!) and O(2^n). In order to find out the right relation (<, = or >), let's compute the limit:
lim_{n -> +inf} (n! / 2^n)
    = lim_{n -> +inf} (n / e)^n * sqrt(2 * pi * n) / 2^n
    >= lim_{n -> +inf} n^n / (2 * e)^n          // since sqrt(2 * pi * n) >= 1
    >= lim_{n -> +inf} (4 * e)^n / (2 * e)^n    // when n > 4 * e
    = lim_{n -> +inf} 2^n = +inf
So
lim_{n -> +inf} (n! / 2^n) = +inf
which means that O(n!) > O(2^n), i.e. n! eventually grows much faster than 2^n.
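As a quick numeric illustration, here is a minimal C++ sketch that prints both sequences side by side; n! overtakes 2^n already at n = 4 and then leaves it far behind:

#include <cstdio>

int main() {
    unsigned long long pow2 = 1, fact = 1;
    for (int n = 1; n <= 20; ++n) {           // 20! still fits in 64 bits
        pow2 *= 2;                            // 2^n
        fact *= (unsigned long long)n;        // n!
        std::printf("n = %2d   2^n = %10llu   n! = %20llu\n", n, pow2, fact);
    }
    return 0;
}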

How to convert a bitstream to a base20 number?

Given a bitstream (a continuous string of bits, too long to be processed at once), the result should be a matching stream of base-20 digits.
The process is simple for a small number of bits:
Assuming the most significant bit is on the right:
110010011 = decimal 403 (1 * 1 + 1 * 2 + 1 * 16 + 1 * 128 + 1 * 256)
403 / 20 = 20 R 3
20 / 20 = 1 R 0
1 / 20 = 0 R 1
Result is [3, 0, 1] = 3 * 1 + 0 * 20 + 1 * 400
But what if there are too many bits to be converted to a decimal number in one step?
My approach was to do both processes in a loop: convert the bits to decimal and convert the decimal down to base-20 digits. This process requires the multipliers (position values) to be reduced while walking through the bits, because otherwise they'll quickly grow too large to be calculated properly. The 64th bit would have to be multiplied by 2^64, and so on.
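For the small-number case above, a minimal C++ sketch (using the example bits 110010011) could look like this; it only works while the value still fits in a machine word:

#include <cstdio>
#include <vector>

int main() {
    // Bits as in the example above, most significant bit on the right (LSB first).
    std::vector<int> bits = {1, 1, 0, 0, 1, 0, 0, 1, 1};

    // Step 1: accumulate the binary value.
    unsigned long long value = 0, weight = 1;
    for (int b : bits) {
        value += b * weight;
        weight *= 2;
    }                                    // value == 403 here

    // Step 2: repeated division by 20 gives the base-20 digits, least significant first.
    std::vector<int> digits;
    do {
        digits.push_back((int)(value % 20));
        value /= 20;
    } while (value > 0);

    for (int d : digits) std::printf("%d ", d);   // prints: 3 0 1
    std::printf("\n");
    return 0;
}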
Note: I understood the question to mean that a bitstream of unknown length arrives over an unknown duration, and a live conversion from base 2 to base 20 should be made.
I do not believe this can be done in a single go. The problem is that base 20 and base 2 have no common ground, and the rules of modular arithmetic do not allow the problem to be solved cleanly.
(a+b) mod n = ( (a mod n) + (b mod n) ) mod n
(a*b) mod n = ( (a mod n) * (b mod n) ) mod n
(a^m) mod n = ( (a mod n)^m ) mod n
Now if you have a number A written in base p and q (p < q) as
A = Sum[a[i] p^i, i=0->n] = Sum[b[i] q^i, i=0->n]
Then we know that b[0] = A mod q. However, we do not know A and hence, the above tells us that
b[0] = A mod q = Sum[a[i] p^i, i=0->n] mod q
= Sum[ (a[i] p^i) mod q, i=0->n] mod q
= Sum[ ( (a[i] mod q) (p^i mod q) ) mod q, i=0->n] mod q
This implies that:
If you want to know the lowest digit b[0] of a number in base q, you need knowledge of the full number.
This can only be simplified if q = p^m, as
b[0] = A mod q = Sum[a[i] p^i, i=0->n] mod q
= Sum[ (a[i] p^i) mod q, i=0->n] mod q
= Sum[ a[i] p^i, i=0->m-1]
So in short, since q = 20 is not a power of p = 2, I have to say: no, it cannot be done in a single pass. Furthermore, bear in mind that I only spoke about the first digit in base q, not yet about the i-th digit.
As an example, imagine a bit stream of 1000 zeros followed by a single 1. This represents the number 2^1000. The first digit is easy, but to get any other digit ... you are essentially in a rather tough spot.
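As an illustration of that point, here is a minimal C++ sketch that keeps the entire number as a base-20 digit array and folds each incoming bit into it, adding the corresponding power of two (itself maintained in base 20) whenever a 1 arrives. It does consume the bits one at a time, but every digit may change with every bit and no digit is final until the stream ends, which is exactly the problem described above.

#include <cstdio>
#include <vector>

// a += b, both little-endian base-20 digit vectors
void add_base20(std::vector<int>& a, const std::vector<int>& b) {
    if (b.size() > a.size()) a.resize(b.size(), 0);
    int carry = 0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        int s = a[i] + (i < b.size() ? b[i] : 0) + carry;
        a[i] = s % 20;
        carry = s / 20;
    }
    if (carry) a.push_back(carry);
}

// a *= 2, little-endian base-20 digit vector
void double_base20(std::vector<int>& a) {
    int carry = 0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        int s = 2 * a[i] + carry;
        a[i] = s % 20;
        carry = s / 20;
    }
    if (carry) a.push_back(carry);
}

int main() {
    // Bits arrive least significant first, as in the example 110010011 -> 403.
    std::vector<int> bits = {1, 1, 0, 0, 1, 0, 0, 1, 1};
    std::vector<int> acc = {0};   // running total, base 20, little-endian
    std::vector<int> pw  = {1};   // current power of two, base 20
    for (int b : bits) {
        if (b) add_base20(acc, pw);
        double_base20(pw);
    }
    for (int d : acc) std::printf("%d ", d);   // prints: 3 0 1
    std::printf("\n");
    return 0;
}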

Interpolation considering acceleration [closed]

I don't really know if what I want to do is considered interpolation but I'll try to explain.
Now when I want to go from point A to point B (for simplicity, consider only one coordinate) in time T, I compute the position using the linear interpolation formula:
P(t) = A + (B-A) * (t / T), T != 0
This works fine in most cases, but I want to consider acceleration and braking like this:
the first x% of the time is acceleration from speed vi to speed v
the next y% of the time is constant speed v
the last z% of the time is deceleration, to reach speed vf at t = T
How can I compute P(t), t in [0, T] considering acceleration and braking?
Consider we have the following points in time:
t0 = 0 is the beginning of the movement
ta is the point when acceleration ends
td is the point when deceleration begins
T is the end of the movement
Then we have three segments of the movement: [t0, ta], (ta, td], (td, T]. Each can be specified separately. For the acceleration / deceleration we need to calculate the acceleration aa and the deceleration ad as follows:
aa = (v - vi) / (ta - t0)
ad = (vf - v) / (T - td)
According to your question, all values are given.
Then the movement can be expressed as:
P(t) :=
if(t < ta)
1 / 2 * aa * t^2 + vi * t + A
else if(t < td)
v * (t - ta) + 1 / 2 * aa * ta^2 + vi * ta + A
// this is the position reached at the end of the first part
else
1 / 2 * ad * (t - td)^2 + v * (t - td)
+ v * (td - ta) + 1 / 2 * aa * ta^2 + vi * ta + A
// this is the position reached at the end of the second part
If we precompute these partial results as
s1 := 1 / 2 * aa * ta^2 + vi * ta + A
s2 := v * (td - ta)
then the formula becomes a bit shorter:
P(t) :=
if(t < ta)
1 / 2 * aa * t^2 + vi * t + A
else if(t < td)
v * (t - ta) + s1
else
1 / 2 * ad * (t - td)^2 + v * (t - td) + s1 + s2
However, it is very likely that the movement does not hit B at T unless you choose proper values. That is because the equation is over-specified. You can e.g. calculate v based on B instead of specifying it.
Edit
The calculation of v to reach a specific B is:
v = (2 * A - 2 * B - td * vf + T * vf + ta * vi) / (ta - td - T)
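A minimal C++ sketch of the piecewise formula above; the struct name and the sample values in main are only for illustration:

#include <cstdio>

struct Motion {
    double A, B;       // start and target position
    double vi, v, vf;  // initial, cruise and final speed
    double ta, td, T;  // end of acceleration, start of deceleration, end of movement
};

// Cruise speed v that makes the movement end exactly at B (the formula from the edit).
double cruiseSpeed(const Motion& m) {
    return (2 * m.A - 2 * m.B - m.td * m.vf + m.T * m.vf + m.ta * m.vi)
         / (m.ta - m.td - m.T);
}

// Position at time t in [0, T], following the three-segment formula.
double P(const Motion& m, double t) {
    double aa = (m.v - m.vi) / m.ta;             // acceleration (t0 = 0)
    double ad = (m.vf - m.v) / (m.T - m.td);     // deceleration
    double s1 = 0.5 * aa * m.ta * m.ta + m.vi * m.ta + m.A;  // position at ta
    double s2 = m.v * (m.td - m.ta);                         // distance of the cruise phase
    if (t < m.ta) return 0.5 * aa * t * t + m.vi * t + m.A;
    if (t < m.td) return m.v * (t - m.ta) + s1;
    return 0.5 * ad * (t - m.td) * (t - m.td) + m.v * (t - m.td) + s1 + s2;
}

int main() {
    Motion m = {0.0, 10.0, 0.0, 0.0, 0.0, 1.0, 3.0, 4.0};  // A, B, vi, v, vf, ta, td, T
    m.v = cruiseSpeed(m);                                  // v = 10/3 for these values
    for (double t = 0.0; t <= m.T; t += 1.0)
        std::printf("P(%.1f) = %.3f\n", t, P(m, t));       // ends at P(4.0) = 10.000
    return 0;
}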

Generating random numbers from various distributions in CUDA

I am playing around with doing MCMC on the GPU, and need implementations for various samplers, written for CUDA.
Most of the posts I've seen on StackOverflow relate to uniform, binomial, and normal sampling. Are there any libraries that allow me the simplicity and variety of the d-p-q-r functions in R (See this page)?
I would like to be able to sample from Gamma, Normal, Binomial, and the inverse distributions used in Bayesian problems (inverse chi square, inverse gamma), and would prefer not to have to write my own using inverse probability transforms and acceptance-rejection sampling.
For the Gamma distribution, this is what I use at the moment.
It is the GSL function modified to work with CuRAND.
__device__ double ran_gamma (curandState *state, const double a, const double b)
{
    /* Marsaglia-Tsang method, adapted from gsl_ran_gamma; assumes a > 0.
       The state is passed by pointer so the caller's RNG state advances
       (passing curandState by value would make repeated calls from the
       same thread reuse the same random sequence). */
    if (a < 1)
    {
        double u = curand_uniform_double (state);
        return ran_gamma (state, 1.0 + a, b) * pow (u, 1.0 / a);
    }

    double x, v, u;
    double d = a - 1.0 / 3.0;
    double c = (1.0 / 3.0) / sqrt (d);

    while (1)
    {
        /* draw x ~ N(0,1) until v = (1 + c * x)^3 is positive */
        do
        {
            x = curand_normal_double (state);
            v = 1.0 + c * x;
        } while (v <= 0);
        v = v * v * v;

        /* squeeze test first, then the full acceptance test */
        u = curand_uniform_double (state);
        if (u < 1 - 0.0331 * x * x * x * x)
            break;
        if (log (u) < 0.5 * x * x + d * (1 - v + log (v)))
            break;
    }
    return b * d * v;
}

How to implement exponentiation of a rational number without nth root?

Only log (base e), sin, tan and sqrt (square root only) functions are available to me, plus the basic arithmetic operators (+ - * / mod). I also have the constant e.
I'm running into several problems with Deluge (zoho.com) because of these restrictions. I must implement exponentiation with rational (fractional) bases and exponents.
Say you want to calculate pow(A, B)
Consider the representation of B in base 2:
B = b[n] * pow(2, n ) +
b[n-1] * pow(2, n - 1) +
...
b[2] * pow(2, 2 ) +
b[1] * pow(2, 1 ) +
b[0] * pow(2, 0 ) +
b[-1] * pow(2, -1 ) +
b[-2] * pow(2, -2 ) +
...
= sum(b[i] * pow(2, i))
where b[x] can be 0 or 1 and pow(2, y) is a power of two with an integer exponent (e.g., 1, 2, 4, 1/2, 1/4, 1/8).
Then,
pow(A, B) = pow(A, sum(b[i] * pow(2, i))) = mul(pow(A, b[i] * pow(2, i)))
And so pow(A, B) can be calculated using only multiplications and square roots: the factors with i >= 0 are obtained by repeatedly squaring A, and the factors with i < 0 by repeatedly taking the square root of A.
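Here is a minimal C++ sketch of this idea; the function name pow_by_sqrt and the iteration count are made up for the example, the only non-arithmetic operation used inside it is sqrt, and std::pow appears only to check the result:

#include <cmath>    // std::sqrt (and std::pow, used only for verification)
#include <cstdio>

// Write B in base 2: A^(2^i) comes from repeated squaring of A,
// A^(2^-i) from repeated square roots of A. Assumes A > 0.
double pow_by_sqrt(double A, double B, int iterations = 52) {
    double result = 1.0;

    // Handle negative exponents via A^(-B) = 1 / A^B.
    bool negative = B < 0;
    if (negative) B = -B;

    // Integer part of B: binary exponentiation by repeated squaring.
    unsigned long long ipart = (unsigned long long)B;
    B -= ipart;
    double base = A;
    while (ipart > 0) {
        if (ipart & 1) result *= base;
        base *= base;
        ipart >>= 1;
    }

    // Fractional part of B: each step peels off one binary digit b[-k].
    double root = A;                 // equals A^(2^-k) after k square roots
    for (int k = 1; k <= iterations && B > 0; ++k) {
        root = std::sqrt(root);
        B *= 2;                      // shift the next fractional bit into the integer place
        if (B >= 1) {                // that bit is 1: multiply the corresponding factor in
            result *= root;
            B -= 1;
        }
    }

    return negative ? 1.0 / result : result;
}

int main() {
    std::printf("%f vs %f\n", pow_by_sqrt(2.0, 1.5), std::pow(2.0, 1.5));   // ~2.828427
    std::printf("%f vs %f\n", pow_by_sqrt(9.0, 0.5), std::pow(9.0, 0.5));   // 3.000000
    return 0;
}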
If you have a function F() that does e^x, where e is the constant and x is any number, then you can do this (a is the base, b is the exponent, ln is log base e):
a^b = F(b * ln(a))
If you don't have an F() that does e^x, then it gets trickier. If your exponent b is rational, you should be able to find integers m and n so that b = m/n, using a loop of some sort. Once you have m and n, you can multiply a by itself m times to get a^m; a^(m/n) is then the n-th root of a^m (dividing a^m by a^n would give a^(m-n), not a^(m/n)), so with only a square root available you are back to the bit-by-bit method above.
