Calculating varied summation formula?

I'm trying to programmatically emulate my calculator's summation function without using loops; the reason is that loops become very expensive once the summed function gets complicated. So far I know of the formula n(n+1)/2, but that only works if the summation looks like:
from X = 1 to 100, Σ (X), result = 5050.
Without a loop, is there a way to implement a function where:
from X = 1 to 100, Σ (X^2+X)?
EDIT: Note that the formula must account for all possible function bodies.
Thanks for the answers

The formula Σ (X^2+X) is equal to Σ (X) + Σ (X^2). You already know how to calculate Σ (X).
As for Σ (X^2), this sum is known as a square pyramidal number. You can see a longer explanation here, but the formula is:
n^3/3 + n^2/2 + n/6
Together, that's
n^3/3 + n^2/2 + n/6 + n(n+1)/2
Or
(n^3 + 2n)/3 + n^2
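As a sanity check, here is a short Python sketch (mine, not part of the original answer) comparing the closed form with a brute-force loop:

def closed_form(n):
    # sum of (X^2 + X) for X = 1..n, using (n^3 + 2n)/3 + n^2
    return (n**3 + 2*n) // 3 + n**2

n = 100
assert closed_form(n) == sum(x**2 + x for x in range(1, n + 1))
print(closed_form(n))  # 343400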

Related

Evaluate the Binomial polynomial expression in R

I need to evaluate a binomial polynomial expression in R. I can build a polynomial expression using the polynomial() function in R, but on top of evaluating the expression as a polynomial, I need the expression to obey binomial (modulo-2) arithmetic as well.
For example: in binomial,
we know
1+1 = 0, which is also 1 XOR 1 = 0,
Now, if we do the same in polynomial expressions, it can be done in the following way:
(1+x) + x = 1
Here we suppose,
x + x is similar to 1 + 1 which is equal to zero. Or in other words x XOR x = 0.
Earlier I had added the whole of my code in R; maybe a few people did not understand the question, so they thought it better to close it.
I need to know how to implement the XOR operation in binomial polynomial expressions in R.
Need to apply in following manner:
let f(x) = (1 + x + x^3) and g(x) = (x + x^3),
Therefore for the sum of f(x) and g(x), I need to do the following:
f(x) + g(x) = (1 + x + x^3) + (x + x^3)
= 1 + (1 + 1)x + (1 + 1)x^3 (using addition modulo 2 in Z2)
= 1 + (0)x + (0)x^3
= 1.
I hope this time it is clearer what exactly I want and that my question is more understandable.
Thanks in Advance
XOR <- function(x,y) (x+y) %% 2
would give you an XOR function fitting your definition.
Adding a solution to my own question.
Basically we first need to compute the polynomial sum, just as we normally would. That is the first step. For example, to add f(x) and g(x), create a function like the one below:
library(polynom)

bPolynomial <- function(f, g) {
  # f and g are 'polynomial' objects, e.g. f = 1 + x + x^3 and g = x + x^3
  f + g
}
The second step is to extract the coefficients from the polynomial above and reduce them modulo 2, using the code below:
C_D <- bPolynomial(polynomial(c(1, 1, 0, 1)), polynomial(c(0, 1, 0, 1)))  # f + g from step one
coeff <- coefficients(C_D) %% 2  # reduce each coefficient modulo 2
C_D <- polynomial(coeff)
print(C_D)
That is it; you will get the desired result. I do feel a bit stupid for getting stuck on something so basic, but implementing mathematical computations can sometimes be confusing, and the same happened to me!
Hope it will be helpful to other people. Thanks.
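For reference, the same idea as a short Python sketch (mine, not the poster's R code; gf2_add is a hypothetical helper name): a polynomial over Z2 is just its list of coefficients, and addition is coefficient-wise modulo 2.

def gf2_add(f, g):
    # pad to a common length, then add coefficient-wise modulo 2
    n = max(len(f), len(g))
    f = f + [0] * (n - len(f))
    g = g + [0] * (n - len(g))
    return [(u + v) % 2 for u, v in zip(f, g)]

f = [1, 1, 0, 1]      # 1 + x + x^3
g = [0, 1, 0, 1]      # x + x^3
print(gf2_add(f, g))  # [1, 0, 0, 0], i.e. the polynomial 1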

Rewriting sine using simprocs in Isabelle

I want to implement a simproc capable of rewriting the argument of sin into a linear combination x + k * pi + k' * pi / 2 (where ideally k' = 0 or k' = 1) and then apply existing lemmas about additions of arguments in sines.
The steps could be as follows:
Pattern match the goal to extract the argument of sin(expr):
fun dest_sine t =
  case t of
    (@{term "(sin) :: real ⇒ real"} $ t') => t'
  | _ => raise TERM ("dest_sine", [t]);
Prove that for some x, k, k': expr = x + k*pi + k' * pi/2.
Use existing lemmas to rewrite to a simpler trigonometric function:
fun rewriter x k k' =
  if (k mod 2 = 0 andalso k' = 0) then @{term "sin"} $ x
  else if (k mod 2 = 0 andalso k' = 1) then @{term "cos"} $ x
  else if (k mod 2 = 1 andalso k' = 0) then @{term "-sin"} $ x
  else @{term "-cos"} $ x
I'm stuck at step two. The idea is to use algebra simplifications to obtain the x,k,k' where the theorem holds. I believe schematic goals should do this but I haven't ever used them.
My thoughts
Could I rather assume that the expression is of this form and let the simplifier find it so that the simproc can be triggered?
If I first start assuming the linear form x + k*pi + k' * pi/2 then:
Extract x,k,k' from this combination.
Apply rewriter and obtain the corresponding term to rewrite to.
Apply in a sequence: rules dealing with + pi/2, rules dealing with + 2 pi
I would start easy and ignore the pi / 2 part for now.
You probably want to build a simproc that matches on anything of the form sin x. Then you want to write a conversion that takes that term x (which is assumed to be a sum of several terms) and brings it into the form a + of_int b * p.
A conversion is essentially a function of type cterm → thm which takes a cterm ct and returns a theorem of the form ct ≡ …, i.e. it's a form of deterministic rewriting (a conversion can also fail by throwing a CTERM exception, by convention). There are a lot of combinators for building and using these in Pure/conv.ML.
This is probably a bit fiddly. You essentially have to descend through the term and, for each atom (i.e. anything not of the form _ + _) you have to figure out whether it can be brought into the form of_int … * pi (e.g. again by writing a conversion that does this transformation – to make it easy you can omit this part so that your procedure only works if the terms are already in that form) and then group all the terms of the form of_int … * pi to the right and all the terms not of that form to the left using associativity and commutativity.
I would suggest this:
Define a function SIN_SIMPROC_ATOM x n = x + of_int n * pi
Write a conversion sin_atom_conv that rewrites of_int n * pi to SIN_SIMPROC_ATOM 0 n and everything else into SIN_SIMPROC_ATOM x 0
Write a conversion that descends through +, applies sin_atom_conv to every atom, and then applies some kind of combination rule like SIN_SIMPROC_ATOM x1 n1 + SIN_SIMPROC_ATOM x2 n2 = SIN_SIMPROC_ATOM (x1 + x2) (n1 + n2)
In the end, you have rewritten your entire form to the form sin (SIN_SIMPROC_ATOM x n), and then you can apply some suitable rule to that.
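To illustrate the bookkeeping, here is a rough Python model of the atom/combination steps (my own sketch with a hypothetical tuple term representation, not Isabelle/ML):

# An "atom" (x, n) stands for SIN_SIMPROC_ATOM x n = x + of_int n * pi.

def atom_conv(term):
    # Hypothetical term representation: ('pi_mult', n) models of_int n * pi,
    # ('other', x) models any other summand.
    kind, val = term
    return (0.0, val) if kind == 'pi_mult' else (val, 0)

def combine(a, b):
    # SIN_SIMPROC_ATOM x1 n1 + SIN_SIMPROC_ATOM x2 n2
    #   = SIN_SIMPROC_ATOM (x1 + x2) (n1 + n2)
    return (a[0] + b[0], a[1] + b[1])

# sin(1.5 + pi + 2*pi) normalises to sin(SIN_SIMPROC_ATOM 1.5 3):
summands = [('other', 1.5), ('pi_mult', 1), ('pi_mult', 2)]
x, n = 0.0, 0
for t in summands:
    x, n = combine((x, n), atom_conv(t))
print(x, n)  # 1.5 3 -> n is odd, so this reduces to -sin(1.5)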
It's not quite clear to me how to best handle the parity of n. You could rewrite sin (SIN_SIMPROC_ATOM x n) = (-1) ^ nat |n| * sin x but I'm not sure if that's what the user really wants in most cases. It might make more sense to only do that if you can deduce the parity of n statically (e.g. by using the simplifier) and then directly simplify to sin x or -sin x.
The situation becomes even more complicated if you want to include halves of π. You can of course extend SIN_SIMPROC_ATOM by a second term for halves of π (and one for doubles of π as well to make it more uniform). Or you could add all of them together so that you just have a single integer n that describes your multiples of π/2, and k multiples of π simply contribute 2k to that term. And then you have to figure out what n mod 4 is – possibly again with the simplifier or with some clever static method.

Calculating the coefficients of a separable state

Given a separable 2-qubit state
φ = φ₀ ⊗ φ₁
with
φᵢ = aᵢ₀|0> + aᵢ₁|1>
φ thus can be written as
φ = b₀₀|00> + b₀₁|01> + b₁₀|10> + b₁₁|11>
with
bᵢⱼ = a₀ᵢ a₁ⱼ.
Now let some bᵢⱼ be given, i.e. an arbitrary 2-qubit state
φ = b₀₀|00> + b₀₁|01> + b₁₀|10> + b₁₁|11>
Let B = (bᵢⱼ). By Schmidt decomposition there are 2x2 matrices U, V, Σ, such that
U, V unitary
Σ positive semidefinite diagonal
B = U ∘ Σ ∘ V*
Let σ₀, σ₁ be the two diagonal elements of Σ.
The state φ = b₀₀|00> + b₀₁|01> + b₁₀|10> + b₁₁|11> is entangled if and only if σ₀ + σ₁ > 1.
QUESTION
Suppose we are given a state φ = b₀₀|00> + b₀₁|01> + b₁₀|10> + b₁₁|11> and its Schmidt decomposition B = U ∘ Σ ∘ V* with σ₀ + σ₁ ≤ 1, i.e. the state is separable. This means there are φᵢ = aᵢ₀|0> + aᵢ₁|1> such that φ can be written as
φ = φ₀ ⊗ φ₁
How do I calculate A = (aᵢⱼ) from B = (bᵢⱼ), i.e. from U, V, Σ?
This is the reverse of
bᵢⱼ = a₀ᵢ a₁ⱼ
given that bᵢⱼ defines a separable state.
If you're given a pure state and promised that it's separable, you don't need the Schmidt decomposition to compute the parts. Just lay the amplitudes out in a grid, read off the proportions between the columns for one part and read off the proportions between the rows for the other.
That is to say, the statement that a 2-qubit system φ is separable so φ = αβ guarantees that φ₀₀/φ₀₁ = φ₁₀/φ₁₁ = β₀/β₁ and that φ₀₀/φ₁₀ = φ₀₁/φ₁₁ = α₀/α₁. And knowing α₀/α₁ is enough to solve for α, except for the global phase factor. (Note: work with proportions α₀:α₁ instead of ratios α₀/α₁ if α₁ might be zero.)
This generalizes to systems with more qubits. A given subset of qubits is separable if and only if grouping by all the other qubits gives you a bunch of parts with agreeing proportions between their pieces. And the proportions between the pieces constrain everything except the global phase factor.
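As a concrete illustration, here is a small numpy sketch of the proportion method for the 2-qubit case (my own example, assuming an exactly separable state):

import numpy as np

# Amplitudes of a separable state laid out in a grid: b[i, j] = alpha_i * beta_j.
b = np.outer([0.6, 0.8], np.array([1, 1j]) / np.sqrt(2))

# Use the largest row/column so we never divide by a zero amplitude.
i = np.argmax(np.linalg.norm(b, axis=1))  # row i carries the proportion beta_0 : beta_1
j = np.argmax(np.linalg.norm(b, axis=0))  # column j carries alpha_0 : alpha_1
beta = b[i, :] / np.linalg.norm(b[i, :])
alpha = b[:, j] / np.linalg.norm(b[:, j])

# The recovered factors reproduce b up to a single global phase:
phase = np.vdot(np.outer(alpha, beta), b)
print(np.allclose(phase * np.outer(alpha, beta), b))  # True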
Using the Schmidt decomposition as a shortcut
The Schmidt decomposition does make this easier. It does all the hard 'reconstructing the proportions' work. If a pure state is separable then its SVD has only one non-zero singular value, and that singular value equals 1. So you have something like:
    |1 0 0 ...|
U * |0 0 0 ...| * V
    |0 0 0 ...|
    |...    ...|
But that's just multiplying the first column of U by the first row of V! So we have a system with n*m entries being created from a system with n entries and a system with m entries... Yup, the first column and the first row contain the amplitudes of α and β.
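In code (a numpy sketch, my own example rather than anything from Quirk):

import numpy as np

# A product state: B = outer(alpha, beta), so the SVD has singular values (1, 0).
alpha = np.array([3, 4j]) / 5
beta = np.array([1, 1]) / np.sqrt(2)
B = np.outer(alpha, beta)

U, s, Vh = np.linalg.svd(B)
assert np.isclose(s[0], 1) and np.isclose(s[1], 0)  # separability check
alpha_rec = U[:, 0]   # first column of U
beta_rec = Vh[0, :]   # first row of V*
print(np.allclose(np.outer(alpha_rec, beta_rec), B))  # True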
Example
My circuit simulator Quirk has built-in inline amplitude displays that perform this kind of separation (without doing an SVD). You can see the code that does it on github, though it's all GPU based so not particularly clear.
(It was by far the most complicated display to write, since it has to do the grouping then compare all the groups. But some groups might have no amplitude so they have to be ignored, and there may be noise in the system from float errors so you should focus on the big groups and... blergh.)
Also you can play with it in the simulator itself. Here's an example circuit using those displays: [example circuit image from the original answer]
You might also find this blog post intuitively useful.

Simplifying Recurrence Relation c(n) = c(n/2) + n^2

I'm really confused on simplifying this recurrence relation: c(n) = c(n/2) + n^2.
So I first got:
c(n/2) = c(n/4) + n^2
so
c(n) = c(n/4) + n^2 + n^2
c(n) = c(n/4) + 2n^2
c(n/4) = c(n/8) + n^2
so
c(n) = c(n/8) + 3n^2
I do sort of notice a pattern though:
2 raised to the power of the coefficient in front of "n^2" gives the denominator that n is over.
I'm not sure if that would help.
I just don't understand how I would simplify this recurrence relation and then find the theta notation of it.
EDIT: Actually I just worked it out again and I got c(n) = c(n/n) + n^2*lgn.
I think that is correct, but I'm not sure. Also, how would I find the theta notation of that? Is it just theta(n^2lgn)?
Firstly, make sure to substitute n/2 everywhere n appears in the original recurrence relation when placing c(n/2) on the lhs.
i.e.
c(n/2) = c(n/4) + (n/2)^2
Your intuition is correct, in that it is a very important part of the problem. How many times can you divide n by 2 before you reach 1?
Let's take 8 for an example
8/2 = 4
4/2 = 2
2/2 = 1
You see it's 3, which, as it turns out, is log2(8).
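In Python terms (a trivial sketch of that observation):

def halvings(n):
    # how many times n can be halved before reaching 1
    count = 0
    while n > 1:
        n //= 2
        count += 1
    return count

print(halvings(8))  # 3, i.e. log2(8)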
In order to prove the theta notation, it might be helpful to check out the master theorem. This is a very useful tool for proving complexity of a recurrence relation.
Using the master theorem case 3, we can see
a = 1
b = 2
log_b(a) = 0
c = 2
n^2 = Omega(n^2)
k = 9/10
(n/2)^2 < k*n^2
c(n) = Theta(n^2)
The intuition as to why the answer is Theta(n^2) is that you have n^2 + (n^2)/4 + (n^2)/16 + ... + (n^2)/4^(log2 n), which doesn't give us log n full-sized n^2 terms, but instead increasingly smaller fractions of n^2 whose sum is bounded by a constant times n^2.
Let's answer a more generic question for recurrences of the form:
r(n) = r(d(n)) + f(n). There are some restrictions on the functions that need further discussion, e.g. if x is a fixed point of d, then f(x) should be 0, otherwise there isn't any solution. In your specific case this condition is satisfied.
Rearranging the equation, we get r(n) - r(d(n)) = f(n), which gives the intuition that r(n) and r(d(n)) are both sums of some terms, but r(n) has one more term than r(d(n)); that's why f(n) appears as the difference. On the other hand, r(n) and r(d(n)) have to have the same 'form', so the number of terms in the previously mentioned sum has to be infinite.
Thus we are looking for a telescopic sum, in which the terms for r(d(n)) cancel out all but one terms for r(n):
r(n) = f(n) + a_0(n) + a_1(n) + ...
- r(d(n)) = - a_0(n) - a_1(n) - ...
This latter means that
r(d(n)) = a_0(n) + a_1(n) + ...
And just by substituting d(n) into the place of n into the equation for r(n), we get:
r(d(n)) = f(d(n)) + a_0(d(n)) + a_1(d(n)) + ...
So by choosing a_0(n) = f(d(n)), a_1(n) = a_0(d(n)) = f(d(d(n))), and so on: a_k(n) = f(d(d(...d(n)...))) (with k+1 pieces of d in each other), we get a correct solution.
Thus in general, the solution is of the form r(n) = sum{i=0..infinity}(f(d[i](n))), where d[i](n) denotes the function d(d(...d(n)...)) with i number of iterations of the d function.
For your case, d(n)=n/2 and f(n)=n^2, hence you can get the solution in closed form by using identities for geometric series. The final result is r(n)=4/3*n^2.
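A quick numerical check of that closed form (a Python sketch, not part of the original answer):

def r(n, depth=60):
    # r(n) = sum over i of f(d[i](n)) with d(n) = n/2 and f(n) = n^2
    total, x = 0.0, float(n)
    for _ in range(depth):  # the geometric tail vanishes quickly
        total += x * x
        x /= 2
    return total

n = 1000
print(r(n), 4 / 3 * n**2)  # both approximately 1333333.33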
Go for the advanced master theorem:
T(n) = a*T(n/b) + n^k * log^p(n)
where a >= 1, b > 1, k >= 0 and p is a real number.
Case 1: a > b^k
T(n) = Theta(n^(log_b a))
Case 2: a = b^k
1. p > -1: T(n) = Theta(n^(log_b a) * log^(p+1)(n))
2. p = -1: T(n) = Theta(n^(log_b a) * log(log(n)))
3. p < -1: T(n) = Theta(n^(log_b a))
Case 3: a < b^k
1. p >= 0: T(n) = Theta(n^k * log^p(n))
2. p < 0: T(n) = O(n^k)
Constants are ignored because they don't change the asymptotic time complexity and vary from processor to processor (i.e. n/2 == n*1/2 == n asymptotically).
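As an illustration, here is a small Python helper (hypothetical, simply encoding the cases above) that classifies such recurrences:

import math

def advanced_master(a, b, k, p):
    # T(n) = a*T(n/b) + n^k * log^p(n); returns the asymptotic class
    c = math.log(a, b)  # log_b(a)
    if a > b**k:
        return f"Theta(n^{c:g})"
    if a == b**k:
        if p > -1:
            return f"Theta(n^{c:g} * log^{p + 1:g}(n))"
        if p == -1:
            return f"Theta(n^{c:g} * log(log(n)))"
        return f"Theta(n^{c:g})"
    if p >= 0:
        return f"Theta(n^{k:g} * log^{p:g}(n))"
    return f"O(n^{k:g})"

print(advanced_master(1, 2, 2, 0))  # c(n) = c(n/2) + n^2 -> Theta(n^2 * log^0(n)), i.e. Theta(n^2)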

How to obtain the numerical solution of these differential equations with matlab

I have differential equations derived from epidemic spreading. I want to obtain the numerical solutions. Here are the equations: [given as an image in the original question]
t is the independent variable and ranges over [0, 100].
The initial value is
y1 = 0.99; y2 = 0.01; y3 = 0;
At first, I planned to handle these with the ode45 function in Matlab; however, I don't know how to express the series and the combination. So I'm asking for help here.
The problem is how to express the right-hand side of the equations as the odefun, which is a parameter of the ode45 function.
Matlab has functions to calculate binomial coefficients (number of combinations) and the finite series can be expressed just as matrix multiplication. I'll demonstrate how that works for the sum in the first equation. Note the use of the element-wise "dotted" forms of the arithmetic operators.
Calculate a row vector coefs with the constant coefficients in the sum as:
octave-3.0.0:33> a = 0:20;
octave-3.0.0:34> coefs = log2(a * 0.05 + 1) .* bincoeff(20, a);
The variables get combined into another vector:
octave-3.0.0:35> y1 = 0.99;
octave-3.0.0:36> y2 = 0.01;
octave-3.0.0:37> z = (y2 .^ a) .* ((1 - y2) .^ a) .* (y1 .^ a);
And the sum is then just evaluated as the inner product:
octave-3.0.0:38> coefs * z'
The other sums are similar.
function demo(a_in)
  X0 = [0.99; 0.01; 0];   % initial values y1, y2, y3 from the question
  T = 0:.1:100;
  a = a_in;               % for nested scope
  [Tout, Xout] = ode45(@myFunc, T, X0);

  function dxdt = myFunc(t, x)
    % nested function accesses "a"
    dxdt = 0*x + a;
    % Todo: real value of dxdt.
  end
end
What about this, where you simply need to fill in dxdt from your math above? It remains to be seen whether the numerical roundoff matters...
Edit: there's a serious issue due to the 1=y1+y2+y3 constraint. Is that even allowed, since you have an IVP with 3 initial values given and 3 first order ODE's? If that constraint is a natural consequence of the equations, it may not be needed.
