Evaluate the binomial polynomial expression in R

I need to calculate a binomial polynomial expression in R. I can build and evaluate the polynomial using the polynomial() function in R, but on top of evaluating the polynomial expression, I need its arithmetic to obey binomial rules as well.
For example, in binomial arithmetic,
we know
1 + 1 = 0, which is also 1 XOR 1 = 0.
Now, if we do the same in polynomial expressions, it can be done in the following way:
(1 + x) + x = 1
Here we suppose
x + x is similar to 1 + 1, which is equal to zero; or in other words, x XOR x = 0.
Earlier I had added the whole of my code in R; maybe a few people did not understand the question, so they thought it better to close it.
I need to know how to implement the XOR operation in a binomial polynomial expression in R.
It needs to be applied in the following manner:
let f(x) = (1 + x + x^3) and g(x) = (x + x^3),
Therefore for the sum of f(x) and g(x), I need to do the following:
f(x) + g(x) = (1 + x + x^3) + (x + x^3)
= 1 + (1 + 1)x + (1 + 1)x^3 (using addition modulo 2 in Z2)
= 1 + (0)x + (0)x^3
= 1.
I hope this time I am clearer about what exactly I want and that my question is more understandable.
Thanks in advance

XOR <- function(x,y) (x+y) %% 2
would give you an XOR function fitting your definition.
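For instance, applied elementwise to coefficient vectors (my own illustration; position i holds the coefficient of x^(i-1)):
XOR <- function(x, y) (x + y) %% 2
f <- c(1, 1, 0, 1)  # 1 + x + x^3
g <- c(0, 1, 0, 1)  # x + x^3
XOR(f, g)           # 1 0 0 0, i.e. the constant polynomial 1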

Adding a solution to my own question.
Basically we first need to calculate the polynomial sum, just as we normally would. That is the first step. For example, to add f(x) and g(x), create a function like the one below (using the polynom package):
library(polynom)
bPolynomial <- function(f, g) {
  K <- f + g  # e.g. f is (1 + x + x^3) and g is (x + x^3)
  K           # return the unreduced sum
}
The second step is to extract the coefficients from the resulting polynomial and reduce them modulo 2, using the code below:
coeff <- coefficients(K) %% 2  # K is the sum returned by bPolynomial
print(coeff)
C_D <- polynomial(coeff)
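Putting both steps together, a minimal end-to-end sketch (assuming the polynom package, with f and g from the question):
library(polynom)
f <- polynomial(c(1, 1, 0, 1))  # 1 + x + x^3
g <- polynomial(c(0, 1, 0, 1))  # x + x^3
K <- bPolynomial(f, g)
C_D <- polynomial(coefficients(K) %% 2)
print(C_D)                      # 1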
That is it; you will get the desired result. I feel a bit stupid for getting stuck on something so basic, but implementing mathematical computation sometimes confuses people, and the same happened to me!
Hope it will be helpful to other people. Thanks.

Related

Pytorch derivative calculation

I have this simple pytorch code:
x = torch.arange(3,dtype=float)
x.requires_grad_(True)
y = 3*x + x.sum()
y.backward(torch.ones(3))
x.grad
This gives me [6, 6, 6], but shouldn't it be [4, 4, 4]?
Because if we have f(x) = 3*x0 + 3*x1 + 3*x2 + x0 + x1 + x2, each partial derivative would be 3 + 1 = 4?
The result is correct, and here is why.
I will refer to the first element of your result; the same reasoning extends to the other elements. You expect dy1/dx1, but that is not what backward computes here. What your code computes is dy1/dx1 + dy2/dx1 + dy3/dx1 = 4 + 1 + 1 = 6, because every yi contains the x.sum() term and therefore also depends on x1.
The ones you pass to .backward means the computed result is dot_product(ones, dy/dx). Note that dy/dx is a 3x3 Jacobian matrix.
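Written out, each y_i = 3*x_i + (x_0 + x_1 + x_2), so the Jacobian dy/dx has 3 + 1 = 4 on the diagonal and 1 everywhere else:

dy/dx = | 4 1 1 |
        | 1 4 1 |
        | 1 1 4 |

and dot_product(ones, dy/dx) = [1 1 1] . dy/dx = [6 6 6], which is exactly what x.grad reports.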

R Optimize linear equations coefficients with constraints

Say I have n linear equations of the form:
ax1 + bx2 + cx3 = y1
-ax1 + bx2 + cx3 = y2
-ax1 -bx2 + cx3 = y3
Here n = 3, and a, b, c are known and fixed.
I'm looking for the optimal values of x1, x2, x3 such that each lies within [-r, r] for some positive r, and the sum sum(y1, y2, y3) is maximized.
Is there a package for R which can handle such optimization problems?
You can use the optim function in R for this purpose.
If you are trying to maximize sum(y1, y2, y3), the problem simplifies to maximizing (-a*x1 + b*x2 + 3*c*x3) such that x1, x2, x3 ∈ [-r, r], because summing the three left-hand sides gives (a - a - a)x1 + (b + b - b)x2 + (c + c + c)x3.
You can use the code below to find the optimal values. Note that optim minimizes by default, so the function returns the negative of the sum.
max_sum <- function(x) {
  a <- 2; b <- -3; c <- 2
  y <- -a*x[1] + b*x[2] + 3*c*x[3]  # coefficients of sum(y1, y2, y3)
  return(-1*y)                      # negate because optim minimizes
}
r <- 5
optim(par = c(0, 0, 0), max_sum, method = "L-BFGS-B", lower = -r, upper = r)
$par
[1] -5 -5  5
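Since the objective is linear with box constraints, the optimum sits at a corner, so you can also read the answer off directly; a quick closed-form check (my own illustration, not part of the original answer):
a <- 2; b <- -3; c <- 2; r <- 5
coefs <- c(-a, b, 3*c)    # coefficients of sum(y1, y2, y3)
x_opt <- r * sign(coefs)  # each x_i takes the bound matching its coefficient's sign
x_opt                     # -5 -5  5
sum(coefs * x_opt)        # maximal value: 10 + 15 + 30 = 55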

Underflow in R, sum of logarithms of probabilities

How do I calculate the logarithm of a sum of probabilities, i.e. ln(p1 + p2), where p1 and p2 are both very small numbers, using only the values lp1 = ln(p1) and lp2 = ln(p2)?
If p1 and p2 are very small numbers, underflow will happen. How can this be avoided?
In general, the following tips are useful for taking logs in R:
If you are taking log(1+x) for a very small x there is a function log1p that is more accurate (see also expm1).
log(x^a) = a*log(x)
log(a*x) = log(a) + log(x)
Calculating log(x) for small x is fine. log(1e-308) does not suffer from underflow. Calculating exp(-1e308) is different, but that is far smaller than any representable answer anyway.
One way to solve your problem (assuming p1 and p2 are smaller than about 10^-308, below which doubles underflow) is to work with log(p2) and the ratio p1/p2 = exp(lp1 - lp2), and then
log(p1 + p2) = log(1 + p1/p2) + log(p2)
Calculate the first term using log1p; you already have the second.
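A minimal sketch of this in R (the standard log-sum-exp trick; factoring out the larger term keeps exp from underflowing):
logsum <- function(lp1, lp2) {
  m <- max(lp1, lp2)      # the larger log-probability
  s <- min(lp1, lp2)
  m + log1p(exp(s - m))   # log(p1 + p2) = m + log(1 + exp(s - m))
}
lp1 <- -800; lp2 <- -805  # p1 and p2 are far below the double-precision floor
logsum(lp1, lp2)          # ~ -799.9933, computed without ever forming p1 or p2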

Calculating varied summation formula?

I'm trying to programmatically emulate my calculator's summation function without the use of loops; the reason is that loops become very expensive once the summand gets complicated. So far, I know of the formula n(n+1)/2, but that only works if the summation looks like:
from X = 1 to 100, Σ (X), result = 5050.
Without a loop, is there a way to implement a function where:
from X = 1 to 100, Σ (X^2+X)?
EDIT: Note that the formula must account for all possible function bodies.
Thanks for the answers
The formula Σ (X^2+X) is equal to Σ (X) + Σ (X^2). You already know how to calculate Σ (X).
As for Σ (X^2), this is known as the square pyramidal number. You can see a longer explanation here, but the formula is:
n^3/3 + n^2/2 + n/6
Together, that's
n^3/3 + n^2/2 + n/6 + n(n+1)/2
Or
(n^3 + 2n)/3 + n^2
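A quick R check of the closed form against a direct summation (my own verification, not part of the answer):
sum_x2_plus_x <- function(n) (n^3 + 2*n)/3 + n^2  # closed form for the sum of X^2 + X
n <- 100
sum_x2_plus_x(n)      # 343400
sum((1:n)^2 + (1:n))  # 343400, direct computation for comparison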

Simplifying Recurrence Relation c(n) = c(n/2) + n^2

I'm really confused on simplifying this recurrence relation: c(n) = c(n/2) + n^2.
So I first got:
c(n/2) = c(n/4) + n^2
so
c(n) = c(n/4) + n^2 + n^2
c(n) = c(n/4) + 2n^2
c(n/4) = c(n/8) + n^2
so
c(n) = c(n/8) + 3n^2
I do sort of notice a pattern though:
2 raised to the power of the coefficient in front of "n^2" gives the denominator that n is over.
I'm not sure if that would help.
I just don't understand how I would simplify this recurrence relation and then find the theta notation of it.
EDIT: Actually I just worked it out again and I got c(n) = c(n/n) + n^2*lgn.
I think that is correct, but I'm not sure. Also, how would I find the theta notation of that? Is it just theta(n^2lgn)?
Firstly, make sure to substitute n/2 everywhere n appears in the original recurrence relation when placing c(n/2) on the left-hand side,
i.e.
c(n/2) = c(n/4) + (n/2)^2
Your intuition is correct, in that it is a very important part of the problem. How many times can you divide n by 2 before reaching 1?
Let's take 8 for an example
8/2 = 4
4/2 = 2
2/2 = 1
You see it's 3, which, as it turns out, is log2(8).
In order to prove the theta notation, it might be helpful to check out the master theorem. This is a very useful tool for proving complexity of a recurrence relation.
Using case 3 of the master theorem, we have:
a = 1, b = 2, so log_b(a) = 0
f(n) = n^2 = Omega(n^(log_b(a) + ε)) with ε = 2
Regularity condition: a*f(n/b) = (n/2)^2 <= k*n^2 for k = 9/10 < 1
Therefore c(n) = Theta(n^2)
The intuition as to why the answer is Theta(n^2) is that the total work is n^2 + (n^2)/4 + (n^2)/16 + ... + (n^2)/4^(log2(n)): not log n copies of n^2, but a geometric series of increasingly smaller terms, which is bounded by a constant times n^2.
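To see the geometric series numerically, a short R sketch (my own check, assuming n is a power of two):
n <- 1024
terms <- n^2 / 4^(0:log2(n))  # n^2, n^2/4, n^2/16, ...
sum(terms) / n^2              # ~ 4/3, so the total is Theta(n^2)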
Let's answer a more generic question for recurrences of the form:
r(n) = r(d(n)) + f(n). There are some restrictions on the functions that need further discussion, e.g. if x is a fixed point of d, then f(x) should be 0, otherwise there isn't any solution. In your specific case this condition is satisfied.
Rearranging the equation gives r(n) - r(d(n)) = f(n), and the intuition is that r(n) and r(d(n)) are both sums of some terms, but r(n) has one more term than r(d(n)); that is why f(n) appears as the difference. On the other hand, r(n) and r(d(n)) have to have the same 'form', so the number of terms in each of the sums has to be infinite.
Thus we are looking for a telescopic sum, in which the terms for r(d(n)) cancel out all but one terms for r(n):
r(n) = f(n) + a_0(n) + a_1(n) + ...
- r(d(n)) = - a_0(n) - a_1(n) - ...
This latter means that
r(d(n)) = a_0(n) + a_1(n) + ...
And just by substituting d(n) into the place of n into the equation for r(n), we get:
r(d(n)) = f(d(n)) + a_0(d(n)) + a_1(d(n)) + ...
So by choosing a_0(n) = f(d(n)), a_1(n) = a_0(d(n)) = f(d(d(n))), and so on: a_k(n) = f(d(d(...d(n)...))) (with k+1 pieces of d in each other), we get a correct solution.
Thus in general, the solution is of the form r(n) = sum{i=0..infinity}(f(d[i](n))), where d[i](n) denotes the function d(d(...d(n)...)) with i number of iterations of the d function.
For your case, d(n) = n/2 and f(n) = n^2, hence you can get the solution in closed form by using identities for geometric series: r(n) = n^2 + (n/2)^2 + (n/4)^2 + ... = n^2 * (1 + 1/4 + 1/16 + ...). The final result is r(n) = (4/3)*n^2.
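A quick numerical check of that closed form in R (my own sketch; the loop just unrolls the recurrence down to n < 1):
c_rec <- function(n) {
  total <- 0
  while (n >= 1) {   # unroll c(n) = c(n/2) + n^2
    total <- total + n^2
    n <- n / 2
  }
  total
}
c_rec(1024) / 1024^2  # ~ 1.3333, matching r(n) = (4/3)*n^2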
Go for the advanced master theorem.
T(n) = a*T(n/b) + n^k * log^p(n)
where a > 0, b > 1, k >= 0 and p is a real number.
Case 1: a > b^k
T(n) = Theta(n^(log_b(a))), where b is the base of the logarithm.
Case 2: a = b^k
1. if p > -1, then T(n) = Theta(n^(log_b(a)) * log^(p+1)(n))
2. if p = -1, then T(n) = Theta(n^(log_b(a)) * log(log(n)))
3. if p < -1, then T(n) = Theta(n^(log_b(a)))
Case 3: a < b^k
1. if p >= 0, then T(n) = Theta(n^k * log^p(n))
2. if p < 0, then T(n) = O(n^k)
Constants are dropped because they do not change the time complexity and only vary from processor to processor (i.e. n/2 == n*(1/2) == n asymptotically).
For this recurrence, a = 1, b = 2, k = 2 and p = 0, so a < b^k with p >= 0, giving T(n) = Theta(n^2).
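A small R sketch of this case analysis (my own illustration; it just returns the Theta class as a string):
adv_master <- function(a, b, k, p) {
  # Classify T(n) = a*T(n/b) + n^k * log^p(n)
  e <- log(a, base = b)  # log_b(a)
  if (a > b^k) {
    sprintf("Theta(n^%g)", e)
  } else if (a == b^k) {
    if (p > -1) sprintf("Theta(n^%g * log^%g(n))", e, p + 1)
    else if (p == -1) sprintf("Theta(n^%g * log(log(n)))", e)
    else sprintf("Theta(n^%g)", e)
  } else {
    if (p >= 0) sprintf("Theta(n^%g * log^%g(n))", k, p)
    else sprintf("O(n^%g)", k)
  }
}
adv_master(a = 1, b = 2, k = 2, p = 0)  # "Theta(n^2 * log^0(n))", i.e. Theta(n^2)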
