Shortcut for solving an XOR series - math

Is there a shortcut formula to compute the following series?
c^(a+r)^(a+2r)^(a+3r)^...^(a+nr)
c, a, n, r > 0
XOR (bitwise) is denoted by ^
Addition (algebraic) is denoted by +
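For reference, the series can always be evaluated directly in O(n) time, which is useful for checking any candidate closed form. A minimal Python sketch (the helper name xor_series is invented here):

```python
from functools import reduce

def xor_series(c, a, r, n):
    """Compute c ^ (a+r) ^ (a+2r) ^ ... ^ (a+n*r) term by term."""
    return reduce(lambda acc, k: acc ^ (a + k * r), range(1, n + 1), c)

xor_series(5, 3, 2, 3)  # 5 ^ 5 ^ 7 ^ 9 == 14
```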

Derivatives of probability distributions w.r.t. parameters in R?

I need the (analytical) derivatives of the PDFs/log-PDFs/CDFs of the most common probability distributions w.r.t. their parameters, in R. Is there an existing way to obtain these functions?
The gamlss.dist package provides the derivatives of the log-PDFs of many probability distributions (e.g. code for the normal distribution). Is there anything similar for PDFs/CDFs?
Edit: admittedly, the derivatives of the PDFs can be obtained from the derivatives of the log-PDFs by a simple application of the chain rule, but I don't think a similar trick is possible for the CDFs...
The OP mentioned that calculating the derivatives once is OK, so I'll talk about that. I use Maxima, but the same thing could be done with SymPy or another computer algebra system, and it might even be possible in R; I didn't investigate.
In Maxima, probability distributions are in the distrib add-on package which you load via load(distrib). You can find documentation for all the cdf functions by entering ?? cdf_ at the interactive input prompt.
Maxima applies partial evaluation to functions -- if some variables don't have defined values, that's OK; they stay symbolic in the result. So you can say diff(cdf_foo(x, a, b), a) to get a derivative w.r.t. a, for example, with free variables x, a, and b.
You can generate code via grind, which produces output suitable for re-input to Maxima, but other languages will understand the expressions.
There are several ways to do this stuff. Here's just a first attempt.
(%i1) load (distrib) $
(%i2) fundef (cdf_weibull);
(%o2) cdf_weibull(x, a, b) := if maybe((a > 0) and (b > 0)) = false
          then error("cdf_weibull: parameters a and b must be greater than 0")
          else (1 - exp(-(x/b)^a)) * unit_step(x)
(%i3) assume (a > 0, b > 0);
(%o3) [a > 0, b > 0]
(%i4) diff (cdf_weibull (x, a, b), a);
(%o4) -%e^(-x^a/b^a) * unit_step(x) * ((log(b)*x^a)/b^a - (x^a*log(x))/b^a)
(%i5) grind (%);
-%e^-(x^a/b^a)*unit_step(x)*((log(b)*x^a)/b^a-(x^a*log(x))/b^a)$
(%o5) done
(%i6) diff (cdf_weibull (x, a, b), b);
(%o6) -a * b^(-a-1) * x^a * %e^(-x^a/b^a) * unit_step(x)
(%i7) grind (%);
-a*b^((-a)-1)*x^a*%e^-(x^a/b^a)*unit_step(x)$
(%o7) done
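The same derivatives can be reproduced in SymPy, as mentioned above. A minimal sketch; the Weibull CDF for x > 0 is written out by hand here, and simplify may arrange the results differently from Maxima:

```python
import sympy as sp

x, a, b = sp.symbols('x a b', positive=True)
cdf = 1 - sp.exp(-(x / b) ** a)   # Weibull CDF for x > 0

d_a = sp.simplify(sp.diff(cdf, a))  # derivative w.r.t. the shape parameter a
d_b = sp.simplify(sp.diff(cdf, b))  # derivative w.r.t. the scale parameter b

print(d_a)
print(d_b)
print(sp.ccode(d_b))  # code generation, analogous to Maxima's grind
```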

How to specify minimum or maximum possible values in a forecast?

Is there a way to specify minimum or maximum possible values in a forecast done with ETS/ARIMA models?
For example, when forecasting a trend in % that can only lie between 0% and 100%.
I am using R package forecast (and function forecast).
If your time series y has a natural bound [a, b], you should take a "logit-alike" transform first:
f <- function (x, a, b) log((x - a) / (b - x))
yy <- f(y, a, b)
Then the resulting yy is unbounded on (-Inf, Inf), suitable for a Gaussian error assumption. Use yy for time series modelling, and apply the back-transform to the prediction / forecast afterwards:
finv <- function (x, a, b) (b * exp(x) + a) / (exp(x) + 1)
y <- finv(yy, a, b)
Note, the above transform f (hence finv) is monotone, so if the 95% confidence interval for yy is [l, u], the corresponding confidence interval for y is [finv(l), finv(u)].
If your y is bounded on only one side, consider a "log-alike" transform:
bounded on [a, Inf), consider yy <- log(y - a);
bounded on (-Inf, a], consider yy <- log(a - y).
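The round trip can be checked numerically; a sketch in Python with NumPy (the R functions above translate directly):

```python
import numpy as np

def f(x, a, b):
    # "logit-alike" transform mapping (a, b) onto (-inf, inf)
    return np.log((x - a) / (b - x))

def finv(x, a, b):
    # inverse transform mapping back onto (a, b)
    return (b * np.exp(x) + a) / (np.exp(x) + 1)

y = np.array([0.05, 0.2, 0.5, 0.9])
yy = f(y, 0.0, 1.0)          # unbounded working scale
back = finv(yy, 0.0, 1.0)    # back-transform recovers y
```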
Wow, I didn't know Rob Hyndman has a blog. Thanks to @ulfelder for pointing it out. I added it here to make my answer more solid: Forecasting within limits.
That post also covers a case I have not: what to do when data need a log transform but can take the value 0 somewhere. I would just add a small tolerance, say yy <- log(y + 1e-7), to proceed.

Remainder sequences

I would like to compute the remainder sequence of two polynomials, as used in GCD computation. If I understood the Wikipedia article on pseudo-remainder sequences correctly, one way to compute it is Euclid's algorithm:
gcd(a, b) := if b = 0 then a else gcd(b, rem(a, b))
meaning I collect the rem() parts. If, however, the coefficients are integers, the intermediate fractions grow very quickly, so there are the so-called "pseudo-remainder sequences", which try to keep the coefficients as small integers.
My question: if I understood correctly (did I?), the two sequences above differ only by constant factors, but when I run the following example I get different results. Why? The first remainder differs by a factor of -2, OK, but why is the second one so different? I presume subresultants() works correctly, so why does g % (f % g) not match?
from sympy import Poly, subresultants
from sympy.abc import x, y

f = Poly(x**2*y + x**2 - 5*x*y + 2*x + 1, x, y)
g = Poly(2*x**2 - 12*x + 1, x)
print()
print(subresultants(f, g)[2])
print(subresultants(f, g)[3])
print()
print(f % g)
print(g % (f % g))
which results in
Poly(-2*x*y - 16*x + y - 1, x, y, domain='ZZ')
Poly(-9*y**2 - 54*y + 225, x, y, domain='ZZ')
Poly(x*y + 8*x - 1/2*y + 1/2, x, y, domain='QQ')
Poly(2*x**2 - 12*x + 1, x, y, domain='QQ')
the two above sequences differ only by constant factor
For polynomials of one variable, they do. For multivariate polynomials, they don't.
The division of multivariate polynomials is a somewhat tricky business: the result depends on the chosen order of monomials (by default, sympy uses lexicographic order). When you ask it to divide 2*x**2 - 12*x + 1 by x*y + 8*x - 1/2*y + 1/2, it observes that the leading monomial of the denominator is x*y, and no monomial in the numerator is divisible by x*y. So the quotient is zero, and everything is remainder.
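That behaviour is easy to reproduce directly; a small sketch using Poly.div with the same polynomials as in the g % (f % g) step:

```python
import sympy as sp
from sympy.abc import x, y

num = sp.Poly(2*x**2 - 12*x + 1, x, y)
den = sp.Poly(x*y + 8*x - sp.Rational(1, 2)*y + sp.Rational(1, 2), x, y)

q, r = num.div(den)
# leading monomial of den (in lex order) is x*y; no monomial of num is
# divisible by x*y, so the quotient is zero and num is its own remainder
print(q, r)
```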
The computation of subresultants (as it's implemented in sympy) treats polynomials in x,y as single-variable polynomials in x whose coefficients happen to come from the ring of polynomials in y. It is certain to produce a sequence of subresultants whose degree with respect to x keeps decreasing until it reaches 0: the last polynomial of the sequence will not have x in it. The degree with respect to y may (and does) go up, since the algorithm has no problem multiplying the terms by any polynomials in y in order to get x to drop out.
The upshot is that both computations work correctly, they just do different things.

plotting matrix equation in R

I'm new to R and I need to plot the quadratic matrix equation:
x^T A x + b^T x + c = 0
in R^2, with A a 2x2 matrix, b a 2x1 vector, and c a constant. The equation describes a boundary that separates classes of points. I need to plot that boundary for x0 = -6...6, x1 = -4...6. My first thought was to generate a bunch of points and see where the expression is zero, but that depends on the increment between the points (most likely none of them will be exactly zero).
Is there a better way than just generating a bunch of points and checking where the expression vanishes, or multiplying everything out? Any help would be much appreciated.
Thank you.
Assuming you have a symmetric matrix A, e.g.
# A = | a    b/2 |
#     | b/2  c   |
and your equation represents a conic section, you can use the conics package.
What you need is a vector of coefficients c(a, b, c, d, e, f) representing
a*x^2 + b*x*y + c*y^2 + d*x + e*y + f
In your case, say you have
A <- matrix(c(2, 1, 1, 2), nrow = 2)
B <- c(-20, -28)
C <- 10
# create the coefficient vector
v <- append(c(diag(A), B, C), A[lower.tri(A)] * 2, after = 1)
conicPlot(v)
You could easily wrap the expansion in a simple function:
# note: this does no checking for symmetry or validity of arguments
expand.conic <- function(A, B, C) {
  append(c(diag(A), B, C), A[lower.tri(A)] * 2, after = 1)
}
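If the conics package is not an option, the zero level set can also be traced numerically with a contour plot at level 0, which avoids guessing which grid points are exactly zero. A sketch in Python with NumPy/matplotlib, using the same A, B, C as above:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt

A = np.array([[2.0, 1.0], [1.0, 2.0]])
b = np.array([-20.0, -28.0])
c = 10.0

x0, x1 = np.meshgrid(np.linspace(-6, 6, 400), np.linspace(-4, 6, 400))
# x^T A x + b^T x + c evaluated on the grid
z = (A[0, 0]*x0**2 + (A[0, 1] + A[1, 0])*x0*x1 + A[1, 1]*x1**2
     + b[0]*x0 + b[1]*x1 + c)

plt.contour(x0, x1, z, levels=[0])  # the boundary is the zero level set
plt.savefig("conic.png")
```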

How to obtain the numerical solution of these differential equations with matlab

I have differential equations derived from epidemic spreading, and I want to obtain their numerical solutions. Here are the equations;
t is the independent variable and ranges over [0, 100].
The initial values are
y1 = 0.99; y2 = 0.01; y3 = 0;
At first I planned to handle these with the ode45 function in Matlab; however, I don't know how to express the series and the combinations. So I'm asking for help here.
The problem is how to express the right-hand side of the equations as the odefun argument to the ode45 function.
Matlab has functions to calculate binomial coefficients (numbers of combinations), and the finite series can be expressed as a matrix multiplication. I'll demonstrate how that works for the sum in the first equation. Note the use of the element-wise "dotted" forms of the arithmetic operators.
Calculate a row vector coefs of the constant coefficients in the sum:
octave-3.0.0:33> a = 0:20;
octave-3.0.0:34> coefs = log2(a * 0.05 + 1) .* bincoeff(20, a);
The variables get combined into another vector:
octave-3.0.0:35> y1 = 0.99;
octave-3.0.0:36> y2 = 0.01;
octave-3.0.0:37> z = (y2 .^ a) .* ((1 - y2) .^ a) .* (y1 .^ a);
And the sum is then just evaluated as the inner product:
octave-3.0.0:38> coefs * z'
The other sums are similar.
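The same inner-product trick, sketched in Python with NumPy; the z vector simply mirrors the Octave snippet above, so the actual powers in the real (unshown) equations may differ:

```python
import numpy as np
from math import comb, log2

y1, y2 = 0.99, 0.01
a = np.arange(21)

# constant coefficients: log2(0.05*a + 1) * C(20, a)
coefs = np.log2(a * 0.05 + 1) * np.array([comb(20, k) for k in a])

# variable part, mirroring the Octave expression
z = (y2 ** a) * ((1 - y2) ** a) * (y1 ** a)

s = coefs @ z  # the finite sum evaluated as an inner product
```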
function demo(a_in)
    X0 = [0; 0; 0];
    T = 0:.1:100;
    a = a_in; % captured by the nested function below
    [Tout, Xout] = ode45(@myFunc, T, X0);
    function dxdt = myFunc(t, x)
        % nested function accesses "a"
        dxdt = 0*x + a;
        % TODO: real value of dxdt
    end
end
What about this? You simply need to fill in dxdt from your math above. It remains to be seen whether numerical roundoff matters...
Edit: there may be an issue with the constraint 1 = y1 + y2 + y3. Is that even allowed, given an IVP with 3 initial values and 3 first-order ODEs? If the constraint is a natural consequence of the equations, it is not needed as an extra condition.
