Is there any way to get the integrated equation in R?

The integrate() function returns the integrated value, but what if the user wants to take the integrated equation for an interval?
For example, the normal case of integrate() is like below:
integrate(f = function(x){2 * x}, lower = 1, upper = 2)
# 3 with absolute error < 3.3e-14
But I want to write something like this:
integrate(f = function(x){2 * x}, lower = t, upper = t + 1)
to get
2 * t + 1
Thanks

The Ryacas package does symbolic computation:
install.packages("Ryacas")
library(Ryacas)
help(package = Ryacas)
yacas("Integrate(x, t, t+1) 2*x")
# expression((t + 1)^2 - t^2)
Simplify("%") # apply simplification to last result
# expression(2 * t + 1)
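As a quick sanity check (a small sketch), the simplified expression can be evaluated at a concrete t and compared against integrate():
F <- function(t) eval(expression(2 * t + 1)[[1]], list(t = t))
F(1)
# [1] 3
integrate(f = function(x){2 * x}, lower = 1, upper = 2)$value
# [1] 3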

Related

Multi-parameter optimization in R

I'm trying to estimate parameters that will maximize the likelihood of a certain event. My objective function looks like this:
event_prob = function(p1, p2) {
  x = ((1-p1-p2)^4)^67 *
    ((1-p1-p2)^3*p2)^5 *
    ((1-p1-p2)^3*p1)^2 *
    ((1-p1-p2)^2*p1*p2)^3 *
    ((1-p1-p2)^2*p1^2) *
    ((1-p1-p2)*p1^2*p2)^2 *
    (p1^3*p2) *
    (p1^4)
  return(x)
}
In this case, I'm looking for p1 and p2 in [0,1] that will maximize this function. I tried using optim() in the following manner:
aaa = optim(c(0,0),event_prob)
but I'm getting an error "Error in fn(par, ...) : argument "p2" is missing, with no default".
Am I using optim() wrong? Or is there a different function (package?) I should be using for multi-parameter optimization?
This problem can in fact be solved analytically.
The objective function simplifies to
F(p1,p2) = (1-p1-p2)^299 * p1^18 * p2^11
(collect the exponents of each factor across the eight terms), which is to be maximised over the region
C = { (p1,p2) : 0 <= p1, 0 <= p2, p1+p2 <= 1 }
Note that F is 0 if p1 = 0, p2 = 0, or p1+p2 = 1, and positive everywhere else in C, so the maximum of F occurs in the interior of C.
Taking the log,
f(p1,p2) = 299*log(1-p1-p2) + 18*log(p1) + 11*log(p2)
In fact it is just as easy to solve the more general problem: maximise f over C, where
f(p1, ..., pN) = b*log(1 - p1 - ... - pN) + Sum{ a[j]*log(p[j]) }
with b and each a[j] positive, and
C = { (p1, ..., pN) : 0 < p[j] for j = 1..N, and p1 + ... + pN < 1 }
The critical point occurs where all the partial derivatives of f are zero, that is where
-b/(1 - p1 - ... - pN) + a[j]/p[j] = 0,   j = 1..N
which can be rewritten as
b*p[j] + a[j]*(p1 + ... + pN) = a[j],   j = 1..N
or, in matrix form,
M*p = a
where M = b*I + a*Ones', and Ones is the vector with every component equal to 1.
By the Sherman-Morrison formula, the inverse of M is
inv(M) = (1/b)*(I - a*Ones'/(b + Ones'*a))
Thus the unique critical point is
p^ = inv(M)*a = a/(b + Sum{ a[i] })
Since there is a maximum, and only one critical point, the critical point must be the maximum.
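A quick numerical check of the closed form (a sketch; here b = 299 and a = c(18, 11) from the simplified objective):
b <- 299; a <- c(18, 11)
a / (b + sum(a))  # closed-form maximiser, approx. 0.0549 and 0.0335
# compare with a numerical maximiser of the (negated) log-objective
neg_log_f <- function(p) -(b * log(1 - sum(p)) + sum(a * log(p)))
optim(c(0.1, 0.1), neg_log_f)$par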
Based on Erwin Kalvelagen's comment: redefine event_prob to take a single vector argument, which is what optim() passes to the objective function (hence the "p2 is missing" error):
event_prob = function(p) {
  p1 = p[1]
  p2 = p[2]
  x = ((1-p1-p2)^4)^67 *
    ((1-p1-p2)^3*p2)^5 *
    ((1-p1-p2)^3*p1)^2 *
    ((1-p1-p2)^2*p1*p2)^3 *
    ((1-p1-p2)^2*p1^2) *
    ((1-p1-p2)*p1^2*p2)^2 *
    (p1^3*p2) *
    (p1^4)
  return(x)
}
You may want to set box constraints to keep p1 and p2 within [0,1]. Note that optim() minimises by default, so maximisation needs control = list(fnscale = -1); also, at the start c(0.5,0.5) the factor 1-p1-p2 is exactly zero, so an interior start such as c(0.1,0.1) is safer:
optim(c(0.1,0.1), event_prob, method="L-BFGS-B", lower=0, upper=1, control=list(fnscale=-1))

How to solve set of equations for two unknowns using R?

I have two equations. They are as follows:
( 1 - 0.25 ^ {1/alpha} ) * lambda = 85
( 1 - 0.75 ^ {1/alpha} ) * lambda = 11
I would like to compute the values of alpha and lambda by solving the above two equations. How do I do this using R?
One approach is to translate it into an optimization problem by introducing a loss function:
loss <- function(X) {
  L = X[1]
  a = X[2]
  return(sum(c(
    (1 - 0.25^(1/a))*L - 85,
    (1 - 0.75^(1/a))*L - 11
  )^2))
}
nlm(loss, c(-1,-1))
If nlm() reports a minimum near zero, then its estimate component will be a vector containing lambda and alpha. When I tried this, I got an answer that passed the sniff test:
> a = -1.28799
> L = -43.95321
> (1 - 0.25^(1/a))*L
[1] 84.99999
> (1 - 0.75^(1/a))*L
[1] 11.00005
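For completeness, the same numbers can be pulled straight from the fitted object:
fit <- nlm(loss, c(-1, -1))
fit$minimum   # should be near zero
L <- fit$estimate[1]
a <- fit$estimate[2]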
@olooney's answer is best.
Another way to solve these equations is the uniroot() function. Dividing one equation by the other cancels lambda, so uniroot() can first find alpha; substituting back then gives lambda.
f <- function(x) {
  (11/85) - (1 - 0.75^(1/x)) / (1 - 0.25^(1/x))
}
f_alpha <- uniroot(f, lower = -10, upper = -1, extendInt = "yes")
f_lambda <- function(x) {
  11 - (1 - 0.75^(1/f_alpha$root)) * x
}
lambda <- uniroot(f_lambda, lower = -10, upper = -2, extendInt = "yes")$root
sprintf("Alpha equals %f", f_alpha$root)
sprintf("Lambda equals %f", lambda)
results in
[1] "Alpha equals -1.287978"
[1] "Lambda equals -43.952544"

How do you simplify the difference between functions of x in R with respect to a Calculus context?

First of all, this looks like a fair amount of Calculus, so I predict that someone who finds it TL;DR would forward it to Cross Validated. But I think this is a programming question, so hear me out.
Imagine that I have the following functions in terms of x: f(x), g(x), h(x) ...
f(x) = 2x^2 + 4x - 30
g(x) = x^2 - x + 12
h(x) = f(x) - g(x) = (2x^2 + 4x - 30) - (x^2 - x + 12) = x^2 + 5*x - 42
Note: If I were to compute g(x) - f(x) here I would get a different polynomial, but with the same roots, so it doesn't really matter: feeding the coefficients of g(x) - f(x) to polyroot() would return the same intersection points as f(x) = g(x).
I am able to write h(x) = (2x^2 + 4x - 30) - (x^2 - x + 12), but I can't reduce it to x^2 + 5*x - 42, which is just a simplified version of the same h(x). I need it in that form to get the coefficients of the difference function, which I need to compute the intersections of the functions. I would then use the points of intersection to integrate the greater function minus the smaller function between them; that difference integral is simply the area between the functions.
So my goal is to compute the area between two intersecting functions.
My problem is that I want to automate the whole process, and I want to simplify the h(x) difference function to 1*x^2 + 5*x - 42, whose coefficients in increasing order are -42, 5, 1.
So let's just write the code:
fx <- function(x){2*x^2 + 4*x - 30}
gx <- function(x){1*x^2 - 1*x + 12}
hx <- function(x){fx - gx} # doesn't work: fx and gx are functions, so fx - gx isn't numeric and curve(hx) fails
hx <- function(x){(2*x^2 + 4*x - 30) - (1*x^2 - 1*x + 12)} # works
but it is not in the form that I want.
> hx
function(x){(2*x^2 + 4*x - 30) - (1*x^2 - 1*x + 12)}
<bytecode: 0x000000001c0bfc10>
Errors:
> curve(hx)
Error in expression(fx) - expression(gx) :
non-numeric argument to binary operator
See this is why I need the coefficients.
> z <- polyroot(c(-42, 5, 1)) # polyroot functions give you the x-intercepts of a polynomial function.
> z
[1] 4.446222-0i -9.446222+0i
Of course I could just compute "x^2 + 5*x - 42" with pen and paper, but they say that programmers always want to find the most efficient algorithmic process with the least amount of work.
Now I need to see which function is greater than the other over the given range. Two ways: visually, or by evaluating both on a grid. (This is for the Calculus II part.)
x = seq(from = -9.4, to = 4.4, by = 0.2)
fx_range = 2*x^2 + 4*x - 30
gx_range = 1*x^2 - 1*x + 12
> table(fx_range >= gx_range)
FALSE
70
> table(gx_range >= fx_range)
TRUE
70
It looks like g(x) is greater than or equal to f(x) over the range between the intersection points, so according to calculus I should evaluate the integral of g(x) - f(x). (I was only using f(x) - g(x) earlier for polyroot().)
Area between curves = ∫ from -9.446222 to 4.446222 of [g(x) - f(x)] dx
= ∫ from -9.446222 to 4.446222 of [(x^2 - x + 12) - (2*x^2 + 4*x - 30)] dx
gx_minus_fx = function(x){(x^2 - x + 12) - (2*x^2 + 4*x - 30)}
Area = integrate(gx_minus_fx, lower = -9.446222, upper = 4.446222)
Area
446.8736 with absolute error < 5e-12 # This is exactly what I wanted to compute!
Now let's graphically check if I was supposed to subtract g(x) - f(x):
> curve(fx, main = "Functions with their Intersection Points", xlab = "x", ylab = "Functions of x", from = -9.446222, to = 4.446222)
> curve(gx, col = "red", add = TRUE)
> legend("topright", c("f(x) = 2x^2 + 4x - 30", "g(x) = x^2 - x + 12"), fill = c("black", "red"))
Yeah, I did it right!
So again, what I would like help with is figuring out how to simplify h(x) = f(x) - g(x) to x^2 + 5*x - 42.
This appears to be an algebra problem. I showed that I could do the high-level Calculus II work in R; I would just like to know if there is a way to automate this whole process for the h(x) function.
Thank you!!!
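One possible automation (a sketch, treating both curves as polynomials given by their coefficient vectors in increasing order): the coefficients of h(x) = f(x) - g(x) are just the difference of the two coefficient vectors, which feeds straight into polyroot() and integrate():
f_coef <- c(-30, 4, 2)     # 2*x^2 + 4*x - 30
g_coef <- c(12, -1, 1)     # 1*x^2 - 1*x + 12
h_coef <- f_coef - g_coef  # -42, 5, 1, i.e. x^2 + 5*x - 42
r <- sort(Re(polyroot(h_coef)))  # intersection points
hx <- function(x) sapply(x, function(xi) sum(h_coef * xi^(seq_along(h_coef) - 1)))
# integrating |h| gives the area regardless of which curve is on top
integrate(function(x) abs(hx(x)), lower = r[1], upper = r[2])
# ~446.87, matching the result above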

Get derivative in R

I'm trying to take the derivative of an expression:
x = read.csv("export.csv", header=F)$V1
f = expression(-7645/2* log(pi) - 1/2 * sum(log(w+a*x[1:7644]^2)) + (x[2:7645]^2/(w + a*x[1:7644]^2)),'a')
D(f,'a')
x is simply an integer vector; a and w are the variables I'm trying to estimate by taking the derivative. However, I get the error
"Function '[' is not in Table of Derivatives"
Since this is my first time using R I'm rather clueless what to do now. I'm assuming R has some problem with the sum() function inside my expression?
After following the advice I now did the following:
y <- x[1:7644]
z <- x[2:7645]
f = expression(-7645/2* log(pi) - 1/2 * sum(log(w+a*y^2)) + (z^2/(w + a*y^2)),'a')
Differentiating this gives me the error "sum is not in the table of derivatives". How can I make sure the expression considers each value of y and z?
Another Update:
y <- x[1:7644]
z <- x[2:7645]
f = expression(-7645/2* log(pi) - 1/2 * log(w+a*y^2) + (z^2/(w + a*y^2)))
d = D(f,'a')
uniroot(eval(d),c(0,1000))
I've eliminated the "sum" function and just entered y and z. Now, 2 questions:
a) How can I be sure that this is still the expected behaviour?
b) Uniroot doesn't seem to like "w" and "a" since they're just symbolic. How would I go about fixing this issue? The error I get is "object 'w' not found"
This should work:
Since you have two terms being added f+g, the derivative D(f+g) = D(f) + D(g), so let's separate both like this:
g = expression((z^2/(w + a*y^2)))
f = expression(- 1/2 * log(w+a*y^2))
Note that sum() was removed from expression f: the multiplying constant can be moved inside the sum, and D(sum()) = sum(D()), so you can differentiate the summand and apply sum() afterwards. The leading constant term was removed because its derivative is 0.
So:
D(sum(-7645/2*log(pi) - 1/2*log(w+a*y^2)) + z^2/(w+a*y^2)) = D(constant + sum(f) + g) = sum(D(f)) + D(g)
Which should give:
sum(-(1/2 * (y^2/(w + a * y^2)))) + -(z^2 * y^2/(w + a * y^2)^2)
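As for question (b) above: D() returns an unevaluated call, so before uniroot() can use it you have to turn it into a numeric function of a, supplying w and the data (a sketch, with w held fixed at an assumed value; score is just an illustrative name):
d <- D(expression(-1/2 * log(w + a*y^2) + z^2/(w + a*y^2)), 'a')
score <- function(a, w, y, z) sum(eval(d, list(a = a, w = w, y = y, z = z)))
# for a fixed w (say w = 1) and the data vectors y, z from the question:
# uniroot(function(a) score(a, w = 1, y = y, z = z), c(0.001, 1000))
Solving for w and a jointly needs both partial derivatives set to zero, i.e. a two-equation system rather than uniroot().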
A side note: expression() takes a single expr input, not a character vector, and R will not vectorize that for you. If you have expressions stored as strings, parse() them and evaluate each element in a loop:
foo <- c("1+2", "3+4", "5*6", "7/8")
result <- numeric(length(foo))
foo <- parse(text = foo)
for (i in seq_along(foo))
  result[i] <- eval(foo[[i]])
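The same thing in one line, for what it's worth:
sapply(c("1+2", "3+4", "5*6", "7/8"), function(s) eval(parse(text = s)))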

Solving a system of differential equations in R

I have a simple flux model in R. It boils down to two differential equations that model two state variables within the model; we'll call them A and B. They are built from four component fluxes (flux1-flux4), five parameters (p1-p5), and a sixth parameter, of_interest, that can take values between 0 and 1.
parameters<- c(p1=0.028, p2=0.3, p3=0.5, p4=0.0002, p5=0.001, of_interest=0.1)
state <- c(A=28, B=1.4)
model <- function(t, state, parameters) {
  with(as.list(c(state, parameters)), {
    # fluxes
    flux1 = (1 - of_interest) * p1 * (B/(p2 + B)) * p3
    flux2 = p4 * A  # microbial death
    flux3 = of_interest * p1 * (B/(p2 + B)) * p3
    flux4 = p5 * B
    # differential equations of component fluxes
    dAdt <- flux1 - flux2
    dBdt <- flux3 - flux4
    list(c(dAdt, dBdt))
  })
}
I would like to write a function that takes the derivative of dAdt with respect to of_interest, sets the derivative to 0, then rearranges and solves for of_interest. This will be the value of the parameter of_interest that maximizes dAdt.
So far I have been able to solve the model at steady state, across the possible values of of_interest to demonstrate there should be a maximum.
require(rootSolve)
range <- seq(0, 1, by = 0.01)
out <- numeric(0)
for (i in range) {
  of_interest = i
  parameters <- c(p1=0.028, p2=0.3, p3=0.5, p4=0.0002, p5=0.001, of_interest=of_interest)
  state <- c(A=28, B=1.4)
  ST <- stode(y = state, func = model, parms = parameters, pos = TRUE)
  out <- c(out, ST$y[1])
}
Then plotting:
plot(out~range, pch=16,col='purple')
lines(smooth.spline(out~range,spar=0.35), lwd=3,lty=1)
How can I analytically solve for the value of of_interest that maximizes dAdt in R? If an analytical solution is not possible, how can I know, and how can I go about solving this numerically?
Update: I think this problem can be solved with the deSolve package in R (linked here); however, I am having trouble implementing it with my particular example.
Your equation in B(t) is just-about separable since you can divide out B(t), from which you can get that
B(t) = C * exp(-p5 * t) * (p2 + B(t))^(of_interest * p1 * p3)
This is an implicit solution for B(t) which we'll solve point-wise.
You can solve for C given your initial value of B. I suppose t = 0 initially? In which case
C = B_0 / (p2 + B_0)^(of_interest * p1 * p3)
This also gives a somewhat nicer-looking expression for dA(t)/dt:
dA(t)/dt = B_0/(p2 + B_0) * p1 * p3 * (1 - of_interest) * exp(-p5 * t) *
  ((p2 + B(t))/(p2 + B_0))^(of_interest * p1 * p3 - 1) - p4 * A(t)
This can be solved with the integrating factor exp(p4 * t), via numerical integration of the term involving B(t). We specify the lower limit of the integral as 0 so that we never have to evaluate B outside the range [0, t], which means the integration constant is simply A_0, and thus:
A(t) = (A_0 + integral_0^t { f(tau; parameters) d tau }) * exp(-p4 * t)
The basic gist is B(t) is driving everything in this system -- the approach will be: solve for the behavior of B(t), then use this to figure out what's going on with A(t), then maximize.
First, the "outer" parameters; we also need nleqslv to get B:
library(nleqslv)
t_min <- 0
t_max <- 10000
t_N <- 10
#we'll only solve the behavior of A & B over t_rng
t_rng <- seq(t_min, t_max, length.out = t_N)
#I'm calling of_interest ttheta
ttheta_min <- 0
ttheta_max <- 1
ttheta_N <- 5
tthetas <- seq(ttheta_min, ttheta_max, length.out = ttheta_N)
B_0 <- 1.4
A_0 <- 28
#No sense storing this as a vector when we'll only ever use it as a list
parameters <- list(p1 = 0.028, p2 = 0.3, p3 = 0.5,
                   p4 = 0.0002, p5 = 0.001)
From here, the basic outline is:
Given the parameter values (in particular ttheta), solve for BB over t_rng via non-linear equation solving
Given BB and the parameter values, solve for AA over t_rng by numerical integration
Given AA and your expression for dAdt, plug & maximize.
derivs <-
  sapply(tthetas, function(th){
    #append current ttheta
    params <- c(parameters, ttheta = th)
    #declare a function we'll use to solve for B (see above)
    b_slv <- function(b, t)
      with(params, b - B_0 * ((p2 + b)/(p2 + B_0)) ^
                     (ttheta * p1 * p3) * exp(-p5 * t))
    #solving point-wise (this is pretty fast)
    # **See below for a note**
    BB <- sapply(t_rng, function(t) nleqslv(B_0, function(b) b_slv(b, t))$x)
    #this is f(tau; params) that I mentioned above;
    # we have to do linear interpolation since the
    # numerical integrator isn't constrained to the grid.
    # **See below for note**
    a_int <- function(t){
      #approximate t to the grid (t_rng)
      # (assumes B is monotonic, which seems to be true)
      # (also, if t ends up negative, just assign t_rng[1])
      t_n <- max(1L, which.max(t_rng - t >= 0) - 1L)
      idx <- t_n:(t_n + 1)
      ts <- t_rng[idx]
      #distance-weighted average of the local B values
      B_app <- sum((-1) ^ (0:1) * (t - ts) / diff(ts) * BB[idx])
      #finally, f(tau; params)
      with(params, (1 - ttheta) * p1 * p3 * B_0 / (p2 + B_0) *
                     ((p2 + B_app)/(p2 + B_0)) ^ (ttheta * p1 * p3 - 1) *
                     exp((p4 - p5) * t))
    }
    #a_int only works on scalars; the numeric integrator
    # requires a version that works on vectors
    a_int_v <- function(t) sapply(t, a_int)
    AA <- exp(-params$p4 * t_rng) *
      sapply(t_rng, function(tt)
        #I found the subdivisions constraint binding in some cases
        # at the default value; no trouble at 1000.
        A_0 + integrate(a_int_v, 0, tt, subdivisions = 1000L)$value)
    #using the explicit version of dAdt given as flux1 - flux2
    max(with(params, (1 - ttheta) * p1 * p3 * BB / (p2 + BB) - p4 * AA))
  })
Finally, simply run `tthetas[which.max(derivs)]` to get the maximizer.
Note:
This code is not optimized for efficiency. There are a few places with potential speed-ups:
It is probably faster to run the equation solver recursively: it converges faster with better initial guesses, so using the previous solution as the starting value is surely better than always starting from B_0.
It would be faster to simply use Riemann sums to integrate; the tradeoff is accuracy, but that should be fine with a dense enough grid. One beauty of Riemann sums is that you don't have to interpolate at all, and numerically they are simple linear algebra. I ran this with t_N == ttheta_N == 1000L and it finished within a few minutes.
It is probably possible to vectorize a_int directly instead of sapplying over it, with a concomitant speed-up from a more direct appeal to BLAS.
Loads of other small stuff, e.g. pre-computing ttheta * p1 * p3, since it's re-used so much.
I didn't bother including any of that stuff, though, because you're honestly probably better off porting this to a faster language -- Julia is my own pet favorite, but of course R speaks well with C++, C, Fortran, etc.
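For a purely numerical cross-check (a sketch, assuming the deSolve package and the corrected model() function from the question; thetas and max_dAdt are illustrative names): solve the ODE system on a time grid for each candidate of_interest, evaluate dAdt = flux1 - flux2 along the trajectory, and take the maximum.
library(deSolve)
times <- seq(0, 10000, length.out = 200)
thetas <- seq(0, 1, by = 0.05)
max_dAdt <- sapply(thetas, function(th) {
  parms <- c(p1 = 0.028, p2 = 0.3, p3 = 0.5, p4 = 0.0002,
             p5 = 0.001, of_interest = th)
  out <- ode(y = c(A = 28, B = 1.4), times = times, func = model, parms = parms)
  # dAdt = flux1 - flux2 along the numerical trajectory
  with(as.list(parms),
       max((1 - of_interest) * p1 * (out[, "B"]/(p2 + out[, "B"])) * p3 -
             p4 * out[, "A"]))
})
thetas[which.max(max_dAdt)]  # numerical maximiser of dAdt over the grid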
