Summary of problem
My objective is to create a function called newton.raphson to implement the Newton-Raphson root-finding algorithm.
Root-finding iteration: x1 = x0 - f(x0)/f'(x0)
I have two additional arguments with default values:
iter = the number of iterations (default 1e5)
epsilon = the tolerance (default 1e-10)
The function cannot depend on variables outside of it.
newton.raphson <- function(f, x0, iter=1e5, epsilon=1e-10) {
  x <- x0
  h <- 1e-5
  for (t in 1:iter) {
    drvt <- f((x+h)) - f((x-h)) / (2 * h)
    update <- x - f(x)/ drvt
    if (abs(update) < epsilon) {
      break
    }
    x <- update
  }
  root <- x
  return(root)
}
# Define some function to test
f <- function(x) {
  x^2 - 4 * x - 7
}
I get the following results:
> newton.raphson(f, 0)
[1] 2.000045
> newton.raphson(f, 3)
[1] 5.000024
But the results should be the true roots 2 ± sqrt(11), i.e.:
-1.316625
5.316625
Your derivative calculation is a little bit broken - you forgot the parentheses around the difference between f(x+h) and f(x-h):
drvt <- ( f(x+h) - f(x-h) ) / (2 * h)
Also, you should compare the difference between the old and new root approximations to the tolerance. To make things clearer, rename your misleading update variable to something like new.x. Then you should check if (abs(new.x - x) < epsilon).
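Putting both fixes together, a corrected version of your function could look like this (same structure as yours, with update renamed to new.x):
newton.raphson <- function(f, x0, iter = 1e5, epsilon = 1e-10) {
  x <- x0
  h <- 1e-5
  for (t in 1:iter) {
    drvt <- (f(x + h) - f(x - h)) / (2 * h)  # central-difference approximation of f'(x)
    new.x <- x - f(x) / drvt
    converged <- abs(new.x - x) < epsilon    # compare old and new approximations
    x <- new.x
    if (converged) break
  }
  return(x)
}
newton.raphson(f, 0)   # about -1.316625
newton.raphson(f, 3)   # about  5.316625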
I have the following for-loop which does exactly what I want:
for (t in 3:20){
  XX <- c(rep(0,22))
  for (k in (1:(t-2))){
    XX[k] <- (theta^(k-1) * (P[t-k] - P[t-k-1]))
  }
  X[t] = (1-theta) * sum(XX) + theta^(t-1)
  P[t] <- D[t] + (0.7/0.3) * X[t] - 0.1*3^2*1*(20-t-1 + (1/0.3))
}
However, once I make a function out of it and use the function, the results are suddenly wrong:
BGJS <- function(theta){
  for (t in 3:20){
    XX <- c(rep(0,22))
    for (k in (1:(t-2))){
      XX[k] <- (theta^(k-1) * (P[t-k] - P[t-k-1]))
    }
    X[t] = (1-theta) * sum(XX) + theta^(t-1)
    P[t] <- D[t] + (0.7/0.3) * X[t] - 0.1*3^2*1*(20-t-1 + (1/0.3))
  }
}
Can someone find the mistake?
Change <- and = to <<-, since the scope inside a function is different.
?"<<-"
The operators <- and = assign into the environment in which they are evaluated. ...
The operators <<- and ->> are normally only used in functions, and cause a search to be made through parent environments for an existing definition of the variable being assigned. ...
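For example, here is a minimal sketch of your function using <<- (the P, D, and X values below are made-up placeholders just so the snippet runs on its own; replace them with your real data):
P <- c(10, 11, rep(0, 18))   # placeholder data
D <- rep(5, 20)              # placeholder data
X <- rep(0, 20)
BGJS <- function(theta){
  for (t in 3:20){
    XX <- rep(0, 22)
    for (k in 1:(t-2)){
      XX[k] <- theta^(k-1) * (P[t-k] - P[t-k-1])
    }
    X[t] <<- (1-theta) * sum(XX) + theta^(t-1)                       # <<- assigns to the X outside the function
    P[t] <<- D[t] + (0.7/0.3) * X[t] - 0.1*3^2*1*(20-t-1 + (1/0.3))  # same for P
  }
}
BGJS(0.4)
P   # now updated in the global environment, as with the stand-alone loop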
Here is my function that does a loop:
answer = function(a,n) {
  for (k in 0:n) {
    x =+ (a^k)/factorial(k)
  }
  return(x)
}
answer(1,2) should return 2.5 as it is the calculated value of
1^0 / 0! + 1^1 / 1! + 1^2 / 2! = 1 + 1 + 0.5 = 2.5
But I get
answer(1,2)
#[1] 0.5
Looks like it fails to accumulate all three terms and just stores the newest value every time. += does not work in R, so I used =+, but it is still not right. Thanks.
answer = function(a,n) {
  x <- 0  ## initialize the accumulator
  for (k in 0:n) {
    x <- x + (a^k)/factorial(k)  ## note how to accumulate a value in R
  }
  return(x)
}
answer(1, 2)
#[1] 2.5
There is "vectorized" solution:
answer = function(a,n) {
  x <- a ^ (0:n) / factorial(0:n)
  return(sum(x))
}
In this case you don't need to initialize anything; R allocates the memory behind that <- and sum for you.
You are using the Taylor expansion to approximate exp(a). See this Q & A on the theme. You may want to pay special attention to the "numerical convergence" issue mentioned in my answer.
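Since the series is the Taylor expansion of exp(a), a quick check with a few more terms:
answer(1, 10)
#[1] 2.718282
exp(1)
#[1] 2.718282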
I'm trying to model a prey-prey-predator system using differential equations based on the Lotka-Volterra (LV) model. For the sake of precision, I need to use the Runge-Kutta 4 (rk4) method.
But given the equations, some of the populations quickly become negative.
So I tried to use the events/root system of deSolve's ode(), but it seems that rk4 and root functions are not compatible...
eventFunc <- function(t, y, p){
  if (y["N1"] < 0) { y["N1"] = 0 }
  if (y["N2"] < 0) { y["N2"] = 0 }
  if (y["P"] < 0) { y["P"] = 0 }
  return(y)
}
rootFunction <- function(t, y, p){
  if (y["P"] < 0) { y["P"] = 0 }
  if (y["N1"] < 0) { y["N1"] = 0 }
  if (y["N2"] < 0) { y["N2"] = 0 }
  return(y)
}
out <- ode(func=Model_T2.2,
           method="rk4",
           y=state,
           parms=parameters,
           times=times,
           events = list(func = eventFunc,
                         root = TRUE),
           rootfun = rootFunction
           )
This code gives me the following error:
Error in checkevents(events, times, Ynames, dllname) :
either 'events$time' should be given and contain the times of the events, if 'events$func' is specified and no root function or your solver does not support root functions
Is there any way to use rk4 and keep the populations from going below 0?
Thanks in advance.
For those who might ask, here is the full code:
if(!require(ggplot2)) {
  install.packages("ggplot2"); require(ggplot2)}
if(!require(deSolve)) {
  install.packages("deSolve"); require(deSolve)}
if(!require(reshape2)) {
  install.packages("reshape2"); require(reshape2)}  # needed for melt() below
Model_T2.2 <- function(t, state, par){
  with(as.list(c(state, par)), {
    response1 <- (a1 * N1)/(1+(a1*h1*N1)+(a2*h2*N2))
    response2 <- (a2 * N2)/(1+(a1*h1*N1)+(a2*h2*N2))
    dN1 = r1*N1 * (1 - ((N1 + A12 * N2)/K1)) - response1 * P
    dN2 = r2*N2 * (1 - ((N1 + A21 * N2)/K2)) - response2 * P
    dP = ((E1 * response1) + (E2 * response2)) * P - Mp
    return(list(c(dN1, dN2, dP)))
  })
}
parameters <- c(
  r1=1.42, r2=0.9,
  A12=0.6, A21=0.5,
  K1=50, K2=50,
  a1=0.77, a2=0.77,
  b1=1, b2=1,
  h1=1.04, h2=1.04,
  o1=0, o2=0,
  Mp=0.22,
  E1=0.36, E2=0.36
)
## initial states (order must match the derivatives returned by Model_T2.2)
state <- c(
  N1=30,
  N2=30,
  P=10
)
times <- seq(0, 30, by=0.5)
out <- ode(func=Model_T2.2,
           method="rk4",
           y=state,
           parms=parameters,
           times=times,
           events = list(func = eventFunc,
                         root = TRUE),
           rootfun = rootFunction
           )
md <- melt(as.data.frame(out), id.vars=1, measure.vars = c("N1", "N2", "P"))
pl <- ggplot(md, aes(x=time, y=value, colour=variable))
pl <- pl + geom_line() + geom_point() + scale_color_discrete(name="Population")
pl
And the result in a graph:
[Figure: evolution of the prey 1, prey 2, and predator populations]
As you can see, the population of predators becomes negative, which is clearly impossible in the real world.
Edit: added the missing variables, sorry about that.
This is a problem you will have with all explicit solvers like rk4. Reducing the time step will help, up to a point. Better to use a solver with an implicit method; lsoda seems universally available in one form or another.
Another way to explicitly force positive values is to parametrize them as exponentials. Set N1 = exp(U1), N2 = exp(U2); then the ODE function code translates to the following (since dN = exp(U)*dU = N*dU):
N1 <- exp(U1)
N2 <- exp(U2)
response1 <- (a1)/(1+(a1*h1*N1)+(a2*h2*N2))
response2 <- (a2)/(1+(a1*h1*N1)+(a2*h2*N2))
dU1 = r1 * (1 - ((N1 + A12 * N2)/K1)) - response1 * P
dU2 = r2 * (1 - ((N1 + A21 * N2)/K2)) - response2 * P
dP = ((E1 * response1*N1) + (E2 * response2*N2)) * P - Mp
For the output you have then of course to reconstruct N1, N2 from the solutions U1, U2.
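Put into a full deSolve-style function, that sketch might look like this (the same model and parameters as Model_T2.2 above, just with U1 = log(N1) and U2 = log(N2) as state variables; a sketch for illustration, not tested against the original results):
Model_T2.2_exp <- function(t, state, par){
  with(as.list(c(state, par)), {
    N1 <- exp(U1)   # recover the populations from the log scale
    N2 <- exp(U2)
    response1 <- a1/(1+(a1*h1*N1)+(a2*h2*N2))
    response2 <- a2/(1+(a1*h1*N1)+(a2*h2*N2))
    dU1 <- r1 * (1 - ((N1 + A12 * N2)/K1)) - response1 * P
    dU2 <- r2 * (1 - ((N1 + A21 * N2)/K2)) - response2 * P
    dP  <- ((E1 * response1 * N1) + (E2 * response2 * N2)) * P - Mp
    list(c(dU1, dU2, dP))   # order must match the state vector below
  })
}
## initial states on the log scale
state_exp <- c(U1 = log(30), U2 = log(30), P = 10)
out <- ode(func = Model_T2.2_exp, y = state_exp, parms = parameters, times = times)
## reconstruct the populations afterwards: N1 <- exp(out[, "U1"]), N2 <- exp(out[, "U2"])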
Thanks to J_F, I am now able to run my L-V model.
The radau (not randau as you mentioned) function indeed accepts root functions and events, and it implements an implicit Runge-Kutta method.
Thanks again, hope this will help someone in the future.
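For future readers, a call along these lines is roughly the pattern (a sketch only, not the exact code from this thread: posFun and clampFun are illustrative names, and the rootfunc/events argument names follow ?radau in deSolve):
posFun <- function(t, y, p) y                        # a root is found when any state crosses 0
clampFun <- function(t, y, p) { y[y < 0] <- 0; y }   # event: reset negative states to 0
out <- radau(func = Model_T2.2,
             y = state,
             parms = parameters,
             times = times,
             rootfunc = posFun,
             events = list(func = clampFun, root = TRUE))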
I'm writing a function for Gaussian mixture models with spherical covariance structures, i.e. $\Sigma_k = \sigma_k^2 I$. This particular function is similar to the mclust package model with identifier "VII".
http://en.wikipedia.org/wiki/Mixture_model
Anyway, the problem I'm having is running into infinite values for the weight matrix. Definition: let W be an n x m matrix with one row per observation (n observations) and one column per mixture component (m mixtures). Each element of W (i.e. w_im) can essentially be written in the form
$w_{im} = \frac{(a/b)\exp(c)}{\sum_{i=1}^{m} (a_i/b_i)\exp(c_i)}$
Computing this numerically is giving me infinite values. So I'm trying to use the log identity $\log(x+y) = \log(x) + \log(1 + y/x)$. But the issue is that it's not as simple as $\log(x+y)$ but rather $\log\left(\sum_{i=1}^m (a_i/b_i)\exp(c_i)\right)$.
Here's what the code below defines:
n_im = (a/b) * exp(c), the numerator of w_im;
d_m = \sum_{i=1}^m (a_i/b_i) exp(c_i), the denominator; and
c_mat[i,j] as the value of the exponent for the [i,j]th term.
n_mat[, i] <- log(a[i]) - log(b[i]) - c[,i] # numerator of w_im
internal_vec1[i] <- (a[i] * b[1])/ (a[1] * b[i]) # an internal for the step below
c_mat2 <- cbind(rep(1, n), c_mat[,1] - c_mat[,-1]) # since e^a / e^b = e^(a-b)
for (i in 1:n) {
  d_vec[i] <- n_mat[i,1] + log(sum(internal_vec1 * exp(c_mat2[i, ])))
}  ## still getting infinite values
I'm trying to define the problem as briefly as possible; the entire function is obviously much larger than this. But since the problem I'm running into deals specifically with infinite (and 1/infinity) values, I'm hoping this snippet is sufficient. Anyone with a coding trick here?
Here is the solution!! (I've spent way too damn long on this)
The first function, log_plus(), solves the simple problem where you want $\log(\sum_{i=1}^n x_i)$.
The second function, log_plus2(), solves the more complicated problem described above, where you want $\log(\sum_{i=1}^n (a_i/b_i)\exp(c_i))$.
log_plus <- function(xvec) {
  m <- length(xvec)
  x <- log(xvec[1])
  for (j in 2:m) {
    sum_j <- sum(xvec[1:(j-1)])        # running sum of the first j-1 terms
    x <- x + log(1 + xvec[j]/sum_j)    # log(S + x_j) = log(S) + log(1 + x_j/S)
  }
  return(x)
}
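A quick sanity check of log_plus() on small values:
log_plus(c(1, 2, 3))
#[1] 1.791759
log(sum(c(1, 2, 3)))
#[1] 1.791759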
log_plus2 <- function(a, b, c) {
  # assumes intended input of the form sum(a/b * e^c)
  if ((length(a) != length(b)) || (length(a) != length(c))) {
    stop("Input equal length vectors")
  }
  if (!(all(c > 0) || all(c < 0))) {
    stop("All values of c must be either > 0 or < 0.")
  }
  m <- length(a)
  # initialize the log sum with the first term
  x <- log(a[1]) - log(b[1]) + c[1]
  # accumulate the log sum term by term
  for (j in 2:m) {
    # build the denominator sum_{i=1}^{j-1} a_i/b_i * e^{c_i},
    # written over the common denominator prod(b[1:(j-1)])
    b2 <- b[1:(j-1)]
    c2 <- c[1:(j-1)]
    # shift all exponents by the same constant to avoid overflow/underflow
    c_min <- if (all(c2 > 0)) min(c2) else max(c2)
    d1 <- 0
    for (i in 1:(j-1)) {
      d1 <- d1 + a[i] * prod(b2[-i]) * exp(c2[i] - c_min)
    }
    den <- b[j] * d1
    num <- a[j] * prod(b[1:(j-1)]) * exp(c[j] - c_min)
    x <- x + log(1 + num / den)
  }
  return(x)
}
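For what it's worth, the same quantity can also be computed with the textbook log-sum-exp shift in one vectorized step. This is not the incremental approach above, just the standard trick; it assumes a and b are positive, and log_plus2_lse is an illustrative name:
log_plus2_lse <- function(a, b, c) {
  v <- log(a) - log(b) + c    # per-term logs: log(a_i/b_i * e^{c_i})
  vmax <- max(v)              # subtract the maximum before exponentiating
  vmax + log(sum(exp(v - vmax)))
}
## e.g. log_plus2_lse(a = c(1, 2), b = c(3, 4), c = c(1000, 1001)) stays finite
## even though exp(1000) overflows on its own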