How do I implement a numerically stable weighted logaddexp?

What is the most numerically stable way of calculating:
log[(wx * exp(x) + wy * exp(y)) / (wx + wy)]
where the weights wx, wy > 0?
Without the weights, this function is logaddexp and could be implemented in Python with NumPy as:
tmp = x - y
return np.where(tmp > 0,
                x + np.log1p(np.exp(-tmp)),
                y + np.log1p(np.exp(tmp)))
How should I generalize this to the weighted version?

You could use the original logaddexp function for this purpose, if you rewrite the weighted expression as
logaddexp(x + log(wx), y + log(wy)) - log(wx + wy)
which should be as numerically stable as the original logaddexp implementation.
Note: I'm referring to the numpy.logaddexp function that takes x and y, not x and exp(y) as written in the question.
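As a minimal sketch of that identity with NumPy (the function name is mine):
import numpy as np

def weighted_logaddexp_rewrite(x, wx, y, wy):
    # log[(wx * exp(x) + wy * exp(y)) / (wx + wy)], assuming wx, wy > 0:
    # fold the weights into the exponents and reuse the stable logaddexp.
    return np.logaddexp(x + np.log(wx), y + np.log(wy)) - np.log(wx + wy)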

def weighted_logaddexp(x, wx, y, wy):
    # Returns:
    #   log[(wx * exp(x) + wy * exp(y)) / (wx + wy)]
    #   = log(wx / (wx + wy)) + x + log(1 + exp(y - x + log(wy) - log(wx)))
    #   = log1p(-wy / (wx + wy)) + x + log1p((wy * exp(y)) / (wx * exp(x)))
    if wx == 0.0:
        return y
    if wy == 0.0:
        return x
    total_w = wx + wy
    exp_x = np.exp(x)
    exp_y = np.exp(y)
    wx_exp_x = wx * exp_x
    wy_exp_y = wy * exp_y
    # The leading log-weight term must correspond to the branch selected
    # below, so both np.where calls use the same condition.
    first_term = np.where(wy_exp_y < wx_exp_x,
                          np.log1p(-wy / total_w),
                          np.log1p(-wx / total_w))
    return np.where(wy_exp_y < wx_exp_x,
                    x + np.log1p(wy_exp_y / wx_exp_x),
                    y + np.log1p(wx_exp_x / wy_exp_y)) + first_term
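A quick hypothetical spot check (the values are mine, not from the post); the two approaches should agree to within floating-point rounding:
print(weighted_logaddexp(2.0, 0.25, -3.0, 0.75))
print(np.logaddexp(2.0 + np.log(0.25), -3.0 + np.log(0.75)) - np.log(0.25 + 0.75))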
Here's how I compared the two solutions:
import math
import numpy as np
import mpmath as mp
from tools.numpy import weighted_logaddexp
def average_error(ideal_function, test_function, n_args):
    x_y = [np.linspace(0.1, 3, 20) for _ in range(n_args)]
    xs_ys = np.meshgrid(*x_y)

    def e(*args):
        return ideal_function(*args) - test_function(*args)
    e = np.frompyfunc(e, n_args, 1)
    error = e(*xs_ys) ** 2
    return np.mean(error)

def ideal_function(x, wx, y, wy):
    return mp.log((mp.exp(x) * wx + mp.exp(y) * wy) / mp.fadd(wx, wy))

def test_function(x, wx, y, wy):
    return np.logaddexp(x + math.log(wx), y + math.log(wy)) - math.log(wx + wy)

mp.prec = 100
print(average_error(ideal_function, weighted_logaddexp, 4))
print(average_error(ideal_function, test_function, 4))

Related

geom_function with user-written function in ggplot

I will be highly grateful for your help. I am learning how to use geom_function in R. Following is my function:
x0 <- 0.5
x1 <- 1
x2 <- 2
x3 <- 3
x <- c(x0, x1, x2, x3)
myfn <- function(w, b, a, x){
  w^(1-b)/(1-b) - a*w - (w>100 & w<=200)*x[3]*(w-100) - (w>200)*x[3]*100 - x[4]*(w-200)
}
My objective is to plot the above function using geom_function, to see how it behaves for different values of the arguments a and b. Following is my code:
y = seq(0, 1000, 5)
ggplot() +
  xlim(c(0, 1000)) +
  geom_function(fun = myfn(w = y, b = -4, a = 0.5, x = x))
Problem: I feel my logic is correct, but when I execute the above code I get nothing. Any help or direction will be highly appreciated.
Your function myfn is a function of w, where a, b and x are parameters. To plot this function over the range c(0, 1000), pass it to the fun argument, pass the parameters as a list via the args argument, and set the range via xlim:
x0 <- 0.5
x1 <- 1
x2 <- 2
x3 <- 3
x <- c(x0, x1, x2, x3)
myfn <- function(w, b, a, x) {
  w^(1 - b) / (1 - b) - a * w - (w > 100 & w <= 200) * x[3] * (w - 100) - (w > 200) * x[3] * 100 - x[4] * (w - 200)
}
library(ggplot2)
ggplot() +
  xlim(c(0, 1000)) +
  geom_function(fun = myfn, args = list(b = -4, a = 0.5, x = x))
A second option would be to make use of a lambda function like so:
ggplot() +
  xlim(c(0, 1000)) +
  geom_function(fun = ~ myfn(.x, b = -4, a = 0.5, x = x))
If you do not want to pass the parameters through the args list, you can bake them into the call like this. Note I changed some of the variable names to make it clearer what the actual x and y are in the plot; also, x in the OP's code is just a vector of 4 constants, which I provide under the name c.
myfn <- function(x, a, b, c) {
  x^(1 - b) / (1 - b) - a * x - (x > 100 & x <= 200) * c[3] * (x - 100) - (x > 200) * c[3] * 100 - c[4] * (x - 200) # outcome is y
}

ggplot() +
  xlim(c(0, 1000)) +
  geom_function(fun = ~ myfn(x = .x, a = 0.5, b = -4, c = c(0.5, 1, 2, 3)))
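For readers coming from Python, the same pattern (fix the parameters first, then hand the plotting layer a function of one variable) can be sketched with matplotlib and functools.partial; this is my translation, not part of the original answers:
import numpy as np
import matplotlib.pyplot as plt
from functools import partial

def myfn(w, b, a, x):
    # Same piecewise expression as the R function above (x is 0-indexed here).
    return (w**(1 - b) / (1 - b) - a * w
            - ((w > 100) & (w <= 200)) * x[2] * (w - 100)
            - (w > 200) * x[2] * 100 - x[3] * (w - 200))

f = partial(myfn, b=-4, a=0.5, x=[0.5, 1, 2, 3])  # analogous to args = list(...)
w = np.linspace(0, 1000, 500)
plt.plot(w, f(w))
plt.show()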

Failing to optimise negative binomial model using optim

I am trying to manually optimise a negative binomial regression model using the optim function in R, predicting a count variable y from a matrix of factors X, with the following code:
# generating some fake data
library(MASS) # provides rnegbin(), used below
n <- 1000
X <- matrix(NA, ncol = 5, nrow = n)
X[,1] <- 1
X[,2] <- sample(size = n, x = c(0,1), replace = TRUE)
X[,3] <- sample(size = n, x = c(0,1), replace = TRUE)
X[,4] <- sample(size = n, x = c(0,1), replace = TRUE)
X[,5] <- sample(size = n, x = c(0,1), replace = TRUE)
beta0 <- 3
beta1 <- -2
beta2 <- -2
beta3 <- -4
beta4 <- -0.9
k <- 0.9
## draws from negative binomial distribution
mu <- exp(beta0 + beta1 * X[,2] + beta2 * X[,3] + beta3 * X[,4] + beta4 * X[,5])
theta <- mu + mu ^2 / k
# dependent variable
y <- rnegbin(n, mu = mu, theta = theta)
# function to be optimised
negbin_ll <- function(y, X, theta){
  beta <- theta[1:ncol(X)]
  alpha <- theta[ncol(X) + 1]
  logll <- y * log(alpha) + y * (beta %*% t(X)) - (y + (1 / alpha)) * log(1 + alpha * exp(beta %*% t(X))) + lgamma(y + (1 / alpha)) - lgamma(y + 1) - lgamma(1 / alpha)
  logll <- sum(logll)
  return(logll)
}
stval <- rep(0, ncol(X) + 1)
res <- optim(
  stval,
  negbin_ll,
  y = y,
  X = X,
  control = list(fnscale = -1),
  hessian = TRUE,
  method = "BFGS"
)
The code should produce point estimates from the optimisation process, but instead the optim call fails with the error:
Error in optim(stval, negbin_ll, y = y, X = X, control = list(fnscale = -1), ... : initial value in 'vmmin' is not finite
I already tried to change log(gamma(...)) to lgamma(...) in the likelihood function and tried many other ways, but I fail to get estimates.
Changing the start values of optim also does not help.
Do you have any idea whether there is some particularity of the likelihood function that leads to values being treated in an odd fashion?
Help would be much appreciated.
optim tries several points on its way to the optimum, and in your case it hits some non-positive values in the arguments of the logs. One way around this is to discard parameter values that produce a non-positive argument inside the problematic functions, by returning a large negative number (negative because of fnscale = -1), like -length(y) * 10^6. With the log-likelihood function remade like this, it works:
negbin_ll <- function(y, X, theta){
  beta <- theta[1:ncol(X)]
  alpha <- theta[ncol(X) + 1]
  # bail out when a parameter would make a log() argument non-positive
  if(any(alpha <= 0)) return(-length(y) * 10^6)
  if(any(1 + alpha * exp(beta %*% t(X)) <= 0)) return(-length(y) * 10^6)
  logll <- y * log(alpha) + y * (beta %*% t(X)) - (y + (1 / alpha)) * log(1 + alpha * exp(beta %*% t(X))) + lgamma(y + (1 / alpha)) - lgamma(y + 1) - lgamma(1 / alpha)
  logll <- sum(logll)
  return(logll)
}
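For comparison, here is a sketch of the same guard idea in Python with SciPy (my translation, assuming the same design matrix X and counts y; scipy's minimize minimizes, so the log-likelihood is negated instead of using fnscale = -1):
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def negbin_negll(theta, y, X):
    # Same guard trick as the R answer: return a huge value when a
    # parameter would make a log() argument non-positive, so the
    # optimizer backs away from that region.
    beta, alpha = theta[:-1], theta[-1]
    if alpha <= 0:
        return len(y) * 1e6
    eta = X @ beta
    ll = (y * np.log(alpha) + y * eta
          - (y + 1 / alpha) * np.log1p(alpha * np.exp(eta))
          + gammaln(y + 1 / alpha) - gammaln(y + 1) - gammaln(1 / alpha))
    return -ll.sum()

# res = minimize(negbin_negll, x0=np.full(X.shape[1] + 1, 0.1),
#                args=(y, X), method="BFGS")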

Plotting fitted values from regression

Hey, I have the following code in R:
S0 = 40
r = log(1 + 0.07)
sigma = 0.3
K = 45
n_steps_per_year = 4
dt = 1 / n_steps_per_year
T = 3
n_steps = n_steps_per_year * T
Q = 70
P = 72
n_paths = P * Q
R = n_paths
d = exp(-r * dt)
N = matrix(rnorm(n_paths * n_steps, mean = 0, sd = 1), n_paths, n_steps)
paths_S = matrix(nrow = n_paths, ncol = n_steps + 1, S0)
for(i in 1:n_paths){
  for(j in 1:n_steps){
    paths_S[i, j + 1] = paths_S[i, j] * exp((r - 0.5 * sigma ^ 2) * dt + sigma * sqrt(dt) * N[i, j])
  }
}
I = apply(K - paths_S, c(1,2), max, 0)
V = matrix(nrow = n_paths, ncol = n_steps + 1)
V[, n_steps + 1] = I[, n_steps + 1]
dV = d * V[, n_steps + 1]
model = lm(dV ~ poly(paths_S[, n_steps], 10))
pred = predict(model, data.frame(x = paths_S[, n_steps]))
plot(paths_S[, n_steps], d * V[, n_steps + 1])
lines(paths_S[, n_steps], pred)
but when I run the last two lines I get a very strange plot (multiple lines instead of one). What is going on?
You did not provide n_paths, so let's assume:
n_paths = 7
set.seed(111)
Then, after running your code and before plotting, you need to order your x values:
o = order(paths_S[,12])
plot(paths_S[o, n_steps], d * V[o, n_steps + 1], cex = 0.2, pch = 20)
lines(paths_S[o, n_steps], pred[o], col = "blue")
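The same pitfall can be sketched in Python for illustration (the data here is made up, not the OP's simulation): plotting routines connect points in the order given, so unsorted x values produce a zig-zag instead of a curve.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(111)
x = rng.uniform(0, 1, 50)        # stand-in for paths_S[, n_steps]
yfit = 1 - (x - 0.5)**2          # stand-in for the regression predictions
o = np.argsort(x)                # analogous to o = order(paths_S[, 12])
plt.scatter(x, yfit, s=8)
plt.plot(x[o], yfit[o], color="blue")
plt.show()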

How to implement Euler method in R

I am trying to implement this Euler Method procedure but I am unable to get the required graphs.
solve_logistic <- function(N0, r = 1, delta_t = 0.01, times = 1000) {
  N <- rep(N0, times)
  dN <- function(N) r * N * (1 - N)
  for (i in seq(2, times)) {
    # Euler
    N[i] <- N[i-1] + delta_t * dN(N[i-1])
    # Improved Euler
    # k <- N[i-1] + delta_t * dN(N[i-1])
    # N[i] <- N[i-1] + 1/2 * delta_t * (dN(N[i-1]) + dN(k))
    # Runge-Kutta 4th order
    # k1 <- dN(N[i-1]) * delta_t
    # k2 <- dN(N[i-1] + k1/2) * delta_t
    # k3 <- dN(N[i-1] + k2/2) * delta_t
    # k4 <- dN(N[i-1] + k3) * delta_t
    #
    # N[i] <- N[i-1] + 1/6 * (k1 + 2*k2 + 2*k3 + k4)
  }
  N
}
This is the graph I want to make: [figure omitted]
You can also view the original source which I am following for this graph.
Your interest in epidemiological models is a good thing.
To obtain a graph similar to the one you show, you first need to code the analytical solution of N(t), which is given on the reference website.
logistic <- function(N0, r, t){
  return(1 / (1 + ((1 - N0) / N0) * exp(-r * t)))
}
Moreover, you should be careful with the abscissa: step i of the solver corresponds to time i * delta_t.
r <- 1
t <- 1:1000
N0 <- 0.03
delta_t <- 0.01
plot(t * delta_t, logistic(N0 = N0, r = r, t = t * delta_t), type = "l",
     ylim = c(0, 1),
     ylab = "N(t)",
     xlab = "times")
lines(t * delta_t, solve_logistic(N0 = N0, times = max(t)),
      col = "red", lty = 2)
This gives you part of the graphic; now you can compute the error of the method and test other values of delta_t.
The Euler method is a numerical method for solving ODEs, based on a Taylor expansion, much like the gradient descent algorithm.
solve_logistic <- function(N0, r = 1, delta_t = 0.01, times = 1000) {
  N <- rep(N0, times)
  dN <- function(N) r * N * (1 - N)
  for (i in seq(2, times)) {
    # Euler (you follow the local slope with a small step delta_t)
    N[i] <- N[i-1] + delta_t * dN(N[i-1])
  }
  N
}
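For comparison, the same Euler-versus-analytic comparison can be sketched in Python/NumPy (my translation, not from the original answer):
import numpy as np
import matplotlib.pyplot as plt

def solve_logistic(N0, r=1.0, delta_t=0.01, times=1000):
    # Plain Euler steps for dN/dt = r * N * (1 - N), mirroring the R code.
    N = np.full(times, N0)
    for i in range(1, times):
        N[i] = N[i-1] + delta_t * r * N[i-1] * (1 - N[i-1])
    return N

t = np.arange(1, 1001) * 0.01          # abscissa: step index times delta_t
analytic = 1 / (1 + ((1 - 0.03) / 0.03) * np.exp(-t))
plt.plot(t, analytic, label="analytic")
plt.plot(t, solve_logistic(0.03), "r--", label="Euler")
plt.legend()
plt.show()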

2 Dimension Runge-Kutta Method on Mathematica 8

I have a problem while programming in Mathematica 8; here is my code:
f[t_, y_] := {y, y};
RungeKutta3[a_, b_, Alpha_, n_, f_] :=
Module[{h, j, k1, k2, k3},
h = (b - a)/n;
Y = T = Table[0, {100 + 1}];
Y[[1]] = Alpha;
T[[1]] = a;
For[j = 1, j <= n, ++j,
k1 = f[T[[j]], Y[[j]]];
k2 = f[T[[j]] + h/2, Y[[j]] + k1*h/2];
k3 = f[T[[j]] + h, Y[[j]] + (-k1 + 2 k2)h];
Y[[j + 1]] = Y[[j]] + h/6(k1 + 4 k2 + k3);
(* Print[j, "----->", Y[[j]]];*)
T[[j + 1]] = T[[j]] + h;
];];
RungeKutta3[0., 1., {300., 500}, 2, f];
The thing is, I'm trying to implement a Runge-Kutta method. I was actually successful, but only with a function f[x_] of 1 dimension. This code is for 2 dimensions, but it simply doesn't work and I don't know why. Here is an example of the code with 1 dimension only (notice that I only have to change the first line, which defines the function, and the last line, where I call RungeKutta3).
f[t_, y_] := y;
RungeKutta3[a_, b_, Alpha_, n_, f_] :=
Module[{h, j, k1, k2, k3},
h = (b - a)/n;
Y = T = Table[0, {100 + 1}];
Y[[1]] = Alpha;
T[[1]] = a;
For[j = 1, j <= n, ++j,
k1 = f[T[[j]], Y[[j]]];
k2 = f[T[[j]] + h/2, Y[[j]] + k1*h/2];
k3 = f[T[[j]] + h, Y[[j]] + (-k1 + 2 k2)*h];
Y[[j + 1]] = Y[[j]] + h/6*(k1 + 4 k2 + k3);
(* Print[j, "----->", Y[[j]]];*)
T[[j + 1]] = T[[j]] + h;
];];
RungeKutta3[0., 1., 300., 100, f];
To sum up, how do I implement the Runge-Kutta method for a function with 2 dimensions?
If you could help me out I would be grateful.
Thanks in advance!
PS: the Runge-Kutta method is order 3
----------------------
Problem solved! Check the code, if anybody needs help with anything, just ask!
f[t_, y1_, y2_] := 3 t*y2 + Log[y1] + 4 y1 - 2 t^2 * y1 - Log[t^2 + 1] - t^2;
F[t_, {y1_, y2_}] := {y2, f[t, y1, y2]};
RungeKutta3[a_, b_, \[Alpha]_, n_, f_] :=
  Module[{h, j, k1, k2, k3, Y, T, R},
   h = (b - a)/n;
   Y = T = Table[0, {n + 1}];
   Y[[1]] = \[Alpha]; T[[1]] = a;
   For[j = 1, j <= n, ++j,
    k1 = f[T[[j]], Y[[j]]];
    k2 = f[T[[j]] + h/2, Y[[j]] + k1*h/2];
    k3 = f[T[[j]] + h, Y[[j]] + (-k1 + 2 k2)*h];
    Y[[j + 1]] = Y[[j]] + h/6*(k1 + 4 k2 + k3);
    T[[j + 1]] = T[[j]] + h;
    ];
   R = Table[0, {n + 1}];
   For[j = 1, j <= n + 1, j++, R[[j]] = Y[[j]][[1]]];
   Print[ListPlot[Transpose[{T, R}]]]
   ];
RungeKutta3[0., 1, {1., 0.}, 1000, F];
I now basically have a Mathematica program that can solve ANY 2nd-order equation via the Runge-Kutta method! Just insert your function in
f[t_, y1_, y2_] := [Insert your function here]
where t is the independent variable, y1 is the function itself y(t), and y2 is y'(t).
Call the function through:
RungeKutta3[a, b, \[Alpha], n, F];
where a is the initial t value, b the final t value, \[Alpha] the initial values of the function and its first derivative (given in the form {y1(a), y2(a)}), and n the number of equally spaced points you want to represent. F is the function you have to pass in, rather than the f you defined.
Any questions feel free to ask!!
PS: The Runge-Kutta method solves differential equations posed as initial-value problems; I used this program as a base to solve a boundary-value problem. If you want it, just ask!
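For reference, here is a sketch of the same third-order scheme in Python/NumPy (my translation of the Mathematica code above):
import numpy as np

def rk3(f, a, b, alpha, n):
    # Kutta's third-order method for a vector ODE y' = f(t, y), mirroring
    # the Mathematica code: stages at t, t + h/2 and t + h, combined with
    # weights (1, 4, 1)/6.
    h = (b - a) / n
    t = a + h * np.arange(n + 1)
    y = np.empty((n + 1, len(alpha)))
    y[0] = alpha
    for j in range(n):
        k1 = f(t[j], y[j])
        k2 = f(t[j] + h / 2, y[j] + h / 2 * k1)
        k3 = f(t[j] + h, y[j] + h * (-k1 + 2 * k2))
        y[j + 1] = y[j] + h / 6 * (k1 + 4 * k2 + k3)
    return t, y

# The same second-order ODE as above, rewritten as a first-order system:
def F(t, y):
    y1, y2 = y
    return np.array([y2, 3 * t * y2 + np.log(y1) + 4 * y1
                     - 2 * t**2 * y1 - np.log(t**2 + 1) - t**2])

t, y = rk3(F, 0.0, 1.0, np.array([1.0, 0.0]), 1000)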
Doesn't your code just implement what is already built into Mathematica, namely, if you were to use the option
Method -> {"ExplicitRungeKutta", "DifferenceOrder" -> 3}
to NDSolve?
(This is not to suggest there's no value in "rolling your own": perhaps you want to do it as a learning exercise for yourself or for students, or as a student yourself.)
