I have this equation:
f(x) = i * ln(j * x + k)
with these two conditions: f(0) = 6 and f(1) = 12.
After several hours of research, I cannot find how to fit the parameters i, j and k so that they respect the conditions, using RStudio.
I know how to do it with Excel, but I want to succeed in doing it with R.
Does anyone have an idea how to solve this problem with R?
I can help you with the Monte Carlo method.
After working out the math, the two conditions give:
i = 6/log(k)
k^2 = j + k   (that is, k = sqrt(j + k))
So you apply the Monte Carlo method: draw many random (j, k) pairs and keep the ones that satisfy this relation.
a = data.frame(k = round(runif(1000000, -2, 2), 4), j = round(runif(1000000, -2, 2), 4))
a$k2 = round(sqrt(a$j + a$k), 4)
a = a[!is.na(a$k2), ]   # drop the NaN rows caused by negative values of j + k under the square root
library(tidyverse)      # to use the near() function
a[which(near(a$k, a$k2, 0.001)), ]
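As a possible follow-up (a sketch reusing the filter above), the remaining parameter i can then be recovered from the first condition, i = 6/log(k), for each accepted pair:
sol <- a[which(near(a$k, a$k2, 0.001)), ]
sol$i <- 6 / log(sol$k)   # from i * log(k) = 6; requires k > 0 and k != 1
head(sol)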
Given the constraints you can solve for i and j in terms of k:
f(0) = 6
=> i*ln( j*0 + k) = 6
=> i*ln(k) = 6
=> i = 6/ln(k)
f(1) = 12
=> i*ln( j*1 + k) = 12
=> (6/ln(k))*ln(j+k) = 12
=> ln(j+k) = 2*ln(k)
=> j+k = k*k
=> j = k*k-k
So
f(x) = (6/ln(k)) * ln(k*(k-1)*x + k)
As a check:
f(0) = (6/ln(k)) * ln(k*(k-1)*0 + k)
     = (6/ln(k)) * ln(k) = 6
f(1) = (6/ln(k)) * ln(k*(k-1)*1 + k)
     = (6/ln(k)) * ln(k*k)
     = (6/ln(k)) * 2*ln(k)
     = 12
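Since the two conditions leave one free parameter, any admissible k gives a valid solution. As a quick sanity check in R (the value k = 3 below is an arbitrary example, not from the question):
k <- 3                       # arbitrary choice: any k > 0 with k != 1 works
i <- 6 / log(k)
j <- k^2 - k
i * log(j * c(0, 1) + k)     # evaluates f at x = 0 and x = 1: gives 6 and 12 (up to floating-point error)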
However, I do not understand what you want to optimize.
optim
Define f as the function in the question, except that we explicitly list all arguments, and define ss as the residual sum of squares. Then minimize ss using an arbitrary value for i (since we have two equations and three unknowns). Below we show the solution for j and k (in the par component of the output) using i = 10 as an input.
f <- function(x, i, j, k) i * log(j * x + k)
ss <- function(p, i) (f(x = 0, i = i, j = p[1], k = p[2]) - 6)^2 +
                     (f(x = 1, i = i, j = p[1], k = p[2]) - 12)^2
optim(1:2, ss, i = 10)
giving:
$par
[1] 1.497972 1.822113
$value
[1] 9.894421e-09
$counts
function gradient
59 NA
$convergence
[1] 0
$message
NULL
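As a quick check, the fitted values can be plugged back into f (a sketch reusing the f and ss defined above; the optim call is repeated here with its result stored):
res <- optim(1:2, ss, i = 10)
f(x = c(0, 1), i = 10, j = res$par[1], k = res$par[2])   # should be close to 6 and 12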
nlsLM
Alternatively, we can use nonlinear least squares. This is slightly easier to specify since we don't need to define ss, but it does require a package. We use nlsLM instead of nls from core R since nls does not handle zero-residual problems well.
library(minpack.lm)
nlsLM(y ~ f(x, i, j, k), data = list(y = c(6, 12), x = 0:1, i = 10),
start = c(j = 1, k = 2))
giving:
Nonlinear regression model
model: y ~ f(x, i, j, k)
data: list(y = c(6, 12), x = 0:1, i = 10)
j k
1.50 1.82
residual sum-of-squares: 0
Number of iterations to convergence: 4
Achieved convergence tolerance: 1.49e-08
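If the nlsLM call is assigned to an object (fit_nls is just an illustrative name, not in the original), the estimates can be extracted with coef() and checked the same way:
fit_nls <- nlsLM(y ~ f(x, i, j, k), data = list(y = c(6, 12), x = 0:1, i = 10),
                 start = c(j = 1, k = 2))
co <- coef(fit_nls)
f(x = c(0, 1), i = 10, j = co[["j"]], k = co[["k"]])   # should be close to 6 and 12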
I want to solve the following stochastic differential equation with R:
dx/dt = f(x) + sigma*dW
with
f(x) = a + b*x + c*x^2   for x <= 1
f(x) = a + b*x           for x > 1
and
sigma = d^2,
where a, b, c, and d are constants.
I tried using:
f = expression(a + b*x + c*x^2)
s = expression(d^2)
solution <- sde.sim(X0=0.6, t0=0, N=2000, delta=0.01, drift = f, sigma = s )
But how do I include the second case (when x>1)?
Sorry for the poor rendering of the mathematical expressions. I do not know how to write LaTeX here.
Maybe something like this, where (x <= 1) evaluates to 0 or 1 depending on the case:
library(sde)   # sde.sim() is provided by the sde package
f = expression(1 + 2 * x + (x <= 1) * 3 * x^2)
s = expression(2^2)
solution <- sde.sim(X0 = 0.6, t0 = 0, N = 2000, delta = 0.01, drift = f, sigma = s)
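To convince yourself that the indicator switches the quadratic term on and off as intended, you can evaluate the drift expression at a few points on either side of 1 (the constants 1, 2 and 3 above stand in for your a, b and c, and 2^2 for d^2):
x <- c(0.5, 1, 1.5)
eval(f)   # the 3*x^2 term contributes only where x <= 1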
I am trying to obtain estimated constrained coefficients using RSS (residual sum of squares). The beta coefficients are constrained to [0, 1] and must sum to 1. Additionally, my third parameter is constrained to (-1, 1). Using the code below I can obtain a nice solution with simulated variables, but when I apply the methodology to my real data set I keep arriving at a non-unique solution. I am therefore wondering if there is a more numerically stable way to obtain my estimated parameters.
set.seed(234)
k = 2
a = diff(c(0, sort(runif(k-1)), 1))
n = 1e4
x = matrix(rnorm(k*n), nc = k)
a2 = -0.5
y = a2 * (x %*% a) + rnorm(n)
f = function(u){sum((y - u[3] * (x %*% u[1:2]))^2)}
g = function(v){
  v1 = v[1]
  v2 = v[2]
  u = vector(mode = "double", length = 3)
  # ensure in (0,1)
  v1 = 1 / (1 + exp(-v1))
  # ensure add up to 1
  u[1:2] = c(v1, 1 - sum(v1))
  # ensure between [-1,1]
  u[3] = (v2^2 - 1) / (v2^2 + 1)
  u
}
res = optim(rnorm(2), function(v) f(g(v)), hessian = TRUE, method = "BFGS")
eigen(res$hessian)$values
res$convergence
rbind(Est = res$par, SE = sqrt(diag(solve(res$hessian))))
rbind(g(res$par),c(a,a2))
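For illustration, g() maps any unconstrained input pair into the constrained space (the inputs below are arbitrary examples):
g(c(-3, 0.2))   # first two entries lie in (0, 1) and sum to 1; third lies in [-1, 1)
g(c(5, -2))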
Hats off to http://zoonek.free.fr/blosxom/R/2012-06-01_Optimization.html
Since there has been no direct answer to your question so far, I'd like to show how to implement a parameter-constrained model in Stan/RStan. You should give this a try using your real data.
Doing Bayesian inference has the advantage of giving you full posterior distributions for your (constrained) model parameters; point estimates and credible intervals can then be easily calculated.
First off, we load the library and set RStan to store the compiled model and use multiple cores (if available).
library(rstan);
rstan_options(auto_write = TRUE);
options(mc.cores = parallel::detectCores());
We now define our Stan model. In this case, it's very simple, and we can make use of RStan's simplex data type for vectors of non-negative values that sum to one.
model <- "
data {
int<lower=1> n; // number of observations
int<lower=0> k; // number of parameters
matrix[n, k] X; // data
vector[n] y; // response
}
parameters {
real a2; // a2 is a free scaling parameter
simplex[k] a; // a is constrained to sum to 1
real sigma; // residuals
}
model {
// Likelihood
y ~ normal(a2 * (X * a), sigma);
}"
Stan supports various constrained data types; I'd recommend taking a look at the Stan manual for more complex examples.
Using the sample data from your original question, we can run our model:
# Sample data
set.seed(234);
k = 2;
a = diff(c(0, sort(runif(k-1)), 1));
n = 1e4;
x = matrix(rnorm(k * n), nc = k);
a2 = -0.5;
y = a2 * (x %*% a) + rnorm(n);
# Fit stan model
fit <- stan(
  model_code = model,
  data = list(
    n = n,
    k = k,
    X = x,
    y = as.numeric(y)),
  iter = 4000,
  chains = 4);
Running the model will only take a few seconds (after the model has been translated to C++ and compiled), and the full results (posterior distributions for all parameters conditional on the data) are stored in fit.
We can inspect the contents of fit using summary:
# Extract parameter estimates
pars <- summary(fit)$summary;
pars;
# mean se_mean sd 2.5% 25%
#a2 -0.4915289 1.970327e-04 0.014363398 -0.5194985 -0.5011471
#a[1] 0.7640606 2.273282e-04 0.016348488 0.7327691 0.7527457
#a[2] 0.2359394 2.273282e-04 0.016348488 0.2040952 0.2248482
#sigma 1.0048695 8.746869e-05 0.007048116 0.9909698 1.0001889
#lp__ -5048.4273105 1.881305e-02 1.204892294 -5051.4871931 -5048.9800451
# 50% 75% 97.5% n_eff Rhat
#a2 -0.4916061 -0.4819086 -0.4625947 5314.196 1.0000947
#a[1] 0.7638723 0.7751518 0.7959048 5171.881 0.9997468
#a[2] 0.2361277 0.2472543 0.2672309 5171.881 0.9997468
#sigma 1.0048994 1.0095420 1.0187554 6492.930 0.9998086
#lp__ -5048.1238783 -5047.5409682 -5047.0355381 4101.832 1.0012841
You can see that a[1]+a[2]=1.
Plotting parameter estimates including credible intervals is also easy:
plot(fit);
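If you need the raw posterior draws, e.g. to compute your own point estimates or credible intervals, they can be extracted from fit as well (a minimal sketch):
draws <- rstan::extract(fit)
quantile(draws$a2, probs = c(0.025, 0.5, 0.975))   # posterior median and 95% credible interval for a2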
The simplest way to solve optimization problems with equality and inequality constraints will most likely be through the "augmented Lagrangian" approach. In R this is, for example, realized in the alabama package.
# function and gradient
fn = function(u){sum((y - u[3] * (x %*% u[1:2]))^2)}
gr = function(u) numDeriv::grad(fn, u)
# constraint sum(u) == 1
heq = function(u) sum(u) - 1
# constraints 0 <= u[1],u[2] <= 1; -1 <= u[3] <= 1
hin = function(u) c(u[1], u[2], 1-u[1], 1-u[2], u[3]+1, 1-u[3])
sol_a = alabama::auglag(c(0.5, 0.5, 0), fn, gr, hin=hin, heq=heq)
sol_a
## $par
## [1] 1.0000000 0.3642904 -0.3642904
## $value
## [1] 10094.74
## ...
## $hessian
## [,1] [,2] [,3]
## [1,] 15009565054 9999999977 9999992926
## [2,] 9999999977 10000002578 9999997167
## [3,] 9999992926 9999997167 10000022569
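A quick feasibility check of the returned solution (reusing sol_a and hin from above):
sum(sol_a$par)    # equality constraint: should equal 1
hin(sol_a$par)    # inequality constraints: all entries should be non-negative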
For other packages containing an "augmented Lagrangian" procedure see the CRAN Task View on optimization.
My task is to simulate a compound Poisson process defined as
X(t) = sum_{i=1}^{N(t)} Y_i,
where N(t) is a Poisson process and the Y_i are Gamma(shape, scale) distributed. This is my R code:
# parameter for Poisson distribution.
lambda = 1
# parameters for Gamma distribution.
shape = 7.5
scale = 1
comp.pois = function(t.max, lambda) {
  stopifnot(t.max >= 0 && t.max %% 1 == 0)
  # offset ns by 1 because first y is 0.
  # generate N(t), that is number of arrivals until time t.
  ns = cumsum(rpois(n = t.max, lambda = lambda)) + 1
  # generate gamma distributed random variables Y_i.
  ys = c(0, rgamma(n = max(ns), shape = shape, scale = scale))
  # generate all X(t) for t <= t.max.
  return(c(0, cumsum(x = ys[ns])))
}
Compute a random sample of X(10) and compare means and variances.
# sample size.
size = 1000
t = 10
# ts is a vector of sample values for X(10).
ts = sapply(1:size, function(i) comp.pois(t, lambda)[t])
# sample mean and variance:
(mean.s = mean(ts))
(var.s = var(ts))
# theoretical mean and variance:
(mean.t = lambda * t * shape * scale)
(var.t = (shape + 1) * shape * scale^2)
output:
> # sample:
> (mean.s = mean(ts))
[1] 63.38403
> (var.s = var(ts))
[1] 184.3264
> # theoretical:
> (mean.t = lambda * t * shape * scale)
[1] 75
> (var.t = (shape + 1) * shape * scale^2)
[1] 63.75
This variance is gigantic, but I cannot spot my mistake. Please help. Thank you.
EDIT:
I used the following algorithm to generate the N(t). I don't know why it is supposed to be better. I took it from Rizzo, Maria L., Statistical Computing with R, CRC Press, 2007. The mean is good, but the variance is even worse. I tried sampling from the Gamma distribution only once for the entire simulation (although I'm pretty sure this does not reflect the problem very well) and the mean was off by around 10-40 for t = 10. When resampling for every X(t) (which is what the following code does), the mean is very accurate. As pointed out, the variance is horrifying. This is probably not a good solution, but I suppose it is as good as it gets.
lambda = 3
shape = 6
scale = 2
size = 10000
eps = 1e-8
t = 10
# with probability 1-eps, n or less gamma distributed random variables are needed.
n = qpois(1-eps, lambda = lambda * t)
# sample from the gamma distribution. Not sure if it's ok to use the same sample every time.
# with this, the mean is off by about 10%.
# ys = c(rgamma(n = n, shape = shape, scale = scale))
# the interarrival times are exponentially distributed with rate lambda.
pp.exp = function (t0) {
  # not sure how many Tn are needed :/
  Tn = rexp(1000, lambda)
  Sn = cumsum(Tn)
  return(min(which(Sn > t0)) - 1)
}
# generate N(t) which follow the poisson process.
ns = sapply(1:size, function (i) pp.exp(t))
# generate X(t) as in the problem description.
xs = sapply(ns, function (n) {
  ys = c(rgamma(n = n, shape = shape, scale = scale))
  sum(ys[1:n])
})
output (t=10) in this case:
> # compare mean and variance of 'size' samples of X(t) for verification.
> # sample:
> (mean.s = mean(xs))
[1] 359.864
> (var.s = var(xs))
[1] 4933.277
> # theoretical:
> (mean.t = lambda * t * shape * scale)
[1] 360
> (var.t = (shape + 1) * shape * scale^2)
[1] 168
I want to use the mle function to get estimates of a and b in a Unif(a,b) distribution. But I get absurd estimates nowhere close to 1 and 3.
library(stats4)
set.seed(20161208)
N <- 100
c <- runif(N, 1, 3)
LL <- function(min, max) {
  R <- runif(100, min, max)
  suppressWarnings((-sum(log(R))))
}
mle(minuslogl = LL, start = list(min = 1, max = 3), method = "BFGS",
lower = c(-Inf, 0), upper = c(Inf, Inf))
I got:
Call:
mle(minuslogl = LL, start = list(min = 1, max = 3), method = "BFGS")
Coefficients:
min max
150.8114 503.6586
Any ideas of what's going on? Thank you in advance!
I would first point out where your code is wrong.
You need dunif not runif. You may define:
LL <- function (a, b) -sum(dunif(x, a, b, log = TRUE))
In my code below I did not use dunif, as the density is just 1 / (b - a) so I wrote it directly.
You are generating samples inside the objective function. For U[a, b] this happens to be harmless, as its density does not depend on x, but for other distributions the objective function would change at each iteration.
With box constraints, you need method = "L-BFGS-B", not the ordinary "BFGS". And you are not using the right constraints.
Now in more depth...
For a length-n sample vector x from U[a, b], the likelihood is (b - a) ^ (-n), and the negative log-likelihood is n * log(b - a). Obviously the MLEs are a = min(x) and b = max(x).
Numerical optimization is completely unnecessary, and is in fact impossible without constraints. Look at the gradient vector:
( n / (a - b), n / (b - a) )
The partial derivative w.r.t. a is always negative and the partial derivative w.r.t. b is always positive, so neither can be 0.
A numerical approach becomes feasible when we impose box constraints: -Inf < a <= min(x) and max(x) <= b < Inf. We know for sure that the iteration terminates at the boundary.
My code below uses both optim and mle. Note that mle will fail when it inverts the Hessian matrix, as it is singular:
-1 / (b - a) ^ 2     1 / (b - a) ^ 2
 1 / (b - a) ^ 2    -1 / (b - a) ^ 2
Code:
## 100 samples
set.seed(20161208); x <- runif(100, 1, 3)
# range(x)
# [1] 1.026776 2.984544
## using `optim`
nll <- function (par) log(par[2] - par[1]) ## objective function
gr_nll <- function (par) c(-1, 1) / diff(par) ## gradient function
optim(par = c(0,4), fn = nll, gr = gr_nll, method = "L-BFGS-B",
lower = c(-Inf, max(x)), upper = c(min(x), Inf), hessian = TRUE)
#$par
#[1] 1.026776 2.984544 ## <- reaches boundary!
#
# ...
#
#$hessian ## <- indeed singular!!
# [,1] [,2]
#[1,] -0.2609022 0.2609022
#[2,] 0.2609022 -0.2609022
## using `stats4::mle`
library(stats4)
nll. <- function (a, b) log(b - a)
mle(minuslogl = nll., start = list(a = 0, b = 4), method = "L-BFGS-B",
lower = c(-Inf, max(x)), upper = c(min(x), Inf))
#Error in solve.default(oout$hessian) :
# Lapack routine dgesv: system is exactly singular: U[2,2] = 0