I'm trying to run an MLE for an infectious disease compartmental transmission model (an SEIR model, in my case SSEIR) with the mle2 command, fitting a curve of predicted weekly deaths to the observed weekly deaths, similar to this:
[plot of predicted vs. observed weekly deaths]
However, the parameter estimates always seem to land on the (sensible) boundaries I provide, and the SEs, z-values, and p-values are NA.
I set up the SEIR model and then solve it with the ode solver. Using that model output and the observed data, I calculate a negative log likelihood, which I then submit to the mle2 function.
When I first set it up, there were multiple errors that stopped the script from running, but now that those are resolved, I cannot seem to find the root of why the fitting doesn't work.
I am certain that the boundaries I set for the parameter estimation are sensible. The parameters are transition rates between compartments and are therefore defined as (for example) delta = 1/duration of infectiousness, so there are very real biological boundaries on what the parameters can be.
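For example, with durations expressed in weeks, the starting values used below work out to delta = 1/(4.1/7) ≈ 1.71 per week and gamma = 1/(4.68/7) ≈ 1.50 per week, both comfortably inside the bounds below.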
I am aware that I am trying to fit a lot of parameters with not that much data, but the same problem persists when I try only fitting one, so that cannot be the root of it.
library(deSolve)
library(bbmle)
#data
gdta <- c(0, 36.2708172419082, 1.57129615346629, 28.1146409459558, 147.701669719614, 311.876708482584, 512.401145459178, 563.798275104372, 470.731269976821, 292.716043742125, 153.604156195608, 125.760068922451, 198.755685044427, 143.847282793854, 69.2693867232681, 42.2093135487066, 17.0200426587424)
#build seir function
seir <- function(time, state, parameters) {
  with(as.list(c(state, parameters)), {
    dS0 <- -beta0 * S0 * (I/N)
    dS1 <- -beta1 * S1 * (I/N)
    dE  <- beta0 * S0 * (I/N) + beta1 * S1 * (I/N) - delta * E
    dI  <- delta * E - gamma * I
    dR  <- gamma * I
    return(list(c(dS0, dS1, dE, dI, dR)))
  })
}
# build function to run seir, include ode solver
run_seir <- function(time, state, beta0, beta1, delta, gamma, sigma, N, startInf) {
  parameters <- c(beta0, beta1, delta, gamma)
  names(parameters) <- c("beta0", "beta1", "delta", "gamma")
  init <- c(S0 = (N - startInf) * sigma,
            S1 = (N - startInf) * (1 - sigma),
            E  = 0,
            I  = startInf,
            R  = 0)
  state_est <- as.data.frame(ode(y = init, times = times, func = seir, parms = parameters))
  return(state_est)
}
times <- seq(0, 16, by = 1) #sequence
states <- c("S0", "S1", "E", "I", "R")
# run the run_seir function to see if it works
run_seir(time = times, state= states, beta0 = 1/(1.9/7), beta1 = 0.3*(1/(1.9/7)), delta = 1/(4.1/7), gamma = 1/(4.68/7), sigma = 0.7, N = 1114100, startInf = 100)
#build calc likelihood function
calc_likelihood <- function(times, state, beta0, beta1, delta, gamma, sigma, N, startInf, CFR) {
  model.output <- run_seir(time, state, beta0, beta1, delta, gamma, sigma, N, startInf)
  LL <- sum(dpois(round(as.numeric(gdta)), (model.output$I) / (1/delta) * CFR, log = TRUE))
  print(LL)
  return(LL)
}
# run calc_likelihood function
calc_likelihood(time = times, state = states, beta0 = 1/(1.9/7), beta1 = 0.3*(1/(1.9/7)), delta = 1/(4.1/7), gamma = 1/(4.68/7), sigma = 0.7, N = 1114100, startInf = 100, CFR = 0.02)
#MLE
#parameters that are supposed to be fixed
fixed.pars <- c(N=1114100, startInf=100, CFR = 0.02)
#parameters that mle2 is supposed to estimate
free.pars <- c(beta0 = 1/(1.9/7), beta1 = 0.3*(1/(1.9/7)),
delta = 1/(4.1/7), gamma = 1/(4.68/7), sigma = 0.7)
#lower bound
lower_v <- c(beta0 = 0, beta1 = 0, delta = 0, gamma = 0, sigma = 0)
#upper bound
upper_v <- c(beta0 = 15, beta1 = 15, delta = 15, gamma = 15, sigma = 1)
#sigma = 1, this is not a typo
#mle function - need to use L-BFGS-B since we need to include boundaries
test2 <- mle2(calc_likelihood, start = as.list(free.pars), fixed = as.list(fixed.pars),
              method = "L-BFGS-B", lower = lower_v, upper = upper_v)
summary(test2)
After I run mle2, I get a warning saying:
Warning message:
In mle2(calc_likelihood, start = as.list(free.pars), fixed = as.list(fixed.pars), :
some parameters are on the boundary: variance-covariance calculations based on Hessian may be unreliable
and if I look at summary(test2):
Warning message:
In sqrt(diag(object@vcov)) : NaNs produced
Based on the research I've done so far, I understand that the second warning (NaNs produced) is probably a consequence of the estimates sitting on the boundaries, so my question is really how to address the first one.
If I run mle2 with only lower boundaries, I get parameter estimates in the millions, which cannot be correct.
I am fairly certain that my model specification for the SEIR is correct, but after staring at this code and trying to resolve this issue for a week, I'm open to any input on how to make the fitting work.
Thanks,
JJ
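For reference while reading the code above: bbmle::mle2 minimizes whatever function it is given (its first argument is named minuslogl), so the objective is expected to return a negative log-likelihood. A minimal sketch of that pattern, reusing run_seir and gdta from above (illustrative only, not a tested fix):
#minimal sketch: the objective returns the *negative* Poisson log-likelihood
nll <- function(beta0, beta1, delta, gamma, sigma, N, startInf, CFR) {
  out <- run_seir(times, states, beta0, beta1, delta, gamma, sigma, N, startInf)
  -sum(dpois(round(gdta), lambda = out$I * delta * CFR, log = TRUE))
}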
Related
I am trying to establish a method of estimating infectious disease parameters by comparing real epidemic curves with simulations of a stochastic SIR model. To construct the stochastic SIR model, I am using the deSolve package and instead of using fixed parameter values I would like to draw the parameter value used in the equations at each time point from a Poisson distribution centered on the original parameter values.
Using the parameter beta as an example, beta represents the average number of transmission events per capita and is the product of the average number of contacts and the probability that transmission occurs upon contact. Realistically, there is variation in the number of contacts a person will have and since transmission is also a probabilistic event there is variation surrounding this too.
So even if the average transmission rate were to be 2.4 (for example), an individual can go on to infect 0, 1, 2 or 3 ... etc. people with varying probabilities.
I have tried to incorporate this into my code below using the rpois function and reassigning the parameters used in the equations to the outputs of the rpois.
I have run my code with the same initial values and parameters multiple times and all the curves are different indicating that SOMETHING "stochastic" is going on, but I am unsure whether the code is sampling using the rpois at each time point or just once at the beginning. I have only started coding very recently so do not have much experience.
I would be grateful if anyone more experienced than myself could verify what my code is ACTUALLY doing and whether it is sampling using rpois at each time point or not. If not I would be grateful for any suggestions for achieving this. Perhaps a loop is needed?
library('deSolve')
library('reshape2')
library('ggplot2')
#MODEL INPUTS
initial_state_values <- c(S = 10000,
                          I = 1,
                          R = 0)
#PARAMETERS
parameters <- c(beta = 2.4,
                gamma = 0.1)
#POISSON MODELLING OF PARAMETERS
#BETA
beta_p <- rpois(1, parameters[1])
#GAMMA
infectious_period_p <- rpois(1, 1/(parameters[2]))
gamma_p <- 1/infectious_period_p
#TIMESTEPS
times <- seq(from = 0, to = 50,by = 1)
#SIR MODEL FUNCTION
sir_model <- function(time, state, parameters) {
  with(as.list(c(state, parameters)), {
    N <- S + I + R
    lambda <- beta_p * I/N
    dS <- -lambda * S
    dI <-  lambda * S - gamma_p * I
    dR <-  gamma_p * I
    return(list(c(dS, dI, dR)))
  })
}
output <- as.data.frame(ode(y = initial_state_values,
                            times = times,
                            func = sir_model,
                            parms = parameters))
The code given in the question runs the model with constant parameters over time. Here is an example with parameters varying over time. Note, however, that this setting assumes that within a given time step the parameters are the same for all individuals in the population. If you want individual-level variability, you can either use a matrix formulation with different sub-populations or switch to an individual-based model.
Model with fluctuating population parameters:
library('deSolve')
initial_state_values <- c(S = 10000,
                          I = 1,
                          R = 0)
parameters <- c(beta = 2.4, gamma = 0.1)
times <- seq(from = 0, to = 50, by = 1) # note time step = 1!
# +1 to add one for time = zero
beta_p <- rpois(max(times) + 1, parameters[1])
infectious_period_p <- rpois(max(times) + 1, 1/(parameters[2]))
gamma_p <- 1/infectious_period_p
sir_model <- function(time, state, parameters) {
  # cat(time, "\n") # show time steps for debugging
  with(as.list(c(state, parameters)), {
    # this overwrites the parms passed via parameters
    beta  <- beta_p[floor(time) + 1]
    gamma <- gamma_p[floor(time) + 1]
    N <- S + I + R
    lambda <- beta * I/N
    dS <- -lambda * S
    dI <-  lambda * S - gamma * I
    dR <-  gamma * I
    list(c(dS, dI, dR))
  })
}
output <- ode(y = initial_state_values,
              times = times,
              func = sir_model,
              parms = parameters)
plot(output)
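A related deSolve idiom, added here only as an illustrative sketch (it is not part of the answer above): wrap the sampled parameter series in approxfun, so the derivative function can look the parameters up as step functions of time.
# illustrative sketch: time-varying parameters as step functions via approxfun
beta_fun  <- approxfun(0:max(times), beta_p,  method = "constant", rule = 2)
gamma_fun <- approxfun(0:max(times), gamma_p, method = "constant", rule = 2)
sir_model_forced <- function(time, state, parameters) {
  with(as.list(state), {
    beta   <- beta_fun(time)
    gamma  <- gamma_fun(time)
    N      <- S + I + R
    lambda <- beta * I/N
    list(c(-lambda * S, lambda * S - gamma * I, gamma * I))
  })
}
out_forced <- ode(y = initial_state_values, times = times,
                  func = sir_model_forced, parms = NULL)
plot(output, out_forced)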
Here is another, slightly more generalized version. It is added as a second answer to keep the original version compact and simple. The new version differs in the following respects:
- generalized, so that it can work with both fixed parameters and stochastic forcing
- parameters are passed as a list
- it runs a basic Monte-Carlo simulation
library('deSolve')
sir_model <- function(time, state, parameters) {
  with(as.list(c(state, parameters)), {
    # this overwrites the parms passed via parameters
    if (time_dependent) {
      beta  <- beta_p[floor(time) + 1]
      gamma <- gamma_p[floor(time) + 1]
    }
    N <- S + I + R
    lambda <- beta * I/N
    dS <- -lambda * S
    dI <-  lambda * S - gamma * I
    dR <-  gamma * I
    list(c(dS, dI, dR))
  })
}
initial_state_values <- c(S = 10000, I = 1, R = 0)
times <- seq(from = 0, to = 50, by = 1) # note time step = 1!
## (1) standard simulation with constant parameters
parameters <- c(beta = 2.4, gamma = 0.1)
out0 <- ode(y = initial_state_values,
            times = times,
            func = sir_model,
            parms = c(parameters, time_dependent = FALSE))
plot(out0)
## (2) single simulation with time varying parameters
beta_p <- rpois(max(times) + 1, parameters[1])
infectious_period_p <- rpois(max(times) + 1, 1/(parameters[2]))
gamma_p <- 1/infectious_period_p
## here the vectorized parameters are passed globally for simplicity;
## they could also be passed as a list (see the Monte-Carlo example below)
out1 <- ode(y = initial_state_values, times = times,
func = sir_model, parms = c(time_dependent = TRUE))
plot(out0, out1)
## (3) a sample of simulations
monte_carlo <- function(i) {
  # parameters <- c(beta = 2.4, gamma = 0.1)
  beta_p <- rpois(max(times) + 1, parameters[1])
  infectious_period_p <- rpois(max(times) + 1, 1 / (parameters[2]))
  gamma_p <- 1/infectious_period_p
  ode(y = initial_state_values, times = times,
      func = sir_model, parms = list(beta_p = beta_p,
                                     gamma_p = gamma_p,
                                     time_dependent = TRUE))
}
## run 10 simulations
out_mc <- lapply(1:10, monte_carlo)
plot(out0, out_mc, mfrow=c(1, 3))
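If a numerical summary of the Monte-Carlo sample is wanted in addition to the plot, one possible sketch (not part of the answer above) is to collect the infected compartment from each run and compute pointwise quantiles:
# illustrative summary: pointwise quantile band for the infected compartment
I_runs <- sapply(out_mc, function(run) run[, "I"])
I_band <- apply(I_runs, 1, quantile, probs = c(0.05, 0.5, 0.95))
matplot(times, t(I_band), type = "l", lty = c(2, 1, 2), col = 1,
        xlab = "time", ylab = "infected")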
I hope you are well and having a nice day. I am attempting to code an MH algorithm for a multiple linear regression model. I am following an online tutorial for simple linear regression with an MH algorithm and plan to apply the same principles to my multiple linear regression model, but I am running into a problem figuring out how to specify prior distributions for the parameters. Below is the code.
trueA <- 5
trueB <- 0
trueSd <- 10
sampleSize <- 31
# create independent x-values
x <- (-(sampleSize-1)/2):((sampleSize-1)/2)
# create dependent values according to ax + b + N(0,sd)
y <- trueA * x + trueB + rnorm(n=sampleSize,mean=0,sd=trueSd)
likelihood <- function(param){
  a = param[1]
  b = param[2]
  sd = param[3]
  pred = a*x + b
  singlelikelihoods = dnorm(y, mean = pred, sd = sd, log = T)
  sumll = sum(singlelikelihoods)
  return(sumll)
}
This is the portion where I am running into problems. At this point in the tutorial, the author defines prior distributions for the parameters. In my case, I would like to specify uninformative prior distributions for the coefficients and the standard deviation. The author says that they used uniform/normal distributions.
I found an example using Poisson regression, which looks like:
LogPriorFunction <- function(param){
  beta0 <- param[1]
  beta1 <- param[2]
  beta0prior <- dnorm(beta0, 0, sqrt(100), log = TRUE)
  beta1prior <- dnorm(beta1, 0, sqrt(100), log = TRUE)
  return(beta0prior + beta1prior) # Logarithm of prior distributions
}
But I am having trouble adapting it to the simple linear regression. Here is my attempt:
prior <- function(param){
  a = param[1]
  b = param[2]
  sd = param[3]
  aprior = dnorm(a, 0, 1000, log = TRUE)
  bprior = dnorm(b, 0, 1000, log = TRUE)
  sdprior = 1/rgamma(sd, shape = .001, scale = .001)
  return(aprior + bprior + sdprior) # Logarithm of prior distributions
}
The rest of the code is:
posterior <- function(param){
  return(likelihood(param) + prior(param))
}
proposalfunction <- function(param){
  return(rnorm(3, mean = param, sd = c(0.1, 0.5, 0.3)))
}
run_metropolis_MCMC <- function(startvalue, iterations){
  chain = array(dim = c(iterations + 1, 3))
  chain[1,] = startvalue
  for (i in 1:iterations){
    proposal = proposalfunction(chain[i,])
    probab = exp(posterior(proposal) - posterior(chain[i,]))
    if (runif(1) < probab){
      chain[i+1,] = proposal
    } else {
      chain[i+1,] = chain[i,]
    }
  }
  return(chain)
}
startvalue = c(4,0,10)
chain = run_metropolis_MCMC(startvalue, 10000)
burnIn = 5000
acceptance = 1-mean(duplicated(chain[-(1:burnIn),]))
Finally, in addition to the question about specifying the priors, I keep getting this error and warning:
Error in if (runif(1) < probab) { : missing value where TRUE/FALSE needed
In addition: Warning message:
In if (runif(1) < probab) { :
the condition has length > 1 and only the first element will be used
I am not sure if this is related to my incorrectly specifying the prior distributions, as the author of the article gets similar results from the MH algorithm as from fitting a simple linear regression model with the lm() function. Any help with these issues would be greatly appreciated. I look forward to hearing from you. Thank you.
As a reference, here is the link to the article:
https://khayatrayen.github.io/MCMC.html#defining_the_prior
Finally, here is a link to the article from where I tried to emulate the priors, but does Poisson regression and not linear regression:
https://rpubs.com/SaraGarcesCespedes/586440
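For comparison with the attempt above, here is one way the prior could be written so that it always returns a single log-density value: use density functions (dnorm, dunif) rather than a random-number generator such as rgamma for every parameter. This is only an illustrative sketch of that idea, not the tutorial's code:
# illustrative sketch: vague priors evaluated as log-densities
prior <- function(param){
  a  <- param[1]
  b  <- param[2]
  sd <- param[3]
  aprior  <- dnorm(a, mean = 0, sd = 1000, log = TRUE)
  bprior  <- dnorm(b, mean = 0, sd = 1000, log = TRUE)
  sdprior <- dunif(sd, min = 0, max = 1000, log = TRUE)  # -Inf if sd wanders outside (0, 1000)
  aprior + bprior + sdprior
}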
I am using RStudio to estimate parameters for data under a Variance Gamma distribution. I want to fit the distribution to this data and obtain parameter estimates. The code I have is:
x<-c(1291,849,238,140,118,108,87,70,63,58,50,47,21,21,19)
library(VarianceGamma)
init <- c(0, 0.5, 0, 0.5)
vgFit(x, freq = NULL, breaks = NULL, paramStart = init,
      startMethod = "Nelder-Mead", startValues = "SL",
      method = "Nelder-Mead", hessian = FALSE, plots = TRUE)
The error I got was:
Error in optim(paramStart, llsklp, NULL, method = startMethodSL, hessian = FALSE, :
function cannot be evaluated at initial parameters
I am not sure what the issue is.
The error might suggest divergence. Based on your previous questions, I'm wildly guessing that x holds the raw stock values. So a log-transformation may be necessary before modelling the change per time unit (e.g., daily returns).
x <- c(1291,849,238,140,118,108,87,70,63,58,50,47,21,21,19)
dx <- log(x)[2:length(x)] - log(x)[1:(length(x)-1)]
vgFit(dx)
#Parameter estimates:
# vgC sigma theta nu
# 0.16887 0.03128 -0.47164 0.27558
We may want to compare with simulated data. I implemented two methods; they appear equivalent for a large number of observations nt. Method 2 follows the construction below:
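In this construction (one common parameterization of the VG process, matching the code for method 2), each log-price increment is a gamma-subordinated Brownian-motion step: dV_i = vgC * dt_i + theta * G_i + sigma * sqrt(G_i) * Z_i, where G_i ~ Gamma(shape = dt_i/nu, scale = nu) and Z_i ~ N(0, 1).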
#Simulating VG as a time-fixed Brownian motion
set.seed(1)
nt = 15                   # number of observations
T  = nt - 1               # total time
dt = rep(T/(nt-1), nt-1)  # fixed time increments
r  = 1 + 0.16887          # interest rate
vgC   = (r - 1)
sigma = 0.03128
theta = -0.47164
nu    = 0.27558
V_    = rep(NA, nt)       # simulations for log stock value
V_[1] = 7.163172          # log(x[1])
V2_   = V_                # alternative simulation method
for (i in 2:nt) {
  # method 1: by the VarianceGamma package
  V_[i] <- V_[i-1] + rvg(1, vgC = vgC*dt[i-1], sigma = sigma, theta = theta, nu = nu)
  # method 2: by R built-in functions
  gamma_i <- rgamma(1, shape = dt[i-1]/nu, scale = nu)
  normal  <- rnorm(1, mean = 0, sd = sigma*sqrt(gamma_i))
  V2_[i]  <- V2_[i-1] + vgC*dt[i-1] + theta*gamma_i + normal
}
# Visual comparison
x11(width = 4, height = 4)
plot(x, xlab = 'Time', ylab = 'Stock value', type = 'l')
lines(exp(V_), col = 'red')
lines(exp(V2_), col = 'blue')
legend('topright', legend = c('Observed', 'Method1', 'Method2'),
       fill = c('black', 'red', 'blue'))
The resulting estimates suggest unstable estimation due to the small sample size nt:
#The real parameters:
c(vgC*dt[1], sigma, theta, nu)
# vgC sigma theta nu
# 0.16887 0.03128 -0.47164 0.27558
#Parameter estimates for 1st data set:
dV = V_[2:nt] - V_[1:(nt-1)]
vgFit(dV)
# vgC sigma theta nu
#-0.9851 0.3480 1.2382 2.0000
#Parameter estimates for 2nd data set:
dV2 = V2_[2:nt] - V2_[1:(nt-1)]
vgFit(dV2)
# vgC sigma theta nu
#-0.78033 0.07641 0.52414 0.11840
In addition, the rvg function assumes fixed time increments. We can relax that assumption, following @Louis Marascio's answer, by using a log-likelihood approach.
#Simulating VG as a time-changed Brownian motion
set.seed(1)
nt = 100              # increase the number of observations!
T  = nt - 1
dt = runif(nt-1)      # random time increments
dt = dt/sum(dt)*T
r  = 1 + 0.16887
vgC   = (r - 1)
sigma = 0.03128
theta = -0.47164
nu    = 0.27558
V_    = rep(NA, nt)   # simulations for log stock value
V_[1] = 7.163172
for (i in 2:nt) {
  V_[i] <- V_[i-1] + rvg(1, vgC = vgC*dt[i-1], sigma = sigma, theta = theta, nu = nu)
}
dV = V_[2:nt] - V_[1:(nt-1)]
# negative log-likelihood function with different time increments
ll = function(par){
  if (par[2] > 0 & par[4] > 0) {
    tem = 0
    for (i in 1:length(dV)) {
      tem = tem - log(dvg(dV[i], vgC = par[1]*dt[i], sigma = par[2], theta = par[3], nu = par[4]))
    }
    return(tem)
  } else {
    return(Inf)
  }
}
Indeed, the results show better estimation by relaxing the fixed time assumption:
#The real parameters:
c(vgC, sigma, theta, nu)
# vgC sigma theta nu
# 0.16887 0.03128 -0.47164 0.27558
#Assuming fixed time increments
vgFit(dV)$param*c(1/mean(dt),1,1,1)
# vgC sigma theta nu
#-0.2445969 0.3299023 -0.0696895 1.5623556
#Assuming different time increments
optim(vgFit(dV)$param*c(1/mean(dt),1,1,1),ll,
method = c("Nelder-Mead", "BFGS", "CG", "L-BFGS-B", "SANN", "Brent")[5])
# vgC sigma theta nu
# 0.16503125 0.03241617 -0.50193694 0.28221985
Finally, the confidence intervals for the estimated parameters may be obtained by multiple simulations:
set.seed(1)
out = NULL
for (j in 1:100) {    # 100 simulations
  V_ = rep(NA, nt)
  V_[1] = 7.163172
  for (i in 2:nt) {
    V_[i] <- V_[i-1] + rvg(1, vgC = vgC*dt[i-1], sigma = sigma, theta = theta, nu = nu)
  }
  dV = V_[2:nt] - V_[1:(nt-1)]
  # skip runs where the fit diverges
  tem <- try(vgFit(dV)$param)
  if (inherits(tem, "try-error")) next
  out = rbind(out, tem)
}
apply(out,2,mean)
# vgC sigma theta nu
#-0.8735168 0.1652970 0.4737270 0.9821458
apply(out,2,sd)
# vgC sigma theta nu
#2.8935938 0.3092993 2.6833866 1.3161695
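As a rough check, percentile intervals can also be read directly from the simulated estimates (a sketch; it assumes the out matrix built above):
# rough 95% percentile intervals across the simulated fits
apply(out, 2, quantile, probs = c(0.025, 0.975))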
I am using the 'KFAS' package in R to estimate a state-space model with the Kalman filter. My measurement and transition equations are:
y_t = Z_t * x_t + \epsilon_t (measurement)
x_t = T_t * x_{t-1} + R_t * \eta_t (transition),
with \epsilon_t ~ N(0, H_t) and \eta_t ~ N(0, Q_t).
So, I want to estimate the variances H_t and Q_t, but also T_t, the AR(1) coefficient. My code is as follows:
library(KFAS)
set.seed(100)
eps <- rt(200, 4, 1)
meas <- as.matrix((arima.sim(n=200, list(ar=0.6), innov = rnorm(200)*sqrt(0.5)) + eps),
ncol=1)
Zt <- 1
Ht <- matrix(NA)
Tt <- matrix(NA)
Rt <- 1
Qt <- matrix(NA)
ss_model <- SSModel(meas ~ -1 + SSMcustom(Z = Zt, T = Tt, R = Rt,
Q = Qt), H = Ht)
fit <- fitSSM(ss_model, inits = c(0,0.6,0), method = 'L-BFGS-B')
But it returns: "Error in is.SSModel(do.call(updatefn, args = c(list(inits, model), update_args)),: System matrices (excluding Z) contain NA or infinite values, covariance matrices contain values larger than 1e+07"
Defining the variances as NA works well, as documented in the package's paper. However, it seems this cannot be done for the AR coefficient. Does anyone know how I can do this?
Note that I am aware of the SSMarima function, which makes it easier to define the transition equation as an ARIMA model. Although I am able to estimate the AR(1) coefficient and Q_t this way, I still cannot estimate the \epsilon_t variance (H_t). Moreover, I am migrating my Kalman filter code from EViews to R, so I need to learn SSMcustom for other, more complicated models.
Thanks!
It seems that something is missing from your example, as your error message comes from the function fitSSM. If you want to use fitSSM to estimate general state space models, you need to provide your own model-updating function: the default behaviour can only handle NAs in the covariance matrices H and Q. The main goal of fitSSM is just to help you get started with simple cases. For complex models and/or large data, I would recommend writing your own objective function (with the help of the logLik method) and running your favourite numerical optimization routine manually for maximum performance. Something like this:
library(KFAS)
set.seed(100)
eps <- rt(200, 4, 1)
meas <- as.matrix((arima.sim(n=200, list(ar=0.6), innov = rnorm(200)*sqrt(0.5)) + eps),
ncol=1)
Zt <- 1
Ht <- matrix(NA)
Tt <- matrix(NA)
Rt <- 1
Qt <- matrix(NA)
ss_model <- SSModel(meas ~ -1 + SSMcustom(Z = Zt, T = Tt, R = Rt,
Q = Qt), H = Ht)
objf <- function(pars, model, estimate = TRUE) {
  model$H[1] <- pars[1]
  model$T[1] <- pars[2]
  model$Q[1] <- pars[3]
  if (estimate) {
    -logLik(model)
  } else {
    model
  }
}
opt <- optim(c(1, 0.5, 1), objf, method = "L-BFGS-B",
lower = c(0, -0.99, 0), upper = c(100, 0.99, 100), model = ss_model)
ss_model_opt <- objf(opt$par, ss_model, estimate = FALSE)
Same with fitSSM:
updatefn <- function(pars, model) {
  model$H[1] <- pars[1]
  model$T[1] <- pars[2]
  model$Q[1] <- pars[3]
  model
}
fit <- fitSSM(ss_model, c(1, 0.5, 1), updatefn, method = "L-BFGS-B",
lower = c(0, -0.99, 0), upper = c(100, 0.99, 100))
identical(ss_model_opt, fit$model)
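As a possible follow-up (not part of the original answer), the Kalman filter and smoother can then be run on the fitted model with KFS; the smoothed state estimates are returned in its alphahat component:
# illustrative follow-up, assuming the objects defined above
kfs_out <- KFS(ss_model_opt)
ts.plot(ts(as.numeric(meas)), kfs_out$alphahat, col = 1:2)  # data vs. smoothed state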
I am trying to estimate the parameters of the three-parameter Weibull distribution with ML for censored data.
I've worked it out using the flexsurv package, where I've defined my own density function.
I've also followed the instructions given in the documentation of flexsurv::flexsurvreg to build the list with all the information required to do the MLE with a custom density function.
In the following you can see what I've done so far.
library(FAdist)
library(flexsurv)
set.seed(1)
thres <- 3500
data <- rweibull(n = 1000, shape = 2.2, scale = 25000) + thres
y <- sample(c(0, 1), size = 1000, replace = TRUE)
df1 <- data.frame(x = data, status = y)
dweib3 <- function(x, shape, scale, thres, log = FALSE) {
  dweibull(x - thres, shape, scale, log = log)
}
pweib3 <- function(q, shape, scale, thres, log_p = FALSE) {
  pweibull(q - thres, shape, scale, log.p = log_p)
}
# Not required
#qweib3 <- function(p, shape, scale, thres, log.p = FALSE) {
# if (log.p == TRUE) {
# p <- exp(p)
# }
# qwei3 <- thres + qweibull(p, shape, scale)
# return(qwei3)
#}
dweib3 <- Vectorize(dweib3)
pweib3 <- Vectorize(pweib3)
custom.weibull <- list(name = "weib3",
                       pars = c('shape', 'scale', 'thres'),
                       location = 'scale',
                       transforms = c(log, log, log),
                       inv.transforms = c(exp, exp, exp),
                       inits = function(t) {
                         c(1.2 / sqrt(var(log(t))),
                           exp(mean(log(t)) + 0.572 / (1.2 / sqrt(var(log(t))))),
                           0.5 * min(t))
                       })
ml <- flexsurvreg(Surv(df1$x, df1$status) ~ 1, data = df1, dist = custom.weibull)
The variable y represents the status of a unit, where 1 indicates a failure and 0 a unit that was still unfailed at censoring.
The initial values for shape and scale are taken from the moments, as also used in the fitdistrplus package.
The threshold parameter needs a constraint, since the threshold must be strictly smaller than the minimum of the data. Capping the threshold at 0.99 * min(t) would be satisfactory (this is something I haven't implemented yet; see the sketch at the end of this post).
The output of the above MLE is the following:
> ml
Call:
flexsurvreg(formula = Surv(df1$x, df1$status) ~ 1, data = df1,
dist = custom.weibull)
Estimates:
est L95% U95% se
shape 2.37e+00 2.12e+00 2.65e+00 1.33e-01
scale 3.52e+04 3.32e+04 3.74e+04 1.08e+03
thres 2.75e+03 1.51e+03 5.02e+03 8.44e+02
N = 1000, Events: 481, Censored: 519
Total time at risk: 25558684
Log-likelihood = -5462.027, df = 3
AIC = 10930.05
The estimated parameters are not good, even taking the censoring into account.
I've run this procedure a few times with other randomly generated data; the estimates are always pretty far from the "truth".
Therefore I need an improvement to my code, or another way to estimate the parameters of a three-parameter Weibull by MLE.
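Regarding the threshold constraint mentioned above (cap thres at 0.99 times the smallest observation), one way it could be imposed is through the parameter transforms that flexsurvreg uses internally: instead of a plain log transform for thres, map it onto (0, 0.99 * min(x)) with a scaled logit. This is only a sketch of the idea, reusing the df1 and custom.weibull objects from above, and it has not been verified to improve the estimates:
# sketch: bound the threshold at 99% of the smallest observation via its transform
tmax <- 0.99 * min(df1$x)
custom.weibull$transforms     <- list(log, log, function(p) qlogis(p / tmax))
custom.weibull$inv.transforms <- list(exp, exp, function(p) tmax * plogis(p))
ml2 <- flexsurvreg(Surv(x, status) ~ 1, data = df1, dist = custom.weibull)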