Estimating parameters of a binomial distribution in R

I'm trying to estimate the parameters n and p of a binomial distribution by maximum likelihood in R.
I'm using the function optim from the stats package, but I get an error.
This is my code:
xi = rbinom(100, 20, 0.5) # Sample
n = length(xi) # Sample size
# Log-Likelihood
lnlike <- function(theta){
  log(prod(choose(theta[1],xi))) + sum(xi*log(theta[2])) +
    (n*theta[1] - sum(xi))*log(1-theta[2])
}
# Optimizing
optim(theta <- c(10,.3), lnlike, hessian=TRUE)
Error in optim(theta <- c(10, 0.3), lnlike, hessian = TRUE) :
function cannot be evaluated at initial parameters
Has anyone done this? Which function should I use?

tl;dr you're going to get a likelihood of zero (and thus a negative-infinite log-likelihood) if the response variable is greater than the binomial N (which is the theoretical maximum value of the response). In most practical problems, N is taken as known and just the probability is estimated. If you do want to estimate N, you need to (1) constrain it to be >= the largest value in the sample; (2) do something special to optimize over a parameter that must be discrete (this is an advanced/tricky problem).
First part of this answer shows debugging strategies for identifying the problem, second illustrates a strategy for optimizing N and p simultaneously (by brute force over a reasonable range of N).
Setup:
set.seed(101)
n <- 100
xi <- rbinom(n, size=20, prob=0.5) # Sample
Log-likelihood function:
lnlike <- function(theta){
  log(prod(choose(theta[1],xi))) + sum(xi*log(theta[2])) +
    (n*theta[1] - sum(xi))*log(1-theta[2])
}
Let's break this down.
theta <- c(10,0.3) ## starting values
lnlike(c(10,0.3)) ## -Inf
OK, the log-likelihood is -Inf at the starting value. Not surprising that optim() can't work with that.
Let's work through the terms.
log(prod(choose(theta[1],xi))) ## -Inf
OK, we're already in trouble on the first term.
prod(choose(theta[1],xi)) ## 0
The product is zero ... why?
choose(theta[1],xi)
## [1] 120 210 10 0 0 10 120 210 0 0 45 210 1 0
Lots of zeros. Why? What are the values of xi that are problematic?
## [1] 7 6 9 12 11 9 7 6
Aha! We're OK for 7, 6, 9 ... but in trouble with 12.
badvals <- (choose(theta[1],xi)==0)
all(badvals==(xi>10)) ## TRUE
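In other words, choose(theta[1], xi) is exactly zero whenever an observation exceeds theta[1], so the product collapses to zero and its log is -Inf. A slightly safer way to write the same log-likelihood (my rewrite, not part of the original answer) uses lchoose(); it avoids the separate risk of prod() underflowing for large samples, but it still returns -Inf whenever any xi > theta[1]:
lnlike2 <- function(theta) {
  sum(lchoose(theta[1], xi)) + sum(xi) * log(theta[2]) +
    (n * theta[1] - sum(xi)) * log(1 - theta[2])
}
lnlike2(c(10, 0.3))  ## still -Inf: some xi exceed 10
lnlike2(c(20, 0.3))  ## finite once theta[1] >= max(xi)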
If you really want to do this, you can do it by brute-force enumeration over reasonable values of n ...
## negative log-likelihood function
llik2 <- function(p,n) {
  -sum(dbinom(xi,prob=p,size=n,log=TRUE))
}
## possible N values (15 to 50)
nvec <- max(xi):50
Lvec <- numeric(length(nvec))
for (i in 1:length(nvec)) {
  ## optim() wants method="Brent"/lower/upper for 1-D optimization
  Lvec[i] <- optim(par=0.5, fn=llik2, n=nvec[i], method="Brent",
                   lower=0.001, upper=0.999)$value
}
nvec[which.min(Lvec)] ## 20
par(las=1,bty="l")
plot(nvec,Lvec,type="b")

Why do you get into trouble?
If you do lnlike(c(10, 0.3)), you get -Inf. That's why your error message complains about lnlike rather than optim.
Often, n is known and only p needs to be estimated. In this situation, both the moment estimator and the maximum likelihood estimator are available in closed form, and no numerical optimization is needed. So it is really unusual to want to estimate n.
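For example, with the size fixed at its known value of 20, the MLE of p is just the sample mean divided by the size (a quick sketch, not in the original answer):
p.hat <- mean(xi) / 20
p.hat  ## should be close to 0.5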
If you do want to estimate n, you have to be aware that it is constrained. Check
range(xi) ## 5 15
Your observations have range [5, 15]; therefore n >= 15 is required. How can you pass an initial value of 10? The search for n should start from a large value and move downward until it reaches max(xi). So you might try 30 as the initial value for n.
Additionally, you don't need to define lnlike in the current way. Do this:
lnlike <- function(theta, x) -sum(dbinom(x, size = theta[1], prob = theta[2], log = TRUE))
optim performs minimization by default (though it can do maximization). I have put a minus sign in the function to get the negative log-likelihood; in this way, you are minimizing lnlike w.r.t. theta.
You should also pass your observations xi as an additional argument to lnlike, rather than taking them from the global environment.
Naive try with optim:
In my comment, I already said that I don't believe using optim to estimate n will work, because n must be an integer while optim is designed for continuous variables. The errors and warnings below should convince you.
optim(c(30,.3), fn = lnlike, x = xi, hessian = TRUE)
Error in optim(c(30, 0.3), fn = lnlike, x = xi, hessian = TRUE) :
non-finite finite-difference value [1]
In addition: There were 15 or more warnings (use warnings() to see the
first 15)
> warnings()
Warning messages:
1: In dbinom(x, size = theta[1], prob = theta[2], log = TRUE) : NaNs produced
2: In dbinom(x, size = theta[1], prob = theta[2], log = TRUE) : NaNs produced
3: In dbinom(x, size = theta[1], prob = theta[2], log = TRUE) : NaNs produced
4: In dbinom(x, size = theta[1], prob = theta[2], log = TRUE) : NaNs produced
5: In dbinom(x, size = theta[1], prob = theta[2], log = TRUE) : NaNs produced
Solution?
Ben has provided you a way. Instead of letting optim estimate n, we manually do a grid search over n. For each candidate n, we perform a univariate optimization w.r.t. p. (In fact, there is no need for numerical optimization here; see the sketch below.) In this way, you get a profile likelihood for n. Then we find the n on the grid that minimizes this profile (negative log-)likelihood.
Ben has provided you full details, and I shall not repeat that. Nice (and swift) work, Ben!
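To illustrate the parenthetical point above (my own sketch, not Ben's code): for a fixed n, the MLE of p is available in closed form as mean(xi)/n, so the profile can be computed without any call to optim():
nvec <- max(xi):50
prof <- sapply(nvec, function(N) {
  p.hat <- mean(xi) / N                     # closed-form MLE of p for this N
  -sum(dbinom(xi, size = N, prob = p.hat, log = TRUE))
})
nvec[which.min(prof)]                       ## should again be close to 20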

Related

R code for maximum likelihood estimate from a specific likelihood function

I have been trying to write R code for maximum likelihood estimation from a log-likelihood function in a paper (equation 9 on page 609). The authors estimated it using MATLAB, which I am not familiar with, so I tried to write the code in R.
The log-likelihood function in the paper (snapshot omitted here), as implemented in the code below, is
ln L(γ) = Σ_i [ (1 − r_i)·n·ln(1 − γ) + r_i·ln(1 − (1 − γ)^n) ], where
r: Binary decision (0 or 1) indicating infested plant(s) detection (1) or not (0).
e: Inspection efficiency. This is known.
n: Sample size
The overall objective is to estimate plant infestation rate (gamma: γ) and epsilon (e) based on binary decision of presence and absence of infested plants instead of using infested plant(s) detected. So, the function has only binary information (r) of infested plant detection and sample size. Since epsilon (e) is known or fixed, the actual goal is to estimate gamma (γ) in a population.
Another objective is to compare the infestation rates estimated above with those from the hypergeometric sampling formula in another paper (on page 6). As implemented in the code below, the formula is
n = (1 − (1 − CI)^(1/D)) · (N − (D − 1)/2), where N is the total number of plants (box), D = p·N is the number of infested plants, and CI is the desired detection probability.
This formula gives the sample size required to detect infested plants with a selected probability (e.g., 95%) given an infestation rate. For example:
# Sample size calculation function; 'box' represents the total plant number
fosgate.sample1 <- function(box, p, ci) {
  ninf <- p * box
  sample.size <- round(((1 - (1 - ci)^(1/ninf)) * (box - (ninf - 1)/2)))
  # sample.size <- ceiling(((1 - (1 - ci)^(1/ninf)) * (box - (ninf - 1)/2)))
  sample.size
}
fosgate.sample1(box=100, p = .05, ci = .95) # where box: population or total plants, p: infestation rate, and ci: probability of detection
## 44
The idea is that if the sample size (e.g., 44) and binary decision data are provided, the log-likelihood function can be used to estimate the infestation rate, and that estimate should be close to the anticipated rate (e.g., .05). Ultimately, I would like to compare the plant infestation rates (gamma: γ) estimated from the log-likelihood function above with D/N in the sample size formula (the second one) or p in the sample size code below.
I generated R code for the log-likelihood described above.
### MLE with stats4
library(stats4)
# Log-likelihood function
plant.inf.lik <- function(inf.rate){
  logl <- suppressWarnings(
    sum((1 - insp.result) * n * log(1 - inf.rate) +
          insp.result * log(1 - (1 - inf.rate)^n))
  )
  return(-logl)
}
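As a quick sanity check of this function (my own sketch, not part of the original workflow), you can simulate one batch of binary inspection results with a known infestation rate and fit it once; note that n and insp.result are taken from the global environment, as the function expects:
n <- 44                      # hypothetical sample size per shipment
true.rate <- 0.05
set.seed(1)
insp.result <- rbinom(1000, size = 1, prob = 1 - (1 - true.rate)^n)
fit <- mle(plant.inf.lik, start = list(inf.rate = 0.04), method = "BFGS",
           nobs = length(insp.result))
coef(fit)                    # should be close to true.rate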
Using the sample size function (i.e., fosgate.sample1), I generated sample sizes for various combinations of total plants (box) and anticipated infestation rate (p). Since I am also interested in error/confidence ranges of the estimated plant infestation rates, I used bootstrapping to calculate a range of estimates (I am not sure if this is appropriate/acceptable). Here is the final code I generated:
### MLE and CI with bootstrapping with multiple scenarios
plant <- c(100, 500, 1000, 5000, 10000, 100000) # Total plant number
ir <- seq(.01, .2, by = .01) # Plant infestation rate
df.result <- data.frame(expand.grid(plant=plant, inf.rate = ir))
df.result$sample.size <- fosgate.sample1(box=df.result$plant, p=df.result$inf.rate, ci=.95) # Sample size
df.result$insp.result <- 1000 # Shipment number (can be replaced with random integers)
df.result <- df.result[order(df.result$plant, df.result$inf.rate, df.result$sample.size), ]
rownames(df.result) <- 1:nrow(df.result)
df.result$est.mean <- 0
#df.result$est.median <- 0
df.result$est.lower.ci <- 0
df.result$est.upper.ci <- 0
df.result$nsim <- 0
str(df.result)
head(df.result)
# Looping
est <- rep(NA, 1000)
for (j in 1:nrow(df.result)) {
  for (i in 1:1000) {
    insp.result <- sample(c(rep(1, df.result$insp.result[j] - df.result$insp.result[j]*df.result$inf.rate[j]),
                            rep(0, df.result$insp.result[j]*df.result$inf.rate[j])))
    ir <- df.result$inf.rate[j]
    n <- df.result$sample.size[j]
    insp.result <- sample(insp.result, replace = TRUE)
    est[i] <- coef(mle(plant.inf.lik, start = list(inf.rate = ir*.9), method = "BFGS",
                       nobs = length(insp.result)))
  }
  df.result$est.mean[j] <- mean(est, na.rm = TRUE)
  # df.result$est.median[j] <- median(est, na.rm = TRUE)
  df.result$est.lower.ci[j] <- quantile(est, prob = .025, na.rm = TRUE)
  df.result$est.upper.ci[j] <- quantile(est, prob = .975, na.rm = TRUE)
  df.result$nsim[j] <- length(est)
}
# Significance test result
sig <- ifelse(df.result$inf.rate >= df.result$est.lower.ci & df.result$inf.rate <= df.result$est.upper.ci, "no sig", "sig")
table(sig)
# Plot
library(ggplot2)
library(reshape2)
df.result$num <- ave(df.result$inf.rate, df.result$plant, FUN=seq_along)
df.result.m <- melt(df.result, id.vars=c("plant", "sample.size", "insp.result", "est.lower.ci", "est.upper.ci", "nsim", "num"))
df.result.m$est.lower.ci <- ifelse(df.result.m$variable == "inf.rate", NA, df.result.m$est.lower.ci)
df.result.m$est.upper.ci <- ifelse(df.result.m$variable == "inf.rate", NA, df.result.m$est.upper.ci)
str(df.result.m)
ggplot(data = df.result.m, aes(x = num, y = value, group=variable, color=variable, shape=variable))+
geom_point()+
geom_errorbar(aes(ymin = est.lower.ci, ymax = est.upper.ci), width=.5)+
scale_y_continuous(breaks = seq(0, .2, .02))+
xlab("Index")+
ylab("Plant infestation rate")+
facet_wrap(~plant, ncol = 3)
When I ran the code, I was able to obtain results and to compare estimated (est.mean) and anticipated (inf.rate) infestation rates, as shown in the plot below.
If the results are correct, the plot indicates that the estimation looks fine overall but is off for greater infestation rates.
Also, without the suppressWarnings() call I always got warning messages, and occasionally the error message below. I have no clue how to fix them.
## Warning messages
## 29: In log(1 - (1 - inf.rate)^n) : NaNs produced
## 30: In log(1 - inf.rate) : NaNs produced
## Error message (occasionally)
## Error in solve.default(oout$hessian) :
## Lapack routine dgesv: system is exactly singular: U[1,1] = 0
My questions are:
Is the R function (plant.inf.lik) appropriate for maximum likelihood estimation of the log-likelihood function above?
Should I take care of the warning and error messages? If yes, how? Again, I have no clue how to fix them.
Is bootstrapping (resampling?) an appropriate method to estimate CI ranges and/or the standard error?
I found this link useful for an alternative approach. Although I am still working on both approaches, the results seem different (perhaps a follow-up question).
Any suggestion would be greatly appreciated.
Concerning your last question about estimating CI ranges, there are three common methods for ML estimators:
Variance estimation from the inverted Hessian matrix.
Jackknife estimator for the variance (simpler and more stable, if the Hessian is estimated numerically, but computationally more expensive)
Bootstrap CIs (the computationally most expensive approach).
For bootstrap CIs, you do not need to implement them yourself (bias correction, e.g., can be tricky); you can rely on the R package boot.
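For instance, here is a minimal percentile-bootstrap sketch with boot (my own toy example with made-up Bernoulli data, not the shipment data above):
library(boot)
set.seed(1)
r <- rbinom(200, size = 1, prob = 0.4)                 # toy binary data
negll <- function(p, x) -sum(dbinom(x, size = 1, prob = p, log = TRUE))
mle.stat <- function(data, idx)                        # statistic = MLE on the resample
  optim(par = 0.5, fn = negll, x = data[idx], method = "Brent",
        lower = 1e-6, upper = 1 - 1e-6)$par
b <- boot(r, statistic = mle.stat, R = 999)
boot.ci(b, type = "perc")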
Incidentally, I wrote a summary with R code for all three approaches two years ago: Construction of Confidence Intervals (see section 5). For the method using the Hessian matrix, for example, the outline is as follows:
lnL <- function(theta) {
  # definition of the negative (!)
  # log-likelihood function, taking the
  # parameter vector theta...
}
# starting values for the optimization
theta0 <- c(start1, start2, ...)
# optimization
p <- optim(theta0, lnL, hessian=TRUE)
if (p$convergence == 0) {
  theta <- p$par
  covmat <- solve(p$hessian)
  sigma <- sqrt(diag(covmat))
}
The function mle from stats4 already wraps the covariance matrix estimation and returns it via vcov. In the practical use cases in which I have tried this (paired comparison models), though, this estimation was rather unstable, and I resorted to the jackknife method instead.
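For completeness, here is a bare-bones leave-one-out jackknife standard error (my own illustration with the same kind of toy data, not taken from the linked summary):
set.seed(1)
r <- rbinom(200, size = 1, prob = 0.4)                   # toy binary data
est <- function(x) mean(x)                               # ML estimator of p for Bernoulli data
theta.i <- sapply(seq_along(r), function(i) est(r[-i]))  # leave-one-out estimates
n.obs <- length(r)
jack.var <- (n.obs - 1) / n.obs * sum((theta.i - mean(theta.i))^2)
sqrt(jack.var)                                           # jackknife standard error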

lambda.1se not being within one standard error of the error

In the documentation of the function cv.glmnet(), it is given that:
lambda.1se :
largest value of lambda such that error is within 1 standard error of the minimum.
This means that lambda.1se gives the lambda whose error (cvm) is one standard error away from the minimum error.
So, while trying to check this fact:
There is a data set Boston in the MASS package. I performed cross-validation using the lasso:
x = model.matrix(crim~.-1,data=Boston)#-1 for removing the intercept column
y = Boston$crim
cv.lasso = cv.glmnet(x,y,type.measure = "mse",alpha=1)
And the value of cv.lasso$lambda.min comes out to be:
> cv.lasso$lambda.min
[1] 0.05630926
And, the value of cv.lasso$lambda.1se is:
> cv.lasso$lambda.1se
[1] 3.375651
Now, look at this:
> std(cv.lasso$cvm)
[1] 0.7177808
Where std is a function that returns the standard error of the values passed to it (defined in footnote 1 below).
And the minimum value of cvm can be found as:
> cv.lasso$cvm[cv.lasso$lambda==cv.lasso$lambda.min]
[1] 42.95009
So, we add the standard error to the value of cvm and we get:
> 42.95009+0.7177808
[1] 43.66787
Even though there is no lambda value corresponding exactly to this cvm value, we can get an idea from the existing data: lambda.1se should be between 0.4784899 and 0.4359821. But that's absolutely not the case. So there's a gut feeling that says I'm making a blunder here. Can you help me point it out?
1:Definition of std:
std<-function(x)
sd(x)/sqrt(length(x))
I'll add a seed so one can replicate the results below:
library(glmnet)
library(MASS)
data("Boston")
x = model.matrix(crim~.-1,data=Boston)#-1 for removing the intercept column
y = Boston$crim
set.seed(100)
cv.lasso = cv.glmnet(x,y,type.measure = "mse",alpha=1)
The minimum cross-validated MSE is min(cv.lasso$cvm) = 43.51256. The corresponding lambda is cv.lasso$lambda.min = 0.01843874. The lambda.1se is cv.lasso$lambda.1se = 3.375651. This corresponds to a cross-validated MSE of
cv.lasso$cvm[which(cv.lasso$lambda == cv.lasso$lambda.1se)] = 57.5393
We can access the cross-validated standard errors directly from GLMNET's output as follows:
cv.lasso$cvsd[which(cv.lasso$lambda == cv.lasso$lambda.min)] = 15.40236
So the cross-validated MSE one standard error away is
43.51256 + 15.40236 = 58.91492
This is just slightly higher than the cross-validated MSE at lambda.1se above (i.e. 57.5393). If we look at the cross-validated MSE at the lambda just before the lambda.1se, it is:
cv.lasso$cvm[which(cv.lasso$lambda == cv.lasso$lambda.1se)-1] = 59.89079
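Putting these pieces together, lambda.1se can be recovered directly from the cvm and cvsd vectors (a short sketch based on the definition quoted at the top; with the same seed it should return 3.375651):
se.at.min <- cv.lasso$cvsd[which.min(cv.lasso$cvm)]
max(cv.lasso$lambda[cv.lasso$cvm <= min(cv.lasso$cvm) + se.at.min])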
So now that we can reconcile GLMNET's output, let me explain why you are not getting the same thing using your calculation:
cv.lasso$cvm contains the cross-validated mean MSE for each value of lambda.
When we say 1 standard error, we are not talking about the standard error across the lambda's but the standard error across the folds for a given lambda.
Continuing with the above point, at lambda.min, we have 10 folds. We fit 10 models and have 10 out-of-sample MSE's. The mean of these 10 MSE's is given by cv.lasso$cvm[which(cv.lasso$lambda == cv.lasso$lambda.min)]. The standard deviation of these 10 MSE's is given by cv.lasso$cvsd[which(cv.lasso$lambda == cv.lasso$lambda.min)]. What we are not given in the GLMNET output are the 10 MSE's at lambda.min. If we had this, then we should be able to replicate the standard error by using the formula you have above.
Let me know if this helps.
EDIT: Let's do an example whereby we define in advance three folds
set.seed(100)
folds = sample(1:3, nrow(x), replace = T)
cv.lasso = cv.glmnet(x,y,type.measure = "mse",alpha=1, keep =T, foldid = folds)
Note that
> min(cv.lasso$cvm)
[1] 42.76584
> cv.lasso$cvsd[which.min(cv.lasso$cvm)]
[1] 17.89725
(These differ from the earlier example above because we've defined our own folds)
Note also that I have an additional parameter keep = T in the cv.glmnet call. This returns the fold predictions for each lambda. You can extract them for the optimal lambda by doing:
cv.lasso$fit.preval[,which.min(cv.lasso$cvm)]
Before we proceed, let's create a data frame with the response, the fold predictions, and the corresponding folds:
library(data.table)
OOSPred = data.table(y = y,
                     predictions = cv.lasso$fit.preval[, which.min(cv.lasso$cvm)],
                     folds = folds)
Here is a preview of the first 10 rows:
> head(OOSPred, 10)
y predictions folds
1: 0.00632 -0.7477977 1
2: 0.02731 -1.3823830 1
3: 0.02729 -3.4826143 2
4: 0.03237 -4.4419795 1
5: 0.06905 -3.4373021 2
6: 0.02985 -2.5256505 2
7: 0.08829 0.7343478 3
8: 0.14455 1.1262462 2
9: 0.21124 4.0507847 2
10: 0.17004 0.5859587 1
For example, for the cases where folds = 1, a model was built on folds #2 & #3 and then predictions were obtained for the observations in fold #1. We compute the MSE by fold now:
OOSPredSum = OOSPred[, list(MSE = mean((y - predictions)^2)), by = folds]
folds MSE
1: 1 27.51469
2: 2 75.72847
3: 3 19.93480
Finally, we return the mean MSE and standard error of the MSE across the folds
> OOSPredSum[, list("Mean MSE" = mean(MSE), "Standard Error" = sd(MSE)/sqrt(3))]
Mean MSE Standard Error
1: 41.05932 17.47213
GLMNET may be performing a weighted mean and standard error (weighted by the number of observations in each fold), which is why the figures above are close but do not match exactly.
I think the procedure is:
For each ƛ, it fits k models (k = the number of folds into which the data set has been split for the cross-validation).
For each ƛ, it then calculates the mean and standard deviation of those k errors, i.e. mean(k errors) and sd(k errors).
Let's say that we have ƛmin and se(ƛmin) (calculated in step 2). Now, ƛse is defined as the "largest value of lambda such that error is within 1 standard error of the minimum". Then the condition for ƛse is:
ƛse in [ƛmin - se(ƛmin), ƛmin + se(ƛmin)]
Then ƛse = max(ƛ) over the ƛ which fulfill the condition above.
I can show you an example:
lasso_cv <- cv.glmnet(x = x, y = endpoint, alpha = 1, lambda = lambdas_to_try,
                      standardize = TRUE, nfolds = 10, type.measure = "auc",
                      family = "binomial")
Note that ƛmin is:
lasso_cv$lambda.min
[1] 0.007742637
And se(ƛmin) is:
serrorlmin <- lasso_cv$cvsd[which(lasso_cv$lambda == lasso_cv$lambda.min)]
serrorlmin
[1] 0.01058009
Then, the range in which ƛse is chosen is:
rang <- c(lasso_cv$lambda.min - serrorlmin,lasso_cv$lambda.min + serrorlmin)
[1] -0.002837457 0.018322731
And to find it:
max(lasso_cv$lambda[lasso_cv$lambda>=rang[1] & lasso_cv$lambda<=rang[2]])
[1] 0.01629751
And this value matches ƛse!
lasso_cv$lambda.1se # 0.01629751
I hope it helps!

Error in optim(): searching for global minimum for a univariate function

I am trying to optimize a function in R.
The function is the likelihood function of the negative binomial distribution when estimating only the mu parameter. This should not be a problem, since the function clearly has just one maximum. But I am not able to reach the desired result.
The function to be optimized is:
EMV <- function(data, par) {
  Mi <- par
  Phi <- 2
  N <- NROW(data)
  Resultado <- log(Mi/(Mi + Phi))*sum(data) + N*Phi*log(Phi/(Mi + Phi))
  return(Resultado)
}
The data is a vector of negative binomial variates with mu = 2 and theta = 2:
library(MASS)  # for rnegbin
data <- rnegbin(10000, mu = 2, theta = 2)
When I plot the function with mu as the variable, using the following code:
x <- seq(0.1, 100, 0.02)
z <- EMV(data,0.1)
for (aux in x) {z <- rbind(z, EMV(data,aux))}
z <- z[2:NROW(z)]
plot(x,z)
I get the following curve:
And the location of the maximum of z is close to the parameter value (2):
x[which.max(z)]
But the optimization does not work with BFGS:
Error in optim(par = theta, fn = EMV, data = data, method = "BFGS") :
non-finite finite-difference value [1]
And it does not reach the right value using SANN, for example:
$par
[1] 5.19767e-05
$value
[1] -211981.8
$counts
function gradient
10000 NA
$convergence
[1] 0
$message
NULL
The questions are:
What am I doing wrong?
Is there a way to tell optim that the param should be bigger than 0?
Is there a way to tell optim that I want to maximize the function? (I am afraid optim is trying to minimize it and is heading toward a very small parameter value where the function returns its smallest values.)
Minimization or Maximization?
?optim says it can do maximization, but that is only mentioned in parentheses, so minimization is the default:
fn: A function to be minimized (or maximized) ...
Thus, if we want to maximize an objective function, we need to multiply it by -1 and then minimize that. This is quite a common situation. In statistics, we often want to maximize the log-likelihood, so to use optim() we minimize the negative log-likelihood.
Which method to use?
If we are only doing 1-D minimization, we should use method "Brent". This method lets us specify a lower bound and an upper bound for the search region. The search starts from one bound and moves toward the other until it hits the minimum or reaches the boundary. Such a specification can help you constrain your parameters. For example, if you don't want mu to be smaller than 0, just set lower = 0.
When we move to 2-D or higher dimensions, we should resort to "BFGS". In this case, if we want to constrain one of our parameters, say a, to be positive, we can take the log transform log_a = log(a) and reparameterize the objective function in terms of log_a. Now log_a is unconstrained. The same goes when we want to constrain multiple parameters to be positive.
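As a sketch of that trick applied to the OP's EMV (my own illustration; the 1-D problem here is better handled with "Brent" below, but this shows the log reparameterization with BFGS):
EMV_log <- function(data, par) {
  Mi <- exp(par)          # mu = exp(log_mu) is guaranteed positive
  Phi <- 2
  N <- NROW(data)
  -(log(Mi/(Mi + Phi))*sum(data) + N*Phi*log(Phi/(Mi + Phi)))
}
fit <- optim(par = 0, fn = EMV_log, data = data, method = "BFGS")
exp(fit$par)              # estimate of mu back on the original scale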
How to change your code?
EMV <- function(data, par) {
  Mi <- par
  Phi <- 2
  N <- NROW(data)
  Resultado <- log(Mi/(Mi + Phi))*sum(data) + N*Phi*log(Phi/(Mi + Phi))
  return(-1 * Resultado)
}
theta <- 1  # a starting value for mu
optim(par = theta, fn = EMV, data = data, method = "Brent", lower = 0, upper = 1E5)
The help file for optim says: "By default optim performs minimization, but it will maximize if control$fnscale is negative." So if you either multiply your function output by -1 or change the control object input, you should get the right answer.
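As a sketch of the fnscale option (my own, and note it uses the OP's original EMV, the version that returns the log-likelihood rather than its negative):
EMV_orig <- function(data, par) {
  Mi <- par
  Phi <- 2
  N <- NROW(data)
  log(Mi/(Mi + Phi))*sum(data) + N*Phi*log(Phi/(Mi + Phi))
}
optim(par = 2, fn = EMV_orig, data = data, method = "Brent",
      lower = 1e-6, upper = 1e5, control = list(fnscale = -1))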

Maximum likelihood estimation for a three-parameter Weibull distribution in R

I want to estimate the scale, shape and threshold parameters of a 3p Weibull distribution.
What I've done so far is the following:
Referring to this post, Fitting a 3 parameter Weibull distribution in R,
I've used the functions
EPS = sqrt(.Machine$double.eps) # "epsilon" for very small numbers
llik.weibull <- function(shape, scale, thres, x)
{
  sum(dweibull(x - thres, shape, scale, log=T))
}
thetahat.weibull <- function(x)
{
  if(any(x <= 0)) stop("x values must be positive")
  toptim <- function(theta) -llik.weibull(theta[1], theta[2], theta[3], x)
  mu = mean(log(x))
  sigma2 = var(log(x))
  shape.guess = 1.2 / sqrt(sigma2)
  scale.guess = exp(mu + (0.572 / shape.guess))
  thres.guess = 1
  res = nlminb(c(shape.guess, scale.guess, thres.guess), toptim, lower=EPS)
  c(shape=res$par[1], scale=res$par[2], thres=res$par[3])
}
to "pre-estimate" my Weibull parameters, such that I can use them as initial values for the argument "start" in the "fitdistr" function of the MASS-Package.
You might ask why I want to estimate the parameters twice... the reason is that I need the variance-covariance matrix of the estimates, which fitdistr also returns.
EXAMPLE:
set.seed(1)
thres <- 450
dat <- rweibull(1000, 2.78, 750) + thres
pre_mle <- thetahat.weibull(dat)
my_wb <- function(x, shape, scale, thres) {
  dweibull(x - thres, shape, scale)
}
library(MASS)  # for fitdistr
ml <- fitdistr(dat, densfun = my_wb,
               start = list(shape = round(pre_mle[1], digits = 0),
                            scale = round(pre_mle[2], digits = 0),
                            thres = round(pre_mle[3], digits = 0)))
ml
> ml
shape scale thres
2.942548 779.997177 419.996196 ( 0.152129) ( 32.194294) ( 28.729323)
> ml$vcov
shape scale thres
shape 0.02314322 4.335239 -3.836873
scale 4.33523868 1036.472551 -889.497580
thres -3.83687258 -889.497580 825.374029
This works quite well for cases where the shape parameter is above 1. Unfortunately, my approach also needs to handle cases where the shape parameter is smaller than 1.
The reason why this is not possible for shape parameters that are smaller than 1 is described here: http://www.weibull.com/hotwire/issue148/hottopics148.htm
In Case 1, "All three parameters are unknown", the following is said:
"Define the smallest failure time of ti to be tmin. Then when γ → tmin, ln(tmin - γ) → -∞. If β is less than 1, then (β - 1)ln(tmin - γ) goes to +∞ . For a given solution of β, η and γ, we can always find another set of solutions (for example, by making γ closer to tmin) that will give a larger likelihood value. Therefore, there is no MLE solution for β, η and γ."
This makes a lot of sense. For this very reason I want to do it the way they described it on this page.
"In Weibull++, a gradient-based algorithm is used to find the MLE solution for β, η and γ. The upper bound of the range for γ is arbitrarily set to be 0.99 of tmin. Depending on the data set, either a local optimal or 0.99tmin is returned as the MLE solution for γ."
I want to set a feasible interval for gamma (in my code called 'thres') such that the solution is between (0, .99 * tmin).
Does anyone have an idea how to solve this problem?
In the function fitdistr there seems to be no option for doing a constrained MLE, constraining one parameter.
Another way to go could be estimating the asymptotic variance via the outer product of the score vectors. The score vector could be taken from the function thetahat.weibull(x) used above. But calculating the outer product manually (without a ready-made function) seems very time-consuming, and it does not solve the problem of the constrained ML estimation.
Best regards,
Tim
It's not too hard to set up a constrained MLE. I'm going to do this in bbmle::mle2; you could also do it in stats4::mle, but bbmle has some additional features.
The larger issue is that it's theoretically difficult to define the sampling variance of an estimate when it's on the boundary of the allowed space; the theory behind Wald variance estimates breaks down. You can still calculate confidence intervals by likelihood profiling ... or you could bootstrap. I ran into a variety of optimization issues when doing this ... I haven't really thought about whether there are specific reasons.
Reformat three-parameter Weibull function for mle2 use (takes x as first argument, takes log as an argument):
dweib3 <- function(x, shape, scale, thres, log=TRUE) {
  dweibull(x - thres, shape, scale, log=log)
}
Starting function (slightly reformatted):
weib3_start <- function(x) {
  mu <- mean(log(x))
  sigma2 <- var(log(x))
  logshape <- log(1.2 / sqrt(sigma2))
  logscale <- mu + (0.572 / logshape)
  logthres <- log(0.5*min(x))
  list(logshape = logshape, logsc = logscale, logthres = logthres)
}
Generate data:
set.seed(1)
dat <- data.frame(x=rweibull(1000, 2.78, 750) + 450)
Fit model: I'm fitting the parameters on the log scale for convenience and stability, but you could use boundaries at zero as well.
tmin <- log(0.99*min(dat$x))
library(bbmle)
m1 <- mle2(x ~ dweib3(exp(logshape), exp(logsc), exp(logthres)),
           data = dat,
           upper = c(logshape = Inf, logsc = Inf, logthres = tmin),
           start = weib3_start(dat$x),
           method = "L-BFGS-B")
vcov(m1), which should normally provide a variance-covariance estimate (unless the estimate is on the boundary, which is not the case here) gives NaN values ... not sure why without more digging.
library(emdbook)
tmpf <- function(x,y) m1@minuslogl(logshape = x,
                                   logsc = coef(m1)["logsc"],
                                   logthres = y)
tmpf(1.1,6)
s1 <- curve3d(tmpf,
xlim=c(1,1.2),ylim=c(5.9,tmin),sys3d="image")
with(s1,contour(x,y,z,add=TRUE))
h <- lme4:::hessian(function(x) do.call(m1@minuslogl, as.list(x)), coef(m1))
vv <- solve(h)
diag(vv) ## [1] 0.002672240 0.001703674 0.004674833
(se <- sqrt(diag(vv))) ## standard errors
## [1] 0.05169371 0.04127558 0.06837275
cov2cor(vv)
## [,1] [,2] [,3]
## [1,] 1.0000000 0.8852090 -0.8778424
## [2,] 0.8852090 1.0000000 -0.9616941
## [3,] -0.8778424 -0.9616941 1.0000000
This is the variance-covariance matrix of the log-scaled variables. If you want to convert to the variance-covariance matrix on the original scale, you need to scale by (x_i)*(x_j) (i.e. by the derivatives of the transformation exp(x)).
outer(exp(coef(m1)),exp(coef(m1))) * vv
## logshape logsc logthres
## logshape 0.02312803 4.332993 -3.834145
## logsc 4.33299307 1035.966372 -888.980794
## logthres -3.83414498 -888.980794 824.831463
I don't know why this doesn't work with numDeriv - I would be very careful with the variance estimates above. (Maybe too close to the boundary for Richardson extrapolation to work?)
library(numDeriv)
hessian(function(x) do.call(m1@minuslogl, as.list(x)), coef(m1))
grad(function(x) do.call(m1@minuslogl, as.list(x)), coef(m1)) ## looks OK
vcov(m1)
The profiles look OK ... (we have to supply std.err because the Hessian isn't invertible)
pp <- profile(m1,std.err=c(0.01,0.01,0.01))
par(las=1,bty="l",mfcol=c(1,3))
plot(pp,show.points=TRUE)
confint(pp)
## 2.5 % 97.5 %
## logshape 0.9899645 1.193571
## logsc 6.5933070 6.755399
## logthres 5.8508827 6.134346
Alternatively, we can do this on the original scale ... one possibility would be to use the log-scaling to fit, then refit starting from those parameters on the original scale.
wstart <- as.list(exp(unlist(weib3_start(dat$x))))
names(wstart) <- gsub("log","",names(wstart))
m2 <- mle2(x ~ dweib3(shape, sc, thres),
           data = dat,
           lower = c(shape = 0.001, sc = 0.001, thres = 0.001),
           upper = c(shape = Inf, sc = Inf, thres = exp(tmin)),
           start = wstart,
           method = "L-BFGS-B")
vcov(m2)
## shape sc thres
## shape 0.02312399 4.332057 -3.833264
## sc 4.33205658 1035.743511 -888.770787
## thres -3.83326390 -888.770787 824.633714
all.equal(unname(coef(m2)),unname(exp(coef(m1))),tol=1e-4)
About the same as the values above.
We can fit with a small shape if we are a little more careful about bounding the parameters, but now we end up on the boundary for the threshold, which will cause lots of problems for the variance calculations.
set.seed(1)
dat <- data.frame(x = rweibull(1000, .53, 365) + 100)
tmin <- log(0.99 * min(dat$x))
m1 <- mle2(x ~ dweib3(exp(logshape), exp(logsc), exp(logthres)),
           lower = c(logshape = -10, logscale = 0, logthres = 0),
           upper = c(logshape = 20, logsc = 20, logthres = tmin),
           data = dat,
           start = weib3_start(dat$x), method = "L-BFGS-B")
For censored data, you need to replace dweibull with pweibull; see Errors running Maximum Likelihood Estimation on a three parameter Weibull cdf for some hints.
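For example, a minimal sketch of such a right-censored negative log-likelihood (my own illustration, not from the linked question; status is 1 for an observed failure and 0 for a right-censored observation):
nll_cens <- function(par, x, status) {
  shape <- par[1]; scale <- par[2]; thres <- par[3]
  ll <- ifelse(status == 1,
               dweibull(x - thres, shape, scale, log = TRUE),
               pweibull(x - thres, shape, scale, lower.tail = FALSE, log.p = TRUE))
  -sum(ll)
}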
Another possible solution is to do Bayesian inference. Using scale priors on the shape and scale parameters and a uniform prior on the location parameter, you can easily run Metropolis-Hastings as follows. It might be advisable to reparameterize in terms of log(shape), log(scale) and log(y_min - location), because the posterior for some of the parameters becomes strongly skewed, in particular for the location parameter. Note that the output below shows the posterior for the backtransformed parameters.
library(MCMCpack)
logposterior <- function(par, y) {
  gamma <- min(y) - exp(par[3])
  sum(dweibull(y - gamma, exp(par[1]), exp(par[2]), log=TRUE)) + par[3]
}
y <- rweibull(100,shape=.8,scale=10) + 1
chain0 <- MCMCmetrop1R(logposterior, rep(0,3), y=y, V=.01*diag(3))
chain <- MCMCmetrop1R(logposterior, rep(0,3), y=y, V=var(chain0))
plot(exp(chain))
summary(exp(chain))
This produces the following output
#########################################################
The Metropolis acceptance rate was 0.43717
#########################################################
Iterations = 501:20500
Thinning interval = 1
Number of chains = 1
Sample size per chain = 20000
1. Empirical mean and standard deviation for each variable,
plus standard error of the mean:
Mean SD Naive SE Time-series SE
[1,] 0.81530 0.06767 0.0004785 0.001668
[2,] 10.59015 1.39636 0.0098738 0.034495
[3,] 0.04236 0.05642 0.0003990 0.001174
2. Quantiles for each variable:
2.5% 25% 50% 75% 97.5%
var1 0.6886083 0.768054 0.81236 0.8608 0.9498
var2 8.0756210 9.637392 10.50210 11.4631 13.5353
var3 0.0003397 0.007525 0.02221 0.0548 0.1939

Parameter estimation of a bivariate normal-lognormal mixture model

I have to create a model which is a mixture of a normal and a log-normal distribution. To create it, I need to estimate the 2 covariance matrices and the mixing parameter (7 parameters in total) by maximizing the log-likelihood function. This maximization has to be performed by the nlm routine.
As I use relative data, the means are known and equal to 1.
I’ve already tried to do it in 1 dimension (with 1 set of relative data) and it works well. However, when I introduce the 2nd set of relative data, I get illogical results for the correlation and a lot of warning messages (25 in all).
To estimate these parameters, I first defined the log-likelihood function using the two functions dmvnorm and dlnorm.rplus. Then I assigned starting values for the parameters and finally used the nlm routine to estimate them (see the script below).
P <- read.ascii.grid("d:/Documents/JOINT_FREQUENCY/grid_E727_P-3000.asc", return.header=FALSE);
V <- read.ascii.grid("d:/Documents/JOINT_FREQUENCY/grid_E727_V-3000.asc", return.header=FALSE);
p <- c(P); # transform matrix into a vector
v <- c(V);
p <- p[!is.na(p)] # removing NA values
v <- v[!is.na(v)]
p_rel <- p/mean(p) # transforming the data to relative values
v_rel <- v/mean(v)
PV <- cbind(p_rel, v_rel) # create a matrix of vectors
L <- function(par, p_rel, v_rel) {
  return(-sum(log(
    (1 - par[7]) * dmvnorm(PV, mean = c(1, 1),
                           sigma = matrix(c(par[1]^2, par[1]*par[2]*par[3],
                                            par[1]*par[2]*par[3], par[2]^2), nrow = 2, ncol = 2)) +
    par[7] * dlnorm.rplus(PV, meanlog = c(1, 1),
                          varlog = matrix(c(par[4]^2, par[4]*par[5]*par[6],
                                            par[4]*par[5]*par[6], par[5]^2), nrow = 2, ncol = 2))
  )))
}
par.start <- c(0.74, 0.66, 0.40, 1.4, 1.2, 0.4, 0.5) # starting values for the parameters
result <- nlm(L, par.start, v_rel=v_rel, p_rel=p_rel, hessian=TRUE, iterlim=200, check.analyticals=TRUE)
Warning messages:
1: In log(eigen(sigma, symmetric = TRUE, only.values = TRUE)$values) :
NaNs produced
2: In sqrt(2 * pi * det(varlog)) : NaNs produced
3: In nlm(L, par.start, p_rel = p_rel, v_rel = v_rel, hessian = TRUE) :
NA/Inf replaced by maximum positive value
4: In log(eigen(sigma, symmetric = TRUE, only.values = TRUE)$values) :
NaNs produced
... and so on, up to 25.
par.hat <- result$estimate
cat("sigN_p =", par.hat[1], "\n", "sigN_v =", par.hat[2], "\n", "rhoN =", par.hat[3], "\n",
    "sigLN_p =", par.hat[4], "\n", "sigLN_v =", par.hat[5], "\n", "rhoLN =", par.hat[6], "\n",
    "mixing parameter =", par.hat[7], "\n")
sigN_p = 0.5403361
sigN_v = 0.6667375
rhoN = 0.6260181
sigLN_p = 1.705626
sigLN_v = 1.592832
rhoLN = 0.9735974
mixing parameter = 0.8113369
Does someone know what is wrong in my model, or what I should do to find these parameters in 2 dimensions?
Thank you very much for taking time to look at my questions.
Regards,
Gladys Hertzog
When I do these kinds of optimization problems, I find that it's important to make sure that all the variables I'm optimizing over are constrained to plausible values. For example, standard deviation variables have to be positive, and from knowledge of the situation that I'm modelling I'll probably be able to put an upper bound on all my standard deviation variables as well. So if s is one of my standard deviation variables, and if m is the maximum value that I want it to take, instead of working with s I'll solve for the variable z which is related to s via
s = m / (1 + e^(-z))
In that formula, z is unconstrained, but s must lie between 0 and m. This is vital because optimization routines where the variables are not constrained to plausible values will often try completely implausible values while they're trying to bound the solution. Implausible values often cause problems with, e.g., precision, which then result in NaNs, etc. The general formula that I use for constraining a single variable x to lie between a and b is
x = a + (b - a) / (1 + e^(-z))
However, regarding your particular problem where you're looking for covariance matrices, a more sophisticated approach is necessary than simply bounding all the individual variables. Covariance matrices must be positive semi-definite, so if you're simply optimizing the individual values in the matrix, the optimization will probably fail (producing NaNs) if a matrix which isn't positive definite is fed into the likelihood function. To get around this problem, one approach is to solve for the Cholesky decomposition of the covariance matrix instead of the covariance matrix itself. My guess is that this is probably what's causing your optimization to fail.
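For instance, a minimal sketch of that idea for a single 2x2 covariance matrix (my own illustration, not the answerer's code): optimize over an unconstrained vector and rebuild the covariance from a lower-triangular Cholesky factor, which is positive (semi-)definite by construction.
chol_to_cov <- function(par) {
  L <- matrix(c(exp(par[1]), par[2],
                0,           exp(par[3])), nrow = 2)  # lower-triangular Cholesky factor
  L %*% t(L)                                          # always a valid covariance matrix
}
chol_to_cov(c(0.5, 0.3, -0.2))                        # example: symmetric and positive definite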
