Estimation method(s) of exponent available in DeMAND package, in R

I'm interested in finding the estimated value of the exponent of Pareto-distributed data using the DeMAND package available in R. Following https://rdrr.io/bioc/DeMAND/src/R/pareto.R, I installed it and ran the code as follows:
#### Functions for continuous power law or Pareto distributions
# Revision history at end of file
### Standard R-type functions for distributions:
# dpareto Probability density
# ppareto Probability distribution (CDF)
# qpareto Quantile function
# rpareto Random variable generation
### Functions for fitting:
# pareto.fit Fit Pareto to data
# .pareto.fit.threshold Determine scaling threshold and then fit
# --- not for direct use, call pareto.fit instead
# .pareto.fit.ml Fit Pareto to data by maximum likelihood
# --- not for direct use, call pareto.fit instead
# pareto.loglike Calculate log-likelihood under Pareto
# .pareto.fit.regression.cdf Fit Pareto data by linear regression on
# log-log CDF (disrecommended)
# --- not for direct use, call pareto.fit instead
# loglogslope Fit Pareto via regression, extract scaling
# exponent
# loglogrsq Fit Pareto via regression, extract R^2
### Functions for testing:
#
### Functions for visualization:
# plot.eucdf.loglog Log-log plot of the empirical upper cumulative
# distribution function, AKA survival function
# plot.survival.loglog Alias for plot.eucdf.loglog
### Back-stage functions, not intended for users:
# .ks.dist.for.pareto Find Kolmogorov-Smirnov distance between fitted
# and empirical distribution; called by
# .pareto.fit.threshold
# .ks.dist.fixed.pareto Find K-S distance between given Pareto and
# empirical distribution
# Probability density of Pareto distributions
# Gives NA on values below the threshold
# Input: Data vector, lower threshold, scaling exponent, "log" flag
# Output: Vector of (log) probability densities
dpareto <- function(x, threshold = 1, exponent, log=FALSE) {
# Avoid doing limited-precision arithmetic followed by logs if we want
# the log!
if (!log) {
prefactor <- (exponent-1)/threshold
f <- function(x) {prefactor*(x/threshold)^(-exponent)}
} else {
prefactor.log <- log(exponent-1) - log(threshold)
f <- function(x) {prefactor.log -exponent*(log(x) - log(threshold))}
}
d <- ifelse(x<threshold,NA,f(x))
return(d)
}
# Cumulative distribution function of the Pareto distributions
# Gives NA on values < threshold
# Input: Data vector, lower threshold, scaling exponent, usual flags
# Output: Vector of (log) probabilities
ppareto <- function(x, threshold=1, exponent, lower.tail=TRUE, log.p=FALSE) {
if ((!lower.tail) && (!log.p)) {
f <- function(x) {(x/threshold)^(1-exponent)}
}
if ((lower.tail) && (!log.p)) {
f <- function(x) { 1 - (x/threshold)^(1-exponent)}
}
if ((!lower.tail) && (log.p)) {
f <- function(x) {(1-exponent)*(log(x) - log(threshold))}
}
if ((lower.tail) && (log.p)) {
f <- function(x) {log(1 - (x/threshold)^(1-exponent))}
}
p <- ifelse(x < threshold, NA, f(x))
return(p)
}
# Quantiles of Pareto distributions
# Input: vector of probabilities, lower threshold, scaling exponent, usual flags
# Output: Vector of quantile values
qpareto <- function(p, threshold=1, exponent, lower.tail=TRUE, log.p=FALSE) {
# Quantile function for Pareto distribution
# P(x) = 1 - (x/xmin)^(1-a)
# 1-p = (x(p)/xmin)^(1-a)
# (1-p)^(1/(1-a)) = x(p)/xmin
# xmin*((1-p)^(1/(1-a))) = x(p)
# Upper quantile:
# U(x) = (x/xmin)^(1-a)
# u^(1/(1-a)) = x/xmin
# xmin * u^(1/(1-a)) = x
# log(xmin) + (1/(1-a)) log(u) = log(x)
if (log.p) {
p <- exp(p)
}
if (lower.tail) {
p <- 1-p
}
# This works, via the recycling rule
# q<-(p^(1/(1-exponent)))*threshold
q.log <- log(threshold) + (1/(1-exponent))*log(p)
q <- exp(q.log)
return(q)
}
# Generate Pareto-distributed random variates
# Input: Integer size, lower threshold, scaling exponent
# Output: Vector of real-valued random variates
rpareto <- function(n, threshold=1, exponent) {
# Using the transformation method, because we know the quantile function
# analytically
# Consider replacing with a non-R implementation of transformation method
ru <- runif(n)
r<-qpareto(ru,threshold,exponent)
return(r)
}
# Estimate parameters of Pareto distribution
# A wrapper for functions implementing actual methods
# Input: data vector, lower threshold (or "find", indicating it should be found
# from the data), method (likelihood or regression, defaulting to former)
# Output: List indicating type of distribution ("pareto"), parameters,
# information about fit (depending on method), OR a warning and NA
# if method is not recognized
pareto.fit <- function(data, threshold, method="ml") {
if (threshold == "find") {
return(.pareto.fit.threshold(data,method=method))
}
switch(method,
ml = { return(.pareto.fit.ml(data,threshold)) },
regression.cdf = { return(.pareto.fit.regression.cdf(data,threshold)) },
{ cat("Unknown method\n"); return(NA)}
)
}
# Estimate lower threshold of Pareto distribution
# Use the method in Clauset, Shalizi and Newman (2007): consider each distinct
# data value as a possible threshold, fit using that threshold, and then find
# the Kolmogorov-Smirnov distance between estimated and empirical distributions.
# Pick the threshold which minimizes this distance. The function then returns
# the output of one of the fixed-threshold estimators.
# Input: data vector, method (defaulting to ML)
# Output: List indicating type of distribution ("pareto"), parameters,
# information about fit (depending on method)
.pareto.fit.threshold <- function(data, method="ml") {
possibles <- unique(data)
ks.distances <- sapply(possibles,.ks.dist.for.pareto,data=data,method=method)
min.index = which.min(ks.distances)
min = possibles[min.index]
return(pareto.fit(data,threshold=min,method=method))
}
# Calculate the KS discrepancy between a data set and its fit Pareto
# distribution, assuming a given threshold. Not intended for users but rather
# for the .pareto.fit.threshold function.
# N.B., this KS statistic CANNOT be plugged in to the usual tables to find valid
# p-values, as the exponent has been estimated from the data.
# Input: real threshold, data vector, method flag
# Output: real-valued KS statistic
.ks.dist.for.pareto <- function(threshold,data,method="ml") {
model <- pareto.fit(data,threshold=threshold,method=method)
return(model$ks.dist)
}
# Calculate KS distance between a data set and a given Pareto distribution
# Not intended for users
# Input: real threshold, real exponent, data vector
# Output: real-valued KS statistic
.ks.dist.fixed.pareto <- function(data,threshold,exponent) {
data <- data[data>=threshold]
d <- suppressWarnings(ks.test(data,ppareto,threshold=threshold,exponent=exponent))
# ks.test complains about p-values when there are ties, we don't care
return(as.vector(d$statistic))
}
# Estimate scaling exponent of Pareto distribution by maximum likelihood
# Input: Data vector, lower threshold
# Output: List giving distribution type ("pareto"), parameters, log-likelihood
.pareto.fit.ml <- function (data, threshold) {
data <- data[data>=threshold]
n <- length(data)
x <- data/threshold
alpha <- 1 + n/sum(log(x))
loglike = pareto.loglike(data,threshold,alpha)
ks.dist <- .ks.dist.fixed.pareto(data,threshold=threshold,exponent=alpha)
fit <- list(type="pareto", exponent=alpha, xmin=threshold, loglike = loglike,
ks.dist = ks.dist, samples.over.threshold=n)
return(fit)
}
# Calculate log-likelihood under a Pareto distribution
# Input: Data vector, lower threshold, scaling exponent
# Output: Real-valued log-likelihood
pareto.loglike <- function(x, threshold, exponent) {
L <- sum(dpareto(x, threshold = threshold, exponent = exponent, log = TRUE))
return(L)
}
# Log-log plot of the survival function (empirical upper CDF) of a data set
# Input: Data vector, lower limit, upper limit, graphics parameters
# Output: None (returns NULL invisibly)
plot.survival.loglog <- function(x,from=min(x),to=max(x),...) {
plot.eucdf.loglog(x,from,to,...)
}
plot.eucdf.loglog <- function(x,from=min(x),to=max(x),type="l",...) {
# Use the "eucdf" function (below)
x <- sort(x)
x.eucdf <- eucdf(x)
# This is nice if the number of points is small...
plot(x,x.eucdf(x),xlim=c(from,to),log="xy",type=type,...)
# Should check how many points and switch over to a curve-type plot when
# it gets too big
invisible(NULL)
}
# Calculate the upper empirical cumulative distribution function of a
# one-dimensional data vector
# Uses the standard function ecdf
# Should, but does not yet, also produce a function of class "stepfun"
# (like ecdf)
# Input: data vector
# Output: a function
eucdf <- function(x) {
# Exploit built-in R function to get ordinary (lower) ECDF, Pr(X<=x)
x.ecdf <- ecdf(x)
# Now we want Pr(X>=x) = (1-Pr(X<=x)) + Pr(X==x)
# If x is one of the "knots" of the step function, i.e., a point with
# positive probability mass, should add that in to get Pr(X>=x)
# rather than Pr(X>x)
away.from.knot <- function(y) { 1 - x.ecdf(y) }
at.knot.prob.jump <- function(y) {
x.knots = knots(x.ecdf)
# Either get the knot number, or give zero if this was called
# away from a knot
k <- match(y,x.knots,nomatch=0)
if ((k==0) || (k==1)) { # Handle special cases
if (k==0) {
prob.jump = 0 # Not really a knot
} else {
prob.jump = x.ecdf(y) # Special handling of first knot
}
} else {
prob.jump = x.ecdf(y) - x.ecdf(x.knots[(k-1)]) # General case
}
return(prob.jump)
}
# Use one function or the other
x.eucdf <- function(y) {
baseline = away.from.knot(y)
jumps = sapply(y,at.knot.prob.jump)
ifelse (y %in% knots(x.ecdf), baseline+jumps, baseline)
}
return(x.eucdf)
}
# Calculate valid p-value for the goodness of fit of a power-law
# tail to a data set, via simulation
# Input: data vector (x), number of replications (m)
# Output: p-value
pareto.tail.ks.test <- function(x,m) {
x.pt <- pareto.fit(x,threshold="find")
x0 <- x.pt$xmin # extract parameters of fitted dist.
alpha <- x.pt$exponent
ntail <- sum(x>=x0) # How many samples in the tail?
n <- length(x)
ptail <- ntail/n # Total prob. of the tail
# Carve out the non-tail data points
body <- x[x < x0]
# Observed value of KS distance:
d.ks <- x.pt$ks.dist
# KS statistics of resamples:
r.ks <- replicate(m,.ks.resimulate.pareto.tail(n,ptail,x0,alpha,body))
p.value <- sum(r.ks >= d.ks)/m
return(p.value)
}
# Resimulate from a data set with a Pareto tail, estimate on
# the simulation and report the KS distance
# Inputs: Size of sample (n), probability of being in the tail (tail.p),
# threshold for tail (threshold), power law exponent (exponent),
# vector giving values in body (data.body)
# Output: KS distance
.ks.resimulate.pareto.tail <- function(n,tail.p,threshold,exponent,data.body) {
# Samples come from the tail with probability ptail, or else from the body
# decide randomly how many samples come from the tail
tail.samples <- rbinom(1,n,tail.p)
# Draw the samples from the tail
rtail <- rpareto(tail.samples,threshold,exponent)
# Draw the samples from the body (with replacement!)
rbody <- sample(data.body,n-tail.samples,replace=TRUE)
b <- c(rtail,rbody)
b.ks <- pareto.fit(b,threshold="find")$ks.dist
return(b.ks)
}
### The crappy linear regression way to fit a power law
# The common procedure is to fit to the binned density function, which is even
# crappier than to fit to the complementary distribution function; this
# currently only implements the latter
# First, produce the empirical complementary distribution function, as
# a pair of lists, {x}, {C(x)}
# Then regress log(C) ~ log(x)
# and report the slope and the R^2
# Input: Data vector, threshold
# Output: List with distributional parameters and information about the
# fit
.pareto.fit.regression.cdf <- function(x,threshold=1) {
# Discard data under threshold
x <- x[x>=threshold]
n <- length(x)
# We need the different observed values of x, in order
distinct_x <- sort(unique(x))
x.eucdf <- eucdf(x)
upper_probs <- x.eucdf(distinct_x)
loglogfit <- lm(log(upper_probs) ~ log(distinct_x))
intercept <- as.vector(coef(loglogfit)[1]) # primarily useful for plotting
slope <- as.vector(-coef(loglogfit)[2]) # Remember sign of parameterization
# But that's the exponent of the CDF, that of the pdf is one larger
# and is what we're parameterizing by
slope <- slope+1
r2 <- summary(loglogfit)$r.squared
loglike <- pareto.loglike(x, threshold, slope)
ks.dist <- .ks.dist.fixed.pareto(x,threshold=threshold,exponent=slope)
result <- list(type="pareto", exponent = slope, rsquare = r2,
log_x = log(distinct_x), log_p = log(upper_probs),
intercept = intercept, loglike = loglike, xmin=threshold,
ks.dist = ks.dist, samples.over.threshold=n)
return(result)
}
# Wrapper function to just get the exponent estimate
loglogslope <- function(x,threshold=1) {
llf <- .pareto.fit.regression.cdf(x,threshold)
exponent <- llf$exponent
return(exponent)
}
# Wrapper function to just get the R^2 values
loglogrsq <- function(x,threshold=1) {
llf <- .pareto.fit.regression.cdf(x,threshold)
r2 <- llf$rsquare
return(r2)
}
# Revision history:
# no release 2003 First draft
# v 0.0 2007-06-04 First release
# v 0.0.1 2007-06-29 Fixed "not" for "knot" typo, thanks to
# Nicholas A. Povak for bug report
# v 0.0.2 2007-07-22 Fixed bugs in plot.survival.loglog, thanks to
# Stefan Wehrli for report
# v 0.0.3 2008-03-02 Realized R has a "unique" function; added
# estimating xmin via method in minimal KS dist.
# v 0.0.4 2008-04-24 Made names of non-end-user functions start
# with period, hiding them in workspace
# v 0.0.5 2011-02-03 Suppressed the warning ks.test produces about
# not being able to calculate p-values in the
# presence of ties
pareto.fit(rpareto(100,2,3), "find", method='ml')
(Just copy-pasting the above code will reproduce the results.)
This did give me a result, but I need to know which estimation method has actually been used here to estimate the exponent. Is it the maximum likelihood estimation method or some other one? What does method='ml' stand for (m for maximum, l for likelihood?)? "Help" in R doesn't work for this particular package. Are there any other estimation methods available in this package, such as least squares, method of moments, etc.?
Knowing the exact answers to these questions would be really helpful, as I'm comparing the accuracy of several packages under distinct estimation methods, so knowing the estimation method precisely is essential. Thanks in advance.
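For completeness, this is the minimal call I'm running, together with the closed-form expression that .pareto.fit.ml above appears to evaluate, so it is clear which quantity I mean by "the exponent" (both lines should print the same number):
set.seed(1)
x   <- rpareto(100, threshold = 2, exponent = 3)
fit <- pareto.fit(x, threshold = 2, method = "ml")
fit$exponent                      # estimate returned by the code above
1 + length(x) / sum(log(x / 2))   # closed-form expression inside .pareto.fit.ml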

Related

Generate Random Survival Times From A Hazard Function Applying A Hazard Ratio with flexsurv

Please consider the following:
My aim is to draw random survival times from a flexible parametric multi-state model fitted with flexsurvreg (more specifically msfit.flexsurvreg) and applying some hazard ratio (HR, in this example set to 0.2).
I found an example to generate random survival times using any hazard function here (the link is given in the code comment below). This is also where I apply the HR.
Problem
With the actual data, I receive an error once the HR is below the value of 0.2: Error in uniroot((function(x) { : no sign change found in 1000 iterations.
This does not happen in the reproducible example below.
Questions
Is there another, better way than the one below to draw random survival times while applying a HR?
Can someone indicate why the "no sign change" error may occur and how this can be fixed?
Any help is greatly appreciated!
Minimal reproducible example
# Load package
library(flexsurv)
#> Loading required package: survival
# Load data
data("bosms3")
# Define hazard ratio
hr <- 0.2
# Fit model (weibull)
crwei <- flexsurvreg(formula = Surv(years, status) ~ trans + shape(trans),
data = bosms3, dist = "weibull")
# Create transition matrix
Q <- rbind(c(NA,1,2),c(NA,NA,3), c(NA,NA,NA))
# Capture parameters
pars <- pars.fmsm(crwei, trans=Q, newdata=data.frame(trans=1:3))
# Code from https://eurekastatistics.com/generating-random-survival-times-from-any-hazard-function/ ----
inverse = function(fn, min_x, max_x){
# Returns the inverse of a function for a given range.
# E.g. inverse(sin, 0, pi/2)(sin(pi/4)) equals pi/4 because 0 <= pi/4 <= pi/2
fn_inv = function(y){
uniroot((function(x){fn(x) - y}), lower=min_x, upper=max_x)[1]$root
}
return(Vectorize(fn_inv))
}
integrate_from_0 = function(fn, t){
int_fn = function(t) integrate(fn, 0, t)
result = sapply(t, int_fn)
value = unlist(result["value",])
msg = unlist(result["message",])
value[which(msg != "OK")] = NA
return(value)
}
random_survival_times = function(hazard_fn, n, max_time=10000){
# Given a hazard function, returns n random time-to-event observations.
cumulative_density_fn = function(t) 1 - exp(-integrate_from_0(hazard_fn, t))
inverse_cumulative_density_fn = inverse(cumulative_density_fn, 0, max_time)
return(inverse_cumulative_density_fn(runif(n)))
}
# Run with data ----
random_survival_times(hazard_fn = function(t){crwei$dfns$h(t, pars[[1]][1], pars[[1]][2]) * hr}, n = 100)
#> Error in integrate(fn, 0, t): non-finite function value
# Adapt random_survival time function replacing 0 with 0.1 ----
random_survival_times <- function(hazard_fn, n, max_time=10000){
# Given a hazard function, returns n random time-to-event observations.
cumulative_density_fn = function(t) 1 - exp(-integrate_from_0(hazard_fn, t))
inverse_cumulative_density_fn = inverse(cumulative_density_fn, 0.1, max_time)
return(inverse_cumulative_density_fn(runif(n)))
}
# Run again with data ----
random_survival_times(hazard_fn = function(t){crwei$dfns$h(t, pars[[1]][1], pars[[1]][2]) * hr}, n = 100)
#> Error in uniroot((function(x) {: f() values at end points not of opposite sign
# Adapt inverse adding extendedInt = "yes" ----
inverse <- function(fn, min_x, max_x){
# Returns the inverse of a function for a given range.
# E.g. inverse(sin, 0, pi/2)(sin(pi/4)) equals pi/4 because 0 <= pi/4 <= pi/2
fn_inv <- function(y){
uniroot((function(x){fn(x) - y}), lower=min_x, upper=max_x,
extendInt = "yes" # extendInt added because of error on some distributions: "Error in uniroot((function(x) { : f() values at end points not of opposite sign. Solution found here: https://stackoverflow.com/questions/38961221/uniroot-solution-in-r
)[1]$root
}
return(Vectorize(fn_inv))
}
# Run again with data ----
res <- random_survival_times(hazard_fn = function(t){crwei$dfns$h(t, pars[[1]][1], pars[[1]][2]) * hr}, n = 100)
res[1:5]
#> [1] 1.074281 13.688663 30.896637 159.643827 15.805103
Created on 2022-10-18 with reprex v2.0.2
This method of sampling survival times basically works by sampling a random uniform(0,1) number p and finding x for which the survival probability is p. The uniroot step is used to solve S(x) = p by a numerical search. In your case, it is having difficulty finding a solution after 1000 steps.
I've seen this happen, and fixed it by adding, e.g., uniroot(..., maxiter=10000) to tell it to try a bit harder to find the solution. That's always been enough in my tests, though those may be limited. If that doesn't work, I'd advise digging in by hand and examining the survival curve that you are trying to invert - it may be invalid due to some parameter value being extreme.
(This kind of thing is done in the function qgeneric in the flexsurv package, though it borrows a vectorised version of uniroot from the rstpm2 package which is faster.)
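For illustration, here is a minimal sketch of that suggestion applied to the question's inverse() helper; the only change is passing maxiter (while keeping extendInt) through to uniroot, and the surrounding names are taken from the question's code:
inverse <- function(fn, min_x, max_x){
  fn_inv <- function(y){
    # allow uniroot more iterations before it gives up on the root search
    uniroot(function(x){fn(x) - y}, lower = min_x, upper = max_x,
            extendInt = "yes", maxiter = 10000)$root
  }
  return(Vectorize(fn_inv))
}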

Sample from a custom likelihood function

I have the following likelihood function which I used in a rather complex model (in practice on a log scale):
library(plyr)
dcustom=function(x,sd,L,R){
R. = (log(R) - log(x))/sd
L. = (log(L) - log(x))/sd
ll = pnorm(R.) - pnorm(L.)
return(ll)
}
df=data.frame(Range=seq(100,500),sd=rep(0.1,401),L=200,U=400)
df=mutate(df, Likelihood = dcustom(Range, sd,L,U))
with(df,plot(Range,Likelihood,type='l'))
abline(v=200)
abline(v=400)
In this function, the sd is predetermined and L and R are "observations" (very much like the endpoints of a uniform distribution), so all 3 of them are given. The above function provides a large likelihood (1) if the model estimate x (a derived parameter) lies within the L-R range, a smooth likelihood decrease (between 0 and 1) near the bounds (whose sharpness depends on the sd), and 0 if it is too far outside.
This function works very well to obtain estimates of x, but now I would like to do the inverse: draw a random x from the above function. If I would do this many times, I would generate a histogram that follows the shape of the curve plotted above.
The ultimate goal is to do this in C++, but I think it would be easier for me if I could first figure out how to do this in R.
There's some useful information online that helps me start (http://matlabtricks.com/post-44/generate-random-numbers-with-a-given-distribution, https://stats.stackexchange.com/questions/88697/sample-from-a-custom-continuous-distribution-in-r) but I'm still not entirely sure how to do it and how to code it.
I presume (not sure at all!) the steps are:
transform likelihood function into probability distribution
calculate the cumulative distribution function
inverse transform sampling
Is this correct and if so, how do I code this? Thank you.
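Those three steps can be coded fairly directly with a grid approximation; here is a minimal sketch reusing dcustom() from above (the grid range and resolution, and the helper name rcustom, are arbitrary choices for illustration):
xs   <- seq(100, 500, length.out = 2000)          # support grid
dens <- dcustom(xs, sd = 0.1, L = 200, R = 400)   # unnormalized "density"
dens <- dens / sum(dens)                          # 1. normalize on the grid
cdf  <- cumsum(dens)                              # 2. cumulative distribution
rcustom <- function(n) {                          # 3. inverse-transform sampling
  u <- runif(n)
  xs[pmin(findInterval(u, cdf) + 1, length(xs))]
}
hist(rcustom(1e5), breaks = 100)                  # histogram follows the plotted curve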
One idea might be to use the Metropolis-Hastings algorithm to obtain a sample from the distribution, given all the other parameters and your likelihood.
# metropolis hasting algorithm
set.seed(2018)
n_sample <- 100000
posterior_sample <- rep(NA, n_sample)
x <- 300 # starting value: I chose 300 based on your likelihood plot
for (i in 1:n_sample){
lik <- dcustom(x = x, sd = 0.1, L = 200, R = 400)
# propose a value for x (you can adjust the stepsize with the sd)
x.proposed <- x + rnorm(1, 0, sd = 20)
lik.proposed <- dcustom(x = x.proposed, sd = 0.1, L = 200, R = 400)
r <- lik.proposed/lik # this is the acceptance ratio
# accept the new value with probability equal to the ratio
if (runif(1) < r) {
x <- x.proposed
}
# record the current value whether or not the proposal was accepted,
# so the chain has the correct stationary distribution
posterior_sample[i] <- x
}
# plotting the density
approximate_distr <- na.omit(posterior_sample)
d <- density(approximate_distr)
plot(d, main = "Sample from distribution")
abline(v=200)
abline(v=400)
# If you now want to sample just a few values (for example, 5) you could use
sample(approximate_distr,5)
#[1] 281.7310 371.2317 378.0504 342.5199 412.3302

Calculation of DFFITS as diagnostic for Leverage and Influence in regression

I am trying to calculate DFFITS by hand. The value obtained should be equal to the first value returned by the dffits function. However, there must be something wrong with my own calculation.
attach(cars)
x1 <- lm(speed ~ dist, data = cars) # all observations
x2 <- lm(speed ~ dist, data = cars[-1,]) # without first obs
x <- model.matrix(speed ~ dist) # x matrix
h <- diag(x%*%solve(crossprod(x))%*%t(x)) # hat values
num_dffits <- x1$fitted.values[1] - x2$fitted.values[1] #Numerator
denom_dffits <- sqrt(anova(x2)$`Mean Sq`[2]*h[1]) #Denominator
df_fits <- num_dffits/denom_dffits #DFFITS
dffits(x1)[1] # DFFITS function
Your numerator is wrong. As you have removed the first datum from the second model, the corresponding predicted value is not in fitted(x2). We need to use predict(x2, cars[1, ]) in place of fitted(x2)[1].
Hat values can be efficiently computed by
h <- rowSums(qr.Q(x1$qr) ^ 2)
or using its R wrapper function
h <- hat(x1$qr, FALSE)
R also has a generic function for getting hat values:
h <- lm.influence(x1, FALSE)$hat
or its wrapper function
h <- hatvalues(x1)
You also don't have to call anova to get MSE:
c(crossprod(x2$residuals)) / x2$df.residual
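Putting those pieces together, a short sketch of the corrected by-hand calculation, reusing x1, x2 and cars from the question; the last two lines should agree:
h1  <- hatvalues(x1)[1]                                        # leverage of the first observation
num <- x1$fitted.values[1] - predict(x2, cars[1, ])            # corrected numerator
den <- sqrt(c(crossprod(x2$residuals)) / x2$df.residual * h1)  # s_(1) * sqrt(h_11)
num / den
dffits(x1)[1]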

Maximum Likelihood Estimation for three-parameter Weibull distribution in r

I want to estimate the scale, shape and threshold parameters of a 3p Weibull distribution.
What I've done so far is the following:
Referring to this post, Fitting a 3 parameter Weibull distribution in R,
I've used the functions
EPS = sqrt(.Machine$double.eps) # "epsilon" for very small numbers
llik.weibull <- function(shape, scale, thres, x)
{
sum(dweibull(x - thres, shape, scale, log=T))
}
thetahat.weibull <- function(x)
{
if(any(x <= 0)) stop("x values must be positive")
toptim <- function(theta) -llik.weibull(theta[1], theta[2], theta[3], x)
mu = mean(log(x))
sigma2 = var(log(x))
shape.guess = 1.2 / sqrt(sigma2)
scale.guess = exp(mu + (0.572 / shape.guess))
thres.guess = 1
res = nlminb(c(shape.guess, scale.guess, thres.guess), toptim, lower=EPS)
c(shape=res$par[1], scale=res$par[2], thres=res$par[3])
}
to "pre-estimate" my Weibull parameters, such that I can use them as initial values for the argument "start" in the "fitdistr" function of the MASS-Package.
You might ask why I want to estimate the parameters twice... reason is that I need the variance-covariance-matrix of the estimates which is also estimated by the fitdistr function.
EXAMPLE:
library(MASS) # for fitdistr
set.seed(1)
thres <- 450
dat <- rweibull(1000, 2.78, 750) + thres
pre_mle <- thetahat.weibull(dat)
my_wb <- function(x, shape, scale, thres) {
dweibull(x - thres, shape, scale)
}
ml <- fitdistr(dat, densfun = my_wb, start = list(shape = round(pre_mle[1], digits = 0), scale = round(pre_mle[2], digits = 0),
thres = round(pre_mle[3], digits = 0)))
ml
> ml
     shape        scale        thres
  2.942548   779.997177   419.996196
 (0.152129) (32.194294)  (28.729323)
> ml$vcov
shape scale thres
shape 0.02314322 4.335239 -3.836873
scale 4.33523868 1036.472551 -889.497580
thres -3.83687258 -889.497580 825.374029
This works quite well for cases where the shape parameter is above 1. Unfortunately, my approach also needs to handle cases where the shape parameter may be smaller than 1.
The reason why this is not possible for shape parameters smaller than 1 is described here: http://www.weibull.com/hotwire/issue148/hottopics148.htm
In Case 1 (all three parameters are unknown), the following is said:
"Define the smallest failure time of ti to be tmin. Then when γ → tmin, ln(tmin - γ) → -∞. If β is less than 1, then (β - 1)ln(tmin - γ) goes to +∞ . For a given solution of β, η and γ, we can always find another set of solutions (for example, by making γ closer to tmin) that will give a larger likelihood value. Therefore, there is no MLE solution for β, η and γ."
This makes a lot of sense. For this very reason I want to do it the way they described it on this page.
"In Weibull++, a gradient-based algorithm is used to find the MLE solution for β, η and γ. The upper bound of the range for γ is arbitrarily set to be 0.99 of tmin. Depending on the data set, either a local optimal or 0.99tmin is returned as the MLE solution for γ."
I want to set a feasible interval for gamma (in my code called 'thres') such that the solution is between (0, .99 * tmin).
Does anyone have an idea how to solve this problem?
In the function fitdistr there seems to be no option for doing a constrained MLE, constraining one parameter.
Another way to go could be the estimation of the asymptotic variance via the outer product of the score vectors. The score vector could be taken from the function thetahat.weibull(x) used above. But calculating the outer product manually (without a function) seems very time consuming and does not solve the problem of the constrained ML estimation.
Best regards,
Tim
It's not too hard to set up a constrained MLE. I'm going to do this in bbmle::mle2; you could also do it in stats4::mle, but bbmle has some additional features.
The larger issue is that it's theoretically difficult to define the sampling variance of an estimate when it's on the boundary of the allowed space; the theory behind Wald variance estimates breaks down. You can still calculate confidence intervals by likelihood profiling ... or you could bootstrap. I ran into a variety of optimization issues when doing this ... I haven't really thought about whether there are specific reasons for them.
Reformat three-parameter Weibull function for mle2 use (takes x as first argument, takes log as an argument):
dweib3 <- function(x, shape, scale, thres, log=TRUE) {
dweibull(x - thres, shape, scale, log=log)
}
Starting function (slightly reformatted):
weib3_start <- function(x) {
mu <- mean(log(x))
sigma2 <- var(log(x))
logshape <- log(1.2 / sqrt(sigma2))
logscale <- mu + (0.572 / logshape)
logthres <- log(0.5*min(x))
list(logshape = logshape, logsc = logscale, logthres = logthres)
}
Generate data:
set.seed(1)
dat <- data.frame(x=rweibull(1000, 2.78, 750) + 450)
Fit model: I'm fitting the parameters on the log scale for convenience and stability, but you could use boundaries at zero as well.
tmin <- log(0.99*min(dat$x))
library(bbmle)
m1 <- mle2(x~dweib3(exp(logshape),exp(logsc),exp(logthres)),
data=dat,
upper=c(logshape=Inf,logsc=Inf,
logthres=tmin),
start=weib3_start(dat$x),
method="L-BFGS-B")
vcov(m1), which should normally provide a variance-covariance estimate (unless the estimate is on the boundary, which is not the case here) gives NaN values ... not sure why without more digging.
library(emdbook)
tmpf <- function(x,y) m1@minuslogl(logshape=x,
logsc=coef(m1)["logsc"],
logthres=y)
tmpf(1.1,6)
s1 <- curve3d(tmpf,
xlim=c(1,1.2),ylim=c(5.9,tmin),sys3d="image")
with(s1,contour(x,y,z,add=TRUE))
h <- lme4:::hessian(function(x) do.call(m1@minuslogl,as.list(x)),coef(m1))
vv <- solve(h)
diag(vv) ## [1] 0.002672240 0.001703674 0.004674833
(se <- sqrt(diag(vv))) ## standard errors
## [1] 0.05169371 0.04127558 0.06837275
cov2cor(vv)
## [,1] [,2] [,3]
## [1,] 1.0000000 0.8852090 -0.8778424
## [2,] 0.8852090 1.0000000 -0.9616941
## [3,] -0.8778424 -0.9616941 1.0000000
This is the variance-covariance matrix of the log-scaled variables. If you want to convert to the variance-covariance matrix on the original scale, you need to scale by (x_i)*(x_j) (i.e. by the derivatives of the transformation exp(x)).
outer(exp(coef(m1)),exp(coef(m1))) * vv
## logshape logsc logthres
## logshape 0.02312803 4.332993 -3.834145
## logsc 4.33299307 1035.966372 -888.980794
## logthres -3.83414498 -888.980794 824.831463
I don't know why this doesn't work with numDeriv - I would be very careful with the variance estimates above. (Maybe too close to the boundary for Richardson extrapolation to work?)
library(numDeriv)
hessian(function(x) do.call(m1@minuslogl,as.list(x)),coef(m1)) ## doesn't work (see above)
grad(function(x) do.call(m1@minuslogl,as.list(x)),coef(m1)) ## looks OK
vcov(m1)
The profiles look OK ... (we have to supply std.err because the Hessian isn't invertible)
pp <- profile(m1,std.err=c(0.01,0.01,0.01))
par(las=1,bty="l",mfcol=c(1,3))
plot(pp,show.points=TRUE)
confint(pp)
## 2.5 % 97.5 %
## logshape 0.9899645 1.193571
## logsc 6.5933070 6.755399
## logthres 5.8508827 6.134346
Alternatively, we can do this on the original scale ... one possibility would be to use the log-scaling to fit, then refit starting from those parameters on the original scale.
wstart <- as.list(exp(unlist(weib3_start(dat$x))))
names(wstart) <- gsub("log","",names(wstart))
m2 <- mle2(x~dweib3(shape,sc,thres),
data=dat,
lower=c(shape=0.001,sc=0.001,thres=0.001),
upper=c(shape=Inf,sc=Inf,
thres=exp(tmin)),
start=wstart,
method="L-BFGS-B")
vcov(m2)
## shape sc thres
## shape 0.02312399 4.332057 -3.833264
## sc 4.33205658 1035.743511 -888.770787
## thres -3.83326390 -888.770787 824.633714
all.equal(unname(coef(m2)),unname(exp(coef(m1))),tol=1e-4)
About the same as the values above.
We can fit with a small shape, if we are a little more careful to bound the parameters, but now we end up on the boundary for the threshold, which will cause lots of problems for the variance calculations.
set.seed(1)
dat <- data.frame(x = rweibull(1000, .53, 365) + 100)
tmin <- log(0.99 * min(dat$x))
m1 <- mle2(x ~ dweib3(exp(logshape), exp(logsc), exp(logthres)),
lower=c(logshape=-10,logsc=0,logthres=0),
upper = c(logshape = 20, logsc = 20, logthres = tmin),
data = dat,
start = weib3_start(dat$x), method = "L-BFGS-B")
For censored data, you need to replace dweibull with pweibull; see Errors running Maximum Likelihood Estimation on a three parameter Weibull cdf for some hints.
Another possible solution is to do Bayesian inference. Using scale priors on the shape and scale parameters and a uniform prior on the location parameter, you can easily run Metropolis-Hastings as follows. It might be advisable to reparameterize in terms of log(shape), log(scale) and log(y_min - location), because the posterior for some of the parameters becomes strongly skewed, in particular for the location parameter. Note that the output below shows the posterior for the back-transformed parameters.
library(MCMCpack)
logposterior <- function(par,y) {
gamma <- min(y) - exp(par[3])
sum(dweibull(y-gamma,exp(par[1]),exp(par[2]),log=TRUE)) + par[3]
}
y <- rweibull(100,shape=.8,scale=10) + 1
chain0 <- MCMCmetrop1R(logposterior, rep(0,3), y=y, V=.01*diag(3))
chain <- MCMCmetrop1R(logposterior, rep(0,3), y=y, V=var(chain0))
plot(exp(chain))
summary(exp(chain))
This produces the following output
#########################################################
The Metropolis acceptance rate was 0.43717
#########################################################
Iterations = 501:20500
Thinning interval = 1
Number of chains = 1
Sample size per chain = 20000
1. Empirical mean and standard deviation for each variable,
plus standard error of the mean:
Mean SD Naive SE Time-series SE
[1,] 0.81530 0.06767 0.0004785 0.001668
[2,] 10.59015 1.39636 0.0098738 0.034495
[3,] 0.04236 0.05642 0.0003990 0.001174
2. Quantiles for each variable:
2.5% 25% 50% 75% 97.5%
var1 0.6886083 0.768054 0.81236 0.8608 0.9498
var2 8.0756210 9.637392 10.50210 11.4631 13.5353
var3 0.0003397 0.007525 0.02221 0.0548 0.1939

Calculation of return levels based on a GPD in different R packages

I am performing an extreme value analysis for meteorological data, to be precise for precipitation data available in mm/d. I am using a threshold excess approach for estimating the parameters of a generalized Pareto distribution with a maximum likelihood method.
The aim is to calculate several return levels (i.e. the 2, 5, 10, 20, 50, 100 year event) for daily precipitation.
While the R code works fine, I am wondering why I get clearly different results when calculating return levels based on the quantiles of the fitted GPD with different packages. Even though the estimated parameters of the GPD are almost identical in each package, the quantiles differ a lot.
The packages I used are:
ismev, extRemes, evir and POT.
I guess that the different estimates for the parameters of the GPD are due to different calculation routines, but I do not understand why the calculation of the quantiles differs that much depending on the different packages.
While lmom, evir and POT return the same quantile values, the return levels derived from the extRemes package differ from the other results.
# packages
library(ismev)
library(extRemes)
library(evir)
library(POT)
library(lmom)
th <- 50
# sample data:
potvalues <- c(
58.5,44.2,49.6,59.3,48.3,60.9,94.5,47.1,45.3,57.6,48.2,46.2,44.2,50.6,42.1,52.7,80.9,
58.5,51.3,48.4,51.7,71.9,60.1,64.4,43.5,55.5,49.3,58.2,47.5,43.7,45.2,52.8,42.2,46.4,
96.1,47.5,50.1,42.4,60.9,72.6,51.6,59.4,80.5,63.7,59.9,45.0,66.7,47.6,53.3,43.1,51.0,
46.2,53.6,59.8,51.7,46.7,42.6,44.5,45.0,50.0,44.0,89.9,44.2,47.8,53.3,43.0,55.7,44.6,
44.6,54.9,45.1,43.9,78.7,45.5,64.0,42.7,47.4,57.0,105.4,64.3,43.2,50.4,80.2,49.9,71.6,
47.4,44.1,47.6,55.2,44.4,78.6,50.8,42.4,47.1,43.5,51.4)
#------------------------------------------------------------------------------------------#
# MLE Fitting of GPD - package extRemes
# fit gpd
pot.ext <- fevd(potvalues, method = "MLE", type="GP", threshold=th)
# return levels:
rl.extremes <- return.level(pot.ext, conf = 0.05,
return.period= c(2,5,10,20,50,100))
rl.extremes <- as.numeric(rl.extremes)
#------------------------------------------------------------------------------------------#
# MLE Fitting of GPD - package ismev
pot.gpd <- gpd.fit(potvalues, threshold=th)
s1 <- quagpa(f=.99, para=c(pot.gpd$threshold, pot.gpd$mle[1],-pot.gpd$mle[2])) # 100
s2 <- quagpa(f=.98, para=c(pot.gpd$threshold, pot.gpd$mle[1],-pot.gpd$mle[2])) # 50
s3 <- quagpa(f=.95, para=c(pot.gpd$threshold, pot.gpd$mle[1],-pot.gpd$mle[2])) # 20
s4 <- quagpa(f=.90, para=c(pot.gpd$threshold, pot.gpd$mle[1],-pot.gpd$mle[2])) # 10
s5 <- quagpa(f=.80, para=c(pot.gpd$threshold, pot.gpd$mle[1],-pot.gpd$mle[2])) # 5
s6 <- quagpa(f=.50, para=c(pot.gpd$threshold, pot.gpd$mle[1],-pot.gpd$mle[2])) # 2
rl.ismev <- c(s6, s5, s4, s3, s2, s1)
#------------------------------------------------------------------------------------------#
# MLE Fitting of GPD - package evir
# fit gpd
gpd.evir <- gpd(potvalues, threshold=th)
# plot
evirplot <- plot(gpd.evir)
1 # Excess Distribution
0 # exit
x100 <- gpd.q(pp=.99, x=evirplot) # 100
x050 <- gpd.q(pp=.98, x=evirplot) # 50
x020 <- gpd.q(pp=.95, x=evirplot) # 20
x010 <- gpd.q(pp=.90, x=evirplot) # 10
x005 <- gpd.q(pp=.80, x=evirplot) # 5
x002 <- gpd.q(pp=.50, x=evirplot) # 2
rl.evir <- t(rbind(x002,x005,x010,x020,x050,x100))
rl.evir <- as.numeric(rl.evir[2,])
#------------------------------------------------------------------------------------------#
# MLE Fitting of GPD - package POT
gpd.pot <- fitgpd(potvalues, threshold=th)
quant = c(0.50, 0.80, 0.90, 0.95, 0.98, 0.99)
rtp <- c(2,5,10,20,50,100)
retvec <- vector()
for (i in quant){
x <- POT::qgpd(i, loc = th, scale = as.numeric(gpd.pot$param[1]),
shape = as.numeric(gpd.pot$param[2]))
retvec <- c(retvec,x)
}
rl.pot <- retvec
#------------------------------------------------------------------------------------------#
# comparison of results - return periods
result <- cbind(rl.extremes,rl.ismev, rl.evir, rl.pot)
round(result, 2)
#------------------------------------------------------------------------------------------#
# comparison of estimated parameters
param.extremes <- pot.ext$results$par # extremes
param.ismev <- pot.gpd$mle # ismev
param.evir <- c(gpd.evir$par.ests[2],gpd.evir$par.ests[1]) # evir
param.pot <- gpd.pot$param # POT
parameters <- cbind(param.extremes, param.ismev , param.evir, param.pot)
round(parameters, 4)
#------------------------------------------------------------------------------------------#
The solution to this problem is described e.g. in Coles' book (An Introduction to Statistical Modeling of Extreme Values, Chapter 4.3.3). While the return levels for a GEV can be derived rather directly from its quantiles, the so-called exceedance rate (i.e. the number of events per year, or the probability that an observation exceeds the threshold, respectively) has to be considered when calculating return levels for a GP distribution within the scope of a peaks-over-threshold approach.
The N-year return level is defined by
x_N = u + (sigma / xi) * ((N * n_y * zeta_u)^xi - 1)   (for xi != 0)
where u is the threshold, sigma and xi are the GPD scale and shape parameters, zeta_u is the probability of an individual observation exceeding u, and n_y is the number of observations per year.
Thus you do not obtain meaningful return levels by simply calculating the quantiles of the GP distribution without considering the exceedance rate. The extRemes package takes the exceedance rate into account, while in the POT and evir packages the number of events per year defaults to 1 if unspecified.
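A rough sketch of that calculation with the ismev fit from the question is shown below; zeta is estimated as the fraction of the series above the threshold, and n_y (the number of observations per year) is not determined by the sample data, so the value used here is only a placeholder that has to be set from the real daily series:
sigma <- pot.gpd$mle[1]                 # GPD scale from gpd.fit()
xi    <- pot.gpd$mle[2]                 # GPD shape
zeta  <- mean(potvalues > th)           # estimated exceedance probability zeta_u
n_y   <- 365.25                         # observations per year: placeholder, adjust to your data
N     <- c(2, 5, 10, 20, 50, 100)       # return periods in years
rl    <- th + (sigma / xi) * ((N * n_y * zeta)^xi - 1)   # formula above, assuming xi != 0
round(rl, 2)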
The differences may also come from the different methods of fitting the distribution function to the dataset. I have a package on CRAN that compares GPD fits (or rather, their quantile estimates) for several R packages and methods:
https://cran.r-project.org/web/packages/extremeStat/vignettes/extremeStat.html
You can also use the package to compare GPD with other distributions.
