I've spent 2 months wondering if this question is StackOverflow worthy, and I've concluded it is.
I'm volunteering on a team for a year to forecast a number of interesting things. A few months ago the task was forecasting the probability distribution of the number of earthquakes worldwide over magnitude 5 during the month of March. Really interesting problem. I thought I was reasonably good with R, and then I hit this problem like a brick wall. It is a count problem, so I wanted to use the Poisson distribution, but it won't work: the mean and variance aren't equal. The data are overdispersed.
The goal is to estimate the probability of:
<100 earthquakes
100-140 earthquakes
140-170 earthquakes
170-210 earthquakes
>210 earthquakes
Here is the code I wrote so far:
#(load data and libraries blah blah blah)
quakes_this_month<-10
days_left=31-1
days_left
month_left<- days_left/31
month_left
earthq5<- earthq4
earthq5$mag<-earthq5$mag*month_left
mu <- mean(earthq5$mag)
sigma <- sd(earthq5$mag)
paste("mean is ",mu, " and sigma is ", sigma)
pnorm((99-quakes_this_month) , mu, sigma, lower.tail = T)
lower.bound<- 100 -quakes_this_month
upper.bound<- 140.5-quakes_this_month
(pnorm(upper.bound, mu, sigma, lower.tail = T) - pnorm(lower.bound, mu, sigma))
lower.bound<- 140.5-quakes_this_month
upper.bound<- 170.5-quakes_this_month
(pnorm(upper.bound, mu, sigma) - pnorm(lower.bound, mu, sigma))
lower.bound<- 170.5-quakes_this_month
upper.bound<- 210.5-quakes_this_month
(pnorm(upper.bound, mu, sigma) - pnorm(lower.bound, mu, sigma))
(pnorm(210.5-quakes_this_month, mu, sigma, lower.tail = F))
So the idea here is that as the month progresses and some earthquakes have happened, I can estimate the probability of hitting those thresholds. However, this isn't a Gaussian distribution, so I can't use pnorm. I should use pnbinom(q, size, prob, mu, lower.tail = TRUE, log.p = FALSE), but I don't know how to get the 'size' and 'prob' out of a count problem. This isn't taking 15 balls out of a jar 4 times. So I'm reaching out on this one, as it's been haunting me for weeks.
How can I use pnbinom() in place of pnorm() given this is about earthquake counts per month?
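(Before the full answer below, one hedged sketch: if you already have a vector of historical monthly counts, the method of moments gives size and prob directly from the sample mean and variance, using the negative binomial relation Var = mu + mu^2/size. The vector name monthly_counts here is hypothetical.)
m <- mean(monthly_counts)   # monthly_counts: hypothetical vector of monthly counts
v <- var(monthly_counts)
# Negative binomial: Var = mu + mu^2/size  =>  size = mu^2 / (Var - mu)
size_mm <- m^2 / (v - m)            # only meaningful when v > m (overdispersion)
prob_mm <- size_mm / (size_mm + m)  # the 'prob' parameterisation
# Either parameterisation works with pnbinom(); these give the same value:
pnbinom(99, size = size_mm, mu = m)
pnbinom(99, size = size_mm, prob = prob_mm)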
So I found the answer, and for anyone else, here is how I did it. The data I was using came from the USGS earthquake catalogue. I use quite a few other libraries in R, but I think only MASS is needed for this example.
Load library and data
library(MASS)
quakeSim <- earthq4$count # this was my real data
quakeSim <- rnbinom(n = 12000, mu = 145, size =18) # you can use this for the example
Test for distribution fit by checking three likely distributions: Gaussian, Poisson, and negative binomial.
quakeDistNB <- MASS::fitdistr(quakeSim, densfun = "negative binomial")
quakeDistPois <- MASS::fitdistr(quakeSim, densfun = "poisson")
quakeDistGaus<-MASS::fitdistr(quakeSim, densfun = "normal")
Compare negative binomial, Poisson, and Gaussian - lower AIC is better, so pick the distribution with the lowest AIC.
AIC(quakeDistNB)
AIC(quakeDistPois)
AIC(quakeDistGaus)
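(A small convenience sketch, not required: collect the three AICs in one named vector and pick the smallest.)
aics <- c(nbinom   = AIC(quakeDistNB),
          poisson  = AIC(quakeDistPois),
          gaussian = AIC(quakeDistGaus))
aics
names(which.min(aics))   # for the simulated example above this should be "nbinom"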
Quick check on normality with the Shapiro-Wilk test (if Gaussian has the lowest AIC).
shapiro.test(earthq4$count)
Use the 5% rule. But my data is negative binomial, not Gaussian, so ignore the CI tests below.
summary(earthq4)
t.test(earthq4$count ) #default 0.95
So my data follows a negative binomial distribution. Now let's look at it as a histogram with enough bins to show the shape of an NB.
Visualize the empirical distribution
hist(quakeSim, breaks=80)
Fit a negative binomial model and get the two critical values, sizeHat and muHat, from the output of the model 'quakeDistNB'.
This part really drove me nuts until a friend showed me.
quakeDistNB <- MASS::fitdistr(earthq4$count , densfun = "negative binomial")
quakeDistNB
sizeHat <- quakeDistNB$estimate[1]   # the "size" (dispersion) estimate
sizeHat
muHat <- quakeDistNB$estimate[2]     # the "mu" (mean) estimate
muHat
Now then, my problem was to predict the probability of fewer than 100 earthquakes, and of between 100 and 150 earthquakes, of magnitude greater than or equal to 5.
The probability of fewer than 100 (i.e. P(X <= 99)):
p100 <- pnbinom(q = 99, size = sizeHat, mu = muHat)
p100
The probability of at most 150 (P(X <= 150)):
p150 <- pnbinom(q = 150, size = sizeHat, mu = muHat)
p150
And the probability of 100 to 150 inclusive:
p150 - p100
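As a usage note, the same fitted parameters can be applied to all the bands from the original question. This is a sketch reusing sizeHat and muHat from above; the boundary assignment (e.g. counting 140 in the 140-170 band) and the omission of the "earthquakes so far this month" offset are my assumptions.
cdf <- pnbinom(c(99, 139, 169, 209), size = sizeHat, mu = muHat)  # P(X <= k) at each break
p_bands <- c("<100"    = cdf[1],
             "100-139" = cdf[2] - cdf[1],
             "140-169" = cdf[3] - cdf[2],
             "170-209" = cdf[4] - cdf[3],
             ">=210"   = 1 - cdf[4])
p_bands
sum(p_bands)   # the five bands cover everything, so this is 1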
In a coin flip, we would like to compute p(theta|Data), where theta is the underlying parameter.
The prior follows a beta distribution with parameters a and b.
The likelihood follows a Bernoulli distribution which gives us the probability of coming up heads.
Here is the code implementation:
a = 1 # a and b are the beta distribution's parameters
b= 1
num = 1e5 #Number of candidate theta values
z= 17220 #Number of heads
N= 143293 #Total number of flips
Theta = seq(0.07,0.12, length.out= num)
prior = dbeta(Theta, a,b) #Compute the prior at each value
likelihood = Theta^z *(1-Theta)^(N-z)
pData = likelihood * prior /sum(likelihood * prior) #Compute evidence
posterior = likelihood*prior / pData
I would like to verify that the posterior is equal to the analytical solution Beta(a+z, N-z+b). However, the likelihood underflows to 0 (Theta < 1 raised to very large powers), so the evidence is NaN and so is the posterior.
I have tried computing the log-likelihood instead, but it is a large negative number whose exponential underflows to 0.
Theta = seq(0.07, 0.12, length.out = num)
lprior = log(dbeta(Theta, a,b)) #Compute the log prior at each value
llikelihood = log(Theta)*z + log(1-Theta)*(N-z) #log likelihood
lpData = llikelihood + lprior - sum(llikelihood + lprior) #compute evidence
lposterior = llikelihood+lprior - lpData
posterior = exp(lposterior)
plot(Theta, posterior, type="l")
lines(Theta, exp(llikelihood), type="l")
lines(Theta, exp(lprior), type="l")
My ultimate goal is to have a nice graph that shows the posterior, likelihood, and prior together. How should I be computing each value?
This answer was provided in the comments by @JosephClarkMcIntyre.
Here is a summary:
In a Bernoulli trial, when N (the total number of trials) and z (the total number of successes) are large and the underlying parameter theta is small, it is better to operate only in log space and never take the exponential.
Moreover, since the log function is increasing, comparing the log posteriors of two distributions is equivalent to comparing the posteriors themselves.
The above implementation was wrong because the formula for computing the evidence was not correct: p(evidence) = sum(likelihood * prior), p(log_evidence) = sum(log_likelihood + log_prior).
This is the final code, where the prior, likelihood and evidence are in the log space:
a = 1 # a and b are the beta distribution's parameters
b= 1
num_steps = 1e5
z= 17220 #Number of heads
N= 143293 #Total number of flips
Theta = seq(from=0.07,to=0.12, length.out= num_steps)
lprior = dbeta(Theta, a,b,log=TRUE) #Compute the log prior at each value
llikelihood = log(Theta)*z + log1p(-Theta)*(N-z) #log likelihood
lpData = sum(llikelihood + lprior) #compute log of the evidence
lposterior = llikelihood+lprior - lpData
plot(Theta,log(dbeta(Theta,a+z,N-z+b)))
plot(Theta, lposterior, type="l")
However, the analytical and the computed log posterior are not the same, as shown in the graph.
Feel free to comment if you think there is a flaw in this answer, or if you can explain why the analytical and computed log posteriors are not the same. ^^
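One possible explanation, offered as a hedged sketch rather than a definitive fix: the normalisation above sums the log terms, whereas the log evidence is the log of the summed (exponentiated) terms, which can be computed stably with a log-sum-exp; the grid also has to cover the bulk of the posterior (the posterior mode z/N ≈ 0.1202 sits just above the original upper limit of 0.12, so the grid is widened here). Under those assumptions the grid posterior matches dbeta(Theta, a+z, b+N-z) up to discretisation error:
log_sum_exp <- function(lx) { m <- max(lx); m + log(sum(exp(lx - m))) }
a <- 1; b <- 1
z <- 17220; N <- 143293
Theta  <- seq(0.10, 0.14, length.out = 1e5)   # widened grid covering the posterior mass
dTheta <- Theta[2] - Theta[1]
lprior      <- dbeta(Theta, a, b, log = TRUE)
llikelihood <- z * log(Theta) + (N - z) * log1p(-Theta)
ljoint      <- llikelihood + lprior
levidence  <- log_sum_exp(ljoint) + log(dTheta)   # log( sum(exp(ljoint)) * dTheta )
lposterior <- ljoint - levidence                  # log posterior *density* on the grid
# compare with the analytical Beta(a + z, b + N - z) log density
max(abs(lposterior - dbeta(Theta, a + z, b + N - z, log = TRUE)))  # should be small (discretisation only)
plot(Theta, exp(lposterior), type = "l")
lines(Theta, dbeta(Theta, a + z, b + N - z), col = 2, lty = 2)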
Suppose I fit a linear mixed model with lme4:
library(lme4)
fm1 <- lmer(Reaction ~ Days + (Days|Subject), data = sleepstudy)
To generate a 95% CI, I can use the predictInterval() function from the package merTools.
library(merTools)
head(predictInterval(fm1, level = 0.95, seed = 123, n.sims = 100))
# fit upr lwr
# 1 255.4179 313.8781 184.1400
# 2 273.2944 333.2005 231.3584
# 3 291.8451 342.8701 240.8226
# 4 311.3562 359.2908 250.4980
# 5 330.3671 384.2520 270.7094
# 6 353.4378 409.9307 289.4760
The documentation says this about the predictInterval() function:
This function provides a way to capture model uncertainty in predictions from multi-level models
fit with lme4. By drawing a sampling distribution for the random and the fixed effects and then
estimating the fitted value across that distribution, it is possible to generate a prediction interval for
fitted values that includes all variation in the model except for variation in the covariance parameters,
theta. This is a much faster alternative than bootstrapping for models fit to medium to large datasets.
My goal is to get all the fitted values instead of the upper and lower CI, i.e. for each row I need the original n simulations from which these 95% CIs are calculated. I checked the arguments in the documentation and tried this:
head(predictInterval(fm1, n.sims = 100, returnSims = TRUE, seed = 123, level = 0.95))
# fit upr lwr
# 1 255.4179 313.8781 184.1400
# 2 273.2944 333.2005 231.3584
# 3 291.8451 342.8701 240.8226
# 4 311.3562 359.2908 250.4980
# 5 330.3671 384.2520 270.7094
# 6 353.4378 409.9307 289.4760
Instead of getting the 100 simulations, it still gives me the same output. What am I doing wrong here?
A second question, though I believe this is more of a StatsExchange one. The documentation says it works "by drawing a sampling distribution for the random and the fixed effects". How does it draw this sampling distribution, if someone could explain?
You can get simulated values if you specify newdata in the predictInterval() function.
predInt <- predictInterval(fm1, newdata = sleepstudy, n.sims = 100,
returnSims = TRUE, seed = 123, level = 0.95)
simValues <- attr(predInt, "sim.results")
Details on how the sampling distributions of the parameters are created are given in the Details section of the help page. You can get the estimates of the fit and the lower and upper boundaries as:
fit <- apply(simValues, 1, function(x){quantile(x, probs=0.500) } )
lwr <- apply(simValues, 1, function(x){quantile(x, probs=0.025) } )
upr <- apply(simValues, 1, function(x){quantile(x, probs=0.975) } )
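A quick check of what comes back, under the assumption (based on the apply() calls above) that sim.results has one row per observation and one column per simulation:
dim(simValues)               # expected: nrow(sleepstudy) rows x n.sims columns, i.e. 180 x 100
head(cbind(fit, lwr, upr))   # medians and 2.5%/97.5% quantiles recomputed from the raw simulations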
In GAM (and GLM, for that matter), we're fitting a conditional likelihood model. So after fitting the model, for a new input x and response y, I should be able to compute the predictive probability or density of a specific value of y given x. I might want to do this to compare the fit of various models on validation data, for example. Is there a convenient way to do this with a fitted GAM in mgcv? Otherwise, how do I figure out the exact form of the density that is used so I can plug in the parameters appropriately?
As a specific example, consider a negative binomial GAM:
## From ?negbin
library(mgcv)
set.seed(3)
n<-400
dat <- gamSim(1,n=n)
g <- exp(dat$f/5)
## negative binomial data...
dat$y <- rnbinom(g,size=3,mu=g)
## fit with theta estimation...
b <- gam(y~s(x0)+s(x1)+s(x2)+s(x3),family=nb(),data=dat)
And now I want to compute the predictive probability of, say, y=7, given x=(.1,.2,.3,.4).
Yes. mgcv does (empirical) Bayesian estimation, so you can obtain a predictive distribution. For your example, here is how.
# prediction on the link (with standard error)
l <- predict(b, newdata = data.frame(x0 = 0.1, x1 = 0.2, x2 = 0.3, x3 = 0.4), se.fit = TRUE)
# By large-sample GLM theory, the estimated link value is approximately normally distributed;
# with the `log` link, mu = exp(link) is therefore approximately log-normal
p.mu <- function (mu) dlnorm(mu, l[[1]], l[[2]])
# joint density of `y` and `mu` (size = 3 is the true value used to simulate the data above)
p.y.mu <- function (y, mu) dnbinom(y, size = 3, mu = mu) * p.mu(mu)
# marginal probability (not density as negative binomial is discrete) of `y` (integrating out `mu`)
# I have carefully written this function so it can take vector input
p.y <- function (y) {
scalar.p.y <- function (scalar.y) integrate(p.y.mu, lower = 0, upper = Inf, y = scalar.y)[[1]]
sapply(y, scalar.p.y)
}
Now since you want probability of y = 7, conditional on specified new data, use
p.y(7)
# 0.07810065
In general, this numerical-integration approach is not always easy. For example, if another link function such as sqrt() is used for the negative binomial, the distribution of mu is not as straightforward (though also not difficult to derive).
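For other links, one hedged workaround is to integrate on the link scale instead, where the predictor is (approximately) normal and mu = linkinv(eta). This sketch reuses l and b from above and keeps size = 3:
p.y.general <- function (y) {
  scalar.p.y <- function (scalar.y) {
    integrand <- function (eta)
      dnbinom(scalar.y, size = 3, mu = b$family$linkinv(eta)) *
        dnorm(eta, mean = l$fit, sd = l$se.fit)
    # integrate over a wide window on the link scale
    integrate(integrand, lower = l$fit - 10 * l$se.fit,
              upper = l$fit + 10 * l$se.fit)$value
  }
  sapply(y, scalar.p.y)
}
p.y.general(7)   # should be very close to p.y(7) above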
Now I offer a sampling-based, or Monte Carlo, approach. This is closest to a Bayesian procedure.
N <- 1000 # sample size
set.seed(0)
## draw N samples from posterior of `mu`
sample.mu <- b$family$linkinv(rnorm(N, l[[1]], l[[2]]))
## draw N samples from likelihood `Pr(y|mu)`
sample.y <- rnbinom(N, size = 3, mu = sample.mu)
## Monte Carlo estimation for `Pr(y = 7)`
mean(sample.y == 7)
# 0.076
Remark 1
Note that, since this is empirical Bayes, all of the above is conditional on the estimated smoothing parameters. If you want something like "full Bayes", set unconditional = TRUE in predict().
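For example (a minimal sketch reusing the newdata from above); with unconditional = TRUE the standard error also reflects smoothing-parameter uncertainty:
l2 <- predict(b, newdata = data.frame(x0 = 0.1, x1 = 0.2, x2 = 0.3, x3 = 0.4),
              se.fit = TRUE, unconditional = TRUE)
l2$se.fit   # typically slightly larger than l$se.fit; plug into p.mu / the sampling step above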
Remark 2
Perhaps some people assume the solution is as simple as this:
mu <- predict(b, newdata = data.frame(x0 = 0.1, x1 = 0.2, x2 = 0.3, x3 = 0.4), type = "response")
dnbinom(7, size = 3, mu = mu)
Such a result is conditional on the regression coefficients (treated as fixed, with no uncertainty), so mu becomes fixed rather than random. This is not a predictive distribution: a predictive distribution integrates out the uncertainty of the model estimates.
I am running a power analysis using a normal LMM in R. I have seven input parameters, two of which I do not need to test (the number of years and the number of sites). The other five parameters are the intercept, the slope, and the standard deviations of the residual, the random intercept, and the random slope.
Given that my response data (year is the sole explanatory variable in the model) are bounded between -1 and +1, the intercept also falls in this range. However, what I am finding is that if I run, say, 1000 simulations with a given intercept and slope (which I am treating as constant over 10 years), then whenever the true random-intercept SD falls below a certain value, many simulations return an estimated random-intercept SD of exactly zero. If I inflate the intercept SD, it is simulated and recovered correctly (please see below, where I use residual SD = 0.25, intercept SD = 0.10 and slope SD = 0.05; if I increase the intercept SD to 0.2, it is estimated correctly, or if I drop the residual SD to, say, 0.05, the variance parameters are estimated correctly).
Is this problem due to my coercion of the range to be (-1, +1)?
I include the code for my data-generating function and the processing of the simulations below, in case it helps:
Function: generating the data:
normaldata <- function (J, K, beta0, beta1, sigma_resid,
sigma_beta0, sigma_beta1){
year <- rep(rep(0:J),K) # 0:J replicated K times
site <- rep (1:K, each=(J+1)) # 1:K sites, repeated J years
mu.beta0_true <- beta0
mu.beta1_true <- beta1
# random effects variance parameters:
sigma_resid_true <- sigma_resid
sigma_beta0_true <- sigma_beta0
sigma_beta1_true <- sigma_beta1
# site-level parameters:
beta0_true <<- rnorm(K, mu.beta0_true, sigma_beta0_true)
beta1_true <<- rnorm(K, mu.beta1_true, sigma_beta1_true)
# data
y <<- rnorm(n = (J+1)*K, mean = beta0_true[site] + beta1_true[site]*(year),
sd = sigma_resid_true)
# NOT SURE WHETHER TO IMPOSE THE LIMITS HERE OR LATER IN CODE:
y[y < -1] <- -1 # Absolute minimum
y[y > 1] <- 1 # Absolute maximum
return(data.frame(y, year, site))
}
Processing the simulations:
n.sims = 1000
sigma.resid <- rep(0, n.sims)
sigma.intercept <- rep(0, n.sims)
sigma.slope <- rep(0,n.sims)
intercept <- rep(0,n.sims)
slope <- rep(0,n.sims)
signif <- rep(0,n.sims)
for (s in 1:n.sims){
y.data <- normaldata(10,200, 0.30, ((0-0.30)/10), 0.25, 0.1, 0.05)
lme.power <- lmer(y ~ year + (1+year | site), data=y.data)
summary(lme.power)
theta.hat <- fixef(lme.power)[["year"]]
theta.se <- se.fixef(lme.power)[["year"]] # se.fixef() comes from the arm package
signif[s] <- ((theta.hat + 1.96*theta.se) < 0) |
((theta.hat - 1.96*theta.se) > 0) # returns TRUE or FALSE
signif[s]
betas <- fixef(lme.power)
intercept[s] <- betas[1]
slope[s] <- betas[2]
vc1 <- as.data.frame(VarCorr(lme.power))
vc2 <- as.numeric(attributes(VarCorr(lme.power)$site)$stddev)
sigma.resid[s] <- vc1[4,5]
sigma.intercept[s] <- vc2[1]
sigma.slope[s] <- vc2[2]
cat(paste(s, " ")); flush.console()
}
power <- mean (signif) # proportion of TRUE
power
summary(sigma.resid)
summary(sigma.intercept)
summary(sigma.slope)
summary(intercept)
summary(slope)
Thank you in advance for any help you can offer.
This is really more of a statistical than a computational question, but the short answer is: you haven't made any mistakes, this is exactly as expected. This example on rpubs runs some simulations of a Normally distributed response (i.e. it corresponds exactly to the model assumed by LMM software, so the constraint you're worried about isn't an issue).
The left-hand histogram below is from simulations with 25 samples in 5 groups, with equal variance (of 1) within and between groups; the right-hand histogram is from simulations with 15 samples in 3 groups.
The sampling distribution of the variance estimates in null cases (i.e., no real between-group variation) is known to have a point mass, or "spike", at zero; it's not surprising (although, as far as I know, not theoretically worked out) that the sampling distribution of the variance estimates also has a point mass at zero when the between-group variation is non-zero but small, and/or when the sample is small and/or noisy.
http://bbolker.github.io/mixedmodels-misc/glmmFAQ.html#zero-variance has more on this topic.
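A minimal sketch along the same lines (not the RPubs example itself): simulate a small, noisy random-intercept design with a small but non-zero between-group SD, and count how often lmer() estimates that SD as exactly zero. Expect "boundary (singular) fit" messages for some fits.
library(lme4)
set.seed(1)
n_sims  <- 200
n_group <- 5       # few groups
n_per   <- 5       # few observations per group
sd_between <- 0.5  # true between-group SD, small relative to the residual SD of 1
zero_sd <- replicate(n_sims, {
  g   <- factor(rep(seq_len(n_group), each = n_per))
  y   <- rep(rnorm(n_group, sd = sd_between), each = n_per) + rnorm(n_group * n_per)
  fit <- lmer(y ~ 1 + (1 | g))
  as.numeric(attr(VarCorr(fit)$g, "stddev")) < 1e-8
})
mean(zero_sd)   # a non-trivial share of exactly-zero estimates is expected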
For a brief background, I am interested in describing a distribution of fire sizes, which is presumed to follow a lognormal distribution (many small fires and few large fires). For my specific application I am only interested in the fires that fall within a certain range of sizes (> min, < max). So, I am attempting to fit a lognormal distribution to a data set that has been censored on both ends. In essence, I want to find the parameters of the lognormal distribution (mu and sigma) that best fit the full distribution prior to censoring. Can I fit the distribution taking into account that I know I am only looking at a portion of it?
I have done some experimentation, but have become stumped. Here's an example:
# Generate data #
D <- rlnorm(1000,meanlog = -0.75, sdlog = 1.5)
# Censor data #
min <- 0.10
max <- 20
Dt <- D[D > min]
Dt <- Dt[Dt <= max]
If I fit the non-censored data (D) using either fitdistr (MASS) or fitdist (fitdistrplus) I obviously get approximately the same parameter values as I entered. But if I fit the censored data (Dt) then the parameter values do not match, as expected. The question is how to incorporate the known censoring. I have seen some references elsewhere to using upper and lower within fitdistr, but I encounter an error that I'm not sure how to resolve:
> fitt <- fitdist(Dt, "lognormal", lower = min, upper = max)
Error in fitdist(Dt, "lognormal", lower = min, upper = max) :
The dlognormal function must be defined
I will appreciate any advice, first on whether this is the appropriate way to fit a censored distribution, and if so, how to go about defining the dlognormal function so that I can make this work. Thanks!
Your data is not censored (that would mean the observations outside the interval are present, but you do not know their exact values) but truncated (those observations have been discarded). You just have to provide fitdist with the density and the cumulative distribution function of your truncated distribution.
library(truncdist)
dtruncated_log_normal <- function(x, meanlog, sdlog)
dtrunc(x, "lnorm", a=.10, b=20, meanlog=meanlog, sdlog=sdlog)
ptruncated_log_normal <- function(q, meanlog, sdlog)
ptrunc(q, "lnorm", a=.10, b=20, meanlog=meanlog, sdlog=sdlog)
library(fitdistrplus)
fitdist(Dt, "truncated_log_normal", start = list(meanlog=0, sdlog=1))
# Fitting of the distribution ' truncated_log_normal ' by maximum likelihood
# Parameters:
# estimate Std. Error
# meanlog -0.7482085 0.08390333
# sdlog 1.4232373 0.0668787
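For reference, the same idea without the truncdist dependency, writing the truncated density and CDF by hand (a sketch assuming the same bounds, 0.10 and 20, and the Dt data from above):
dtrunc_lnorm <- function(x, meanlog, sdlog) {
  denom <- plnorm(20, meanlog, sdlog) - plnorm(0.10, meanlog, sdlog)
  ifelse(x > 0.10 & x <= 20, dlnorm(x, meanlog, sdlog) / denom, 0)
}
ptrunc_lnorm <- function(q, meanlog, sdlog) {
  denom <- plnorm(20, meanlog, sdlog) - plnorm(0.10, meanlog, sdlog)
  p <- (plnorm(pmin(pmax(q, 0.10), 20), meanlog, sdlog) -
        plnorm(0.10, meanlog, sdlog)) / denom
  pmin(pmax(p, 0), 1)
}
fitdist(Dt, "trunc_lnorm", start = list(meanlog = 0, sdlog = 1))  # should give similar estimates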