Sample proportion confidence interval estimates using logit - math

This seems like a problem that has an accepted, statistically and mathematically sound answer, but I can't seem to find it.
When estimating confidence intervals from sample proportions, I generally use the normal approximation technique described here: https://en.wikipedia.org/wiki/Binomial_proportion_confidence_interval#Normal_approximation_interval
However, this fails spectacularly for proportions where my sample is close to 0 or 1: because the interval is symmetric about the point estimate, it can extend above 1 or below 0. Since proportion estimates generally "behave better" when modeled on a logit scale, I assume there is some way to apply a logit transform to the confidence interval, which would result in an asymmetric interval that never crosses 0 or 1.
However, instead of trying to hack together my own technique with freshman calculus and MBA statistics as my highest formal mathematical training, I have been searching the web to see if such a technique has already been described by someone more qualified.
Is anyone aware of a way to do this?

A straightforward derivation via the usual change-of-variables formula shows that y = logit(x), where x has a beta distribution (the posterior distribution for the binomial proportion assuming a beta prior), has a distribution with pdf f(y) = exp(a*y) / ((1 + exp(y))^(a + b) * B(a, b)), where B(a, b) = gamma(a)*gamma(b)/gamma(a + b).
That pdf has a somewhat Gaussian-like shape, but it's less symmetrical the more different a and b are. It probably has a name, although I don't recognize it.
It's not clear that taking y = logit(x) here is helpful. For several other approaches, see: Binomial proportion confidence interval
Statistics problems should probably go to stats.stackexchange.com.
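For what it's worth, here is a minimal R sketch of two intervals that stay inside (0, 1): the Wald interval computed on the logit scale and back-transformed (the transform described in the question), and an equal-tailed interval from a beta posterior as in the answer above. The Jeffreys Beta(1/2, 1/2) prior and the example counts are assumptions, not part of the original posts.
# Hedged sketch: asymmetric 95% intervals for a sample proportion that stay in (0, 1)
k <- 3; n <- 200                         # assumed example counts: k successes in n trials
z <- qnorm(0.975)
p_hat <- k / n
## (1) Wald interval on the logit scale, back-transformed with plogis()
##     (delta-method standard error of logit(p_hat); breaks down if k = 0 or k = n)
se_logit <- sqrt(1 / (n * p_hat * (1 - p_hat)))
plogis(qlogis(p_hat) + c(-1, 1) * z * se_logit)
## (2) Equal-tailed interval from the Beta(k + 1/2, n - k + 1/2) posterior (Jeffreys prior);
##     cannot cross 0 or 1 by construction
qbeta(c(0.025, 0.975), shape1 = k + 0.5, shape2 = n - k + 0.5)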

Related

Output from Linear Mixed Models differs from Estimated Marginal Means

I have a query about the output statistics from a linear mixed model (fitted using the lmer function) relative to the output statistics from the estimated marginal means computed from that same model.
Essentially, I am running an LMM comparing the within-subjects effect of different contexts (with "Negative" coded as the baseline) on enjoyment ratings. The LMM output suggests that the difference between negative and polite contexts is not significant, with a p-value of .35. See the screenshot below with the relevant line highlighted:
LMM output
However, when I then run the lsmeans function on the same model (with the Holm correction), the p-value for the comparison between Negative and Polite context categories is now .05, and all of the other statistics have changed too. Again, see the screenshot below with the relevant line highlighted:
LSMeans output
I'm probably being dense because my understanding of LMMs isn't hugely advanced, but I've tried to Google the reason for this and can't seem to find out why. I don't think it has anything to do with the corrections, because the smaller p-value is the one observed when the Holm correction is used. So I was wondering why this happens, and which value I should report/stick with and why?
Thank you for your help!
Regression coefficients and marginal means are not one and the same. Once you learn these concepts it'll be easier to figure out which one is more informative and therefore which one you should report.
After we fit a regression by estimating its coefficients, we can predict the outcome yi given the m input variables Xi = (Xi1, ..., Xim). If the inputs are informative about the outcome, the predicted yi is different for different Xi. If we average the predictions yi for examples with Xij = xj, we get the marginal effect of the jth feature at the value xj. It's crucial to keep track of which inputs are kept fixed (and at what values) and which inputs are averaged over (aka marginalized out).
In your case, contextCatPolite in the coefficients summary is the difference between Polite and Negative when smileType is set to its reference level (no reward, I'd guess). In the emmeans contrasts, Polite - Negative is the average difference over all smileTypes.
Interactions have a way of making interpretation more challenging and your model includes an interaction between smileType and contextCat. See Interaction analysis in emmeans.
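For reference, a minimal sketch of how the two quantities could be pulled out in R; the model formula and the data/variable names (dat, enjoyment, subject) are assumptions based on the question, not the actual code:
# Hedged sketch, assuming a model along the lines of the one described in the question
library(lme4)
library(emmeans)
mod <- lmer(enjoyment ~ contextCat * smileType + (1 | subject), data = dat)
summary(mod)$coefficients                              # contextCatPolite: Polite - Negative at the reference smileType
emmeans(mod, pairwise ~ contextCat, adjust = "holm")   # Polite - Negative averaged over smileType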
To add to @dipetkov's answer, the coefficients in your LMM are based on treatment coding (sometimes called 'dummy' coding). With the interaction in the model, these coefficients are no longer "main effects" in the traditional sense of factorial ANOVA. For instance, if you have:
y = b_0 + b_1(X_1) + b_2(X_2) + b_3 (X_1 * X_2)
...b_1 is "the effect of X_1" only when X_2 = 0:
y = b_0 + b_1(X_1) + b_2(0) + b_3 (X_1 * 0)
y = b_0 + b_1(X_1)
Thus, as @dipetkov points out, 1.625 is not the difference between Negative and Polite averaged across all other factors (which is what you get from emmeans). Instead, this coefficient is the difference between Negative and Polite specifically when smileType = 0.
If you use contrast coding instead of treatment coding, the coefficients from the regression output would match the estimated marginal means, because smileType = 0 would then correspond to the average across smile types. The coding scheme thus has a huge effect on the estimated values and statistical significance of regression coefficients, but it should not affect F-tests based on the reduction in deviance/variance (because no matter how you code it, a given variable explains the same amount of variance).
https://stats.oarc.ucla.edu/spss/faq/coding-systems-for-categorical-variables-in-regression-analysis/
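As a hedged illustration of that point, one way to switch smileType to sum-to-zero (deviation) coding in R looks roughly like this (again, the data and variable names are assumed from the question):
# Hedged sketch: with sum-to-zero contrasts on smileType, "smileType = 0" corresponds to the
# average over smile types, so the contextCat coefficients now represent differences averaged
# over smileType, in line with what emmeans reports.
library(lme4)
dat$smileType <- factor(dat$smileType)
contrasts(dat$smileType) <- contr.sum(nlevels(dat$smileType))
mod_sum <- lmer(enjoyment ~ contextCat * smileType + (1 | subject), data = dat)
summary(mod_sum)$coefficients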

Contrast plot for GAM using mgcv

When using the visreg package to visualise a GAM with a contrast plot, the confidence interval goes to zero at the inflection point when the graph is U-shaped:
# Load libraries
library(mgcv)
library(visreg)
# Synthetic data
df <- data.frame(a = -10:10, b = jitter((-10:10)^2, amount = 10))
# Fit GAM
res <- gam(b ~ s(a), data = df)
# Make contrast figure
visreg(res, type = "contrast")
This seems dodgy and doesn't happen when making a conditional plot (i.e., visreg(res, type = "conditional")), so instead I'm looking at the mgcv package to make the same plot. I can make a conditional plot using mgcv (e.g., plot.gam(res)), but I don't see the option to make a contrast plot. Is this possible with the mgcv package?
This is due to identifiability constraints imposed on the spline basis/bases used in the model. This is a sum-to-zero constraint and effectively removes an intercept-like basis function from the basis used for each smooth term so that these are not confounded with the model intercept. This allows the model to be identifiable, rather than having an infinity of solutions.
Using standard theory, the confidence interval has to tend to zero where it crosses zero on the y-axis (usually the centred effect, though here, as shown, it is on some transformed scale), because the constraint implies that at some point x the effect is 0 and has 0 variance.
This is nonsense of course, and recent research has investigated the problem. One solution, provided by Simon Wood and colleagues, employs extensions to Nychka's observation that, for the Gaussian case, the Bayesian credible interval for a smooth has good across-the-function interpretation as a frequentist confidence interval (so not pointwise, but not simultaneous either). Nychka's results (the coverage properties of the interval) fail in situations where the squared bias of the estimated smooth is not substantially less than the variance of the estimate; this clearly breaks down where the variance hits zero as the estimated smooth passes through zero effect, because the bias is not actually quite zero at that point.
Marra and Wood (2012) have extended these results to the generalized model setting, basically estimating the confidence interval for one smooth by assuming that all the other terms in the model have had the identifiability constraints applied to them, but not the smooth of interest. This shifts the focus of inference from the smooth directly to the smooth + intercept. You can turn this on in plot.gam() with the argument seWithMean = TRUE.
I don't see an easy way to make visreg do this, however, although it is trivial to get back the information you want via predict.gam() with the options type = 'iterms', se.fit = TRUE. This returns, on the scale of the linear predictor, the contribution of each smooth term plus standard errors that include the correction implied by seWithMean. You can then fiddle with this to your heart's content; adding on the model constant term (the estimate of the intercept), for example, should give you something close to the figure you show in your question.
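To make that concrete, here is a minimal sketch against the model fitted in the question; the evaluation grid and the +/- 2 standard error band are choices I've assumed, not part of the original answer:
# Hedged sketch: term-wise predictions with seWithMean-style standard errors
newd <- data.frame(a = seq(-10, 10, length.out = 200))
pr <- predict(res, newdata = newd, type = "iterms", se.fit = TRUE)
fit <- pr$fit[, "s(a)"] + coef(res)[1]   # add the model intercept back on
se  <- pr$se.fit[, "s(a)"]
plot(newd$a, fit, type = "l", xlab = "a", ylab = "s(a) + intercept")
lines(newd$a, fit + 2 * se, lty = 2)
lines(newd$a, fit - 2 * se, lty = 2)
# The built-in plot with the same correction:
plot(res, seWithMean = TRUE)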

how can I predict probability of an event using the weibull distribution

I have a data set of connection forces based on axial force in N (http://pastebin.com/Huwg4vxv)
Some previous analysis has been undertaken (by another party), which fitted a Weibull distribution to the data and then predicted that the chance of recording a force of 60 N or higher is around 1.2%.
I have to say that eyeballing the data, that doesn't seem likely to me, but I know nothing about this particular distribution.
So far I am able to fit the curve:
force<-read.csv(file="forcestats.csv",header = T)
library(MASS)
fitdistr(force$F, 'weibull')
hist(force$F)
I am trying to understand:
1. Is a Weibull distro really the best fit for this data?
2. How can I make that same prediction using R (i.e., how do I calculate the probability of values above 60 N)?
3. Is it possible to calculate the 95% confidence interval for that value (i.e., 1.2% +/- x%)?
Thanks for reading
Pete
To address your first item,
Is a Weibull distro really the best fit for this data?
conceptually this is more a question of statistical inference than of programming, so you most likely want to tackle it on CrossValidated rather than SO. However, you can certainly ask about ways to investigate it programmatically, such as comparing the estimated density of the observed data to the theoretical density function, or to the density of random samples drawn from a Weibull distribution with your parameter estimates:
library(MASS)
## Read in the observed force data (path as used on the answerer's machine)
Weibull <- read.csv("F:/Studio/MiscData/force_in_newtons.txt", header = TRUE)
## Fit the Weibull distribution and extract the parameter estimates
params <- fitdistr(Weibull$F, 'weibull')
Shape <- params[[1]][1]
Scale <- params[[1]][2]
## Density of a random Weibull sample generated with the estimated parameters...
set.seed(123)
plot(density(rweibull(500, shape = Shape, scale = Scale)),
     col = "red", lwd = 2, lty = 3, main = "")
## ...overlaid with the density of the observed data
lines(density(Weibull$F), col = "blue", lty = 3, lwd = 2)
legend("topright",
       legend = c("rweibull(n=500,...)", "observed data"),
       lty = c(3, 3), col = c("red", "blue"), lwd = c(3, 3), bty = "n")
Of course, there are many other ways of assessing the fit of your model, this is just a quick sanity check.
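For instance, one additional quick visual check (not from the original answer, just a common option) is a Weibull Q-Q plot against the fitted parameters:
# Hedged sketch: Q-Q plot of observed forces against the fitted Weibull quantiles
n <- length(Weibull$F)
theo_q <- qweibull(ppoints(n), shape = Shape, scale = Scale)
qqplot(theo_q, Weibull$F, xlab = "Theoretical Weibull quantiles", ylab = "Observed force (N)")
abline(0, 1, lty = 2)   # points near this line suggest a reasonable fit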
As for your second question, you can use the pweibull function with lower.tail=FALSE to get probabilities from the theoretical survival function (S(x) = 1 - F(x)):
## Pr(X >= 60)
pweibull(60, shape = Shape, scale = Scale, lower.tail = FALSE)
[1] 0.01268268
As for your final item, I believe that calculating confidence intervals on probabilities (as well as certain other statistical quantities) for an estimated distribution requires using the Delta method; I could be recalling incorrectly though, so you may want to double check on this. If this is the case and you aren't familiar with the Delta method, then unfortunately you will probably have to do a fair amount of reading on the subject because the calculation involved is generally non-trivial - here's another link; the Wikipedia article doesn't give a very in-depth treatment of the subject. Or, you could inquire about this on Cross Validated as well.
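In case it helps, here is a rough sketch of what a delta-method interval for that tail probability could look like, using the covariance matrix that fitdistr() returns and a simple finite-difference gradient; the step size, and indeed the whole recipe, are assumptions to double-check rather than a vetted procedure:
# Hedged sketch: approximate 95% CI for Pr(X >= 60) via the delta method
est <- params$estimate                    # c(shape, scale) from fitdistr()
V   <- params$vcov                        # asymptotic covariance of the estimates
p_fun <- function(theta) pweibull(60, shape = theta[1], scale = theta[2], lower.tail = FALSE)
h <- 1e-5 * abs(est)                      # central-difference gradient of p_fun at the estimates
grad <- sapply(seq_along(est), function(i) {
  up <- dn <- est; up[i] <- up[i] + h[i]; dn[i] <- dn[i] - h[i]
  (p_fun(up) - p_fun(dn)) / (2 * h[i])
})
p_hat  <- p_fun(est)
se_hat <- sqrt(drop(t(grad) %*% V %*% grad))
p_hat + c(-1, 1) * 1.96 * se_hat          # rough Wald-type interval; can dip below 0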

area under the precision-recall curve in R or other summary quantities

I plan to use the precision-recall plot (PR plot) to compare models. See the attached figure (partial screenshot, sorry!) below. Obviously I have the true positives, true negatives, false positives and false negatives at hand, and I need a single summary quantity for each model. Here are my questions:
The area under the PR curve (AUC) is the first quantity, but I don't know how to calculate it in R. I do NOT want to use any package like ROCR, because all the code is written by myself and I hope to write my own code using the quantities I already have. It seems there are many ways to do this -- I'd like to know which one is the most straightforward to implement.
Another quantity is the F-measure: a measure that combines precision and recall, namely their harmonic mean, also known as the traditional F-measure or balanced F-score. However, I am curious whether this is better than the AUC in #1, or whether they describe different things. Moreover, since I have a bunch of recall and precision values, how can I calculate a single F-measure in this case (see the figure below)?
Thank you!
To calculate the AUC of a curve, you can use a numerical integration function such as trapz() from the caTools package:
auc <- trapz(recall, precision)
The F-score is the harmonic mean of precision and recall at a given cutoff value. In your case, you would get many F-scores per curve, so it would not summarize the curve the way you want.
The AUC describes the performance of the model across possible values of the continuous output from the model. The F-score describes a model at a particular cutpoint. It is more of a way to combine recall and precision to a single statistic.
Be careful when explaining it though. Usually, AUC is discussed in the context of sensitivity and specificity.
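Since you mentioned wanting to avoid extra packages, the trapezoidal rule itself is only a couple of lines; a hedged sketch, assuming recall and precision are numeric vectors of the same length:
# Hedged sketch: trapezoidal area under the precision-recall curve without caTools
ord <- order(recall)                                  # sort the points by increasing recall
r <- recall[ord]; p <- precision[ord]
auc <- sum(diff(r) * (head(p, -1) + tail(p, -1)) / 2)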

estimating density in a multidimensional space with R

I have two types of individuals, say M and F, each described by six variables (forming a 6D space S). I would like to identify the regions of S where the densities of M and F differ the most. I first tried a logistic binomial model linking F/M to the six variables, but the result of this GLM is very hard to interpret (in part due to the numerous significant interaction terms). So I am now considering a "spatial" analysis in which I would separately estimate the density of M and F individuals everywhere in S, then calculate the difference between the densities. I would then manually look for the largest density difference and extract the values of the six variables there.
I found the function sm.density in the sm package, which can estimate densities in up to three dimensions, but I can find nothing for a space with n > 3. Do you know of something in R that can manage this? Alternatively, would you have a more elegant method to answer my first question (2nd sentence)?
Thanks in advance for your help
The function kde in the ks package performs kernel density estimation for multivariate data with dimension ranging from 1 to 6.
The pdfCluster and np packages provide functions to perform kernel density estimation in higher dimensions.
If you prefer parametric techniques, you can look at R packages for Gaussian mixture estimation, such as mclust or mixtools.
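A hedged sketch of the ks::kde route applied to the density-difference idea; the matrix names X_M and X_F and the choice to evaluate at the observed points are assumptions (a full regular grid is infeasible in six dimensions):
# Hedged sketch: estimate the densities of M and F separately in the 6D space S,
# then compare them at a set of evaluation points. Can be slow in 6 dimensions.
library(ks)
# X_M, X_F: numeric matrices, one row per individual and the six variables as columns (assumed)
eval_pts <- rbind(X_M, X_F)
dens_M <- kde(x = X_M, eval.points = eval_pts)$estimate
dens_F <- kde(x = X_F, eval.points = eval_pts)$estimate
diff_d <- dens_M - dens_F
eval_pts[which.max(abs(diff_d)), ]                    # point with the largest density difference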
The ability to do this with GLM models may be constrained both by the interpretability issues you have already encountered and by numerical stability issues. Furthermore, you don't describe the GLM models, so it's not possible to see whether you allowed for non-linearity. If you have lots of data, you might consider using crossed 2D spline terms. (These are not really density estimates.) If I were doing an initial exploration with the facilities in the rms/Hmisc packages in five dimensions, it might look like:
library(rms)
dd <- datadist(dat)
options(datadist = "dd")
big.mod <- lrm( MF ~ ( rcs(var1, 3) +     # `lrm` is logistic regression in rms
                       rcs(var2, 3) +
                       rcs(var3, 3) +
                       rcs(var4, 3) +
                       rcs(var5, 3) )^2,  # all 2-way interactions
                data = dat,
                max.iter = 50 )           # these fits may take longer times
bplot( Predict(big.mod, var1, var2, n = 10) )
That should show the simultaneous functional form of var1's and var2's contribution to the "5 dimensional" model estimates at 10 points each and at the median value of the three other variables.

Resources