Model selection with beta and quasi families using gamm4 [closed] - r

I have two responses, one following a beta (also known as betar) distribution and one following a Poisson distribution, and I am looking into fitting additive mixed models with the beta and quasi-Poisson families respectively (the count data are over-dispersed).
I am aware that I could use the gamm function from the mgcv package, which accepts both beta and quasi families; however, my concern is that it uses PQL, so the reported AIC is not useful for comparing models, which is the primary objective of my analyses.
In the case of the count response, I am aware that QAIC has been used to rank/compare over-dispersed mixed models, but I cannot find anything that says it is appropriate for over-dispersed GAMMs.
I understand these are potentially two questions in one, but they share a common theme of model selection with extended families, and they may have different solutions. Below I provide a reproducible example for each case.
## generate data
library(gamm4)
library(mgcv)
dat <- gamSim(1, n = 400, scale = 2)
dat <- subset(dat, select = c(x0, x1, x2, x3, f))
dat$g <- as.factor(sample(1:20, 400, replace = TRUE))  # random factor
dat$yb <- runif(400)  # yb lies between 0 and 1, hence fitted with the beta family
dat$f <- dat$f + model.matrix(~ g - 1) %*% rnorm(20) * 2  # add group effects
dat$yp <- rpois(400, exp(dat$f / 7))  # yp is counts, hence Poisson family
# beta-family example with the gamm function (this runs, but I am not sure
# whether the subsequent model comparisons are valid!)
m1b <- gamm(yb ~ s(x0) + s(x1) + s(x2) + s(x3), family = betar(link = 'logit'),
            data = dat, random = list(g = ~1))
m2b <- gamm(yb ~ s(x1) + s(x2) + s(x3), family = betar(link = 'logit'),
            data = dat, random = list(g = ~1))
m3b <- gamm(yb ~ s(x0) + s(x2) + s(x3), family = betar(link = 'logit'),
            data = dat, random = list(g = ~1))
# AIC to compare models
AIC(m1b, m2b, m3b)
# try the same with gamm4 (ideally) - this fails because gamm4 does not
# accept the beta family
m <- gamm4(yb ~ s(x0) + s(x1) + s(x2) + s(x3), family = betar(link = 'logit'),
           data = dat, random = ~ (1|g))
## example with a quasi family - the yp response is over-dispersed count data
## (it may not actually be over-dispersed in this simulated example)
# example using the gamm function
m1p <- gamm(yp ~ s(x0) + s(x1) + s(x2) + s(x3), family = quasipoisson,
            data = dat, random = list(g = ~1))
m2p <- gamm(yp ~ s(x1) + s(x2) + s(x3), family = quasipoisson,
            data = dat, random = list(g = ~1))
m3p <- gamm(yp ~ s(x0) + s(x2) + s(x3), family = quasipoisson,
            data = dat, random = list(g = ~1))
# AIC to compare models
AIC(m1p, m2p, m3p)
# again the gamm4 version will not work, as gamm4 does not accept quasi families
m <- gamm4(yp ~ s(x0) + s(x1) + s(x2) + s(x3), family = quasipoisson,
           data = dat, random = ~ (1|g))

You have a bunch of questions here, but I'll try to tackle them. Basically, you want to fit parametric statistical models with
- random effects (nlme, lme4)
- distributions from the exponential family ... (MASS::glmmPQL, lme4::glmer)
- ... with overdispersion ...
- ... or distributions beyond the exponential family, such as the Beta distribution (VGAM, betareg)
- additive models/splines (splines) ...
- ... or penalized regression splines, which automatically adjust the complexity of the smooth terms
- ... with a real likelihood model rather than a marginal or quasi-likelihood model (e.g. GEEs, PQL), so you can do classic inference
Each of the specified issues above adds 1 or more "difficulty points" to a model-fitting exercise ... usually once your score goes beyond about +3 or so, you have to find a way to compromise or take shortcuts on some of the things you want. You've correctly identified gamm and gamm4 as doing some of the stuff you want, but you can't get everything. Some suggestions:
Overdispersion
One way to handle overdispersion is with an observation-level random effect, e.g.
dat$obs <- factor(seq(nrow(dat)))  # one level per observation
m <- gamm4(yp ~ s(x0) + s(x1) + s(x2) + s(x3),
           family = poisson, data = dat, random = ~ (1|g) + (1|obs))
Another alternative is to adjust for the overdispersion yourself, if you think that makes sense, e.g.:
m0 <- gamm4(yp ~ s(x0) + s(x1) + s(x2) + s(x3), family = poisson,
            data = dat, random = ~ (1|g))
First compute overdispersion:
(phi <- sum(residuals(m0$gam, type = "pearson")^2) / df.residual(m0$gam))
## [1] 1.003436
(if we repeat this exercise with m0$mer instead we get 0.9939696: the result is almost exactly equal to 1 because we generated data from a Poisson distribution in the first place ...)
(qaic <- -2 * logLik(m0$mer) / phi + 2 * lme4:::npar.merMod(m0$mer))
# (npar.merMod is internal to lme4; attr(logLik(m0$mer), "df") should give the same count)
N.B. I am guessing that it makes sense to construct the likelihoods, etc. from the individual components of a gamm4 fit in this way; use at your own risk
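If you wanted to push this further and rank several candidate models, one approach (my extrapolation, not part of the original answer) is to refit each candidate with gamm4 and reuse phi from the richest model, which is the usual QAIC convention; the same use-at-your-own-risk caveat applies:
# hedged sketch: QAIC comparison across gamm4 fits, reusing phi from the
# richest model; relies on the same guesses about gamm4 components as above
qaic4 <- function(fit, phi) {
  ll <- logLik(fit$mer)
  as.numeric(-2 * ll) / phi + 2 * attr(ll, "df")
}
m1 <- gamm4(yp ~ s(x1) + s(x2) + s(x3), family = poisson,
            data = dat, random = ~ (1|g))
c(full = qaic4(m0, phi), reduced = qaic4(m1, phi))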
Alternative distributions
The glmmADMB and glmmTMB packages (both off-CRAN but findable via Google ...) can both handle mixed Beta models. They can't do penalized regression splines, but you can use regular splines via splines::ns() or splines::bs() (but you do have to decide on the appropriate level of complexity -- maybe you can guess from preliminary gamm or mgcv fits ...)
library(glmmADMB)
library(splines)
m3b <- glmmadmb(yb ~ ns(x0, 2) + ns(x1, 2) + ns(x2, 5) + ns(x3, 2) + (1|g),
                family = "beta", link = "logit", data = dat)
The glmmTMB package can in principle do this:
library(glmmTMB)
m2b <- glmmTMB(yb ~ ns(x0, 2) + ns(x1, 2) + ns(x2, 5) + ns(x3, 2) + (1|g),
               family = list(family = "beta", link = "logit"), data = dat)
but the package is in development and the current set of results don't make sense -- so I might hesitate to use it at this point.

Related

Heteroscedastic model of mixed effects via lmer function

I am fitting a mixed-effects model in which, because of the observed heteroscedasticity, it was necessary to include an effect to accommodate it. Using the lme function from the nlme package, this was easy to solve; see the code below:
library(nlme)
library(lme4)
Model1 <- lme(log(Var1) ~ log(Var2) + log(Var3) + Var4 + Var5,
              random = ~ 1 | Var6, Data1, method = "REML",
              weights = varIdent(form = ~ 1 | Var7))
# Var6: a factor with several levels
# Var7: a dummy variable
However, I need to refit the model described above using the lme4 package, that is, with the lmer function. It is well documented that lme4 has some limitations, for example in modeling heteroscedasticity. What motivated me to refit this model is that I want to use a specific package that, for mixed models, only accepts fits produced by the lmer function. How can I resolve this situation? Below is most of the model fitted with lmer; however, this version does not include the effect that models the observed heteroscedasticity.
Model2 <- lmer(log(Var1)~log(Var2)+log(Var3)+
(Var4)+(Var5)+(1|Var6),
Data1, REML=T)
The choice of the random effect (Var6) and the inclusion of the effect accounting for heterogeneity across the levels of Var7 were carefully analyzed; I will not reproduce the whole procedure here, to keep the post short and objective.
This is hackable. You need to add an observation-level random effect that is only applied to the group with the larger residual variance (you need to know this in advance!), via (0+dummy(Var7,"1")|obs); this has the effect of multiplying each observation-level random effect value by 1 if the observation is in group "1" of Var7, 0 otherwise. You also need to use lmerControl() to override a few checks that lmer does to try to make sure you are not adding redundant random effects.
Data1$obs <- factor(seq(nrow(Data1)))  # observation-level factor
Model2 <- lmer(log(Var1) ~ log(Var2) + log(Var3) + Var4 + Var5 +
                 (1 | Var6) + (0 + dummy(Var7, "1") | obs),
               Data1, REML = TRUE,
               control = lmerControl(check.nobs.vs.nlev = "ignore",
                                     check.nobs.vs.nRE = "ignore"))
all.equal(REMLcrit(Model2), c(-2*logLik(Model1))) ## TRUE
all.equal(fixef(Model1), fixef(Model2), tolerance=1e-7)
If you want to use this model with hnp you need to work around the fact that hnp doesn't pass the lmerControl option properly.
library(hnp)
d <- function(obj) resid(obj, type = "pearson")  # diagnostic function
s <- function(n, obj) simulate(obj)[[1]]         # simulation function
f <- function(y.) refit(Model2, y.)              # refitting function
hnp(Model2, newclass = TRUE, diagfun = d, simfun = s, fitfun = f)
You might also be interested in the DHARMa package, which does similar simulation-based diagnostics.
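For instance, a minimal DHARMa sketch (my addition, assuming the Model2 fit from above):
# simulation-based residual diagnostics with DHARMa for the lmer fit above
library(DHARMa)
sims <- simulateResiduals(fittedModel = Model2, n = 250)
plot(sims)            # QQ plot plus residual-vs-predicted diagnostics
testDispersion(sims)  # simulation-based dispersion test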

GAM with only Categorical/Logical

I'm currently trying to use a GAM to calculate a rough expected-goals model based purely on the commentary data from ESPN. However, all of my variables are either categorical or logical, so I'm not sure whether there is a way to smooth, or whether I should just use the factor names.
Here are my variables:
shot_where (factor): shot location (e.g. right side of the box)
assist_class (factor): type of assist (cross, through ball, pass)
follow_corner (logical): whether the shot follows a corner
shot_with (factor): right foot, left foot, header
follow_set_piece (logical): whether the shot follows a set piece
I think I should just enter the variable names directly in the formula.
model <- bam(is_goal ~ shot_where + assist_class + follow_set_piece + shot_where +
               follow_corner + shot_where:shot_with,
             data = model_data, family = "binomial", method = "REML")
The shot_where:shot_with term would capture any interaction between these two variables.
However, I was told I could smooth factor variables as well using the below structure.
model <- bam(is_goal ~ s(shot_where, bs = 'fs') + s(assist_class, bs = 'fs') +
               as.logical(follow_set_piece) + as.logical(follow_corner) +
               s(shot_with, bs = 'fs'),
             data = model_data, family = "binomial", method = "REML")
This worked for creating a model, but I want to make sure this is a correct method of building the model. I've yet to see any information on using only factor/logical variables in a GAM model, so I thought it was worth asking.
If you only have categorical covariates then you aren't fitting a GAM, whether you fit the model with gam(), bam(), or something else.
What you are doing when you pass factor variables to s() using the fs basis like this
s(f, bs = 'fs')
is creating a random intercept for each level of the factor f.
There's no smoothing going on here at all; the model is simply exploiting the equivalence of the Bayesian view of smoothing with random effects.
Given that none of your covariates could reasonably be considered random in the sense of a mixed-effects model, the only justification for doing what you're doing might be as a computational trick.
Your first model is just a simple GLM (note the typo in the formula: shot_where appears twice).
It's not clear to me why you are using bam() to fit this model; you're losing the computational efficiency that bam() provides by using method = 'REML', which should be 'fREML' for bam() models. But as there is no smoothness selection going on in the first model, you'd likely be better off using glm() to fit it. If the issue is a large sample size, there are several packages that can fit GLMs to large data, for example biglm and its bigglm() function.
In the second model there is no smoothing going on, but there is penalisation, which shrinks the estimates of the random intercepts toward zero. You're likely to get better performance on big data using the lme4 package, or TMB and the glmmTMB package, to fit what is effectively a GLMM.
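To make the random-effect equivalence above concrete, here is a minimal sketch (my illustration, on simulated data, using the bs = 're' random-effect basis, the canonical choice for a lone factor):
# a factor passed to s() with a random-effect basis is the same model as a
# random intercept in lmer; the variance components should agree
library(mgcv)
library(lme4)
set.seed(42)
d <- data.frame(f = factor(sample(letters[1:10], 500, replace = TRUE)))
d$y <- rnorm(10)[d$f] + rnorm(500)
m_gam  <- gam(y ~ s(f, bs = "re"), data = d, method = "REML")
m_lmer <- lmer(y ~ (1 | f), data = d)
gam.vcomp(m_gam)  # standard deviations of the "smooth"
VarCorr(m_lmer)   # should match, up to numerical tolerance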
This is more of a theoretical question than a question about R, but let me provide a brief answer. Essentially, the most flexible model you could estimate is one where you use the variables as factors. It also produces a model that is reasonably easy to interpret: each coefficient gives you the expected difference in y between the reference level and the level represented by the dummy regressor.
Smoothing splines try to strike the appropriate bias-variance tradeoff. If you've got lots of data and relatively few categories in the categorical variables, there will be no real loss in efficiency for including all of the dummy regressors representing the categories and the bias will also be as small as possible. To the extent that the smoothing spline model is different from the one treating everything as factors, it is likely inducing bias without a corresponding increase in efficiency. If it were me, I would stick with a model that treats all of the categorical variables as factors.

R language, how to use bootstraps to generate maximum likelihood and AICc?

Sorry for a quite stupid question. I am doing multiple comparisons of morphological traits through correlations of bootstrapped data. I'm curious whether such multiple comparisons affect my level of inference, and also about the effect of potential multicollinearity in my data. Perhaps a reasonable option would be to use my bootstraps to generate maximum likelihoods and then AICc values for comparisons with all of my parameters, to see what comes out as most important... the problem is that, although the way forward is (more or less) clear to me, I don't know how to implement it in R. Can anybody be so kind as to throw some light on this for me?
So far, here is an example (using R, but not my data):
library(boot)
data(iris)
head(iris)
# The statistic function: Pearson correlation plus the two medians
pearson <- function(data, indices) {
  dt <- data[indices, ]
  c(cor(dt[, 1], dt[, 2], method = 'p'),
    median(dt[, 1]),
    median(dt[, 2]))
}
# One example: iris$Sepal.Length ~ iris$Sepal.Width
# Bootstrap the Pearson correlation with 1000 replications
set.seed(12345)
dat <- iris[,c(1,2)]
dat <- na.omit(dat)
results <- boot(dat, statistic=pearson, R=1000)
# 95% CIs
boot.ci(results, type="bca")
BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS
Based on 1000 bootstrap replicates
CALL :
boot.ci(boot.out = results, type = "bca")
Intervals :
Level BCa
95% (-0.2490, 0.0423 )
Calculations and Intervals on Original Scale
plot(results)
I have several more pairs of comparisons.
More of a Cross Validated question.
Multicollinearity shouldn't be a problem if you're just assessing the relationship between two variables (in your case correlation). Multicollinearity only becomes an issue when you fit a model, e.g. multiple regression, with several highly correlated predictors.
Multiple comparisons are always a problem, though, because they inflate your type-I error rate. The way to address that is a multiple-comparison correction, e.g. Bonferroni-Holm or the less conservative FDR. That can have its downsides, though, especially if you have many predictors and few observations: it may lower your power so much that you won't be able to find any effect, no matter how big it is.
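As a quick illustration (my addition), base R's p.adjust() applies these corrections directly:
# applying multiple-comparison corrections to hypothetical p-values
# from several correlation tests
pvals <- c(0.001, 0.008, 0.02, 0.04, 0.20)  # hypothetical p-values
p.adjust(pvals, method = "holm")  # Bonferroni-Holm
p.adjust(pvals, method = "BH")    # Benjamini-Hochberg FDR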
In a high-dimensional setting like this, your best bet may be some sort of regularized regression method. With regularization, you put all the predictors into your model at once, similarly to multiple regression; the trick is that you constrain the model so that all of the regression slopes are pulled towards zero, and only the ones with big effects "survive". The machine-learning versions of regularized regression are called ridge, LASSO, and elastic net, and they can be fitted with the glmnet package. There are also Bayesian equivalents in so-called shrinkage priors, such as the horseshoe (see e.g. https://avehtari.github.io/modelselection/regularizedhorseshoe_slides.pdf). You can fit Bayesian regularized regression using the brms package.
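A minimal glmnet sketch (my addition, on simulated data, so all names here are made up):
# cross-validated elastic net; alpha = 1 is the LASSO, alpha = 0 is ridge
library(glmnet)
set.seed(1)
X <- matrix(rnorm(100 * 20), 100, 20)  # hypothetical predictor matrix
y <- 2 * X[, 1] - X[, 2] + rnorm(100)  # only two true effects
cvfit <- cv.glmnet(X, y, alpha = 0.5)  # lambda chosen by cross-validation
coef(cvfit, s = "lambda.1se")          # most slopes shrunk to exactly zero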

Variable selection methods

I have been doing variable selection for a modeling problem.
I have used trial and error for the selection (adding/removing a variable and checking for a decrease in error). However, as the number of variables grows into the hundreds, manual variable selection becomes impossible, since the model takes half an hour to compute.
Would you happen to know of any packages other than regsubsets from the leaps package? (When tested with the same trial-and-error variables it produced a higher error: it excluded some variables which were linearly dependent, thereby dropping some valuable variables.)
You need a better (i.e. not flawed) approach to model selection. There are plenty of options, but one that should be easy to adapt to your situation would be using some form of regularization, such as the Lasso or the elastic net. These apply shrinkage to the sizes of the coefficients; if a coefficient is shrunk from its least squares solution to zero, that variable is removed from the model. The resulting model coefficients are slightly biased but they have lower variance than the selected OLS terms.
Take a look at the lars, glmnet, and penalized packages.
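For example, a lasso-based selection sketch with glmnet (my addition; 'mydata' and its response 'y' are hypothetical):
# variables whose coefficients are shrunk to exactly zero drop out of the model
library(glmnet)
X <- model.matrix(y ~ . - 1, data = mydata)        # hypothetical data frame
cvfit <- cv.glmnet(X, mydata$y, alpha = 1)         # alpha = 1 -> lasso
cf <- as.matrix(coef(cvfit, s = "lambda.min"))
rownames(cf)[cf[, 1] != 0]                         # names of the selected variables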
Try using the stepAIC function from the MASS package.
Here is a really minimal example:
library(MASS)
data(swiss)
str(swiss)
fit <- lm(Fertility ~ ., data = swiss)  # avoid naming the object 'lm'
fit$coefficients
## (Intercept) Agriculture Examination   Education    Catholic
##  66.9151817  -0.1721140  -0.2580082  -0.8709401   0.1041153
## Infant.Mortality
##        1.0770481
st1 <- stepAIC(fit, direction = "both")
st2 <- stepAIC(fit, direction = "forward")
st3 <- stepAIC(fit, direction = "backward")
summary(st1)
summary(st2)
summary(st3)
You should try all three directions and check which model works best with your test data.
Read ?stepAIC and take a look at the examples.
EDIT
It's true that stepwise regression isn't the greatest method. As mentioned in GavinSimpson's answer, lasso regression is a better, much more efficient method. It's much faster than stepwise regression and will work with large datasets.
Check out the glmnet package vignette:
http://www.stanford.edu/~hastie/glmnet/glmnet_alpha.html

Pseudo R squared for cumulative link function

I have an ordinal dependent variable and am trying to use a number of independent variables to predict it. I use R. The function I use is clm in the ordinal package, which fits a cumulative link model with, to be precise, a probit link.
I tried the function pR2 in the pscl package to get the pseudo-R-squared, with no success.
How do I get pseudo R squareds with the clm function?
Thanks so much for your help.
There are a variety of pseudo-R^2 measures. I don't like to use any of them, because I do not see the results as having a meaning in the real world. They do not estimate effect sizes of any sort, and they are not particularly good for statistical inference. Furthermore, in situations like this with multiple observations per entity, I think it is debatable which value of "n" (the number of subjects) or which degrees of freedom are appropriate. Some people use McFadden's R^2, which is relatively easy to calculate, since clm returns a list with one of its values named "logLik". You just need to know that the log-likelihood is only a multiplicative constant (-2) away from the deviance. If one had the model in the first example:
library(ordinal)
data(wine)
fm1 <- clm(rating ~ temp * contact, data = wine)
fm0 <- clm(rating ~ 1, data = wine)
( McF.pR2 <- 1 - fm1$logLik/fm0$logLik )
[1] 0.1668244
I had seen this question on CrossValidated and was hoping to see the more statistically sophisticated participants over there take this one on, but they saw it as a programming question and dumped it over here. Perhaps their opinion of R^2 as a worthwhile measure is as low as mine?
I recommend using the nagelkerke function from the rcompanion package to get pseudo-R-squared values.
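A minimal sketch (my addition, reusing the fm1 clm fit from the example above):
# Nagelkerke and other pseudo-R-squared values for the clm fit
library(rcompanion)
nagelkerke(fm1)  # reports McFadden, Cox and Snell, and Nagelkerke pseudo-R^2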
When your predictor or outcome variables are categorical or ordinal, the R-squared will typically be lower than with truly numeric data. R-squared is only a very weak indicator of model fit anyway, and you shouldn't choose a model based on it alone.
