Why do statsmodels and R disagree on AIC computation?

I have googled this and could not find a solution.
It seems R has an issue with AIC/BIC calculation. It produces incorrect results. A simple example is shown below:
link = 'https://gist.githubusercontent.com/seankross/a412dfbd88b3db70b74b/raw/5f23f993cd87c283ce766e7ac6b329ee7cc2e1d1/mtcars.csv'
df = read.csv(link, row.names = 'model')
form = 'mpg ~ disp + hp + wt + qsec + gear'
my_model = lm(form, data = df)
summary(my_model)
cat('AIC:',AIC(my_model),'\tBIC:',AIC(my_model, k = log(nrow(df))))
AIC: 157.4512 BIC: 167.7113
Doing exactly the same thing in Python, I obtain:
import pandas as pd
from statsmodels.formula.api import ols as lm
link = 'https://gist.githubusercontent.com/seankross/a412dfbd88b3db70b74b/raw/5f23f993cd87c283ce766e7ac6b329ee7cc2e1d1/mtcars.csv'
df = pd.read_csv(link, index_col='model')
form = 'mpg ~ disp + hp + wt + qsec + gear'
my_model = lm(form, df).fit()
my_model.summary()
print(f'AIC: {my_model.aic:.4f}\tBIC: {my_model.bic:.4f}')
AIC: 155.4512 BIC: 164.2456
You can check summary(my_model) in R against my_model.summary() in Python and you will notice that the two models are EXACTLY the same in everything, apart from the AIC and BIC.
I decided to compute it manually in R:
p = length(coef(my_model)) # number of coefficients INCLUDING the intercept, i.e. 6
s = sqrt(sum(resid(my_model)^2)/nrow(df)) # ML estimate of sigma, i.e. sqrt(sigma(my_model)^2 * (nrow(df) - p)/nrow(df))
logl = -2 * sum(dnorm(df$mpg, fitted(my_model), s, log = TRUE)) # -2 * log-likelihood
c(aic = logl + 2*p, bic = logl + log(nrow(df))*p)
     aic      bic
155.4512 164.2456
Which matches the results produced by python.
Digging deeper, I noticed that AIC does use the logLik function, and that is where the problem arises: logLik(my_model) gives exactly the same value as the logl above before the multiplication by -2, but the df is given as 7 instead of 6.
If I brute-force the rank in order to make it 6, I get the matching results, i.e.:
my_model$rank = my_model$rank - 1
cat('AIC:',AIC(my_model),'\tBIC:',AIC(my_model, k = log(nrow(df))))
AIC: 155.4512 BIC: 164.2456
Why does R add 1 to the number of predictors? You can see the logLik method used by base R by typing stats:::logLik.lm in your RStudio console and pressing Enter. The two lines below seem to be where the issue arises:
function (object, REML = FALSE, ...)
{
...
p <- object$rank
...
attr(val, "df") <- p + 1 # This line here. Why does R ADD 1?
...
}

This is clearly a deliberate choice: R counts the scale parameter in the set of estimated parameters. From ?logLik.lm:
For ‘"lm"’ fits it is assumed that the scale has been estimated
(by maximum likelihood or REML)
(see also here, pointed out by @MrFlick in the comments). This kind of ambiguity (along with whether normalization constants are included in the log-likelihoods: in R, they are) always has to be checked before comparing results across platforms, and sometimes even across procedures or functions within the same platform.
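To make the bookkeeping concrete, here is a minimal sketch in R (reusing the model fitted above; nothing new here, just the AIC/BIC arithmetic) showing that counting sigma as an extra parameter shifts AIC by exactly 2 and BIC by log(n):
n  <- nrow(df)
p  <- length(coef(my_model))        # 6 coefficients including the intercept
ll <- as.numeric(logLik(my_model))  # same log-likelihood on both platforms
aic_r  <- -2*ll + 2*(p + 1)         # R counts p + 1 parameters  -> 157.4512
aic_sm <- -2*ll + 2*p               # statsmodels counts p       -> 155.4512
bic_r  <- -2*ll + log(n)*(p + 1)    # -> 167.7113
bic_sm <- -2*ll + log(n)*p          # -> 164.2456
c(aic_r - aic_sm, bic_r - bic_sm)   # exactly 2 and log(32), about 3.4657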
For what it's worth, there also seems to be a lot of discussion of this from the statsmodels side, e.g. this (closed) issue about why AIC/BIC are inconsistent between R and statsmodels ...
This commit in March 2002 shows Martin Maechler changing the "df" (degrees of freedom/number of model parameters) attribute back to object$rank+1 with the following additional annotations:
The help page ?logLik.lm gains:
Note that error variance \eqn{\sigma^2} is estimated in \code{lm()} and hence
counted as well.
(this message was obviously edited at some later point to the version seen above).
The NEWS file gains (under "BUG FIXES"):
o logLik.lm() now uses "df = p + 1" again (`+ sigma'!).
It was hard for me to do the archaeology back further than this (i.e. presumably based on the messages here the p+1 reckoning was originally used, then someone changed it to p instead, and MM changed it back in 2002), because functions moved around (this file was created in 2001, so earlier versions will be harder to find). I didn't find any discussion of this in the r-devel mailing list archive for Feb or Mar 2002 ...

Related

How to normalize a Lmer model?

lmer:
mixed.lmer6 <- lmer(Size ~ (Time + I(Time^2))*Country*STemperature +
                      (1|Country:Locality) + (1|Locality:Individual) + (1|Batch) +
                      (1|Egg_masses), REML = FALSE, data = data_NoNA)
residuals:
plot_model(mixed.lmer6, type = "diag")
I tried manual log, power, and sqrt transformations in my formula but saw no improvement, and I also cannot find a suitable automatic transformation function in R such as Box-Cox (which does not work for lmer models).
Any help or tips would be appreciated.
This might be better suited for CrossValidated ("what should I do?" is appropriate for CV; "how should I do it?" is best for Stack Overflow), but I'll take a crack.
The Q-Q plot is generally the last/least important diagnostic you should look at (the order should be approximately (1) check for significant bias/missed patterns in the mean [fitted vs. residual, residual vs. covariates]; (2) check for outliers/influential points [leverage, Cook's distance]; (3) check for heteroscedasticity [scale-location plot]; (4) check distributional assumptions [Q-Q plot]). The reason is that any of the "upstream" failures (e.g. missed patterns) will show up in the Q-Q plot as well; resolving them will often resolve the apparent non-Normality.
If you can fix the distributional assumptions by fixing something else about the model (adding covariates/adding interactions/adding polynomial or spline terms/removing outliers), then do that.
You could code your own brute-force Box-Cox, something like:
fitted_model <- lmer(..., data = mydata)
bcfun <- function(lambda, resp = "y") {
  y <- mydata[[resp]]
  mydata$newy <- if (lambda == 0) log(y) else (y^lambda - 1)/lambda
  ## https://stats.stackexchange.com/questions/261380/how-do-i-get-the-box-cox-log-likelihood-using-the-jacobian
  log_jac <- sum((lambda - 1)*log(y))
  newfit <- update(fitted_model, newy ~ ., data = mydata)
  return(-2*(c(logLik(newfit)) + log_jac))
}
lambdavec <- seq(-2, 2, by = 0.2)
boxcox <- vapply(lambdavec, bcfun, FUN.VALUE = numeric(1))
plot(lambdavec, boxcox - min(boxcox))
(lightly tested! but feel free to let me know if it doesn't work)
if you do need to fit a mixed model with a heavy-tailed residual distribution (e.g. Student t), the options are fairly limited. The brms package can fit such models (but takes you down the Bayesian/MCMC rabbit hole), and the heavy package (currently archived on CRAN) will work, but doesn't appear to handle crossed random effects.
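In case it helps, here is a minimal sketch of the brms route, assuming the same formula and data as in the question (priors, chains and iterations are left at their defaults; family = student() gives Student-t residuals):
library(brms)
mixed.brm6 <- brm(Size ~ (Time + I(Time^2))*Country*STemperature +
                    (1|Country:Locality) + (1|Locality:Individual) +
                    (1|Batch) + (1|Egg_masses),
                  family = student(), data = data_NoNA)
summary(mixed.brm6)  # nu is the estimated degrees-of-freedom of the t residuals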

Discrepancy emmeans in R (using ezAnova) vs estimated marginal means in SPSS

So this is a bit of a hail mary, but I'm hoping someone here has encountered this before. I recently switched from SPSS to R, and I'm now trying to do a mixed-model ANOVA. Since I'm not confident in my R skills yet, I use the exact same dataset in SPSS to compare my results.
I have a dataset with
dv = RT
within = Session (2 levels), Cue (3 levels), Flanker (2 levels)
between = Group(3 levels).
no covariates.
unequal number of participants per group level (25,25,23)
In R I'm using the ezANOVA function from the ez package to do the mixed-model ANOVA:
results <- ezANOVA(
  data = ant_rt_correct
  , wid = subject
  , dv = rt
  , between = group
  , within = .(session, cue, flanker)
  , detailed = T
  , type = 3
  , return_aov = T
)
In SPSS I use the following GLM:
GLM rt.1.center.congruent rt.1.center.incongruent rt.1.no.congruent rt.1.no.incongruent
rt.1.spatial.congruent rt.1.spatial.incongruent rt.2.center.congruent rt.2.center.incongruent
rt.2.no.congruent rt.2.no.incongruent rt.2.spatial.congruent rt.2.spatial.incongruent BY group
/WSFACTOR=session 2 Polynomial cue 3 Polynomial flanker 2 Polynomial
/METHOD=SSTYPE(3)
/EMMEANS=TABLES(group*session*cue*flanker)
/PRINT=DESCRIPTIVE
/CRITERIA=ALPHA(.05)
/WSDESIGN=session cue flanker session*cue session*flanker cue*flanker session*cue*flanker
/DESIGN=group.
The results of which line up great, ie:
R: Session F(1,70) = 46.123 p = .000
SPSS: Session F(1,70) = 46.123 p = .000
I also ask for the means per cell using:
descMeans <- ezStats(
  data = ant_rt_correct
  , wid = subject
  , dv = rt
  , between = group
  , within = .(session, cue, flanker) #,cue,flanker)
  , within_full = .(location, direction)
  , type = 3
)
Which again line up perfectly with the descriptives from SPSS, e.g. for the cell:
Group(1) - Session(1) - Cue(center) - Flanker(1)
R: M = 484.22
SPSS: M = 484.22
However, when I try to get to the estimated marginal means, using the emmeans package:
eMeans <- emmeans(results$aov, ~ group | session | cue | flanker)
I run into discrepancies compared to the Estimated Marginal Means table from the SPSS GLM output (for the same interactions), e.g.:
Group(1) - Session(1) - Cue(center) - Flanker(1)
R: M = 522.5643
SPSS: M = 484.22
It's been my understanding that the estimated marginal means should be the same as the descriptive means in this case, as I have not included any covariates. Am I mistaken in this? And if so, how come the two give different results?
Since the group sizes are unbalanced, I also redid the analyses above after making the groups of equal size. In that case the emmeans became:
Group(1) - Session(1) - Cue(center) - Flanker(1)
R: M = 521.2954
SPSS: M = 482.426
So even with equal group sizes in both conditions, I end up with quite different means. Keep in mind that the rest of the statistics and the descriptive means are equal between SPSS and R. What am I missing?
Thanks!
EDIT:
The plot thickens. If I perform the ANOVA using the afex package:
results <- aov_ez(
  "subject"
  , "rt"
  , ant_rt_correct
  , between = c("group")
  , within = c("session", "cue", "flanker")
)
and then take the emmeans again:
eMeans <- emmeans(results, ~ group | session | cue | flanker)
I suddenly get values much closer to that of SPSS (and the descriptive means)
Group(1) - Session(1) - Cue(center) - Flanker(1)
R: M = 484.08
SPSS: M = 484.22
So perhaps ezANOVA is doing something fishy somewhere?
I suggest you try this:
library(lme4)  ### I'm guessing you need to install this package first
mod <- lmer(rt ~ session + cue + flanker + (1|group),
            data = ant_rt_correct)
library(emmeans)
emm <- emmeans(mod, ~ session * cue * flanker)
pairs(emm, by = c("cue", "flanker"))      # simple comparisons for session
pairs(emm, by = c("session", "flanker"))  # simple comparisons for cue
pairs(emm, by = c("session", "cue"))      # simple comparisons for flanker
This fits a mixed model with random intercepts for each group. It uses REML estimation, which is likely to be what SPSS uses.
In contrast, ezANOVA fits a fixed-effects model (no within factor at all), and aov_ez uses the aov function which produces an analysis that ignores the inter-block effects. Those make a difference especially with unbalanced data.
An alternative is to use afex::mixed, which in fact uses lme4::lmer to fit the model.
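For what it's worth, a minimal sketch of that alternative, assuming the same data and variable names as in the question (the random-effects part is just an illustrative by-subject intercept):
library(afex)
library(emmeans)
m <- mixed(rt ~ group * session * cue * flanker + (1 | subject),
           data = ant_rt_correct)
emmeans(m$full_model, ~ group | session | cue | flanker)  # EMMs from the underlying lmer fit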

Using broom::tidy on felm result with clustered standard errors

I'm trying to extract point estimates and confidence intervals from a panel data model. The following reproduces the error using the canned example from the lfe documentation. The only small change I've made is to cluster standard errors at the firm-level to replicate my issue in est2.
library(lfe)    # needed for felm()
library(broom)  # needed for tidy() below
## create covariates
x <- rnorm(1000)
x2 <- rnorm(length(x))
## individual and firm
id <- factor(sample(20,length(x),replace=TRUE))
firm <- factor(sample(13,length(x),replace=TRUE))
## effects for them
id.eff <- rnorm(nlevels(id))
firm.eff <- rnorm(nlevels(firm))
## left hand side
u <- rnorm(length(x))
y <- x + 0.5*x2 + id.eff[id] + firm.eff[firm] + u
## estimate and print result
est1 <- felm(y ~ x+x2| id + firm)
summary(est1)
## estimate and print result with clustered std errors
est2 <- felm(y ~ x+x2| id + firm | 0 | firm)
summary(est2)
I can tidy the non-clustered-SE version, or either model without requesting the fixed effects:
tidy(est1)
tidy(est2)
tidy(est1, fe = TRUE)
But I can't if I ask for the fixed effects:
tidy(est2, fe = TRUE)
The error is this: Error in overscope_eval_next(overscope, expr) : object 'se' not found
I'm not sure if this is a broom side problem or an lfe side problem. It is possible I'm doing something wrong, but there should be point estimates and standard errors for the fixed effects whether or not I cluster the SEs. (And the fact that there are fewer clusters than FEs is probably an econometric issue, but it doesn't seem to be driving this particular problem.) Any suggestions?
The problem here is that lfe::getfe() is supposed to return the columns c('effect','se','obs','comp','fe','idx') according to its help page. However, if you run
lfe::getfe(est1, se = TRUE) and
lfe::getfe(est2, se = TRUE)
you will see that in the second instance the standard errors are in a column named clusterse instead of se.
The error message is a result of the function broom:::tidy.felm using lfe::getfe() and then dplyr::select(se).
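A hedged workaround sketch in the meantime (based only on the column-name mismatch described above): pull the fixed effects yourself with getfe() and rename the cluster-SE column so downstream code that expects se works:
fe2 <- lfe::getfe(est2, se = TRUE)
names(fe2)[names(fe2) == "clusterse"] <- "se"  # align with the documented column name
head(fe2)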
I guess technically it's an lfe problem, but I'm not sure which package will be easier to amend.
Update: I emailed Simen Gaure (the package author) and he'll be releasing to CRAN some time this spring.

LMER Factor vs numeric Interaction

I am attempting to use lmer to model my data.
My data has 2 independent variables and a dependent variable.
The first is "Morph" and has values "Identical", "Near", "Far".
The second is "Response" which can be "Old" or "New".
The dependent variable is "Fix_Count".
So here is a sample dataframe and what I currently have for running the linear model.
Subject <- c(rep(1, times = 6), rep(2, times = 6))
q <- c("Identical", "Near", "Far")
Morph <- c(rep(q, times = 4))
t <- c(rep("old", times = 3),rep("new", times=3))
Response <- c(rep(t, times = 2))
Fix_Count <- sample(1:9, 12, replace = T)
df.main <- data.frame(Subject,Morph, Response, Fix_Count, stringsAsFactors = T)
df.main$Subject <- as.factor(df.main$Subject)
res = lmer(Fix_Count ~ (Morph * Response) + (1|Subject), data=df.main)
summary(res)
And the output looks like this (summary table not reproduced here):
The issue is that I do not want it to estimate every level combination, but rather an overall interaction of Morph:Response.
I can get it to do this by converting Morph to numeric instead of factor. However, I'm not sure that conceptually makes sense, as the values don't properly represent 1, 2, 3 but rather low-mid-high (ordered but qualitative).
So: 1. Is it possible to run lmer to get interaction effects between 2 factor variables?
2. Or do you think numeric is a fine way to class "Identical", "Near", "Far"?
3. I have tried setting contrasts to see if that can help, but sometimes I get an error and other times it seems like nothing is changed. If contrasts would help, could you explain how I would implement this?
Thank you so much for any help you can offer. I have also posted this question to stack exchange as I am unsure if this is a coding issue or a stats issue. However I can remove it from the less relevant forum once I know.
Best, Kirk
Two problems I see. First, you should be using a factor variable for Subject. It's clearly not a continuous or integer variable. And to (possibly) address part of your question, there is an interaction function designed to work with regression formulas. I'm pretty sure that the formula interface will interpret the "*" operator that you used as a call to interaction, but the labeling of the output may be different and perhaps more to your liking. I get the same number of coefficients with:
res = lmer(Fix_Count ~ interaction(Morph , Response) + (1|Subject), data=df.main)
But that's not an improvement.
However, they differ from the model created with Morph*Response. Probably there is a different set of contrast options.
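On the contrasts part of the question, here is a small sketch of one option, assuming the Morph levels are meant to run low to high: declaring Morph as an ordered factor makes R use orthogonal polynomial contrasts, so you get linear and quadratic Morph terms rather than one dummy per level:
df.main$Morph <- factor(df.main$Morph,
                        levels = c("Identical", "Near", "Far"),
                        ordered = TRUE)
res_ord = lmer(Fix_Count ~ (Morph * Response) + (1|Subject), data = df.main)
summary(res_ord)  # shows Morph.L (linear) and Morph.Q (quadratic) terms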
The way to get an overall statistical test of the interaction is to compare nested models:
res_simple = lmer(Fix_Count ~ Morph + Response + (1|Subject), data=df.main)
And then do an anova for the model comparison:
anova(res,res_simple)
refitting model(s) with ML (instead of REML)
Data: df.main
Models:
res_simple: Fix_Count ~ Morph + Response + (1 | Subject)
res: Fix_Count ~ interaction(Morph, Response) + (1 | factor(Subject))
           Df    AIC    BIC  logLik deviance  Chisq Chi Df Pr(>Chisq)
res_simple  6 50.920 53.830 -19.460   38.920
res         8 54.582 58.461 -19.291   38.582 0.3381      2     0.8445
My opinion is that it is sufficiently close to the boundary between stats and coding that it could have been acceptable on either forum. (You are not supposed to cross-post, however.) If you are satisfied with a coding answer then we are done. If you need help with understanding model comparison, then you may need to edit your CrossValidated question to request a more theory-based answer than mine. (I checked to make sure the anova results are the same regardless of whether you use the interaction function or the "*" operator.)

How to write a function to check model assumptions for a linear model in R?

I'm making a lot of models in R and trying to check the model assumptions for all of them. It would be awesome if I could write a function to do it all in one go, but it doesn't seem to be working.
I have:
assumptionfunction <- function(y, modelobject){
  plot(x)
  plot(y, x$residuals)
  qqnorm(x$residuals)
}
And I'm getting lots of errors.
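For reference, a minimal sketch of what the function above probably needs (assuming y is the response vector and modelobject a fitted lm object): the body refers to an undefined x instead of the arguments it declares.
assumptionfunction <- function(y, modelobject){
  plot(modelobject)                  # standard lm diagnostic plots
  plot(y, residuals(modelobject))    # response vs. residuals
  qqnorm(residuals(modelobject))     # normal Q-Q plot of the residuals
}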
Instead of creating your own function, you can use an existing one. The beautiful check_model() function from the performance package does just that:
library(performance)
library(see)
model <- lm(mpg ~ wt * cyl + gear, data = mtcars)
check_model(model)
If you insist on using some objective tests, there is the gvlma package.
library(gvlma)
gvlma(model)
ASSESSMENT OF THE LINEAR MODEL ASSUMPTIONS
USING THE GLOBAL TEST ON 4 DEGREES-OF-FREEDOM:
Level of Significance = 0.05

                       Value p-value                Decision
Global Stat         1.770046  0.7780 Assumptions acceptable.
Skewness            0.746520  0.3876 Assumptions acceptable.
Kurtosis            0.003654  0.9518 Assumptions acceptable.
Link Function       0.927065  0.3356 Assumptions acceptable.
Heteroscedasticity  0.092807  0.7606 Assumptions acceptable.
Now if you don't like gvlma because it doesn't explicitly name the tests used, and gives Skewness and Kurtosis but not overall normality from, say, Shapiro-Wilk, I made a convenience function. It gets all the test names and assumptions at once, with the total number of assumptions that are not respected. You can take it and modify it to suit your needs.
# Load the function:
source("https://raw.githubusercontent.com/RemPsyc/niceplots/master/niceAssFunction.R")
View(niceAss(model))
Interpretation: (p) values < .05 imply assumptions are not respected.
Diagnostic is how many assumptions are not respected for a given model or variable.
Applied to a list of models:
# Define our dependent variables
(DV <- names(mtcars[-1]))
# Make list of all formulas
(formulas <- paste(DV, "~ mpg"))
# Make list of all models
models.list <- sapply(X = formulas, FUN = lm, data = mtcars, simplify = FALSE, USE.NAMES = TRUE)
# Make diagnostic table
(ass.table <- do.call("rbind", lapply(models.list, niceAss)))
# Use the Viewer for better results
View(ass.table)
