Fit a linear mixed model with only 1 group - R

Is it possible to force an lmer model with a random effect to be fitted on data with only one grouping level? We want to keep the same model structure in the rare case where our data contain only one grouping level. The following illustrates the error.
library(lme4)
#> Loading required package: Matrix
sleepstudy$Subject <- as.character(sleepstudy$Subject)
ss <- sleepstudy[sleepstudy$Subject == "308", ]
m1 <- lmer(Reaction ~ Days + (1 | Subject), ss)
#> Error: grouping factors must have > 1 sampled level
Note that we are fixing the variance (see the previous question: Fixing variance values in lme4), so we do not need to estimate it.
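One possible workaround (a sketch, not an official recommendation): lmerControl exposes the check that raises this error, and it can be downgraded to "ignore". This only makes sense here because the variance component is being fixed externally rather than estimated.
## Sketch: relax the "grouping factors must have > 1 sampled level" check.
## The single-level variance component is not meaningfully estimable; we rely
## on it being fixed elsewhere (see the linked question).
m1 <- lmer(Reaction ~ Days + (1 | Subject), ss,
           control = lmerControl(check.nlev.gtr.1 = "ignore"))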

Related

Why is R removing some residuals and how to avoid it?

I am creating linear models in R and testing them for model assumptions.
I noticed that when I create my models, R removes some residuals, giving this:
(2 observations deleted due to missingness)
This prevents me from checking the relationship between the independent variable and the residuals, and blocks any further analysis, because x and y end up with different lengths.
Edit: Do you have any ideas on how to fix this?
R isn't removing residuals when you run lm(). Rather, it cannot compute residuals for observations that have missing data in any model variable (nor can it use them in the fit). The summary(model_5) output is simply notifying you that those observations were dropped.
To correlate the residuals with the independent variable when their lengths differ, and when for some reason we cannot simply locate and remove the incomplete rows (e.g., if dataset[!complete.cases(dataset), ] isn't workable), we need another way to determine which observations were kept in the model. We can rely on an observation ID or on the dataset's row names for this.
Example
# sample data: one NA in each variable, in different rows
set.seed(12345)
dataset <- data.frame(indep_var = c(NA, rnorm(9)), dep_var = c(rnorm(9), NA))
dataset$index <- rownames(dataset)
# model residuals (named by the row names of the rows actually used)
resid <- lm(dep_var ~ indep_var, data = dataset)$residuals
dataset.resid <- data.frame(index = names(resid), resid)
# join the residuals back to the variables by observation identifier
cor.data <- dplyr::inner_join(dataset.resid, dataset, by = "index")
# correlation analysis
cor.test(~ resid + indep_var, data = cor.data)
Note that names(resid) are the row names of the observations from dataset that were actually used in the model. Any rows of dataset dropped due to missingness will not appear in names(resid).
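A simpler alternative worth knowing (standard base R, not part of the original answer): fit with na.action = na.exclude, which makes residuals() pad the dropped observations with NA so the lengths line up:
m <- lm(dep_var ~ indep_var, data = dataset, na.action = na.exclude)
length(residuals(m)) == nrow(dataset)       # TRUE: dropped rows padded with NA
cor.test(dataset$indep_var, residuals(m))   # cor.test uses complete pairs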

Why is lavaan doing listwise deletion when specifying missing="fiml"?

A lot of my colleagues use FIML in Mplus to address missing data. I'm working on a method-comparison study to illustrate some of the advantages and disadvantages of FIML versus other imputation approaches. All my analysis is done in R, so I was hoping to compare treatment effects from FIML using lavaan.
I immediately run into two issues:
1) The apparent inability to specify a model comparable to random intercepts from lmer when we have unbalanced groups,
and, the most puzzling one,
2) The fact that when running a regression in lavaan with missing data (generated under MAR), the estimates are IDENTICAL to those we get from listwise deletion, both in lavaan and using lm().
I generated an example below to illustrate the problem:
## Stack Overflow example
set.seed(1234)
library(MASS)
x <- rnorm(800, 0, 3)
site <- c(rep(1:30, 20), rep(1:8, 30))[1:800]
# random intercepts by site
reff <- rnorm(30, 1, 1)
reffi <- reff[site]
site_id <- as.factor(site)
# treatment assigned at the site level
t <- rbinom(30, 1, .5)
treat <- t[site]
## Assume there's an interaction we don't know about, so that the treatment
## effect would be underestimated if we don't address the missingness correctly
y <- rnorm(800, 3 + x + 1 * (treat * (x - mean(x))^2) + reffi, .2)
data <- data.frame(y = y, x = x, treat = treat, reffi = reffi,
                   site_id = as.ordered(site_id))
## MAR mechanism: missingness in y depends on x
library(arm)
p <- invlogit(-3 + x)
missing_y <- rbinom(800, 1, p)
y_miss <- y
y_miss[missing_y == 1] <- NA
data$y_miss <- y_miss
## Say that for our analysis we only care about the average treatment effect,
## not its interactions. Ideally we would consider random intercepts.
library(lme4)
## Results without missingness
summary(lmer(y ~ x + treat + (1 | site_id), data = data))
## Results with missingness
summary(lmer(y_miss ~ x + treat + (1 | site_id), data = data))
library(lavaan)
## But we can't run random intercepts with lavaan for unbalanced groups, right?
## So, looking at simple regression for comparison with lavaan regression.
## Results without missingness
lm_full <- lm(y ~ x + treat, data = data)
## Results with missingness
lm_miss <- lm(y_miss ~ x + treat, data = data)
summary(lm_full)
summary(lm_miss)
## Nothing changes: the results when specifying FIML are identical to linear
## regression with listwise deletion
summary(lavaan::sem('
  y_miss ~ x + treat
  y_miss ~ 1
', data, meanstructure = TRUE, missing = "listwise"))
summary(lavaan::sem('
  y_miss ~ x + treat
  y_miss ~ 1
', data, meanstructure = TRUE, missing = "fiml"))
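To see the coincidence directly, the slope estimates can be tabulated side by side (a small sketch reusing the objects above; fit_fiml is just a named copy of the FIML fit):
fit_fiml <- lavaan::sem('
  y_miss ~ x + treat
  y_miss ~ 1
', data, meanstructure = TRUE, missing = "fiml")
# lavaan names its coefficients "y_miss~x" etc.; lm names them "x", "treat"
round(cbind(lm_listwise = coef(lm_miss)[c("x", "treat")],
            lavaan_fiml = coef(fit_fiml)[c("y_miss~x", "y_miss~treat")]), 4)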
Could anyone help me understand why the estimates and their SEs are identical?
Is this what we should expect whenever there is only one outcome and we use FIML? Would this work differently in Mplus? Is it a bug in lavaan?
As a bonus, if anyone knows how to run a random-intercepts model with FIML in R for unbalanced groups, I'd appreciate a pointer to the package or reading.
Thank you!

R: glmrob can't predict models with dropped collinear columns, while glm can?

I'm learning to implement robust GLMs in R, but can't figure out why glmrob won't predict values from models in which some columns were dropped due to collinearity. Specifically, when I use the predict function on a glmrob fit, it returns NA for every value. I don't observe this when predicting from the same data and model using glm. It doesn't seem to matter what data I use: as long as there is an NA coefficient in the fitted model (and the NA isn't the last coefficient in the coefficient vector), predict does not work.
This behavior holds for every dataset and model I have tried where an internal column is dropped due to collinearity. Below is a fake dataset in which two columns are dropped from the model, giving two NAs in the coefficient list. glm and glmrob give nearly identical coefficients, yet predict works only with the glm model. So my question is: what don't I understand about robust regression that would prevent my glmrob models from generating predicted values?
library(robustbase)
# Make fake data with two categorical predictors; location is nested within
# category, so two columns of the design matrix are collinear and get dropped
df <- data.frame(category = rep(c("A", "B", "C"), each = 6))
df$location <- rep(1:6, each = 3)
val <- rep(c(500, 50, 5000), each = 6) + rep(c(50, 100, 25, 200, 100, 1), each = 3)
df$value <- rpois(NROW(df), val)
# Note that predict works if we omit the newdata parameter. However, I need
# the newdata param, so I use the original data frame here as a stand-in.
mod <- glm(value ~ category + as.factor(location), data = df, family = poisson)
predict(mod, newdata = df)  # works fine
mod <- glmrob(value ~ category + as.factor(location), data = df, family = poisson)
predict(mod, newdata = df)  # predicts NA for all values
I've been digging into this and have concluded that the problem does not lie in my understanding of robust regression; rather, it is a bug in the robustbase package. The predict.lmrob function does not correctly pick out the necessary coefficients from the model before predicting. It needs to pick the first x non-NA coefficients (where x = the rank of the model matrix). Instead, it merely picks the first x coefficients without checking whether any are NA. This explains why the problem only surfaces for models where the NA isn't the last coefficient in the coefficient vector.
To fix this, I copied the predict.lmrob source using:
getAnywhere(predict.lmrob)
and created my own replacement function. In this function I made a single modification to the code:
...
p <- object$rank
if (is.null(p)) {
    df <- Inf
    p <- sum(!is.na(coef(object)))
    # piv <- seq_len(p)                  # old code
    piv <- which(!is.na(coef(object)))   # new code
}
else {
    p1 <- seq_len(p)
    piv <- if (p)
        qr(object)$pivot[p1]
}
...
I've run a few hundred datasets using this change and it has worked well.
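To actually use such a patched copy (a sketch; my.predict.lmrob is a hypothetical name for the edited function), one option is to re-point its environment at the robustbase namespace so its internal helper calls still resolve, then call it directly in place of predict():
# my.predict.lmrob is assumed to be the edited copy of predict.lmrob
environment(my.predict.lmrob) <- asNamespace("robustbase")
my.predict.lmrob(mod, newdata = df)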

How to get individual coefficients and residuals in panel data using fixed effects

I have panel data with income for individuals over years, and I am interested in individuals' income trends, i.e., individual coefficients for income over years, and residuals for each individual in each year (the unexpected changes in income according to my model). However, many individuals are missing income data for at least one year, so with a linear regression I lose the majority of my observations. The data structure looks like this:
caseid <- rep(1:4, each = 6)
years <- rep(c(1998, 2000, 2002, 2004, 2006, 2008), 4)
income <- c(1100, NA, NA, NA, NA, 1300,
            1500, 1900, 2000, NA, 2200, NA,
            NA, NA, NA, NA, NA, NA,
            2300, 2500, 2000, 1800, NA, 1900)
df <- data.frame(caseid, years, income)
I decided to use a random effects model, which I think will still predict income for missing years via a maximum likelihood approach. However, since the Hausman test gives a significant result, I decided to use a fixed effects model instead, and ran the code below using the plm package:
inc.fe <- plm(income ~ years, data = df, model = "within", effect = "individual")
However, I get a coefficient only for years and not for the individuals, and I cannot get the residuals.
To maybe give an idea, the corresponding Stata code would be:
xtset caseid
xtreg income years, fe
predict resid, resid
Then I tried to run the pvcm function from the same package, which fits variable-coefficients models:
inc.wi <- pvcm(income ~ years, data = df, model = "within", effect = "individual")
However, I get the following error message:
"Error in FUN(X[[i]], ...) : insufficient number of observations".
How can I get individual coefficients and residuals with pvcm, either by resolving this error or by using some other function?
My original long-form data has 202,976 observations and 15 years.
Does the fixef function from package plm give you what you are looking for?
Continuing your example:
fixef(inc.fe)
Residuals are extracted by:
residuals(inc.fe)
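A small usage sketch (not in the original answer), collecting both pieces from the fit above:
fe <- fixef(inc.fe)       # one "within" intercept per caseid that has data
res <- residuals(inc.fe)  # residuals only for the non-missing observations
fe
head(res)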
You have a random effects model with random slopes and intercepts, also known as a random coefficients regression model. The missingness is the tricky part, which (I'm guessing) you'll have to write custom code to handle after you choose how you wish to do so.
But you haven't clearly specified your model (at least in your question), as far as I can tell. Let's define some terms:
Let Y_it = income for individual i (i = 1, ..., N) in year t (t = 1, ..., T). As I read your question, you have not specified which of the two models below you want:
M1: random intercepts, global slope, random slopes
Y_it ~ N(\mu_i + \beta t + \gamma_i t, \sigma^2)
\mu_i ~ N(\phi_0, \tau_0^2)
\gamma_i ~ N(\phi_1, \tau_1^2)
M2: random intercepts, random slopes
Y_it ~ N(\mu_i + \gamma_i t, \sigma^2)
\mu_i ~ N(\phi_0, \tau_0^2)
\gamma_i ~ N(\phi_1, \tau_1^2)
Also, your example data is too sparse to work with (see below): you don't have enough observations to estimate all the parameters. I'm not familiar with library(plm), but the above models (without missingness) can be estimated in lme4 easily. Without a realistic example dataset, I won't bother providing full code.
R> table(df$caseid, is.na(df$income))
    FALSE TRUE
  1     2    4
  2     4    2
  3     0    6
  4     5    1
Given that you do have missingness, you should be able to produce estimates for either hierarchical model via the typical methods, such as EM. But I do think you'll have to write the estimation code yourself.
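For the model syntax alone, here is a minimal lme4 sketch of M2 on the toy data (an addition, not part of the original answer; lmer drops the NA rows, and with data this sparse the fit may be singular, so treat it purely as an illustration):
library(lme4)
# M2: random intercept and random slope of years for each caseid;
# caseid 3 (all income missing) contributes nothing to the fit
m2 <- lmer(income ~ years + (1 + years | caseid), data = df)
coef(m2)$caseid  # per-individual intercepts and slopes
residuals(m2)    # residuals for the non-missing observations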

glmer - predict with binomial data (cbind count data)

I am trying to predict values over time (Days on the x axis) for a glmer model fitted to my binomial data. Total.Alive and Total.Dead are count data. This is my model, with the corresponding steps below.
full.model.dredge <- glmer(cbind(Total.Alive, Total.Dead) ~ (CO2.Treatment + Lime.Treatment + Day)^3 + (Day | Container) + (1 | index),
                           data = Survival.data, family = "binomial")
We have accounted for overdispersion, as you can see in the code (the observation-level random effect (1|index)).
We then use the dredge command to determine the best-fitting models from the main effects (CO2.Treatment, Lime.Treatment, Day) and their interactions:
dredge.models<-dredge(full.model.dredge,trace=FALSE,rank="AICc")
Then we made a workspace variable for them:
my.dredge.models<-get.models(dredge.models)
We then conducted a model average to average the coefficients of the best-fitting models:
silly<-model.avg(my.dredge.models,subset=delta<10)
But now I want to create a graph with Total.Alive on the y axis, Days on the x axis, and a fitted line based on the model output. I understand this is tricky because the model binds the two counts together (see cbind(Total.Alive, Total.Dead) in the model).
When I try to run a predict command I get the error:
# 9: In UseMethod("predict") :
# no applicable method for 'predict' applied to an object of class "mer"
Most of your problem is that you're using a pre-1.0 version of lme4, which doesn't have the predict method implemented. (Updating would be easiest, but I believe that if you can't for some reason, there's a recipe at http://glmm.wikidot.com/faq for doing the predictions by hand by extracting the fixed-effect design matrix and the coefficients.) There's actually no problem with the predictions themselves, which give the log-odds (by default) or the probability (if type="response"); if you wanted to predict numbers, you'd have to multiply by N appropriately.
You didn't give a reproducible example, but here's a (somewhat trivial) one using the built-in cbpp data set. (I do get some warning messages -- no non-missing arguments to max; returning -Inf -- but I think this may be because there's only one non-trivial fixed-effect parameter in the model.)
library(lme4)
packageVersion("lme4")  ## 1.1.4 here, but this should work for any version > 1.0.0
library(MuMIn)
It's convenient for later use (with ggplot) to add a variable for the proportion:
cbpp <- transform(cbpp, prop = incidence/size)
Fit the model (you could also use glmer(prop ~ ..., weights = size, ...)):
gm0 <- glmer(cbind(incidence, size - incidence) ~ period + (1 | herd),
             family = binomial, data = cbpp)
dredge.models <- dredge(gm0, trace = FALSE, rank = "AICc")
my.dredge.models <- get.models(dredge.models)
silly <- model.avg(my.dredge.models, subset = delta < 10)
Prediction does work:
predict(silly, type = "response")
Creating a plot:
library(ggplot2)
theme_set(theme_bw())  ## cosmetic
g0 <- ggplot(cbpp, aes(period, prop)) +
  geom_point(alpha = 0.5, aes(size = size))
Set up a prediction frame:
predframe <- data.frame(period=levels(cbpp$period))
Predict at the population level (re.form = NA in current lme4; in versions around 1.0-5 the argument was spelled ReForm or REForm):
predframe$prop <- predict(gm0, newdata = predframe, type = "response", re.form = NA)
Add it to the graph:
g0 + geom_point(data = predframe, colour = "red") +
  geom_line(data = predframe, colour = "red", aes(group = 1))
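As noted above, these predictions are probabilities; to plot expected numbers instead, multiply by the relevant binomial denominator N (a sketch using the cbpp example; in the original data, N would be Total.Alive + Total.Dead):
cbpp$pred_prop <- predict(gm0, type = "response")
cbpp$pred_count <- cbpp$pred_prop * cbpp$size  # expected number of cases
head(cbpp[, c("herd", "period", "size", "incidence", "pred_count")])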
