Using mice for panel data models in R

My panel dataset contains 7 variables and 1452 observations covering 6 years. I would like to regress y on x while controlling for the other variables. The data contain a lot of missing observations: 35% for the independent variable, x, and 23% for the dependent variable, y.
The variables a, b, and c also contain missing values, but not to that extent.
A toy dataset looks like this:
Name  Year  id  y     x     a  b  c
A     2015  1   6     n.a.  9  4  1
A     2016  1   n.a.  2     9  3  n.a.
I used multiple imputation as provided by the mice function, which worked well. Diagnostics of the distributions of the imputed datasets also seem to be okay. Here is my code (excluding the diagnostics):
predictormatrix <- quickpred(data,
                             include = c("a", "b", "c", "x", "y"),
                             exclude = c("Name", "Year", "id"),
                             mincor = 0.1)
imp <- mice(data,
            predictorMatrix = predictormatrix,
            m = 5,
            maxit = 5,
            meth = 'pmm')
I can manage to conduct a simple pooled regression:
fitimp <- with(imp,
lm(y ~ x + a + b + c))
summary(pool(fitimp))
However, as a pooled OLS does not take into account the structure of the panel data, I would like to fit a fixed effects and a random effects model and decide on a model on the basis of the Hausman test. I tried using the with function like this:
fitimp.fe <- with(imp,
plm(y ~ x + a + b + c),
data = imp,
index = c("Name", "Year"),
effect = "individual", model = "within")
summary(pool(fitimp.fe))
But it gives me an error ("No tidy method for objects of class mids") plus a warning ("Infinite sample size assumed").
Apart from fitting a fixed and random effects model to the imputed datasets, I do not know how to compare them (as mentioned, e.g., on the basis of a Hausman test). Can this be done with the with function?
I've been trying to solve this for quite some time now and would be very grateful if someone could help me. I found a lot about imputation for multilevel data, but if I understood it correctly, that does not apply to my dataset. Last but not least, I've read multiple times that I should install broom.mixed, which didn't help.
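Not a definitive answer, but one workaround sketch. Note first that in the plm attempt above the data, index, effect, and model arguments are placed outside the plm() call, so plm never receives them. Rather than fixing that inside with(), it can be easier to extract the m completed datasets, fit the fixed- and random-effects models on each, pool the estimates with Rubin's rules, and look at the Hausman test per imputation (there is no universally agreed way to pool the Hausman statistic itself). This assumes the plm and broom packages are installed:
library(mice)
library(plm)
library(broom)   # broom provides tidy()/glance() methods for plm objects

# Extract the m completed datasets as a list
completed <- complete(imp, action = "all")

# Fit the fixed- and random-effects models on each completed dataset
fe_fits <- lapply(completed, function(d)
  plm(y ~ x + a + b + c, data = d, index = c("Name", "Year"),
      effect = "individual", model = "within"))
re_fits <- lapply(completed, function(d)
  plm(y ~ x + a + b + c, data = d, index = c("Name", "Year"),
      effect = "individual", model = "random"))

# Pool each set of estimates with Rubin's rules
summary(pool(as.mira(fe_fits)))
summary(pool(as.mira(re_fits)))

# Hausman test within each completed dataset; inspect the m p-values,
# since there is no standard pooled version of this statistic
mapply(function(fe, re) phtest(fe, re)$p.value, fe_fits, re_fits)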

Related

Calculate indirect effect of 1-1-1 (within-person, multilevel) mediation analyses

I have data from an Experience Sampling Study, which consists of 8140 observations nested in 106 participants. I want to test for mediation, and I also want to compare the two predictors (X1 = socialInteraction_tech, X2 = socialInteraction_ftf, M = MPEE_int, Y = wellbeing). X1, X2, and M are person-mean centred in order to obtain the within-person effects. To account for the autocorrelation, I have fitted a model with an ARMA(2,1) structure. We control for time with the variable "obs".
This is the final model including all variables of interest:
fit_mainH1xmy <- lme(fixed = wellbeing ~ 1 + obs +   # obs controls for time
                       MPEE_int_centred + socialInteraction_tech_centred + socialInteraction_ftf_centred,
                     random = ~ 1 + obs | ID,
                     correlation = corARMA(form = ~ obs | ID, p = 2, q = 1),
                     data = file, method = "ML", na.action = na.exclude)
summary(fit_mainH1xmy)
The mediation is partial, as my predictor X still significantly predicts Y after adding M.
However, I can't find a way to calculate the indirect effect (or the direct effect, c').
I have found the mlma package, but it seems unwieldy and requires transformations of my data.
I have tried melting the data into long format and using lmer() to fit the model (following https://quantdev.ssri.psu.edu/sites/qdev/files/ILD_Ch07_2017_Within-PersonMedationWithMLM.html), but lmer() does not let me take into account the moving-average (MA) part of the ARMA(2,1) structure.
Does anyone know how I could now obtain the indirect effect?
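Not an authoritative solution, but a common approximation is the product-of-coefficients approach with a Monte Carlo confidence interval: fit a second lme model with the mediator as the outcome to obtain the a path, take the b path from fit_mainH1xmy above, and simulate the product. The sketch below reuses the names from the question, keeps the ARMA(2,1) structure for the mediator model (an assumption), shows only the socialInteraction_tech_centred predictor, and, as a simplification, ignores the covariance between random a and b slopes that a full 1-1-1 multilevel mediation would account for:
library(nlme)

# a path: predictor -> mediator (same random and correlation structure, an assumption)
fit_a <- lme(fixed = MPEE_int_centred ~ 1 + obs + socialInteraction_tech_centred,
             random = ~ 1 + obs | ID,
             correlation = corARMA(form = ~ obs | ID, p = 2, q = 1),
             data = file, method = "ML", na.action = na.exclude)

# b path: mediator -> outcome controlling for the predictor (fit_mainH1xmy above)
a_hat <- fixef(fit_a)["socialInteraction_tech_centred"]
b_hat <- fixef(fit_mainH1xmy)["MPEE_int_centred"]
se_a  <- sqrt(vcov(fit_a)["socialInteraction_tech_centred", "socialInteraction_tech_centred"])
se_b  <- sqrt(vcov(fit_mainH1xmy)["MPEE_int_centred", "MPEE_int_centred"])

# Monte Carlo interval for the indirect effect a*b (Selig & Preacher style)
set.seed(1)
ab <- rnorm(20000, a_hat, se_a) * rnorm(20000, b_hat, se_b)
c(indirect = unname(a_hat * b_hat), quantile(ab, c(.025, .975)))
The same steps would apply to socialInteraction_ftf_centred.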

Panel regression in R with variable coefficients

We have tried to do two (similar) panel regressions in R.
1) One with time and individual fixed effects (the usual intercept dummies) using plm(). However, we are mostly interested in a "slope coefficient", or beta, for each individual, not one for all of the individuals:
Regression 1
Here alpha_i is the individual fixed effect and gamma_t is the time fixed effect. The X sum consists of the variable X and three of its lags:
Sum X variable
We have already included the lagged X variables as new columns in our dataset so in our specification in the code we simply treat them as four different variables:
This is an attempt at using plm() and including our own dummy variables to obtain a beta for each individual:
plm(income ~ (factor(firmid) - 1) * (expense_rate + lag1 + lag2 + lag3),
    data = data1, effect = c("time"), model = c("within"),
    index = c("name", "date"))
lag1, lag2, and lag3 are the lagged values of expense_rate.
data1 is a data frame.
(factor(firmid) - 1) is an attempt at introducing dummies to get betas for each individual instead of one for all individuals.
2) The second (and simpler) regression is:
Regression 2
This is an example of our attempt at using pvcm():
pvcm1 <- pvcm(income ~ expense_rate + lag1 + lag2 + lag3, data = data1,
              effect = "individual", model = "within")
Our question is which specific code, packages, or functions would be suitable for these regressions. We have tried pvcm() to no avail, running into errors such as:
Error in table(index[1], index[2], useNA = "ifany") :
  attempt to make a table with >= 2^31 elements
and
Error: cannot allocate vector of size 599.7 Gb
Furthermore, pvcm() does not seem to be able to cope with both individual and time fixed effects as in 1).
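Not a definitive answer, but one memory-light sketch (using the column names from the question and assuming firmid identifies the firm): sweep out the time effect by demeaning within date, then run one OLS per firm, so every firm gets its own intercept (absorbing alpha_i) and its own slope vector. This is an approximation, not numerically identical to joint estimation of regression 1:
# Sketch under assumptions: remove the time effect first, then estimate one
# slope vector per firm on the time-demeaned data
demean_t <- function(v, t) v - ave(v, t)   # subtract the per-date mean

vars <- c("income", "expense_rate", "lag1", "lag2", "lag3")
dm <- data1
dm[vars] <- lapply(data1[vars], demean_t, t = data1$date)

# One OLS per firm; the per-firm intercept plays the role of alpha_i
per_firm <- lapply(split(dm, dm$firmid), function(d)
  coef(lm(income ~ expense_rate + lag1 + lag2 + lag3, data = d)))
head(do.call(rbind, per_firm))
If exact joint estimation with individual and time fixed effects plus individual-specific slopes is required, a package built for high-dimensional fixed effects (for example fixest) will generally scale far better than constructing the full dummy matrix in plm() or lm().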

Repeated measures ANOVA and link to mixed-effect models in R

I have a problem when performing a two-way repeated-measures ANOVA in R on the following data (link: https://drive.google.com/open?id=1nIlFfijUm4Ib6TJoHUUNeEJnZnnNzO29):
subjectnbr is the id of the subject, blockType and linesTTL are the independent variables, and RT2 is the dependent variable.
I first performed the repeated-measures ANOVA using ezANOVA with the following code:
ANOVA_RTS <- ezANOVA(
  data = castRTs,
  dv = RT2,
  wid = subjectnbr,
  within = .(blockType, linesTTL),
  type = 2,
  detailed = TRUE,
  return_aov = FALSE
)
ANOVA_RTS
The result is correct (I double-checked using Statistica).
However, when I perform the repeated-measures ANOVA using the lme function, I do not get the same answer, and I have no clue why.
Here is my code:
lmeRTs <- lme(
  RT2 ~ blockType * linesTTL,
  random = ~ 1 | subjectnbr / blockType / linesTTL,
  data = castRTs)
anova(lmeRTs)
Here are the outputs of both ezANOVA and lme.
I hope I have been clear enough and have given you all the information needed.
I'm looking forward to your help, as I have been trying to figure this out for at least 4 hours!
Thanks in advance.
Here is a step-by-step example of how to reproduce the ezANOVA results with nlme::lme.
The data
We read in the data and ensure that all categorical variables are factors.
# Read in data
library(tidyverse);
df <- read.csv("castRTs.csv");
df <- df %>%
  mutate(
    blockType = factor(blockType),
    linesTTL = factor(linesTTL));
Results from ezANOVA
As a check, we reproduce the ez::ezANOVA results.
## ANOVA using ez::ezANOVA
library(ez);
model1 <- ezANOVA(
  data = df,
  dv = RT2,
  wid = subjectnbr,
  within = .(blockType, linesTTL),
  type = 2,
  detailed = TRUE,
  return_aov = FALSE);
model1;
# $ANOVA
# Effect DFn DFd SSn SSd F p
#1 (Intercept) 1 13 2047405.6654 34886.767 762.9332235 6.260010e-13
#2 blockType 1 13 236.5412 5011.442 0.6136028 4.474711e-01
#3 linesTTL 1 13 6584.7222 7294.620 11.7348665 4.514589e-03
#4 blockType:linesTTL 1 13 1019.1854 2521.860 5.2538251 3.922784e-02
# p<.05 ges
#1 * 0.976293831
#2 0.004735442
#3 * 0.116958989
#4 * 0.020088855
Results from nlme::lme
We now run nlme::lme:
## ANOVA using nlme::lme
library(nlme);
model2 <- anova(lme(
  RT2 ~ blockType * linesTTL,
  random = list(subjectnbr = pdBlocked(list(~1, pdIdent(~blockType - 1), pdIdent(~linesTTL - 1)))),
  data = df));
model2;
# numDF denDF F-value p-value
#(Intercept) 1 39 762.9332 <.0001
#blockType 1 39 0.6136 0.4382
#linesTTL 1 39 11.7349 0.0015
#blockType:linesTTL 1 39 5.2538 0.0274
Results/conclusion
We can see that the F test results from both methods are identical. The somewhat complicated structure of the random effect definition in lme arises from the fact that you have two crossed random effects. Here "crossed" means that for every combination of blockType and linesTTL there exists an observation for every subjectnbr.
Some additional (optional) details
To understand the role of pdBlocked and pdIdent, we need to take a look at the corresponding two-level mixed-effects model.
The predictor variables are your categorical variables blockType and linesTTL, which are generally encoded using dummy variables.
The variance-covariance matrix for the random effects can take different forms, depending on the underlying correlation structure of your random-effect coefficients. To be consistent with the assumptions of a two-way repeated-measures ANOVA, we must specify a block-diagonal variance-covariance matrix pdBlocked, where we create diagonal blocks for the offset ~1 and for the categorical predictor variables blockType (pdIdent(~blockType - 1)) and linesTTL (pdIdent(~linesTTL - 1)), respectively. Note that we need to subtract the offset from the last two blocks (since we've already accounted for the offset).
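As an optional cross-check (an addition, not part of the lme reconstruction above), the same two-way repeated-measures ANOVA can be written with base R's aov() using an Error() stratum; for balanced data such as this it should reproduce the F ratios above, with the per-stratum denominator df of 13 reported by ezANOVA. Note that subjectnbr must also be converted to a factor for the error strata to be formed:
## Optional cross-check with aov(): classical univariate repeated-measures specification
df$subjectnbr <- factor(df$subjectnbr);
model3 <- aov(RT2 ~ blockType * linesTTL +
                Error(subjectnbr / (blockType * linesTTL)),
              data = df);
summary(model3);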
Some relevant/interesting resources
Pinheiro and Bates, Mixed-Effects Models in S and S-PLUS, Springer (2000)
Potvin and Schutz, Statistical power for the two-factor repeated measures ANOVA, Behavior Research Methods, Instruments & Computers, 32, 347-356 (2000)
Deming Mi, How to understand and apply mixed-effect models, Department of Biostatistics, Vanderbilt University

Having issues using the lme4 predict function on my mixed models

I'm having a bit of a struggle trying to use the lme4 predict function on my mixed models. When making predictions, I want to be able to set some of my explanatory variables to a specified level but average across others.
Here’s some made up data that is a simplified, nonsense version of my original dataset:
a <- data.frame(
  TLR4 = factor(rep(1:3, each = 4, times = 4)),
  repro.state = factor(rep(c("a", "j"), each = 6, times = 8)),
  month = factor(rep(1:2, each = 8, times = 6)),
  sex = factor(rep(1:2, each = 4, times = 12)),
  year = factor(rep(1:3, each = 32)),
  mwalkeri = sample(0:15, 96, replace = TRUE),
  AvM = 1:96
)
The AvM number is the water vole identification number. The response variable (mwalkeri) is a count of the number of fleas on each vole. The main explanatory variable I am interested in is Tlr4 which is a gene with 3 different genotypes (coded 1, 2 and 3). The other explanatory variables included are reproductive state (adult or juvenile), month (1 or 2), sex (1 or 2) and year (1, 2 or 3). My model looks like this (of course this model is now inappropriate for the made up data but that shouldn't matter):
install.packages("lme4")
library(lme4)
mm <- glmer(mwalkeri~TLR4+repro.state+month+sex+year+(1|AvM), data=a,
family=poisson,control=glmerControl(optimizer="bobyqa"))`
summary(mm)
I want to make predictions about parasite burden for each different Tlr4 genotype while accounting for all the other covariates. To do this I created a new dataset to specify the level I wanted to set each of the explanatory variables to and used the predict function:
b <- data.frame(
  TLR4 = factor(1:3),
  repro.state = factor(c("a", "a", "a")),
  month = factor(rep(1, times = 3)),
  sex = factor(rep(1, times = 3)),
  year = factor(rep(1, times = 3))
)
predict(mm, newdata = b, re.form = NA, type = "response")
This did work but I would really prefer to average across years instead of setting year to one particular level. However, whenever I attempt to average year I get this error message:
Error in model.frame.default(delete.response(Terms), newdata, na.action = na.action, : factor year has new level
Is it possible for me to average across years instead of selecting a specified level? Also, I've not worked out how to get the standard errors associated with these predictions. The only way I've been able to get standard errors for predictions is with the lsmeans() function (from the lsmeans package):
c <- lsmeans(mm, "TLR4", type="response")
summary(c, type="response")
This automatically generates the standard errors. However, they are generated by averaging across all the other explanatory variables. I'm sure it's probably possible to change that, but I would rather use the predict() function if I can. My goal is to create a graph with Tlr4 genotype on the x-axis and predicted parasite burden on the y-axis, demonstrating the predicted differences in parasite burden for each genotype while all other significant covariates are accounted for.
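Not a full answer, but one way to average over years with predict() itself (a sketch using the objects mm, a, and b from above; it simply gives each year level equal weight on the response scale) is to include every year level in newdata and then average the predictions per genotype:
# Sketch: expand the counterfactual grid over every year level, predict,
# then average the predicted responses per TLR4 genotype
b_all <- merge(b[, setdiff(names(b), "year")],
               data.frame(year = factor(levels(a$year))))
p <- predict(mm, newdata = b_all, re.form = NA, type = "response")
tapply(p, b_all$TLR4, mean)   # mean predicted burden per TLR4 genotype
This does not produce standard errors; for interval estimates, the merTools approach below or emmeans::emmeans(mm, "TLR4", type = "response") (the successor to lsmeans) is more convenient.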
You might be interested in the merTools package which includes a couple of functions for creating datasets of counterfactuals and then making predictions on that new data to explore the substantive impact of variables on the outcome. A good example of this comes from the README and the package vignette:
Let's take the case where we want to explore the impact of a model with an interaction term between a category and a continuous predictor. First, we fit a model with interactions:
library(lme4)
library(merTools)

data(VerbAgg)
fmVA <- glmer(r2 ~ (Anger + Gender + btype + situ)^2 +
                (1 | id) + (1 | item),
              family = binomial, data = VerbAgg)
Now we prep the data using the draw function in merTools. Here we draw the average observation from the model frame. We then wiggle the data by expanding the dataframe to include the same observation repeated but with different values of the variable specified by the var parameter. Here, we expand the dataset to all values of btype, situ, and Anger.
# Select the average case
newData <- draw(fmVA, type = "average")
newData <- wiggle(newData, var = "btype", values = unique(VerbAgg$btype))
newData <- wiggle(newData, var = "situ", values = unique(VerbAgg$situ))
newData <- wiggle(newData, var = "Anger", values = unique(VerbAgg$Anger))
head(newData, 10)
#> r2 Anger Gender btype situ id item
#> 1 N 20 F curse other 5 S3WantCurse
#> 2 N 20 F scold other 5 S3WantCurse
#> 3 N 20 F shout other 5 S3WantCurse
#> 4 N 20 F curse self 5 S3WantCurse
#> 5 N 20 F scold self 5 S3WantCurse
#> 6 N 20 F shout self 5 S3WantCurse
#> 7 N 11 F curse other 5 S3WantCurse
#> 8 N 11 F scold other 5 S3WantCurse
#> 9 N 11 F shout other 5 S3WantCurse
#> 10 N 11 F curse self 5 S3WantCurse
Now we simply pass this new dataset to predictInterval in order to generate predictions for these counterfactuals. Then we plot the predicted values against the continuous variable, Anger, and facet and group on the two categorical variables situ and btype respectively.
library(ggplot2)

plotdf <- predictInterval(fmVA, newdata = newData, type = "probability",
                          stat = "median", n.sims = 1000)
plotdf <- cbind(plotdf, newData)

ggplot(plotdf, aes(y = fit, x = Anger, color = btype, group = btype)) +
  geom_point() + geom_smooth(aes(color = btype), method = "lm") +
  facet_wrap(~ situ) + theme_bw() +
  labs(y = "Predicted Probability")

Different number of predictions than expecting in linear regression [duplicate]

I'm anticipating that I'm missing something glaringly obvious here.
I'm trying to build a demonstration of overfitting. I've got a quadratic generating function from which I've drawn 20 samples, and I now want to fit polynomial linear models of increasing degree to the sampled data.
For some reason, regardless of which model I use, every time I run predict I get N predictions back, where N is the number of records used to train my model.
set.seed(123)
N <- 20
xv <- seq(1, 5, length.out = 1e4)
x <- sample(xv, N)
gen <- function(v) { v^2 + 2 * rnorm(length(v)) }
y <- gen(x)
df <- data.frame(x, y)

# convenience function for building formulas for polynomial regression
build_formula <- function(N) {
  fpart <- paste(lapply(2:N, function(i) { paste('+ poly(x,', i, ',raw=T)') }), collapse = "")
  paste('y~x', fpart)
}
## Example:
## build_formula(4) = "y~x + poly(x, 2 ,raw=T)+ poly(x, 3 ,raw=T)+ poly(x, 4 ,raw=T)"

model <- lm(build_formula(10), data = df)
predict(model, data = xv)  # returns 20 values instead of one per element of xv
predict(model, data = 1)   # even *this* spits out 20 results. WTF?
This behavior is present regardless of the degree of polynomial in the formula, including the trivial case 'y~x':
formulas <- sapply(c(2, 10, 20), build_formula)
formulas <- c('y~x', formulas)
pred <- lapply(formulas, function(f) {
  predict(lm(f, data = df), data = xv)
})
lapply(pred, length)  # 4 x 20 predictions, expecting one per element of xv for each of the 4 models

# unsuccessful sanity check
m1 <- lm('y~x', data = df)
predict(m1, data = xv)
This is driving me insane. What am I doing wrong?
The second argument to predict is newdata, not data.
Also, you don't need multiple calls to poly in your model formula; poly(x, N, raw = TRUE) already contains all the lower-order powers, so adding poly(x, N-1, raw = TRUE) and the rest just makes the design matrix collinear.
Also^2, to generate a sequence of predictions using xv, you have to put it in a data frame with the appropriate name: data.frame(x=xv).
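A minimal sketch combining those three points, using the objects defined in the question:
# One raw polynomial term already contains all lower-order powers,
# and newdata must be a data frame whose column is named x
model10 <- lm(y ~ poly(x, 10, raw = TRUE), data = df)
preds <- predict(model10, newdata = data.frame(x = xv))
length(preds)   # one prediction per element of xv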
