Panel regression in R with variable coefficients

We have tried to do two (similar) panel regressions in R.
1) One with time and individual fixed effects (the usual intercept dummies) using plm(). However, we are mostly interested in a "slope coefficient" or beta for each individual rather than one for all of the individuals:
Regression 1:

income_{i,t} = \alpha_i + \gamma_t + \sum_{k=0}^{3} \beta_{i,k} X_{i,t-k} + \epsilon_{i,t}

where \alpha_i is the individual fixed effect and \gamma_t is the time fixed effect. The sum runs over the variable X and three of its lags, each with its own individual-specific coefficient \beta_{i,k}.
We have already included the lagged X variables as new columns in our dataset, so in the code we simply treat them as four separate variables. This is an attempt at using plm() while including our own dummy variables to get a beta for each individual:
plm(income ~ (factor(firmid) - 1) * (expense_rate + lag1 + lag2 + lag3),
    data = data1, effect = "time", model = "within",
    index = c("name", "date"))
lag1, lag2, and lag3 are the lagged variables of expense_rate.
data1 is a data frame.
(factor(firmid) - 1) is an attempt at introducing dummies to get a beta for each individual instead of one for all individuals.
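To see the idea on a small scale, here is a self-contained toy sketch (simulated data, not data1) of how interacting an individual factor with a regressor yields one slope per individual:

set.seed(1)
toy <- data.frame(firmid = rep(1:3, each = 10), x = rnorm(30))
toy$y <- rep(c(0.5, 1, 2), each = 10) * toy$x + rnorm(30, sd = 0.1)
# one intercept and one slope per firm; '-1' drops the common intercept
coef(lm(y ~ factor(firmid) + factor(firmid):x - 1, data = toy))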
2) The second (and simpler) regression is:
Regression 2:

income_{i,t} = \alpha_i + \sum_{k=0}^{3} \beta_{i,k} X_{i,t-k} + \epsilon_{i,t}

i.e. the same specification without the time fixed effect.
This is an example of our attempt at using pvcm():
pvcm1 <- pvcm(income ~ expense_rate + lag1 + lag2 + lag3, data = data1,
              effect = "individual", model = "within")
Our question is: what specific code, packages, or functions would be suitable for these regressions? We have tried pvcm() to no avail, running into errors such as:
“Error in table(index[1], index[2], useNA = "ifany") :
attempt to make a table with >= 2^31 elements”
and
“Error: cannot allocate vector of size 599.7 Gb”
Furthermore, pvcm() does not seem to be able to cope with both individual and time fixed effects as in 1).
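For reference, a variable-coefficients "within" model amounts to one OLS fit per individual, so an equivalent hand-rolled sketch (assuming data1 and its name index column from the question) that sidesteps the large tables pvcm() builds would be:

data1_split <- split(data1, data1$name)
# one coefficient vector per individual, stacked into a matrix
betas <- t(sapply(data1_split, function(d)
  coef(lm(income ~ expense_rate + lag1 + lag2 + lag3, data = d))))
head(betas)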

Related

Interpreting standard errors of parameter estimates in PGLS regression with phylolm

I am using the phylolm function (in package phylolm) to conduct PGLS phylogenetic analysis and am having some trouble interpreting the model output.
I am running a phylolm model with a continuous (log-transformed) response variable and one predictor variable, which is a factor with two groups. When I change the reference group (from condition A to B) and rerun the same model, the estimates change accordingly but the standard errors do not seem to. The standard error for the new reference group remains very high - high enough that I don't see how the difference between groups can be significant (which the p-value indicates it is). I was under the impression that phylolm standard errors can be interpreted in the same way as for ordinary linear regression - am I mistaken?
Since you have a binary variable, changing the reference category should only reverse the sign of the beta estimate - should it not? This is what happens in your models.
It might help to think of the coefficient for the conditions as the difference between the group means. The difference between the means will be the same size, but the sign will change depending on whether you are comparing condition A to condition B or condition B to condition A.
# set up a reproducible example
set.seed(1)
n = 100
# Create a binary variable
x = sample(c(0, 1), n, replace = TRUE)
# and create the opposite variable (i.e. changing the reference level)
x.rev = +(!x)
# add some error to the model
error = runif(n)
# create the continuous response variable
y = 2 + 2 * x + error
df = data.frame(y, x, x.rev)
# look at the group means of each condition; the difference flips sign
group_means = tapply(y, x, mean)
group_means[1] - group_means[2]
group_means[2] - group_means[1]
# compare those to the coefficients in the model
summary(lm(y ~ x))
summary(lm(y ~ x.rev))

Stratified Cox Model, Unusual Output For Stratified Factor Variable

I'm estimating a Cox PH model in R with time-varying stratification on a few of the variables. I'm using the following code to create the dataset and run the estimation:
sepsis_working_fac_strat7 <- survSplit(Surv(LOS, facility) ~ ., data = sepsis_working,
                                       cut = 7, episode = "tgroup", id = "id")
cox_facility2 <- coxph(Surv(tstart, LOS, facility) ~ Age_10 + Gender + Insurance + LowInc +
                         AbxBeforeCulture + MorningDC + AttendingAffilGrp +
                         CentralLine:strata(tgroup) + Consults + CountIVAbx:strata(tgroup) +
                         DischargeUnit + FSSAdmit + OralAbxBeforeDC + OrderedToDischarge +
                         TimeToAbx + TimeToBC + UrineCulture + Vasopressor + Ventilator +
                         Behavioral_Dx + BradenGroup + CCI + Diabetes_Dx + Dialysis +
                         PVD_Dx + SUD_Dx,
                       data = sepsis_working_fac_strat7, cluster = PersPersObjId)
In the output, there are four rows for the covariate CentralLine where there should only be two.
I have run the same estimation for other event types in the data and have not encountered this problem for CentralLine, which is a factor variable with two levels, where the base level is set to "No". Normally I see two rows for the strata, one each for above and below the cut point. In the data, there are 184 events with a fairly even distribution between CentralLine == "Yes" and CentralLine == "No".
I'd like to know what is causing this output to appear as it does.
EDIT: This may be the issue: Why coxph() results some of the coefficient as NA when using survSplit() in R?
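For what it's worth, the four-rows behavior can be reproduced with a self-contained sketch (using survival's lung data, not the sepsis data): interacting a two-level factor with strata() while omitting its main effect makes coxph() build one coefficient per level-by-stratum cell (2 levels x 2 time groups = 4 rows), and one coefficient per stratum is aliased and comes back NA.

library(survival)
lung2 <- survSplit(Surv(time, status) ~ ., data = lung, cut = 365, episode = "tgroup")
# four coefficient rows for a binary factor; one per stratum is NA
coxph(Surv(tstart, time, status) ~ factor(sex):strata(tgroup), data = lung2)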

Calculate indirect effect of 1-1-1 (within-person, multilevel) mediation analyses

I have data from an Experience Sampling Study, which consists of 8140 observations nested in 106 participants. I want to test for mediation, and I also want to compare the predictors (X1 = socialInteraction_tech, X2 = socialInteraction_ftf, M = MPEE_int, Y = wellbeing). X1, X2, and M are person-mean centred in order to obtain the within-person effects. To account for the autocorrelation I have fit a model with an ARMA(2,1) structure. We control for time with the variable "obs".
This is the final model including all variables of interest:
fit_mainH1xmy <- lme(fixed = wellbeing ~ 1 + obs +   # "obs" controls for time
                       MPEE_int_centred + socialInteraction_tech_centred +
                       socialInteraction_ftf_centred,
                     random = ~ 1 + obs | ID,
                     correlation = corARMA(form = ~ obs | ID, p = 2, q = 1),
                     data = file, method = "ML", na.action = na.exclude)
summary(fit_mainH1xmy)
The mediation is partial, as my predictor X still significantly predicts Y after adding M.
However, I can't find a way to calculate the indirect effect, a*b (note that c', or c prime, usually denotes the direct effect rather than the indirect effect).
I have found the mlma package, but it looks weird and requires me to do transformations to my data.
I have tried melting the data in a long format and using lmer() to fit the model (following https://quantdev.ssri.psu.edu/sites/qdev/files/ILD_Ch07_2017_Within-PersonMedationWithMLM.html), but lmer() does not let me take into account the moving average (MA-part of the ARMA(2,1) structure).
Does anyone know how I could now obtain the indirect effect?
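One standard route to the indirect effect is the product-of-coefficients approach. Below is a hedged sketch that reuses the model setup and variable names from the question (so it assumes the asker's data frame file and the fit_mainH1xmy model above): fit the a path (X1 -> M) as a second lme() model with the same error structure, take a*b as the indirect effect, and form a Monte Carlo confidence interval. Note this treats the paths as fixed and ignores the random-slope covariance that a full 1-1-1 mediation model would add.

library(nlme)
# a path: X1 -> M (same random and correlation structure as the main model)
fit_a <- lme(fixed = MPEE_int_centred ~ 1 + obs + socialInteraction_tech_centred,
             random = ~ 1 + obs | ID,
             correlation = corARMA(form = ~ obs | ID, p = 2, q = 1),
             data = file, method = "ML", na.action = na.exclude)
# b path (and direct effect) come from fit_mainH1xmy above
a <- fixef(fit_a)["socialInteraction_tech_centred"]
b <- fixef(fit_mainH1xmy)["MPEE_int_centred"]
ab <- a * b                                    # indirect effect estimate
se_a <- sqrt(vcov(fit_a)["socialInteraction_tech_centred",
                         "socialInteraction_tech_centred"])
se_b <- sqrt(vcov(fit_mainH1xmy)["MPEE_int_centred", "MPEE_int_centred"])
# Monte Carlo confidence interval for a*b
sims <- rnorm(1e5, a, se_a) * rnorm(1e5, b, se_b)
quantile(sims, c(0.025, 0.975))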

Random Effects in Longitudinal Multilevel Imputation Models Using MICE

I am trying to impute data in dataset with a longitudinal design. There are two predictors (experimental group, and time) and one outcome variable (score). The clustering variable is id.
Here is the toy data
set.seed(345)
A0 <- rnorm(4,2,.5)
B0 <- rnorm(4,2+3,.5)
A1 <- rnorm(4,6,.5)
B1 <- rnorm(4,6+2,.5)
A2 <- rnorm(4,10,.5)
B2 <- rnorm(4,10+1,.5)
A3 <- rnorm(4,14,.5)
B3 <- rnorm(4,14+0,.5)
score <- c(A0,B0,A1,B1,A2,B2,A3,B3)
id <- rep(1:8,times = 4, length = 32)
time <- rep(0:3, each = 8, length = 32)
group <- rep(c("A","B"), times =2, each = 4, length = 32)
df <- data.frame(id = id, group = group, time = time, score = score)
# plots (requires ggplot2; fun.y was deprecated in favour of fun)
library(ggplot2)
(ggplot(df, aes(x = time, y = score, group = group)) +
  stat_summary(fun = "mean", geom = "line", aes(linetype = group)) +
  stat_summary(fun = "mean", geom = "point", aes(shape = group), size = 3) +
  coord_cartesian(ylim = c(0, 18)))
# now place some NAs
df[sample(1:nrow(df), 10, replace = F),"score"] <- NA
df
If I understand this post correctly, in the predictor matrix I should specify the id clustering variable with a -2 and the two fixed predictors time and group with a 1. Like so
library(mice)
(ini <- mice(df, maxit=0))
(pred <- ini$predictorMatrix)
(pred["score",] <- c(-2, 1, 1, 0))
(imp <- mice(df,
             method = c("", "", "", "2l.pan"),
             pred = pred,
             maxit = 1,
             seed = 71152))
What I would like to know is:
Is this a longitudinal random intercepts imputation model? Specifying the id variable as -2 designates it as a 'class' variable, but in this mice primer it suggests that for multilevel models you should create a variable of all 1's in the dataframe as a constant, which is then specified as the random intercept via 2 in the predictor matrix. However, this is based on the 2l.norm function rather than the 2l.pan function, so I am not really sure where I am here. Does the 2l.pan function not require this column, or the specification of random effects?
Is there any way to specify a longitudinal random-slopes model, and, if so, how?
This answer is probably a bit late for you, but it may be able to help some people who read this in the future:
How to work with 2l.pan
Below are some details about specifying multilevel imputation models with mice. Because the application is longitudinal, I use the term "persons" to refer to units at Level 2. These are the most relevant arguments for 2l.pan as mentioned in the mice documentation:
type

Vector of length ncol(x) identifying random and class variables. Random effects are identified by a 2. The group variable (only one is allowed) is coded as -2. Random effects also include the fixed effect. If for a covariate X1 group means shall be calculated and included as further fixed effects choose 3. In addition to the effects in 3, specification 4 also includes random effects of X1.
There are 5 different codes you can use in the predictor matrix for variables imputed with 2l.pan. The person identifier is coded as -2 (this is different from 2l.norm). To include predictor variables with fixed or random effects, these variables are coded with 1 or 2, respectively. If coded as 2, the corresponding fixed effect is automatically included.
In addition, 2l.pan offers the codes 3 and 4, which have similar meanings as 1 and 2 but will include an additional fixed effect for the person mean of that variable. This is useful if you're trying to model within- and between-person effects of time-varying predictor variables.
intercept
Logical determining whether the intercept is automatically added.
By default, 2l.pan includes the intercept as both a fixed and a random effect. For this reason, it is not required to include a constant term in the predictor matrix. If one sets intercept=FALSE, this behavior is changed, and the intercept is dropped from the imputation model.
groupcenter.slope

If TRUE, in case of group means (type is 3 or 4), group-mean centering for these predictors is conducted before doing imputations. Default is FALSE.
Using this option, it is possible to center predictor variables around the person mean instead of including the predictor variable "as is" (i.e., without centering). This only applies to variables coded as 3 or 4. For predictors coded as 3, this is not very important because the models with and without centering are identical.
However, when predictor variables are coded as 4 (i.e., with a random slope), then centering alters the meaning of the random effect so that the random slope no longer applies to the variable "as is" but to the within-person deviation of that variable.
In your example, you can include a simple random slope for time as follows:
library(mice)
ini <- mice(df, maxit=0)
# predictor matrix (following 'type')
pred <- ini$predictorMatrix
pred["score",] <- c(-2, 1, 2, 0)
# imputation method
meth <- c("", "", "", "2l.pan")
imp <- mice(df, method=meth, pred=pred, maxit=10, m=10)
In this example, coding time as 3 or 4 wouldn't make a lot of sense because the person means of time are identical for all persons. However, if you have time-varying covariates that you want to include as predictor variables in the imputation model, 3 and 4 can be useful.
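For illustration, here is what that could look like with a hypothetical time-varying covariate (mood is made up and not part of the toy data): coding it as 3 includes both the raw variable and its person mean as fixed effects in the imputation model.

df$mood <- rnorm(nrow(df))          # hypothetical time-varying covariate
ini2 <- mice(df, maxit = 0)
pred2 <- ini2$predictorMatrix
# columns: id, group, time, score, mood
pred2["score", ] <- c(-2, 1, 2, 0, 3)
imp2 <- mice(df, method = c("", "", "", "2l.pan", ""),
             pred = pred2, maxit = 5, m = 5, seed = 1)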
The additional arguments like intercept and groupcenter.slope can be specified directly in the call to mice(), for example:
imp <- mice(df, ..., groupcenter.slope=TRUE)
Regarding your Questions
So, to answer your questions as stated in the post:
Yes, 2l.pan provides a multilevel (or rather two-level) imputation model. The intercept is included as both a fixed and a random effect by default (can be changed with intercept=FALSE) and need not be specified in the predictor matrix (this is in contrast to 2l.norm).
Yes, you can specify random slopes with 2l.pan. To do that, predictors with random slopes are coded as 2 or 4 in the predictor matrix. If coded as 2, the random slope is included; if coded as 4, the random slope is included as well as an additional fixed effect for the person means of that variable, and the meaning of the random slope may be altered by making use of groupcenter.slope=TRUE (see above).
This article also includes some worked examples of how to work with 2l.pan and other functions for multilevel imputation: [Link]
The pan library doesn't require an intercept term.
You can dig into the function using
library(pan)
?pan
That said, mice uses a wrapper around pan called mice.impute.2l.pan; with the mice library loaded you can look at the help for that function. It states that it has a parameter called intercept, which is a logical determining whether the intercept is automatically added. It is TRUE by default, and the intercept is defined as a random intercept by default. I found this out after browsing the R code for the mice wrapper:
if (intercept) {
    x <- cbind(1, as.matrix(x))   # prepend a constant column
    type <- c(2, type)            # mark it as a random effect (code 2)
}
Here the pan function parameter type is a vector of length ncol(x) identifying random and class variables. The intercept is added by default and defined as a random effect.
They do provide an example, as you stated, with a 1 for "x" in the predictor matrix for fixed effects.
It also states for 2l.norm: "The random intercept is automatically added in mice.impute.2l.norm()."
It has a few examples with descriptions.
The CRAN documentation for pan might help you.

R: Using a variable with less observations in a regression (plm)

I have been trying to deal with this for a while now with no luck. Essentially, what I am doing is a two-stage least squares (2SLS) on some panel data, using the plm package. What I want to do is:
Do a 2SLS
Get the residuals from the 2SLS in 1.
Use these residuals as an instrument in a different 2SLS
The issue I have is that in the first 2SLS the number of observations used is less than the total observations in the dataset, so my residuals vector is too short and I get the following error:
Error in model.frame.default(terms(formula, lhs = lhs, rhs = rhs, data = data, :
variable lengths differ (found for 'ivreg.2.a$residuals')
Here is the code I am trying to run for reference; let me know if you need any more details. I really just need my residual vector to be the same length as the data used in the first 2SLS. My data has 1713 observations; however, only 1550 get used in the regression, so my residuals vector has length 1550. My code for the two 2SLS regressions is below.
ivreg.2.a = plm(diff(loda) ~ factor(year) + diff(lgdp) |
                  index_g_l + diff(lcru_l) + diff(lcru_l_sq) + factor(year),
                index = c("country", "year"), model = "within",
                data = panel[complete.cases(panel[, c(1, 2, 3, 4, 5, 7)]), ])
ivreg.2.b = plm(diff(lgdp) ~ factor(year) + index_g_l + diff(lcru_l) + diff(lcru_l_sq) + diff(loda) |
                  index_g_l + diff(lcru_l) + diff(lcru_l_sq) + factor(year) + ivreg.2.a$residuals,
                index = c("country", "year"), model = "within",
                data = panel[complete.cases(panel[, c(1, 2, 3, 4, 5, 7)]), ])
Let me know if you need anything else.
I assume the 163 observations are dropped because they have NA in one of the relevant variables. Most *lm functions in R have a na.action argument, which can be used to pad the residuals to correct length. E.g., when missing observation 3,
residuals(lm(formula, data, na.action=na.omit)) # 1 2 4
residuals(lm(formula, data, na.action=na.exclude)) # 1 2 NA 4
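A self-contained toy demonstration of the padding (made-up data, not the asker's panel):

set.seed(1)
d <- data.frame(y = rnorm(10), x = rnorm(10))
d$x[3] <- NA                              # one incomplete row
fit <- lm(y ~ x, data = d, na.action = na.exclude)
length(residuals(fit))                    # 10: padded with NA at row 3
d$res <- residuals(fit)                   # safe to store alongside the original rows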
Documentation of plm, however, says that this argument is "currently not fully supported", so it would be simpler if you just filter those 1550 rows to a new dataframe first, and do all subsequent work on that.
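Sketched with the question's own column indices, that means building the complete-case data frame once and passing it to both plm() calls:

panel_cc <- panel[complete.cases(panel[, c(1, 2, 3, 4, 5, 7)]), ]
# then use data = panel_cc in both regressions above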
BTW, if plm behaves like lm, you shouldn't need to specify complete.cases for it to work, as it should just skip anything with NAs.
