Replace lmer coefficients in R

Following on from the post Replace lm coefficients in R, I am also interested in changing the coefficients of a mixed model fitted with lmer. For example, in a model of the form below:
mod <- lmer(y ~ x1 + x2 + x3 + (1|class/subjects), data=data)
How can I change the coefficients of x1 and x2 to another number (0, 0.1, 1, etc.)?

I don't see a valid reason to do this and you might encounter some mighty dragons, but it is easy to do.
library(lme4)
fm1 <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)
summary(fm1)$coef
# Estimate Std. Error t value
#(Intercept) 251.40510 6.823773 36.842535
#Days 10.46729 1.545958 6.770744
fm1@beta[names(fixef(fm1)) == "Days"] <- 0  # assign directly into the fixed-effects slot
summary(fm1)$coef
# Estimate Std. Error t value
#(Intercept) 251.4051 6.823773 36.84253
#Days 0.0000 1.545958 0.00000
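As a quick sanity check, fixef() now reads the modified slot. One of the dragons, though (a hedged note, since this depends on lme4's internal caching): quantities computed and stored at fit time, such as fitted(), are not recomputed, so they can silently disagree with the edited coefficients.
fixef(fm1)
# (Intercept)        Days
#    251.4051      0.0000
# fitted(fm1) still reflects the original Days estimate, because the
# fitted values were cached when the model was fit
head(fitted(fm1))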

Related

Recurring error using lmer() function for a linear mixed-effects model

I attempted to construct a linear mixed-effects model using the lmer() function from the lme4 package and ran into a recurring error. The model uses two fixed effects:
DBS_Electrode (factor w/3 levels) and
PostOp_ICA (continuous variable).
I use (1 | Subject) as a random effect term in which Subject is a factor of 38 levels (38 total subjects). Below is the line of code I attempted to run:
LMM.DBS <- lmer(Distal_Lead_Migration ~ DBS_Electrode + PostOp_ICA + (1 | Subject), data = DBS)
I received the following error:
Number of levels of each grouping factor must be < number of observations.
I would appreciate any help, I have tried to navigate this issue myself and have been unsuccessful.
A linear mixed-effects model assumes that there are fewer grouping-factor levels (here, subjects) than observations, so it throws an error if that is not the case.
You can think of this formula as telling your model that it should
expect that there’s going to be multiple responses per subject, and
these responses will depend on each subject’s baseline level.
Please consult A very basic tutorial for performing linear mixed effects analyses by B. Winter, p. 4.
In your case you should increase the number of observations per subject to more than one. Please see the simulation below:
library(lme4)
set.seed(123)
n <- 38
DBS_Electrode <- factor(sample(LETTERS[1:3], n, replace = TRUE))
Distal_Lead_Migration <- 10 * abs(rnorm(n)) # Distal_Lead_Migration in cm
PostOp_ICA <- 5 * abs(rnorm(n))
# number of observations equal to number of subjects
Subject <- paste0("X", 1:n)
DBS <- data.frame(DBS_Electrode, PostOp_ICA, Subject, Distal_Lead_Migration)
model <- lmer(Distal_Lead_Migration ~ DBS_Electrode + PostOp_ICA + (1|Subject), data = DBS)
# Error: number of levels of each grouping factor must be < number of observations
# number of observations greater than number of subjects
Subject <- c(paste0("X", 1:36), "X1", "X37")
DBS <- data.frame(DBS_Electrode, PostOp_ICA, Subject, Distal_Lead_Migration)
model <- lmer(Distal_Lead_Migration ~ DBS_Electrode + PostOp_ICA + (1|Subject), data = DBS)
summary(model)
Output:
Linear mixed model fit by REML ['lmerMod']
Formula: Distal_Lead_Migration ~ DBS_Electrode + PostOp_ICA + (1 | Subject)
Data: DBS
REML criterion at convergence: 224.5
Scaled residuals:
Min 1Q Median 3Q Max
-1.24605 -0.73780 -0.07638 0.64381 2.53914
Random effects:
Groups Name Variance Std.Dev.
Subject (Intercept) 2.484e-14 1.576e-07
Residual 2.953e+01 5.434e+00
Number of obs: 38, groups: Subject, 37
Fixed effects:
Estimate Std. Error t value
(Intercept) 7.82514 2.38387 3.283
DBS_ElectrodeB 0.22884 2.50947 0.091
DBS_ElectrodeC -0.60940 2.21970 -0.275
PostOp_ICA -0.08473 0.36765 -0.230
Correlation of Fixed Effects:
(Intr) DBS_EB DBS_EC
DBS_ElctrdB -0.718
DBS_ElctrdC -0.710 0.601
PostOp_ICA -0.693 0.324 0.219
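Note that in this toy fit the Subject variance is essentially zero (2.484e-14): with only two subjects contributing a second observation, there is almost no information for separating between-subject variance from residual variance, so the random intercept is estimated as effectively nil. With real repeated-measures data you would expect a more meaningful estimate.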

negative coefficient of predictor in mixed-effects model produces positive slope in ggplot2

I've fitted a mixed-effect regression model to experimental data using the lmer() function in the lme4 package in R.
require(lme4)
mod = lmer(y ~ x1 + x2 + x3 + (x1|re1) + (x1|re2), data = dat, REML = FALSE)
summary(mod)
Fixed effects:
Estimate Std. Error df t value Pr(>|t|)
(Intercept) 7.008e+00 3.318e-02 4.850e+01 211.166 < 2e-16 ***
x1 -6.686e-02 2.028e-02 3.400e+01 -3.297 0.00229 **
x2 -4.357e-02 1.313e-02 4.870e+01 -3.318 0.00172 **
x3 -5.302e-03 1.373e-02 7.110e+01 -0.386 0.70054
Notice that the predictor of interest, x1, has a negative coefficient, so I expect the dependent variable to decrease when the predictor of interest increases.
In order to visualize model predictions, I generated model predictions using the predict() function.
dat$preds = predict(mod, dat, type = "response")
Then I plotted these predictions against the predictor of interest using the ggplot2 package.
require(ggplot2)
ggplot(dat, aes(x = x1, y = preds)) +
  geom_point(shape = 16, size = 1) +  # size, not cex, is the ggplot2 parameter
  geom_smooth(method = "lm") +
  xlab("x1") + ylab("predicted y") +
  theme(axis.title = element_text(size = 26), axis.text = element_text(size = 16))
I was surprised to see that the slope of the fitted line is positive. How is that possible, given that the coefficient of that predictor in the model is negative? Is there some error in my code, or am I wrong in interpreting the estimate of x1 as a negative slope?
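This question is left open in the thread, but the usual explanation is confounding: predict() includes the contributions of x2, x3 and the random effects, so if x1 is correlated with any of those across the data, the marginal trend of the predictions against x1 can take the opposite sign from x1's partial (within-group) coefficient, a form of Simpson's paradox. A minimal sketch with simulated data, where all names and values are illustrative rather than taken from the thread:
library(lme4)
set.seed(42)
ng <- 10; nper <- 30
re1 <- factor(rep(1:ng, each = nper))
b0  <- rnorm(ng, 0, 2)                    # random intercept per group
# group means of x1 track the group's intercept, inducing a positive
# between-group association
x1  <- rnorm(ng * nper, mean = b0[re1], sd = 0.5)
y   <- 7 - 0.5 * x1 + b0[re1] + rnorm(ng * nper, 0, 0.3)
dat <- data.frame(y, x1, re1)
mod <- lmer(y ~ x1 + (1 | re1), data = dat)
fixef(mod)["x1"]                          # negative within-group slope
coef(lm(predict(mod) ~ dat$x1))[2]        # marginal slope of predictions: positive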

How to extract fixed effects part of summary from lme4?

I wish to extract the fixed effects part of summary() as a data.frame. I am using lme4 to run the following model:
SleepStudy <- lmer(Reaction ~ Days + (1|Subject), data = sleepstudy)
summary(SleepStudy)
I know that I can extract the random effects section of summary by using the following:
SleepStudy_RE <- as.data.frame(VarCorr(SleepStudy))
Is there a similar line of code for the fixed effects, including the estimate, standard error, degrees of freedom and exact p-value?
Thank you.
coef(summary(fitted_model)) should do it.
library(lme4)
SleepStudy <- lmer(Reaction ~ Days + (1|Subject), data = sleepstudy)
coef(summary(SleepStudy))
## Estimate Std. Error t value
## (Intercept) 251.40510 9.7467163 25.79383
## Days 10.46729 0.8042214 13.01543
If you want p-values you need lmerTest (you need to re-fit the model):
library(lmerTest)
SleepStudy <- lmer(Reaction ~ Days + (1|Subject), data = sleepstudy)
coef(summary(SleepStudy))
## Estimate Std. Error df t value Pr(>|t|)
## (Intercept) 251.40510 9.7467163 22.8102 25.79383 0
## Days 10.46729 0.8042214 161.0036 13.01543 0
I don't know why the p-values are exactly zero in this case; maybe something to take up with the lmerTest maintainers.
You may also be interested in the broom package.
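Note that the mixed-model tidiers have since moved from broom into the broom.mixed package. A sketch, assuming broom.mixed's tidy() method for merMod fits:
library(broom.mixed)
tidy(SleepStudy, effects = "fixed")
# a data frame (tibble) with term, estimate, std.error and statistic
# columns; df and p.value appear when the model was fit with lmerTest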

Simple slopes for interaction in Negative Binomial regression

I am looking to obtain parameter estimates for one predictor while constraining another predictor to specific values in a negative binomial GLM, in order to better explain an interaction effect.
My model is something like this:
model <- glm.nb(outcome ~ IV * moderator + covariate1 + covariate2)
Because the IV:moderator term is significant, I would like to obtain parameter estimates for IV at specific values of moderator (i.e., at +1 and -1 SD). I can obtain slope estimates for IV at various levels of moderator using the visreg package, but I don't know how to estimate SEs and test statistics. moderator is a continuous variable, so I can't use the multcomp package, and other packages designed for finding simple slopes (e.g., pequod and QuantPsyc) are incompatible with negative binomial regression. Thanks!
If you want to constrain one of the coefficients in your regression, consider taking that variable out of the model and adding it in as an offset. For example, with the sample data:
dd <- data.frame(
  x1 = runif(50),
  x2 = runif(50)
)
dd <- transform(dd,
  y = 5 * x1 - 2 * x2 + 3 + rnorm(50)
)
We can fit a model with both x1 and x2 as predictors:
lm(y ~ x1 + x2, dd)
# Call:
# lm(formula = y ~ x1 + x2, data = dd)
#
# Coefficients:
# (Intercept) x1 x2
# 3.438438 4.135162 -2.154770
Or say we know that the coefficient of x2 is -2. Then, rather than estimating x2's coefficient, we can put that term in as an offset:
lm(y ~ x1 + offset(-2*x2), dd)
# Call:
# lm(formula = y ~ x1 + offset(-2 * x2), data = dd)
#
# Coefficients:
# (Intercept) x1
# 3.347531 4.153594
The offset() term basically just creates a covariate whose coefficient is fixed at 1. Even though I've demonstrated with lm, this same method should work for glm.nb and many other regression models.
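To tie this back to the question, here is a minimal sketch of the same trick with glm.nb from MASS on simulated count data (the variables and true coefficients are illustrative assumptions, not from the question):
library(MASS)
set.seed(1)
n  <- 200
x1 <- runif(n)
x2 <- runif(n)
mu <- exp(0.5 + 1.2 * x1 - 2 * x2)   # log-mean depends on x1 and x2
y  <- rnbinom(n, mu = mu, size = 2)  # negative binomial outcome
# constrain the x2 coefficient to -2 instead of estimating it
glm.nb(y ~ x1 + offset(-2 * x2))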

standard error of outcome in lm and lme

I have the following linear models
library(nlme)
fm2 <- lme(distance ~ age + Sex, data = Orthodont, random = ~ 1)
fm2.lm <- lm(distance ~ age + Sex,data = Orthodont)
How can I obtain the standard error of distance with age and Sex?
For fm2 (linear mixed model), you can do
sqrt(diag(summary(fm2)$varFix))
#(Intercept) age SexFemale
# 0.83392247 0.06160592 0.76141685
For fm2.lm (linear model), you can do
summary(fm2.lm)$coefficients[, "Std. Error"]
#(Intercept) age SexFemale
# 1.11220946 0.09775895 0.44488623
See attributes(summary(your.model)); what you're after is summary(your.model)$coefficients (or did I get your question wrong?). Just use subsetting with [] to get what you want.
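As a cross-check that works the same way for both model classes, the standard errors are the square roots of the diagonal of the fixed-effects variance-covariance matrix, and vcov() has methods for both lm and lme fits:
sqrt(diag(vcov(fm2)))     # lme fit: matches the varFix route above
sqrt(diag(vcov(fm2.lm)))  # lm fit: matches the Std. Error column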
