lmerTest::rand() behaves strangely when variable names contain '.'

I have some experience with lme4, but today I tried lmerTest for the first time and was surprised by some results when using the rand() function to examine the random components. (I know this is not advised by the authors of lme4!) In troubleshooting, I think I discovered some undesired behavior: when rand() sees a random-effect term whose variable name contains a dot, it appears to parse the term into multiple variables. These variables may or may not exist in the dataset, but unfortunately rand() doesn't throw an error; it just gives strange results.
Here's an MWE:
library(dplyr)
library(lmerTest)  # masks lme4::lmer() and provides rand()

nsub <- 500
nvis <- 6
data <- data.frame(subjid = factor(sort(rep(c(1:nsub), nvis))),
                   visit = rep(c(1:nvis), nsub))
base <- filter(data, visit == 1) %>%
  select(subjid) %>%
  mutate(baseage = rnorm(nsub, 40, 10)) %>%
  merge(data, by = "subjid") %>%
  mutate(interval = ifelse(visit == 1, 0, 2*(visit - 1) + runif(nsub*nvis, 0, 1)),
         age = baseage + interval,
         ageverylong = age,
         age.very.long = age,
         y = 100 - 0.1*age + rnorm(nsub*nvis))
mod1 <- lmer(y ~ (age | subjid), data = base)
mod2 <- lmer(y ~ (ageverylong | subjid), data = base)
mod3 <- lmer(y ~ (age.very.long | subjid), data = base)
summary(mod1)  # These three give the same results (so it's not an lme4 problem)
summary(mod2)
summary(mod3)
rand(mod1)  # The top two are the same
rand(mod2)
rand(mod3)  # But this one is different
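A workaround sketch (my addition, not part of the original post), assuming the MWE above has been run; age_very_long is a hypothetical replacement name:
# Hypothetical workaround: avoid '.' in variable names used in random-effect terms
base$age_very_long <- base$age.very.long
mod3b <- lmer(y ~ (age_very_long | subjid), data = base)
rand(mod3b)  # should now agree with rand(mod1) and rand(mod2)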


loop through gtsummary table to pick out only significant variables

I have a question. I am relatively new to R and am transitioning some code from another app to R. In that code, I was able to loop through a table and pick out only the significant variables from a logistic regression based on the p-value and the size of the odds ratio. Then I could say something like "x had a significant link with y" when the p-value was less than or equal to 0.05 and the odds ratio was above 1.00, and the converse, "x had a significant negative link with y", when the p-value was less than 0.05 and the odds ratio was below 1.00. Then I was able to report these statements inline, which I understand from the gtsummary documentation is what inline_text() does. As I am still getting my bearings with R, how would I accomplish this with gtsummary tables? My attempt at reproducible code does not work, but it is below:
# install.packages("gtsummary")
library(gtsummary)
library(tidyverse)
#simulated data
gender <- sample(c(0,1), size = 1000, replace = TRUE)
age <- round(runif(1000, 18, 80))
xb <- -9 + 3.5*gender + 0.2*age
p <- 1/(1 + exp(-xb))
y <- rbinom(n = 1000, size = 1, prob = p)
mod <- glm(y ~ gender + age, family = "binomial")
summary(mod)
# create the gtsummary table
tab1 <- mod %>%
  tbl_regression(exponentiate = TRUE) %>%
  as_gt() %>%
  gt::tab_source_note(gt::md("*This data is simulated*"))
# attempt at looping through the gtsummary table
for (i in 1:nrow(tab1[1:3, ])) {  # does one row at a time
  pv <- tab1[["_data"]]$p.value
  num <- tab1[i, "pv"]
  name <- tab1[i, "variable"]
  if (pv <= 0.05) {
    cat("The link between", name, "and is significant. ")
  }
}
I ask about the gtsummary regression table because I will have to do the same thing with tbl_summary as well; I thought I would begin with the regression version. The idea is to build the nice inline_text() output via an if/else: go down the p-value column, pull the name of the variable, and drop the inline_text() information into the sentence. I have looked through the questions others have asked, but I haven't found anything that gets to the heart of this. If I have missed it, please point me in the right direction.
Every gtsummary table contains a data frame called x$table_body; I think it's easiest to extract the information you need from there. Example below! (You could also wrap the last line in inline_text() if that works better for you.)
# install.packages("gtsummary")
library(gtsummary)
#> #BlackLivesMatter
library(tidyverse)
#simulated data
gender <- sample(c(0,1), size = 1000, replace = TRUE)
age <- round(runif(1000, 18, 80))
xb <- -9 + 3.5*gender + 0.2*age
p <- 1/(1 + exp(-xb))
y <- rbinom(n = 1000, size = 1, prob = p)
mod <- glm(y ~ gender + age, family = "binomial")
# create the gtsummary table
tab1 <- mod %>% tbl_regression(exponentiate = TRUE)
# extract the variable names and the p-values
tab1$table_body %>%
  select(variable, p.value) %>%
  filter(p.value <= 0.05) %>%  # only keep the significant p-values
  deframe() %>%
  imap(~ str_glue("The link between 'y' and {.y} is significant ({style_pvalue(.x, prepend_p = TRUE)})."))
#> $gender
#> The link between 'y' and gender is significant (p<0.001).
#>
#> $age
#> The link between 'y' and age is significant (p<0.001).
Created on 2022-11-07 with reprex v2.0.2
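For the inline_text() route mentioned above, a minimal sketch (my addition; the exact wording of the output depends on the simulated data and the default pattern):
# Pull the formatted estimate, CI, and p-value for a single variable
inline_text(tab1, variable = gender)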

Is there a function for substituting (or removing altogether) explanatory variables in a linear model (lm)?

I have a linear model with lots of explanatory (independent) variables:
model <- lm(y ~ x1 + x2 + x3 + ... + x100)
some of which are linearly dependent on each other (multicollinearity).
I want the machine to find the name of the explanatory variable with the highest VIF (x2, for example), delete it from the formula, and then rerun the lm call with the new formula:
model <- lm(y ~ x1 + x3 + ... + x100)
I have already learned how to retrieve the name of the explanatory variable with the highest VIF:
library(car)  # vif() comes from the car package

max_vif <- function(x) {
  vifac <- data.frame(vif(x))
  nameofmax <- rownames(which(vifac == max(vifac), arr.ind = TRUE))
  return(nameofmax)
}
But I still don't understand how to take that explanatory variable, delete it from the formula, and run the function again.
We can use the update() function and paste in the variable that needs to be removed. We first fit a model, and then use update() to change that model's formula. The new formula can be expressed as a character string, which lets you concatenate the general formula .~. with whatever variable(s) you'd like removed (using the minus sign -).
Here is an example:
fit1 <- lm(wt ~ mpg + cyl + am, data = mtcars)
coef(fit1)
# (Intercept) mpg cyl am
# 4.83597190 -0.09470611 0.08015745 -0.52182463
rm_var <- "am"
fit2 <- update(fit1, paste0(".~. - ", rm_var))
coef(fit2)
# (Intercept) mpg cyl
# 5.07595833 -0.11908115 0.08625557
Using max_vif we can wrap this into a function:
rm_max_vif <- function(x) {
  # find variable(s) needing to be removed
  rm_var <- max_vif(x)
  # concatenate with "-" to remove variable(s) from the formula
  rm_var <- paste(paste0("-", rm_var), collapse = " ")
  # update the model
  update(x, paste0(".~.", rm_var))
}
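A hypothetical usage sketch (my addition, assuming max_vif() from the question and the car package loaded for vif()): repeatedly drop the worst offender until every VIF is below a chosen cutoff.
# Example with mtcars; the cutoff of 5 is an arbitrary illustration
fit <- lm(wt ~ mpg + cyl + disp, data = mtcars)
while (max(vif(fit)) >= 5) {
  fit <- rm_max_vif(fit)
}
coef(fit)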
Problem solved!
I created a list containing all the variables for the lm model:
Price <- list(y = y, x1 = x1, ..., x100 = x100)  # elements must be named so that "y ~ ." can find them
Then I used a different way of setting up the lm model:
model <- lm(y ~ ., data = Price)
So we can just delete the variable with the highest VIF from the Price list.
With the function I already came up with, the code is:
Price <- list(y = y, x1 = x1, x2 = x2, ..., x100 = x100)
model <- lm(y ~ ., data = Price)
max_vif <- function(x) {  # Function for finding name of variable with the highest VIF
  vifac <- data.frame(vif(x))
  nameofmax <- rownames(which(vifac == max(vifac), arr.ind = TRUE))
  return(nameofmax)
}
n <- max(data.frame(vif(model)))
# Loop that deletes the variable with the highest VIF from the `Price` list,
# one at a time, until no VIF is equal to or higher than 5
while (n >= 5) {
  m <- max_vif(model)
  Price[[m]] <- NULL
  model <- lm(y ~ ., data = Price)
  n <- max(data.frame(vif(model)))
}

Implementing multinomial-Poisson transformation with multilevel models

I know variations of this question have been asked before but I haven't yet seen an answer on how to implement the multinomial Poisson transformation with multilevel models.
I decided to make a fake dataset and follow the method outlined here, also consulting the notes the poster mentions as well as the Baker paper on MP transformation.
In order to check if I'm doing the coding correctly, I decided to create a binary outcome variable as a first step; because glmer can handle binary response variables, this will let me check I'm correctly recasting the logit regression as multiple Poissons.
The context of this problem is running multilevel regressions with survey data where the outcome variable is response to a question and the possible predictors are demographic variables. As I mentioned above, I wanted to see if I could properly code the binary outcome variable as a Poisson regression before moving on to multi-level outcome variables.
library(dplyr)
library(lme4)

key <- expand.grid(sex = c('Male', 'Female'),
                   age = c('18-34', '35-64', '45-64'))
set.seed(256)
probs <- runif(nrow(key))

# Make a fake dataset with 1000 responses
n <- 1000
df <- data.frame(sex = sample(c('Male', 'Female'), n, replace = TRUE),
                 age = sample(c('18-34', '35-64', '45-64'), n, replace = TRUE),
                 obs = seq_len(n), stringsAsFactors = FALSE)
age <- model.matrix(~ age, data = df)[, -1]
sex <- model.matrix(~ sex, data = df)[, -1]
beta_age <- matrix(c(0, 1), nrow = 2, ncol = 1)
beta_sex <- matrix(1, nrow = 1, ncol = 1)

# Create class probabilities as a function of age and sex
probs <- plogis(
  -0.5 +
    age %*% beta_age +
    sex %*% beta_sex +
    rnorm(n)
)
id <- ifelse(probs > 0.5, 1, 0)
df$y1 <- id
df$y2 <- 1 - df$y1

# First run the regular hierarchical logit, just with a varying intercept for age
glm_out <- glmer(y1 ~ (1 | age), family = 'binomial', data = df)
summary(glm_out)

# Next, two Poisson regressions
glm_1 <- glmer(y1 ~ (1 | obs) + (1 | age), data = df, family = 'poisson')
glm_2 <- glmer(y2 ~ (1 | obs) + (1 | age), data = df, family = 'poisson')

coef(glm_1)$age - coef(glm_2)$age
coef(glm_out)$age
The outputs for the last two lines are:
> coef(glm_1)$age - coef(glm_2)$age
(Intercept)
18-34 0.14718933
35-64 0.03718271
45-64 1.67755129
> coef(glm_out)$age
(Intercept)
18-34 0.13517758
35-64 0.02190587
45-64 1.70852847
These estimates seem close, but they are not exactly the same. I'm thinking I've specified the equation wrong, perhaps in the intercept.

Convincing R to exclude single-level factors when using lm() in a for loop in some subsets (but not all)

I am working on a large dataset with 19 subcohorts for which I want to run a linear regression model to estimate BMI.
One of the covariates I am using is sex, but some subcohorts consist only of men, which causes problems in my loop.
If I try to run a linear regression model, I get the following error:
Error in `contrasts<-`(`*tmp*`, value = contr.funs[1 + isOF[nn]]) :
  contrasts can be applied only to factors with 2 or more levels
I have found a solution to this problem by running separate loops for subcohorts with only men and subcohorts with both men and women, using the following (simplified) code:
men <- c(1, 6, 15)  # Cohort numbers that only contain men
menandwomen <- c(2, 3, 4, 5, 7, 8, 9, 10, 11, 12, 13, 14, 16, 17, 18, 19)
trenddpmodelm <- list()
for (i in men) {
  trenddpmodelm[[i]] <- lm(BMI ~ age,  # sex omitted: it does not vary in these cohorts
                           data = subcohort[subcohort$centre_a == i, ])
}
trenddpmodelmw <- list()
for (i in menandwomen) {
  trenddpmodelmw[[i]] <- lm(BMI ~ age + sex,
                            data = subcohort[subcohort$centre_a == i, ])
}
trenddpmodel <- c(list(trenddpmodelm[[1]]), list(trenddpmodelmw[[2]]), list(trenddpmodelmw[[3]]),
                  list(trenddpmodelmw[[4]]), list(trenddpmodelmw[[5]]), list(trenddpmodelm[[6]]),
                  list(trenddpmodelmw[[7]]), list(trenddpmodelmw[[8]]), list(trenddpmodelmw[[9]]),
                  list(trenddpmodelmw[[10]]), list(trenddpmodelmw[[11]]), list(trenddpmodelmw[[12]]),
                  list(trenddpmodelmw[[13]]), list(trenddpmodelmw[[14]]), list(trenddpmodelm[[15]]),
                  list(trenddpmodelmw[[16]]), list(trenddpmodelmw[[17]]), list(trenddpmodelmw[[18]]),
                  list(trenddpmodelmw[[19]]))
After this step, I extract the relevant information from the summaries and put it in a data frame to export to Excel.
My problem is that I will be running quite a lot of analyses, which will result in pages and pages of code.
My question is therefore: is there a setting in R that allows non-varying factors to be dropped from my linear regression model in the subcohorts where this applies? (Similar to what happens in coxph: R gives a warning that the factor does not always vary, but the loop does run.)
It is not as if I cannot continue working without a solution, but I have been trying to find an answer to this question for days without success, and I think it must be possible somehow. Any advice is much appreciated :)
I would recommend building your formula dynamically within the loop.
DF <- list(Cohort1 = data.frame(bmi = rnorm(25, 24, 1),
                                age = rnorm(25, 50, 3),
                                sex = factor(sample(c("F", "M"), 25, replace = TRUE))),
           Cohort2 = data.frame(bmi = rnorm(15, 24, 1),
                                age = rnorm(15, 55, 4),
                                sex = factor(rep("M", 15))))  # explicit factors so nlevels() works under R >= 4.0
candidate_vars <- c("age", "sex")
Models <- vector("list", length(DF))
for (i in seq_along(DF)) {
  # Determine if the variables are either numeric, or factors with more than 1 level
  indep <- vapply(X = DF[[i]][candidate_vars],
                  FUN = function(x) {
                    if (is.numeric(x)) return(TRUE)
                    else return(nlevels(x) > 1)
                  },
                  FUN.VALUE = logical(1))
  # Write the formula
  form <- paste("bmi ~", paste(candidate_vars[indep], collapse = " + "))
  # Create the model
  Models[[i]] <- lm(as.formula(form), data = DF[[i]])
}
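A quick check of the result (my addition): inspect which predictors each cohort's model actually kept.
# Cohort1 should keep age and sex; Cohort2 (men only) should keep only age
lapply(Models, formula)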

R: can't get an lme {nlme} model to fit when using self-constructed interaction variables

I'm trying to fit an lme model with self-constructed interaction variables; I need them for post-hoc analysis.
library(nlme)

# construct fake dataset
obsr <- 100
dist <- rep(rnorm(36), times = obsr)
meth <- dist + rnorm(length(dist), mean = 0, sd = 0.5); rm(dist)
meth <- meth/dist(range(meth)); meth <- meth - min(meth)
main <- data.frame(meth = meth,
                   cpgl = as.factor(rep(1:36, times = obsr)),
                   pbid = as.factor(rep(1:obsr, each = 36)),
                   agem = rep(rnorm(obsr, mean = 30, sd = 10), each = 36),
                   trma = as.factor(rep(sample(c(TRUE, FALSE), size = obsr, replace = TRUE), each = 36)),
                   depr = as.factor(rep(sample(c(TRUE, FALSE), size = obsr, replace = TRUE), each = 36)))

# check if all factor combinations are present
# TRUE for my real dataset; naturally TRUE for the fake dataset
with(main, all(table(depr, trma, cpgl) >= 1))

# construct interaction variables
main$depr_trma <- interaction(main$depr, main$trma, sep = ":", drop = TRUE)
main$depr_cpgl <- interaction(main$depr, main$cpgl, sep = ":", drop = TRUE)
main$trma_cpgl <- interaction(main$trma, main$cpgl, sep = ":", drop = TRUE)
main$depr_trma_cpgl <- interaction(main$depr, main$trma, main$cpgl, sep = ":", drop = TRUE)

# model WITHOUT preconstructed interaction variables
form1 <- list(fixd = meth ~ agem + depr + trma + depr*trma + cpgl +
                depr*cpgl + trma*cpgl + depr*trma*cpgl,
              rndm = ~ 1 | pbid,
              corr = ~ cpgl | pbid)
modl1 <- nlme::lme(fixed = form1[["fixd"]],
                   random = form1[["rndm"]],
                   correlation = corCompSymm(form = form1[["corr"]]),
                   data = main)

# model WITH preconstructed interaction variables
form2 <- list(fixd = meth ~ agem + depr + trma + depr_trma + cpgl +
                depr_cpgl + trma_cpgl + depr_trma_cpgl,
              rndm = ~ 1 | pbid,
              corr = ~ cpgl | pbid)
modl2 <- nlme::lme(fixed = form2[["fixd"]],
                   random = form2[["rndm"]],
                   correlation = corCompSymm(form = form2[["corr"]]),
                   data = main)
The first model fits without any problems, whereas the second model gives me the following error:
Error in MEEM(object, conLin, control$niterEM) :
Singularity in backsolve at level 0, block 1
Nothing I have found about this error so far has helped me solve the problem. However, the solution is probably pretty simple.
Can someone help me? Thanks in advance!
EDIT 1:
When I run:
modl3 <- lm(form1[["fixd"]], data=main)
modl4 <- lm(form2[["fixd"]], data=main)
The summaries reveal that modl4 (with the self-constructed interaction variables), in contrast to modl3, contains many more predictors. All those that are in modl4 but not in modl3 have NA coefficients. The problem therefore definitely lies in the way I create the interaction variables...
EDIT 2:
In the meantime I created the interaction variables "by hand" (mainly paste() and grepl()), and it seems to work now. However, I would still be interested in how I could have done it using the interaction() function.
I should have constructed only the largest of the interaction variables (the one combining all three simple variables). If I do so, the model fits (see the sketch below); the likelihoods are then very close to each other, and the number of coefficients matches exactly.
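A minimal sketch of that fix as I read it (my reconstruction, not the original poster's code): keep agem and replace all of the factor terms with the single constructed three-way variable, which spans the same fixed-effect column space as the full factorial.
# Hypothetical reconstruction: the 2 x 2 x 36 = 144-level depr_trma_cpgl factor,
# plus an intercept and agem, gives the same number of coefficients as form1
form3 <- list(fixd = meth ~ agem + depr_trma_cpgl,
              rndm = ~ 1 | pbid,
              corr = ~ cpgl | pbid)
modl5 <- nlme::lme(fixed = form3[["fixd"]],
                   random = form3[["rndm"]],
                   correlation = corCompSymm(form = form3[["corr"]]),
                   data = main)
logLik(modl1); logLik(modl5)  # should be very close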