R: Predicting with lmer, y ~ . formula error

Predicting values in new data from an lmer model throws an error when a period is used to represent predictors. Is there any way around this?
The answer to this similar question offers a way to automatically write out the full formula instead of using the period, but I'm curious if there's a way to get predictions from new data just using the period.
Here's a reproducible example:
mydata <- data.frame(
  groups = rep(1:3, each = 100),
  x = rnorm(300),
  dv = rnorm(300)
)
train_subset <- sample(1:300, 300 * .8)
train <- mydata[train_subset,]
test <- mydata[-train_subset,]
# Returns an error
mod <- lmer(dv ~ . - groups + (1 | groups), data = train)
predict(mod, newdata = test)
predict(mod) # getting predictions for the original data works
# Writing the full formula without the period does not return an error, even though it's the exact same model
mod <- lmer(dv ~ x + (1 | groups), data = train)
predict(mod, newdata = test)

This should be fixed in the development branch of lme4 now. You can install from GitHub (see first line below) or wait a few weeks (early April-ish) for a new version to hit CRAN.
remotes::install_github("lme4/lme4") ## you will need compilers etc.
mydata <- data.frame(
  groups = rep(1:3, each = 100),
  x = rnorm(300),
  dv = rnorm(300)
)
train_subset <- sample(1:300, 300 * .8)
train <- mydata[train_subset,]
test <- mydata[-train_subset,]
# No longer returns an error with the development version
mod <- lmer(dv ~ . - groups + (1 | groups), data = train)
p1 <- predict(mod, newdata = test)
mod2 <- lmer(dv ~ x + (1 | groups), data = train)
p2 <- predict(mod2, newdata = test)
identical(p1, p2) ## TRUE
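Until then, a workaround is to expand the dot yourself before fitting, e.g. with reformulate(). A minimal sketch (mod3 is just an illustrative name; the variable names follow the example above):
fixed <- setdiff(names(train), c("dv", "groups"))
form <- reformulate(c(fixed, "(1 | groups)"), response = "dv")
mod3 <- lmer(form, data = train)
predict(mod3, newdata = test) # works, since the formula no longer contains the dot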

Related

How to solve "cannot coerce class to data.frame"?

The problem occurs at the line x3 <- lm(Salary ~ ...:
Error in as.data.frame.default(data) : cannot coerce class ‘c("train", "train.formula")’ to a data.frame
How can I solve this?
attach(Hitters)
Hitters
library(caret)
set.seed(123)
# Define training control
set.seed(123)
train.control <- trainControl(method = "cv", number = 10)
# Train the model
x2 <- train(Salary ~ ., data = x, method = "lm",
            trControl = train.control)
# Summarize the results
print(x)
x3 <- lm(Salary ~ poly(AtBat,3) + poly(Hits,3) + poly(Walks,3) + poly(CRuns,3) + poly(CWalks,3) + poly(PutOuts,3), data = x2)
summary(x3)
MSE = mean(x3$residuals^2)
print("Mean Squared Error: ")
print(MSE)
First, as @dcarlson already mentioned, you should define x.
Second, x2 is not a data frame.
If you run
str(x2)
you'll see that all the elements you're using in the lm call are part of a data frame called trainingData.
So if you intend to use lm, use that data frame as your data source, NOT x2.
I've rewritten your code below.
PS I'm far from an R expert, so if someone wants to shoot at this answer, go ahead, I'm always willing to learn ;)
library(ISLR) # assuming Hitters is the ISLR Hitters data (the question does not say)
library(caret)
x <- na.omit(Hitters) # define x first; this was missing in the original code
set.seed(123)
# Define training control
train.control <- trainControl(method = "cv", number = 10)
# Train the model
x2 <- train(Salary ~ ., data = x, method = "lm", trControl = train.control)
# Summarize the results
print(x2)
# str(x2) # $trainingData is a data.frame
x2$trainingData[["AtBat"]]
m <- x2$trainingData
# Note: depending on the caret version, the outcome column in trainingData may be
# stored as .outcome rather than Salary; check names(m) if the lm call fails.
x3 <- lm(Salary ~ poly(AtBat, 3) + poly(Hits, 3) + poly(Walks, 3) +
           poly(CRuns, 3) + poly(CWalks, 3) + poly(PutOuts, 3), data = m)
summary(x3)
MSE <- mean(x3$residuals^2)
cat("Mean Squared Error: ", MSE) # cat concatenates text and variable value on one line
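As an aside, caret already keeps the underlying fitted lm in the train object, so you can also inspect it directly without refitting (a sketch; finalModel is the slot caret uses for the final fit):
summary(x2$finalModel) # the lm that caret fit during training
mean(residuals(x2$finalModel)^2) # training MSE of that model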

Implementing multinomial-Poisson transformation with multilevel models

I know variations of this question have been asked before but I haven't yet seen an answer on how to implement the multinomial Poisson transformation with multilevel models.
I decided to make a fake dataset and follow the method outlined here, also consulting the notes the poster mentions as well as the Baker paper on MP transformation.
In order to check if I'm doing the coding correctly, I decided to create a binary outcome variable as a first step; because glmer can handle binary response variables, this will let me check I'm correctly recasting the logit regression as multiple Poissons.
The context of this problem is running multilevel regressions with survey data, where the outcome variable is the response to a question and the possible predictors are demographic variables. As I mentioned above, I wanted to see if I could properly code the binary outcome variable as a Poisson regression before moving on to outcome variables with more than two categories.
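(To spell out the identity I'm relying on: with mu1 = E[y1] and mu2 = E[y2], logit(p) = log(mu1) - log(mu2), which is why I compare the difference of the two Poisson intercepts against the logit intercept at the end. A quick numeric sanity check:)
p <- 0.7
log(p / (1 - p)) # logit of p
log(p) - log(1 - p) # difference of log-means when mu1 = p and mu2 = 1 - p: identical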
library(dplyr)
library(lme4)
key <- expand.grid(sex = c('Male', 'Female'),
                   age = c('18-34', '35-64', '45-64'))
set.seed(256)
probs <- runif(nrow(key))
# Make a fake dataset with 1000 responses
n <- 1000
df <- data.frame(sex = sample(c('Male', 'Female'), n, replace = TRUE),
                 age = sample(c('18-34', '35-64', '45-64'), n, replace = TRUE),
                 obs = seq_len(n), stringsAsFactors = FALSE)
age <- model.matrix(~ age, data = df)[, -1]
sex <- model.matrix(~ sex, data = df)[, -1]
beta_age <- matrix(c(0, 1), nrow = 2, ncol = 1)
beta_sex <- matrix(1, nrow = 1, ncol = 1)
# Create class probabilities as a function of age and sex
probs <- plogis(
  -0.5 +
    age %*% beta_age +
    sex %*% beta_sex +
    rnorm(n)
)
id <- ifelse(probs > 0.5, 1, 0)
df$y1 <- id
df$y2 <- 1 - df$y1
# First run the regular hierarchical logit, just with a varying intercept for age
glm_out <- glmer(y1 ~ (1|age), family = 'binomial', data = df)
summary(glm_out)
#Next, two Poisson regressions
glm_1 <- glmer(y1 ~ (1|obs) + (1|age), data = df, family = 'poisson')
glm_2 <- glmer(y2 ~ (1|obs) + (1|age), data = df, family = 'poisson')
coef(glm_1)$age - coef(glm_2)$age
coef(glm_out)$age
The outputs for the last two lines are:
> coef(glm_1)$age - coef(glm_2)$age
(Intercept)
18-34 0.14718933
35-64 0.03718271
45-64 1.67755129
> coef(glm_out)$age
(Intercept)
18-34 0.13517758
35-64 0.02190587
45-64 1.70852847
These estimates are close but not identical. I suspect I've specified one of the equations wrong, perhaps the intercept.

update on merMod object gives different fit

I am trying to refit a full model of class merMod with just the intercept (the null model). However, refitting using update.merMod gives a different answer than fitting the null model by hand, e.g.:
# Generate random data
set.seed(9)
dat <- data.frame(
  x = do.call(c, lapply(1:5, function(x) rnorm(100, x))),
  random = letters[1:5]
)
dat$y <- rnbinom(500, mu = exp(dat$x), size = 1)
library(lme4)
# Get full model
full <- glmer.nb(y ~ x + (1 | random), dat)
# Write out intercept-only model by hand
null <- glmer.nb(y ~ 1 + (1 | random), dat)
# Update
null2 <- update(full, . ~ 1 - . + (1 | random))
VarCorr(null)
VarCorr(null2)
Any idea why this is, and how I can use update to get the same vcov matrix?

R: simulating data with lmer and predict (or else)

I am fitting the following model
fit <- lmer(y ~ a + b + (1 | c) + (1 | a:d), data = inputdata)
to real observations collected in "inputdata".
Now I want to generate many (1000) simulated datasets for a simulation, based on the fitted model parameters and the estimated errors. I can use
pred <- predict(fit, newdata = list(a = val_a1, b = val_b1, c = val_c1, d = val_d1),
                allow.new.levels = TRUE)
but this always returns the same (the most likely, i.e. mean) value. Is there a way to get a distribution of values, meaning to draw from the predicted distribution?
As asked by @Adam Quek, a reproducible example:
# creating dataset
a <- as.factor(sort(rep(1:4, 5)))
b <- rep(1:2, 10) + 0.5
c <- as.factor(c(sort(rep(1:2, 5)), sort(rep(1:2, 5))))
d <- as.factor(rep(1:5, 4))
a <- c(a, a, a)
b <- c(b, b, b)
c <- c(c, c, c)
d <- c(d, d, d)
y <- rnorm(60)
inputdata <- data.frame(y, a, b, c, d)
# fitting the model
fit <- lmer(y ~ a + b + (1 | c) + (1 | a:d), data = inputdata)
# making specific predictions for a parameter set
val_a1 <- 1
val_b1 <- 2
val_c1 <- 1
val_d1 <- 4
pred <- predict(fit, newdata = list(a = val_a1, b = val_b1, c = val_c1, d = val_d1),
                allow.new.levels = TRUE)
pred
what I obtain is:
0.2394255
If I do it again
pred <- predict(fit, newdata = list(a = val_a1, b = val_b1, c = val_c1, d = val_d1),
                allow.new.levels = TRUE)
pred
I get of course:
0.2394255
but what I am searching for is an R function or routine that easily provides a suite of predictions that follow the distribution of my input values. Something like
for (i in 1:1000) {
  pred[i] <- predict(fit, newdata = list(a = val_a1, b = val_b1, c = val_c1, d = val_d1),
                     allow.new.levels = TRUE)
}
but with mean(pred) = 0.2394255 and sd(pred) != 0.
Thanks to @Alex W! bootMer does the job. Below, for those who are interested, is the solution for the example:
m1 <- function(.) {
  predict(., newdata = inputdata, re.form = NULL)
}
boot1 <- lme4::bootMer(fit, m1, nsim = 1000, use.u = FALSE, type = "parametric")
boot1$t[, 1]
where boot1$t[, 1] now contains the 1000 predictions when using the parameter values defined in inputdata[1, ].
https://cran.r-project.org/web/packages/merTools/vignettes/Using_predictInterval.html
was a helpful link.
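For convenience, merTools::predictInterval from that link wraps the same simulation idea; a sketch for this example (argument names as documented in merTools, not tested here):
library(merTools)
pi1 <- predictInterval(fit, newdata = inputdata[1, ], n.sims = 1000,
                       level = 0.95, include.resid.var = TRUE)
pi1 # point prediction plus upper/lower bounds for the first parameter set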

Predict Logistf

I'm using an R package called logistf to fit a logistic regression, and I saw that there's no predict function for new data in this package (the generic predict does not work with it either), so I found code that shows how to do this with new data:
fit <- logistf(Tax ~ L20 + L24 + L28 + L29 + L31 + L32 + L33 + L36 + S10 + S15 + S16 + S17 + S20,
               data = trainData)
betas <- coef(fit)
X <- model.matrix(fit, data = testData)
probs <- 1 / (1 + exp(-X %*% betas))
I want to make a cross-validated version of this, using fit$predict and the probabilities that probs generates. Has anyone done something like this before?
The other thing I want to know concerns fit$predict: I'm fitting a binary logistic regression, and this function returns many values; are these values for class 0 or class 1, and how can I tell? Thanks
While the code that you wrote works perfectly, there is seemingly a more concise way of getting the same results:
library(brglm) # bias-reduced logistic regression; a Firth-type penalty similar in spirit to logistf
brglm_model <- brglm(formula = response ~ predictor, family = "binomial", data = train)
brglm_pred <- predict(object = brglm_model, newdata = test, type = "response")
About the CV, you have to write a few lines of code I guess:
library(dplyr) # needed below for filter() and %>% (not loaded in the original)
# Set the number of folds and the number of instances in each fold
n_folds <- 5
fold_size <- nrow(dataset) %/% n_folds
residual <- nrow(dataset) %% n_folds
# Label the instances based on the number of folds; the last fold absorbs the remainder
cv_labels <- c(rep(1:n_folds, each = fold_size), rep(n_folds, residual))
# The error term will differ based on each threshold value
t_seq <- seq(0.1, 0.9, by = 0.1)
index_mat <- matrix(ncol = n_folds + 1, nrow = length(t_seq))
index_mat[, 1] <- t_seq
# The main loop: calculate the CV error on each fold
for (i in 1:n_folds) {
  train <- dataset %>% filter(cv_labels != i)
  test <- dataset %>% filter(cv_labels == i)
  brglm_cv_model <- brglm(formula = response_var ~ ., family = "binomial", data = train)
  # note: the original predicted from brglm_model here, which was a bug
  brglm_cv_pred <- predict(object = brglm_cv_model, newdata = test, type = "response")
  # error formula that you want, e.g. misclassification
  counter <- 0
  for (threshold in t_seq) {
    counter <- counter + 1
    conf_mat <- table(factor(test$response_var),
                      factor(brglm_cv_pred > threshold, levels = c("FALSE", "TRUE")))
    sen <- conf_mat[2, 2] / sum(conf_mat[2, ])
    # other indices can be computed as follows:
    # spec <- conf_mat[1, 1] / sum(conf_mat[1, ])
    # prec <- conf_mat[2, 2] / sum(conf_mat[, 2])
    # F1 <- (2 * prec * sen) / (prec + sen)
    # accuracy <- (conf_mat[1, 1] + conf_mat[2, 2]) / sum(conf_mat)
    # here I am only interested in sensitivity
    index_mat[counter, (i + 1)] <- sen
  }
}
# The final data frame is the mean sensitivity over each threshold value
final_mat <- matrix(nrow = length(t_seq), ncol = 2)
final_mat[, 1] <- t_seq
final_mat[, 2] <- apply(X = index_mat[, -1], MARGIN = 1, FUN = mean)
final_mat <- data.frame(final_mat)
colnames(final_mat) <- c("threshold", "sensitivity")
# Why not have a look at the CV sensitivity of the model over the threshold values?
library(ggplot2) # needed for the plot (not loaded in the original)
ggplot(data = final_mat) +
  geom_line(aes(x = threshold, y = sensitivity), color = "blue")
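If you want to stay with logistf itself, the same fold loop works with the manual prediction trick from the question (a sketch under the same assumptions as above; as far as I can tell, logistf models Pr(response = 1), so these probabilities, like fit$predict, refer to class 1):
library(logistf)
for (i in 1:n_folds) {
  train <- dataset[cv_labels != i, ]
  test <- dataset[cv_labels == i, ]
  fit <- logistf(response_var ~ ., data = train)
  betas <- coef(fit)
  X <- model.matrix(response_var ~ ., data = test) # same design-matrix trick as in the question
  probs <- 1 / (1 + exp(-X %*% betas)) # Pr(response_var == 1) on the held-out fold
  # ...compute the confusion matrix / sensitivity exactly as above
}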
