How to use predict function with my pooled results from mice()?

Hi, I just started using R as part of a module in school. I have a data set with missing data, and I have used mice() to impute it. I'm now trying to use the predict function with my pooled results. However, I get the following error:
Error in UseMethod("predict") :
no applicable method for 'predict' applied to an object of class "c('mipo', 'data.frame')"
I have included my entire code below and I'd greatly appreciate it if y'all can help a novice out. Thanks!
```{r}
library(magrittr)
library(dplyr)
train = read.csv("Train_Data.csv", na.strings=c("","NA"))
test = read.csv("Test_Data.csv", na.strings=c("","NA"))
cols <- c("naCardiac", "naFoodNutrition", "naGenitourinary", "naGastrointestinal", "naMusculoskeletal", "naNeurological", "naPeripheralVascular", "naPain", "naRespiratory", "naSkin")
# Convert the indicator columns to factors
# (mutate_each_() is defunct in current dplyr; across() replaces it)
train %<>%
  mutate(across(all_of(cols), factor))
test %<>%
  mutate(across(all_of(cols), factor))
str(train)
str(test)
```
```{r}
library(mice)
md.pattern(train)
```
```{r}
miTrain = mice(train, m = 5, maxit = 50, meth = "pmm")
```
```{r}
model = with(miTrain, lm(LOS ~ Age + Gender + Race + Temperature + RespirationRate + HeartRate + SystolicBP + DiastolicBP + MeanArterialBP + CVP + Braden + SpO2 + FiO2 + PO2_POCT + Haemoglobin + NumWBC + Haematocrit + NumPlatelets + ProthrombinTime + SerumAlbumin + SerumChloride + SerumPotassium + SerumSodium + SerumLactate + TotalBilirubin + ArterialpH + ArterialpO2 + ArterialpCO2 + ArterialSaO2 + Creatinine + Urea + GCS + naCardiac + naFoodNutrition + naGenitourinary + naGastrointestinal + naMusculoskeletal + naNeurological + naPeripheralVascular + naPain + naRespiratory + naSkin))
model
summary(model)
```
```{r}
modelResults = pool(model)
modelResults
```
```{r}
pred = predict(modelResults, newdata = test)
PredTest = data.frame(test$PatientID, pred)
str(PredTest)
summary(PredTest)
```

One slightly hacky way to achieve this is to take one of the fitted models stored in fit and replace its stored coefficients with the final pooled estimates. I haven't done detailed testing, but it seems to work on this simple example:
```{r}
library(mice)
imp <- mice(nhanes, maxit = 2, m = 2)
fit <- with(data = imp, expr = lm(bmi ~ hyp + chl))
pooled <- pool(fit)
# Copy one of the fitted lm models fit to
# one of the imputed datasets
pooled_lm <- fit$analyses[[1]]
# Replace the fitted coefficients with the pooled
# estimates (need to check they are replaced in
# the correct order)
pooled_lm$coefficients <- summary(pooled)$estimate
# Predict - the predictions match the pooled coefficients
# rather than those of the original lm that was copied
predict(fit$analyses[[1]], newdata = nhanes)
predict(pooled_lm, newdata = nhanes)
```
As far as I know, predict() for a linear regression depends only on the coefficients, so you shouldn't have to replace any other stored values in the fitted model (but you would have to when applying methods other than predict()).
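
Applied to the question's objects, the same pattern would look roughly like this (a sketch, untested, assuming model, modelResults, and test from the question above):

```{r}
# Reuse one per-imputation fit as a container for the pooled
# coefficients, then predict on the test set
pooled_lm <- model$analyses[[1]]
pooled_lm$coefficients <- summary(modelResults)$estimate
pred <- predict(pooled_lm, newdata = test)
PredTest <- data.frame(test$PatientID, pred)
```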

Related

svyglm - how to code for a logistic regression model across all variables?

In R, to include all variables in a GLM you can simply use a ., as shown in "How to succinctly write a formula with many variables from a data frame?", for example:
```{r}
y <- c(1, 4, 6)
d <- data.frame(y = y, x1 = c(4, -1, 3), x2 = c(3, 9, 8), x3 = c(4, -4, -2))
mod <- lm(y ~ ., data = d)
```
However, I am struggling to do this with svydesign. I have many explanatory variables plus an ID and a weight variable, so first I create my survey design:
```{r}
des <- svydesign(ids = ~id, weights = ~wt, data = df)
```
Then I try creating my binomial model using weights:
```{r}
binom <- svyglm(y ~ ., design = des, family = "binomial")
```
But I get the error:
Error in svyglm.survey.design(y ~ ., design = des, family = "binomial") :
all variables must be in design = argument
What am I doing wrong?
You typically wouldn't want to do this, because "all the variables" would include design metadata such as weights, cluster indicators, stratum indicators, etc.
You can use colnames() to extract all the variable names from a design object and then reformulate(), probably after subsetting the names, e.g. with the api example in the package:
```
> all_the_names <- colnames(dclus1)
> all_the_actual_variables <- all_the_names[c(2, 11:37)]
> reformulate(all_the_actual_variables, "y")
y ~ stype + pcttest + api00 + api99 + target + growth + sch.wide +
    comp.imp + both + awards + meals + ell + yr.rnd + mobility +
    acs.k3 + acs.46 + acs.core + pct.resp + not.hsg + hsg + some.col +
    col.grad + grad.sch + avg.ed + full + emer + enroll + api.stu
```
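
The reformulated formula can then go straight into svyglm(). A minimal sketch with the package's api data (the covariate subset and sch.wide as the binary response are illustrative choices, not the only sensible ones):

```{r}
library(survey)
data(api)
dclus1 <- svydesign(id = ~dnum, weights = ~pw, data = apiclus1, fpc = ~fpc)
# Keep only genuine covariates, not design metadata
covars <- c("stype", "meals", "ell", "mobility")
f <- reformulate(covars, response = "sch.wide")
# quasibinomial() avoids warnings about non-integer weighted counts
binom <- svyglm(f, design = dclus1, family = quasibinomial())
summary(binom)
```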

Cross Validation Structural Equation Modeling

Not sure why it is difficult to find info on this topic.
I want to cross-validate my SEM model (N = 360). I've pulled 70% of the data into a train set and built the model, first on theory and then using modification indices. I also have a test data frame with the observed values (for well-being), but I want to use the model to predict those values. lavPredict only seems to predict values of latent variables. Perhaps I'm missing something, but it doesn't seem as straightforward as in lmer or basic linear regression. Does one just use the model fit indices from the test dataset? It seems like one should be able to compare observed and predicted values in SEM.
I've included some data here: https://drive.google.com/file/d/1AX50DFNik30Qsyiyp6XnPMETNfVXK83r/view?usp=sharing
Here is the final model I arrived at using the train dataset. When I go to test it, I just get this:
Error in lavPredict(fit.latent.8, newdata = test) :
inherits(object, "lavaan") is not TRUE
Thanks much!
```{r}
library(lavaan)
fit.latent.8 <- '
  # factor loadings; measurement model portion
  pl =~ exercisescore + mindfulnessscore + promistscore
  sl =~ family_support + friendshipcount + friendshipnet + sense_of_community
  trauma =~ neglectscore + abusescore + exposure + family_support + age + sesscore
  # regressions: structural model
  wellbeing ~ age + gender + ethnicity + sesscore + resiliencescore + pl + emotionalsupportscore + trauma
  resiliencescore ~ age + sesscore + emotionalsupportscore + pl
  emotionalsupportscore ~ sl + gender
  # covariances
  friendshipnet ~~ age
  friendshipnet ~~ abusescore
'
train.1 <- sem(fit.latent.8, data = train, meanstructure = TRUE, std.lv = TRUE)
summary(train.1, fit.measures = TRUE, standardized = TRUE, rsquare = TRUE, estimates = FALSE)
modindices(train.1, sort. = TRUE, minimum.value = 10)
test.1 <- sem(fit.latent.8, data = test, meanstructure = TRUE, std.lv = TRUE)
summary(test.1, fit.measures = TRUE, standardized = TRUE, rsquare = TRUE, estimates = FALSE)
```
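
One note on the error itself: lavPredict() expects the fitted lavaan object, not the model-syntax string, so pass train.1 rather than fit.latent.8. A minimal sketch (type = "ov" requests predicted observed endogenous variables such as wellbeing; check ?lavPredict in your installed version):

```{r}
# fit.latent.8 is only the model syntax (a character string), hence
# inherits(object, "lavaan") is not TRUE; predict from the fitted object
lavPredict(train.1, newdata = test)               # latent-variable scores
lavPredict(train.1, newdata = test, type = "ov")  # predicted observed variables
```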

R Predict using multiple models

I am new to R and trying to predict outcomes on a dataset using four different GLMs. I have tried running it as one large model, and while I do get results, the model doesn't converge properly and I end up with NAs. I therefore have four models:
```{r}
model_team   <- glm(OUT ~ TEAM + OPPONENT, family = "binomial", data = mydata)
model_conf   <- glm(OUT ~ TCONF + OCONF, family = "binomial", data = mydata)
model_tstats <- glm(OUT ~ TPace + TORtg + TFTr + T3PAr + TTS. + TTRB. + TAST. + TSTL. + TBLK. + TeFG. + TTOV. + TORB. + TFT.FGA, family = "binomial", data = mydata)
model_ostats <- glm(OUT ~ OPace + OORtg + OFTr + O3PAr + OTS. + OTRB. + OAST. + OSTL. + OBLK. + OeFG. + OTOV. + OORB. + OFT.FGA, family = "binomial", data = mydata)
```
I then want to predict the outcomes on a different data set using the four models:
```{r}
predict(model_team, model_conf, model_tstats, model_ostats, fix, level = 0.95, type = "probs")
```
Is there a way to use all four models by joining them into one larger model?
I don't really understand why you are trying to do what you are doing, and I don't have any example data representative of what you are working with. However, below is an example of how you could combine multiple GLMs into one using the resulting coefficients. Note that this will not work well if there is multicollinearity between the variables in your dataset.
```{r}
# I used the iris dataset for my example
head(iris)
# Run several models
model1 <- glm(data = iris, Sepal.Length ~ Sepal.Width)
model2 <- glm(data = iris, Sepal.Length ~ Petal.Length)
model3 <- glm(data = iris, Sepal.Length ~ Petal.Width)
# Get combined intercept (mean() averages a single vector,
# so the three intercepts must be wrapped in c())
intercept <- mean(c(
  coef(model1)['(Intercept)'],
  coef(model2)['(Intercept)'],
  coef(model3)['(Intercept)']
))
# Extract the slope coefficients
coefs <- as.matrix(c(
  coef(model1)[2],
  coef(model2)[2],
  coef(model3)[2]
))
# Get the feature values for the predictions
ds <- as.matrix(iris[, c('Sepal.Width', 'Petal.Length', 'Petal.Width')])
# Linear algebra: matrix-multiply values with coefficients
prediction <- ds %*% coefs + intercept
# Let's look at the results
plot(iris$Petal.Length, prediction)
```
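
An alternative worth mentioning (not from the answer above, just a common pattern): rather than stitching coefficients together, average the per-model predictions, which avoids having to decide how to combine the intercepts:

```{r}
# Average the fitted values of the three models instead of
# combining their coefficients
preds <- cbind(predict(model1), predict(model2), predict(model3))
prediction_avg <- rowMeans(preds)
plot(iris$Petal.Length, prediction_avg)
```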

In R, is there a parsimonious or efficient way to get a regression prediction holding all covariates at their means?

I'm wondering if there is a faster way of getting predictions from a regression model at particular values of the covariates without writing out the linear predictor by hand. For example, if I wanted a prediction at the means of the covariates, I could do something like this:
```{r}
r_3 <- glm(ins ~ retire + age + hstatusg + qhhinc2 + educyear + married + hisp,
           family = binomial, data = dat)
meanRetire <- mean(dat$retire)
meanAge <- mean(dat$age)
meanHStatusG <- mean(dat$hstatusg)
meanQhhinc2 <- mean(dat$qhhinc2)
meanEducyear <- mean(dat$educyear)
meanMarried <- mean(dat$married)
meanHisp <- mean(dat$hisp)
ins_predict <- coef(r_3)[1] + coef(r_3)[2] * meanRetire + coef(r_3)[3] * meanAge +
  coef(r_3)[4] * meanHStatusG + coef(r_3)[5] * meanQhhinc2 +
  coef(r_3)[6] * meanEducyear + coef(r_3)[7] * meanMarried +
  coef(r_3)[8] * meanHisp
```
Oh... There is a predict function:
```{r}
fit <- glm(ins ~ retire + age + hstatusg + qhhinc2 + educyear + married + hisp,
           family = binomial, data = dat)
newdat <- lapply(dat, mean)               ## column means, as a list
lppred <- predict(fit, newdata = newdat)  ## prediction on the linear-predictor scale
```
To get the predicted response, use:
```{r}
predict(fit, newdata = newdat, type = "response")
```
or (more efficiently, from lppred):
```{r}
binomial()$linkinv(lppred)
```
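
For a self-contained illustration of the same idea (mtcars standing in for dat, which the question doesn't include):

```{r}
fit2 <- glm(am ~ mpg + wt, family = binomial, data = mtcars)
# Column means of the covariates, as a one-row data frame
newdat2 <- as.data.frame(lapply(mtcars[c("mpg", "wt")], mean))
predict(fit2, newdata = newdat2, type = "response")
```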

predict lme4 for type "response" (binary choices) -- Confidence intervals

I ran a mixed-effects logistic regression with lme4 (type = "response"). Now I used the predict feature and also wanted to determine confidence intervals.
I found this code (http://glmm.wikidot.com/faq) for predictions. It works, but the CIs are not suitable for binary responses (my predictions are between 0 and 1, yet the CIs are between -3 and 3). Does anyone know where to adjust this?
```{r}
library(lme4)
library(ggplot2)  # plotting
fm1 <- glmer(choice ~ rating + indi + rating*indi + (1|ID), data = z, family = "binomial")
newdat <- data.frame(indi = factor(c(1,1,1,1,1,1,2,2,2,2,2,2)),
                     rating = factor(1:6), ID = factor(rep(c(1:30), each = 12)), choice = 0)
newdat$prob <- predict(fm1, newdata = newdat, re.form = NULL, type = "response")
mm <- model.matrix(terms(fm1), newdat)
newdat$choice <- predict(fm1, newdat)
## or newdat$choice <- mm %*% fixef(fm1)
pvar1 <- diag(mm %*% tcrossprod(vcov(fm1), mm))
tvar1 <- pvar1 + VarCorr(fm1)$ID[1]  ## must be adapted
newdat <- data.frame(
  newdat,
  plo = newdat$choice - 2*sqrt(pvar1),
  phi = newdat$choice + 2*sqrt(pvar1),
  tlo = newdat$choice - 2*sqrt(tvar1),
  thi = newdat$choice + 2*sqrt(tvar1)
)
# plot confidence intervals
# (ggtitle() replaces the long-defunct opts())
g0 <- ggplot(newdat, aes(x = rating, y = choice, colour = indi)) + geom_point()
g0 + geom_errorbar(aes(ymin = plo, ymax = phi)) +
  ggtitle("CI based on fixed-effects uncertainty ONLY")
# plot prediction intervals
g0 + geom_errorbar(aes(ymin = tlo, ymax = thi)) +
  ggtitle("CI based on FE uncertainty + RE variance")
```
Thanks a lot!
You need to use the inverse-link function, plogis() in this case:
```{r}
newdat <- transform(newdat,
                    plo = plogis(plo),
                    phi = plogis(phi),
                    tlo = plogis(tlo),
                    thi = plogis(thi))
```
More generally, if you have fitted a model gm1, the inverse-link function is stored in gm1@resp$family$linkinv (although mucking around with the internals of an object like this is not guaranteed to remain compatible with future versions).
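
A slightly safer route than touching the object internals is family(), which has a method for merMod fits; a minimal sketch, assuming fm1 from the question:

```{r}
# family() retrieves the fitted family, including its inverse-link function
linkinv <- family(fm1)$linkinv
all.equal(linkinv(0.5), plogis(0.5))  # TRUE for a logit link
```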
