Prediction of single bagged tree models dependent on pre-processing using caret - r

I'm using the caret package to predict a time series with method = "treebag". caret fits bagged regression trees with 25 bootstrap replications.
What I'm struggling to understand is how the final prediction of that 'treebag' model relates to the predictions made by each of the 25 trees, depending on whether I use caret::preProcess or not.
I am aware of this question and the resources linked therein, but could not draw the right conclusions from them.
Here is an example using the economics data. Let's say I want to predict unemploy_rate, which has to be created first.
# packages
library(caret)
library(tidyverse)
# data
data("economics")
economics$unemploy_rate <- economics$unemploy / economics$pop * 100
x <- economics[, -c(1, 7)]
y <- economics[["unemploy_rate"]]
I wrote a function that extracts the 25 individual trees from the train object, makes a prediction for each tree, averages these 25 predictions, and compares this average with the prediction from the train object. It returns a plot.
predict_from_treebag <- function(model) {
# extract 25 trees from train object
bagged_trees <- map(.x = model$finalModel$mtrees, .f = pluck, "btree")
# make a prediction for each tree
pred_trees <- map(bagged_trees, .f = predict, newdata = x)
names(pred_trees) <- paste0("tree_", seq_along(pred_trees))
# aggregate predictions
pred_trees <- as.data.frame(pred_trees) %>%
add_column(date = economics$date, .before = 1) %>%
gather(tree, value, matches("^tree")) %>%
group_by(date) %>%
mutate(mean_pred_from_trees = mean(value)) %>%
ungroup()
# add prediction from train object
pred_trees$bagging_model_prediction = predict(model, x)
pred_trees <- pred_trees %>%
gather(model, pred_value, 4:5)
# plot
p <- ggplot(data = pred_trees, aes(date)) +
geom_line(aes(y = value, group = tree), alpha = .2) +
geom_line(aes(y = pred_value, col = model)) +
theme_minimal() +
theme(
panel.grid.major = element_blank(),
panel.grid.minor = element_blank(),
legend.position = "bottom"
)
p
}
Now I estimate two models: the first unscaled, the second centered and scaled.
preproc_opts <- list(unscaled = NULL,
scaled = c("center", "scale"))
# estimate the models
models <- map(preproc_opts, function(preproc)
train(
x = x,
y = y,
trControl = trainControl(method = "none"), # since there are no tuning parameters for this model
metric = "RMSE",
method = "treebag",
preProcess = preproc
))
# apply predict_from_treebag to each model
imap(.x = models,
.f = ~{predict_from_treebag(.x) + labs(title = .y)})
The results are shown below. The unscaled model prediction is the average of the 25 trees, but why is each prediction from the 25 trees a constant when I use preProcess?
Thank you for any advice on where I might be going wrong.

The problem is in this part of the code:
pred_trees <- map(bagged_trees, .f = predict, newdata = x)
in the function predict_from_treebag.
This predict call in fact dispatches to predict.rpart, as you can see from:
class(bagged_trees[[1]])
predict.rpart does not know that you pre-processed the data in caret, so the trees (which were grown on centered and scaled data) are handed the raw, unscaled x.
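You can check this yourself; a minimal sketch, assuming the models object estimated in the question:
# the individual trees are plain rpart objects
one_tree <- models$scaled$finalModel$mtrees[[1]]$btree
class(one_tree)
# [1] "rpart"
# caret keeps the pre-processing step inside the train object;
# this is the data the trees were actually grown on
head(predict(models$scaled$preProcess, newdata = x))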
Here is a quick fix:
predict_from_treebag <- function(model) {
# extract 25 trees from train object
bagged_trees <- map(.x = model$finalModel$mtrees, .f = pluck, "btree")
x <- economics[, -c(1, 7)]
# make a prediction for each tree
newdata = if(is.null(model$preProcess)) x else predict(model$preProcess, x)
pred_trees <- map(bagged_trees, .f = predict, newdata = newdata)
names(pred_trees) <- paste0("tree_", seq_along(pred_trees))
# aggregate predictions
pred_trees <- as.data.frame(pred_trees) %>%
add_column(date = economics$date, .before = 1) %>%
gather(tree, value, matches("^tree")) %>%
group_by(date) %>%
mutate(mean_pred_from_trees = mean(value)) %>%
ungroup()
# add prediction from train object
pred_trees$bagging_model_prediction = predict(model, x)
pred_trees <- pred_trees %>%
gather(model, pred_value, 4:5)
# plot
p <- ggplot(data = pred_trees, aes(date)) +
geom_line(aes(y = value, group = tree), alpha = .2) +
geom_line(aes(y = pred_value, col = model)) +
theme_minimal() +
theme(
panel.grid.major = element_blank(),
panel.grid.minor = element_blank(),
legend.position = "bottom"
)
p
}
Now after running:
preproc_opts <- list(unscaled = NULL,
scaled = c("center", "scale"))
models <- map(preproc_opts, function(preproc)
train(
x = x,
y = y,
trControl = trainControl(method = "none"), # since there are no tuning parameters for this model
metric = "RMSE",
method = "treebag",
preProcess = preproc
))
map2(.x = models,
.y = names(models),
.f = ~{predict_from_treebag(.x) + labs(title = .y)})
the result is in line with what is expected.
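As a quick numeric check (a sketch, assuming the objects created above), the treebag prediction should be the row-wise average of the 25 per-tree predictions once the same pre-processing is applied:
newdata_scaled <- predict(models$scaled$preProcess, x)
tree_preds <- sapply(models$scaled$finalModel$mtrees,
function(tr) predict(tr$btree, newdata = newdata_scaled))
all.equal(unname(rowMeans(tree_preds)), unname(predict(models$scaled, x)))
# should be TRUE (up to floating-point error)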

Related

Neural network: predictions are just the same prediction despite changes in the model parameters

I'd like to fit a neural network using brulee, but despite several changes to the model parameters (changes to all of the parameters), I always get almost the same value in the predictions. In my case:
# packages (needed for brulee_mlp(), bind_cols(), and ggplot() below)
library(brulee)
library(dplyr)
library(ggplot2)
# Open the data set
data_train_sub <- read.csv("https://raw.githubusercontent.com/Leprechault/trash/main/cc_test_ds.csv")
# Model parameters
hidden_units <-c(4)
epochs <-c(50)
dropout <-c(0.01)
learn_rate <- c(0.01)
activation <- c("relu")
penalty <- c(0.01)
validation <-c(0.80)
# Training data set
data_train <- data_train_sub[1:1250,]
# Validation data set
data_test <- data_train_sub[1251:1500,]
# Model fitting
fit <- brulee_mlp(x = as.matrix(data_train[, 2:ncol(data_train)]),
y = data_train$cc,
hidden_units = hidden_units,
epochs = epochs, dropout = dropout, learn_rate = learn_rate, activation = activation,
penalty = penalty,validation=validation)
#Plot
predict(fit, data_test) %>%
bind_cols(data_test) %>%
ggplot(aes(x = .pred, y = cc)) +
geom_abline(col = "green") +
geom_point(alpha = .3) +
lims(x = c(0, 1.0), y = c(0, 1.0)) +
coord_fixed(ratio = 1)
This sounds strange to me. I would appreciate any help.
Thanks in advance!
The main issues were the outliers (removed below) and that you needed to standardize your predictors so they are on the same scale.
Although the model doesn't fit great, here is a modified version with more complexity that does give different predicted values. I also added PCA, which helps a small amount (but you could leave that step out of the recipe).
library(tidymodels)
library(brulee)
tidymodels_prefer()
theme_set(theme_bw())
options(pillar.advice = FALSE, pillar.min_title_chars = Inf)
data_train_sub <- read.csv("https://raw.githubusercontent.com/Leprechault/trash/main/cc_test_ds.csv")
# Model parameters
hidden_units <-c(50) # more hidden units
epochs <-c(500) # more iterations
dropout <-c(0) # since we are using penalization, no dropout
learn_rate <- c(0.01)
activation <- c("relu")
penalty <- c(0.01)
validation <-c(0.20) # hold out 20%
# Training data set
data_train <- data_train_sub[1:1250,]
# There are two extreme outliers:
data_train_2 <- data_train %>% slice(-c(64, 162))
# Validation data set
data_test <- data_train_sub[1251:1500,]
rec <-
recipe(cc ~ ., data = data_train_2) %>%
step_normalize(all_predictors()) %>%
step_pca(all_predictors())
set.seed(1)
# Model fitting
fit <- brulee_mlp(rec, data = data_train_2,
hidden_units = hidden_units,
epochs = epochs, dropout = dropout, learn_rate = learn_rate, activation = activation,
penalty = penalty,validation=validation)
# check convergence
autoplot(fit)
#Plot
predict(fit, data_test) %>%
bind_cols(data_test) %>%
ggplot(aes(x = .pred, y = cc)) +
geom_abline(col = "green") +
geom_point(alpha = .3) +
lims(x = c(0, 1.0), y = c(0, 1.0)) +
coord_fixed(ratio = 1)
Created on 2023-01-04 by the reprex package (v2.0.1)

How to generate 1000 data sets and use Ridge and Lasso regression for all of them in R?

So I need to generate 1000 data sets with 200 observations each in R from the model shown in the linked image,
and use Lasso and Ridge regression for all of them. Then I need to get the beta_j coefficients for Lasso and Ridge. Can anyone help? Thanks in advance!
The setup is as you described in the image:
library(magrittr)
library(tidyverse)
library(glmnet)
M <- 9
beta <- c(c(0, 3, 2, 1, 0.5, 0.3),
rep(0, 10 - 6))
beta <- beta[-1] #glmnet contains the intercept
sigma <- diag(M) + 0.5 - 0.5 * diag(M)
sigma
N <- 200
G <- 1000
Now, generate X and y for a single data set (and check that beta and X conform):
Xj <- mvtnorm::rmvnorm(n = N, sigma = sigma) %>%
set_colnames(paste0("x_", seq_len(ncol(.))))
# X <- cbind(intercept = 1, Xj) # glmnet contains the intercept
X <- Xj
epsilon <- rnorm(n = N, sd = 0.5)
beta %>% length
X %>% ncol()
y <- tcrossprod(beta, X) + epsilon
y
For each data set, the model estimates have to be found:
list(
lasso =
glmnet::cv.glmnet(
X, y, family = "gaussian",
alpha = 1,
intercept = FALSE
),
ridge =
glmnet::cv.glmnet(
X, y, family = "gaussian",
alpha = 0,
intercept = FALSE
)
) %>%
print() %>%
map_df(. %>% coef() %>% as.matrix() %>% t() %>% as_tibble(), .id = "type")
Now, one could use replicate, but the number of data sets is large, so we will use parallel programming here:
library(furrr)
plan(multisession, workers = 4)
seq_len(G) %>%
# seq_len(50) %>%
furrr::future_map_dfr(
~ {
Xj <- mvtnorm::rmvnorm(n = N, sigma = sigma) %>%
set_colnames(paste0("x_", seq_len(ncol(.))))
# X <- cbind(intercept = 1, Xj) # glmnet contains the intercept
X <- Xj
epsilon <- rnorm(n = N, sd = 0.5)
y <- tcrossprod(beta, X) + epsilon
list(
lasso =
glmnet::cv.glmnet(
X, y, family = "gaussian",
alpha = 1,
intercept = FALSE,
parallel = FALSE
),
ridge =
glmnet::cv.glmnet(
X, y, family = "gaussian",
alpha = 0,
intercept = FALSE,
parallel = FALSE
)
) %>%
# print() %>%
map_df(. %>%
coef() %>%
as.matrix() %>%
t() %>%
as_tibble(), .id = "type") %>%
mutate(rep = .x)
},
.progress = TRUE,
.options = furrr_options(seed = TRUE)
) ->
results
This will give a progress bar, and a rep column that ties each set of model estimates to the data set it came from.
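A quick sanity check on the shape of results (a sketch, assuming the pipeline above completed):
results %>% count(type)   # n = 1000 for both lasso and ridge
n_distinct(results$rep)   # 1000 data sets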
Let us try to summarise these results somehow:
results %>%
glimpse() %>%
pivot_longer(c(`(Intercept)`, starts_with("x_")),
names_to = "parameter", values_to = "estimate") %>%
glimpse() %>%
# ggplot(aes(estimate, group = interaction(type, parameter))) +
ggplot(aes(estimate)) +
geom_vline(data = tibble(true_beta = beta, parameter = paste0("x_", 1:9)) %>%
add_row(true_beta = 0, parameter = "(Intercept)"),
aes(xintercept = true_beta)) +
# geom_density() +
stat_bin(geom = "step", aes(y = after_stat(density))) +
facet_grid(type ~ parameter, scales = "free") +
ggpubr::theme_pubclean()
For each parameter there is a distribution of estimates; they are plotted as histograms, with the true values shown as vertical lines.
The results are quite surprising, to me at least:
Instead of coef one can use glmnet::coef.glmnet and provide s = c("lambda.1se", "lambda.min"). Just for fun, here's how the plot would look if both of these hyper-parameter lambdas were used:
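For a single fit, such a call could look like this (a minimal sketch; note that I pass the numeric lambda values stored on the cv.glmnet object rather than the character shortcuts):
cvfit <- glmnet::cv.glmnet(X, y, family = "gaussian", alpha = 1, intercept = FALSE)
coef(cvfit, s = c(cvfit$lambda.min, cvfit$lambda.1se))  # one coefficient column per lambda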

Visualize Multilevel Growth Model with nlme/ggplot2 vs lme4/ggplot2

I am trying to visualize the results of an nlme object without success. When I do so with an lmer object, the correct plot is created. My goal is to use nlme and visualize a fitted growth curve for each individual with ggplot2. The predict() function seems to work differently with nlme and lmer objects.
model:
library(nlme)
#AR1 with REML
autoregressive <- lme(NPI ~ time,
data = data,
random = ~time|patient,
method = "REML",
na.action = "na.omit",
control = list(maxIter=5000, opt="optim"),
correlation = corAR1())
nlme visualization attempt:
data <- na.omit(data)
data$patient <- factor(data$patient,
levels = 1:23)
ggplot(data, aes(x=time, y=NPI, colour=factor(patient))) +
geom_point(size=1) +
#facet_wrap(~patient) +
geom_line(aes(y = predict(autoregressive,
level = 1)), size = 1)
when I use:
data$fit<-fitted(autoregressive, level = 1)
geom_line(aes(y = fitted(autoregressive), group = patient))
it returns the same fitted values for each individual and so ggplot produces the same growth curve for each. Running test <-data.frame(ranef(autoregressive, level=1)) returns varying intercepts and slopes by patient id. Interestingly, when I fit the model with lmer and run the below code it returns the correct plot. Why does predict() work differently with nlme and lmer objects?
library(lme4)
timeREML <- lmer(NPI ~ time + (time | patient),
data = data,
REML=T, na.action=na.omit)
ggplot(data, aes(x = time, y = NPI, colour = factor(patient))) +
geom_point(size=3) +
#facet_wrap(~patient) +
geom_line(aes(y = predict(timeREML)))
In creating a reproducible example, I found that the error was not occurring in predict() nor in ggplot() but instead in the lme model.
Data:
###libraries
library(nlme)
library(tidyr)
library(ggplot2)
###example data
df <- data.frame(replicate(78, sample(seq(from = 0,
to = 100, by = 2), size = 25,
replace = F)))
##add id
df$id <- 1:nrow(df)
##rearrange cols
df <- df[c(79, 1:78)]
##sort columns
df[,2:79] <- lapply(df[,2:79], sort)
##long format
df <- gather(df, time, value, 2:79)
##convert time to numeric
df$time <- factor(df$time)
df$time <- as.numeric(df$time)
##order by id, time, value
df <- df[order(df$id, df$time),]
##order value
df$value <- sort(df$value)
Model 1 with no NA values fits successfully.
###model1
model1 <- lme(value ~ time,
data = df,
random = ~time|id,
method = "ML",
na.action = "na.omit",
control = list(maxIter=5000, opt="optim"),
correlation = corAR1(0, form=~time|id,
fixed=F))
Introducing NAs causes a non-invertible coefficient matrix error in model 1.
###model 1 with one NA value
df[3,3] <- NA
model1 <- lme(value ~ time,
data = df,
random = ~time|id,
method = "ML",
na.action = "na.omit",
control = list(maxIter=2000, opt="optim"),
correlation = corAR1(0, form=~time|id,
fixed=F))
But not in model 2, which has a simpler within-group AR(1) correlation structure.
###but not in model2
model2 <- lme(value ~ time,
data = df,
random = ~time|id,
method = "ML",
na.action = "na.omit",
control = list(maxIter=2000, opt="optim"),
correlation = corAR1(0, form = ~1 | id))
However, changing opt="optim" to opt="nlminb" fits model 1 successfully.
###however changing the opt to "nlminb", model 1 runs
model3 <- lme(value ~ time,
data = df,
random = ~time|id,
method = "ML",
na.action = "na.omit",
control = list(maxIter=2000, opt="nlminb"),
correlation = corAR1(0, form=~time|id,
fixed=F))
The code below visualizes model 3 (formerly model 1) successfully.
df <- na.omit(df)
ggplot(df, aes(x=time, y=value)) +
geom_point(aes(colour = factor(id))) +
#facet_wrap(~id) +
geom_line(aes(y = predict(model3, level = 0)), size = 1.3, colour = "black") +
geom_line(aes(y = predict(model3, level=1, group=id), colour = factor(id)), size = 1)
Note that I am not exactly sure what changing the optimizer from "optim" to "nlminb" does and why it works.
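For reference, the optimizer is chosen via the opt argument of lmeControl(); here is a sketch of the same call written with a named control object (my rewording, not part of the original code):
ctrl <- lmeControl(maxIter = 2000, opt = "nlminb")  # or opt = "optim"
model3 <- lme(value ~ time,
data = df,
random = ~time|id,
method = "ML",
na.action = "na.omit",
control = ctrl,
correlation = corAR1(0, form=~time|id, fixed=F))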

Include weibull fit in ggsurvplot

I would like to fit a Weibull curve to some event data and then include the fitted Weibull curve in a survival plot produced by survminer::ggsurvplot. Any ideas how?
Here is an example to work on:
A function for simulating Weibull data:
# N = sample size
# lambda = scale parameter in h0()
# rho = shape parameter in h0()
# beta = fixed effect parameter
# rateC = rate parameter of the exponential distribution of C
simulWeib <- function(N, lambda, rho, beta, rateC)
{
# covariate --> N Bernoulli trials
x <- sample(x=c(0, 1), size=N, replace=TRUE, prob=c(0.5, 0.5))
# Weibull latent event times
v <- runif(n=N)
Tlat <- (- log(v) / (lambda * exp(x * beta)))^(1 / rho)
# censoring times
C <- rexp(n=N, rate=rateC)
# follow-up times and event indicators
time <- pmin(Tlat, C)
status <- as.numeric(Tlat <= C)
# data set
data.frame(id=1:N,
time=time,
status=status,
x=x)
}
Generate the data (note: library(survival) is needed for coxph() and survfit() below):
library(survival)
set.seed(1234)
betaHat <- rep(NA, 1e3)
for(k in 1:1e3)
{
dat <- simulWeib(N=100, lambda=0.01, rho=1, beta=-0.6, rateC=0.001)
fit <- coxph(Surv(time, status) ~ x, data=dat)
betaHat[k] <- fit$coef
}
#Estimate a survival function
survfit(Surv(time, status) ~ 1, data = dat) -> out0
#plot
library(survminer)
ggsurvplot(out0, data = dat, risk.table = TRUE)
gg1 <- ggsurvplot(
out0, # survfit object with calculated statistics.
data = dat, # data used to fit survival curves.
risk.table = TRUE, # show risk table.
pval = TRUE, # show p-value of log-rank test.
conf.int = TRUE, # show confidence intervals for
# point estimates of survival curves.
xlim = c(0,2000), # present narrower X axis, but not affect
# survival estimates.
break.time.by = 500, # break X axis in time intervals by 500.
ggtheme = theme_minimal(), # customize plot and risk table with a theme.
risk.table.y.text.col = T, # colour risk table text annotations.
risk.table.y.text = FALSE,
surv.median.line = "hv",
color = "darkgreen",
conf.int.fill = "lightblue",
title = "Survival probability",# show bars instead of names in text annotations
# in legend of risk table
)
gg1
As far as I can see, it is not possible to do this with ggsurvplot at the moment.
I created an issue requesting this feature: https://github.com/kassambara/survminer/issues/276
You can plot survivor curves of a Weibull model with ggplot2 like this:
library("survival")
wbmod <- survreg(Surv(time, status) ~ x, data = dat)
s <- seq(.01, .99, by = .01)
t_0 <- predict(wbmod, newdata = data.frame(x = 0),
type = "quantile", p = s)
t_1 <- predict(wbmod, newdata = data.frame(x = 1),
type = "quantile", p = s)
smod <- data.frame(time = c(t_0, t_1),
surv = rep(1 - s, times = 2),
strata = rep(c(0, 1), each = length(s)),
upper = NA, lower = NA)
# head(surv_summary(cm))   # note: 'cm' is not defined in this example
library("ggplot2")
ggplot() +
geom_line(data = smod, aes(x = time, y = surv, color = factor(strata))) +
theme_classic()
However to my knowledge you cannot use survminer (yet):
library("survminer")
# wrong:
ggsurvplot(smod)
# does not work:
gg1$plot + geom_line(data = smod, aes(x = time, y = surv, color = factor(strata)))
The following works for me. Probably the credit goes to Heidi for filing a feature request.
Hope someone finds this useful.
library(survminer)
library(survival)   # for Surv(), survreg(), survfit(), and the lung data
library(tidyr)
s <- with(lung, Surv(time, status))
sWei <- survreg(s ~ as.factor(sex),dist='weibull',data=lung)
fKM <- survfit(s ~ sex,data=lung)
pred.sex1 = predict(sWei, newdata=list(sex=1),type="quantile",p=seq(.01,.99,by=.01))
pred.sex2 = predict(sWei, newdata=list(sex=2),type="quantile",p=seq(.01,.99,by=.01))
df = data.frame(y=seq(.99,.01,by=-.01), sex1=pred.sex1, sex2=pred.sex2)
df_long = gather(df, key= "sex", value="time", -y)
p = ggsurvplot(fKM, data = lung, risk.table = T)
p$plot = p$plot + geom_line(data=df_long, aes(x=time, y=y, group=sex))

How to graph my multiple linear regression model (caret)?

I have created a multiple linear regression model and would now like to plot it, but I can't seem to figure out how. Any help would be greatly appreciated! I used Boruta to find the feature attributes and then used train() to get the model. When I try to plot model_lm I get the error:
There are no tuning parameters with more than 1 value.
Here is my code showing what I have attempted so far:
rt_train <- rttotal2
rt_train$year <- NULL
#rt_train$box_office <- NULL
#impute NAs and address multicollinearity
preproc <- preProcess(rt_train, method = c("knnImpute","center",
"scale"))
rt_proc <- predict(preproc, rt_train)
rt_proc$box_office <- rt_train$box_office
sum(is.na(rt_proc))
titles <- rt_proc$titles
rt_proc$titles <- NULL
#rt_train$interval <- as.factor(rt_train$interval)
dmy <- dummyVars(" ~ .", data = rt_proc,fullRank = T)
rt_transform <- data.frame(predict(dmy, newdata = rt_proc))
index <- createDataPartition(rt_transform$interval, p =.75, list = FALSE)
train_m <- rt_transform[index, ]
rt_test <- rt_transform[-index, ]
str(rt_train)
y_train <- train_m$box_office
y_test <-rt_test$box_office
train_m$box_office <- NULL
rt_test$box_office <- NULL
#selected feature attributes
boruta.train <- Boruta(interval~., train_m, doTrace =1)
#graph to see most important var to interval
lz<-lapply(1:ncol(boruta.train$ImpHistory),function(i)
boruta.train$ImpHistory[is.finite(boruta.train$ImpHistory[,i]),i])
names(lz) <- colnames(boruta.train$ImpHistory)
plot(boruta.train, xlab = "", xaxt = "n")
Labels <- sort(sapply(lz,median))
axis(side = 1,las=2,labels = names(Labels),
at = 1:ncol(boruta.train$ImpHistory), cex.axis = 0.7)
#get most important attributes
final.boruta <- TentativeRoughFix(boruta.train)
print(final.boruta)
getSelectedAttributes(final.boruta, withTentative = F)
boruta.rt_df <- attStats(final.boruta)
boruta.rt_df
boruta.rt_df <- setDT(boruta.rt_df, keep.rownames = TRUE)[]
predictors <- boruta.rt_df %>%
filter(., decision =="Confirmed") %>%
select(., rn)
predictors <- unlist(predictors)
control <- trainControl(method="repeatedcv",
number=10,
repeats=6)
#look at residuals
#p-value is very small so reject H0 that predictors have no effect so
#we can use rotten tomatoes to predict box_office ranges
train_m$interval <- NULL
model_lm <- train(train_m[,predictors],
y_train, method='lm',
trControl = control, tuneLength = 10)
model_lm #.568
#
plot(model_lm)
plot(model_lm)
z <- varImp(object=model_lm)
z <- setDT(z, keep.rownames = TRUE)
z$model <- NULL
z$calledFrom <- NULL
row.names(z)
plot(varImp(object=model_lm),main="Linear Model Variable Importance")
predictions<-predict.train(object=model_lm,rt_test[,predictors],type="raw")
table(predictions)
#get coeff
interc <- coef(model_lm$finalModel)
slope <- coef(model_lm$finalModel)
ggplot(data = rt_train, aes(y = box_office)) +
geom_point() +
geom_abline(slope = slope, intercept = interc, color = 'red')
This is what some of my input looks like. Thank you!!
Here is an example using the inbuilt data set cars:
data(cars, package = "datasets")
library(caret)
Build the model:
control <- trainControl(method = "repeatedcv",
number = 10,
repeats = 6)
model_lm <- train(dist ~ speed, data = cars, method='lm',
trControl = control, tuneLength = 10)
I will assume you would like to plot the final model.
You can use the caret predict.train function to get the predictions from the model and plot them:
pred <- predict(model_lm, cars)
pred <- data.frame(pred = pred, speed = cars$speed)
Additionally, you can provide the cars data set to geom_point and plot the observations:
library(ggplot2)
ggplot(data = pred)+
geom_line(aes(x = speed, y = pred))+
geom_point(data = cars, aes(x=speed, y = dist))
If you would like to obtain the confidence or prediction interval, you can use the predict.lm function on model_lm$finalModel.
Here is an example for the prediction interval:
pred <- predict(model_lm$finalModel, cars, se.fit = TRUE, interval = "prediction")
pred <- data.frame(pred = pred$fit[,1], speed = cars$speed, lwr = pred$fit[,2], upr = pred$fit[,3])
pred_int <- ggplot(data = pred)+
geom_line(aes(x = speed, y = pred))+
geom_point(data = cars, aes(x = speed, y = dist)) +
geom_ribbon(aes(ymin = lwr, ymax = upr, x = speed), alpha = 0.2)
or the confidence interval:
pred <- predict(model_lm$finalModel, cars, se.fit = TRUE, interval = "confidence")
pred <- data.frame(pred = pred$fit[,1], speed = cars$speed, lwr = pred$fit[,2], upr = pred$fit[,3])
pred_conf <- ggplot(data = pred)+
geom_line(aes(x = speed, y = pred))+
geom_point(data = cars, aes(x = speed, y = dist)) +
geom_ribbon(aes(ymin = lwr, ymax = upr, x = speed), alpha = 0.2)
plotting them side by side:
library(cowplot)
plot_grid(pred_int, pred_conf)
To plot the linear dependence on two variables you can use a 3D plot; for more than three variables it becomes a problem.
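If you do want the two-predictor case, here is a minimal sketch of the 3D idea using base persp() and the built-in mtcars data (my own example, not taken from the answer above):
fit2 <- lm(mpg ~ wt + hp, data = mtcars)
wt_seq <- seq(min(mtcars$wt), max(mtcars$wt), length.out = 30)
hp_seq <- seq(min(mtcars$hp), max(mtcars$hp), length.out = 30)
# predicted mpg over a grid of wt and hp values
pred_grid <- outer(wt_seq, hp_seq,
function(wt, hp) predict(fit2, newdata = data.frame(wt = wt, hp = hp)))
persp(wt_seq, hp_seq, pred_grid,
xlab = "wt", ylab = "hp", zlab = "predicted mpg",
theta = 30, phi = 20, ticktype = "detailed")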
