I'm quite an R newbie and I'm facing the following challenge.
I'll share my code here, but applied to a different data frame, since I cannot share the original one.
This is my code:
library(mgcv)
fit <- gam(carb ~ te(cyl, hp, k = c(3, 4)), data = mtcars)
plot(fit, rug = FALSE, pers = TRUE, theta = 45, main = "test")
Using my company's data, this generates a nice surface with the predicted values on the Z axis.
I would like to add the actual response values as red dots along the Z axis, so that I can see where the predicted values under- or over-estimate the actual response.
Would you know what parameter I should add to plot in order to do that?
Many thanks
As @李哲源 pointed out in the comments, you shouldn't use plot here, because it isn't flexible enough. Here's a version based on the referenced question Rough thin-plate spline fitting (thin-plate spline interpolation) in R with mgcv.
# First, get the fit
library(mgcv)
fit <- gam( carb ~ te(cyl, hp, k=c(3,4)), data = mtcars)
# Now expand it to a grid so that persp will work
steps <- 30
cyl <- with(mtcars, seq(min(cyl), max(cyl), length = steps) )
hp <- with(mtcars, seq(min(hp), max(hp), length = steps) )
newdat <- expand.grid(cyl = cyl, hp = hp)
carb <- matrix(predict(fit, newdat), steps, steps)
# Now plot it
p <- persp(cyl, hp, carb, theta = 45, col = "yellow")
# To add the points, you need the same 3d transformation
obs <- with(mtcars, trans3d(cyl, hp, carb, p))
pred <- with(mtcars, trans3d(cyl, hp, fitted(fit), p))
points(obs, col = "red", pch = 16)
# Add segments to show where the points are in 3d
segments(obs$x, obs$y, pred$x, pred$y)
That produces the following plot:
You might not want to make predictions so far from the observed data. You can put NA values into carb to avoid that. This code does that:
exclude <- exclude.too.far(rep(cyl, steps),
                           rep(hp, rep(steps, steps)),
                           mtcars$cyl,
                           mtcars$hp, 0.15)  # 0.15 chosen by trial and error
carb[exclude] <- NA
p <- persp(cyl, hp, carb, theta = 45, col = "yellow")
obs <- with(mtcars, trans3d(cyl, hp, carb, p))
pred <- with(mtcars, trans3d(cyl, hp, fitted(fit), p))
points(obs, col = "red", pch = 16)
segments(obs$x, obs$y, pred$x, pred$y)
That produces this plot:
Finally, you might want to use the rgl package to get a dynamic graph instead. After the same manipulations as above, use this code to do the plotting:
library(rgl)
persp3d(cyl, hp, carb, col="yellow", polygon_offset = 1)
surface3d(cyl, hp, carb, front = "lines", back = "lines")
with(mtcars, points3d(cyl, hp, carb, col = "red"))
# Interleave fitted and observed z values so each consecutive pair forms one segment
with(mtcars, segments3d(rep(cyl, each = 2),
                        rep(hp, each = 2),
                        as.numeric(rbind(fitted(fit),
                                         carb))))
Here's one possible view:
You can use the mouse to rotate this one if you want to see it from a different angle. One other advantage is that points that should be hidden by the surface really are hidden; in persp, they'll plot on top even if they should be behind it.
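If you want to set the viewpoint in code rather than with the mouse, rgl's view3d() does that; a minimal sketch (the angles here are arbitrary):
# Rotate the current rgl scene programmatically
view3d(theta = 45, phi = 30, zoom = 0.9)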
Related
fit1 = lm(price ~ . , data = car)
fit2 = lm(log(price) ~ . , data = car)
I'm not sure how to convert log(price) back to price in fit2. Won't it just become the same thing as fit1 if I do convert it? Please help.
Let's take a very simple example. Suppose I have some data points like this:
library(ggplot2)
df <- data.frame(x = 1:10, y = (1:10)^2)
(p <- ggplot(df, aes(x, y)) + geom_point())
I want to try to fit a model to them, but don't know what form this should take. I try a linear regression first and plot the resultant prediction:
mod1 <- lm(y ~ x, data = df)
(p <- p + geom_line(aes(y = predict(mod1)), color = "blue"))
Next I try a linear regression on log(y). Whatever results I get from this model will be predicted values of log(y). But I don't want log(y) predictions, I want y predictions, so I need to take the 'anti-log' of the prediction. In R we do this with exp():
mod2 <- lm(log(y) ~ x, data = df)
(p <- p + geom_line(aes(y = exp(predict(mod2))), color = "red"))
But we can see that we have different regression lines. That's because when we took the log of y, we were effectively fitting a straight line on the plot of log(y) against x. When we transform the axis back to a non-log axis, our straight line becomes an exponential curve. We can see this more clearly by drawing our plot again with a log-transformed y axis:
p + scale_y_log10(limits = c(1, 500))
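To verify the back-transformation numerically: exp() turns the additive linear predictor into a multiplicative one, which is exactly an exponential curve in x. A quick check, using mod2 and df from above:
# exp(a + b*x) = exp(a) * exp(b)^x: multiplicative growth in x
a <- coef(mod2)[1]
b <- coef(mod2)[2]
all.equal(unname(exp(a + b * df$x)), unname(exp(predict(mod2))))  # TRUE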
I have a simple polynomial regression which I do as follows
attach(mtcars)
fit <- lm(mpg ~ hp + I(hp^2))
Now, I plot as follows
> plot(mpg~hp)
> points(hp, fitted(fit), col='red', pch=20)
This gives me the following
I want to connect these points into a smooth curve, using lines gives me the following
> lines(hp, fitted(fit), col='red', type='b')
What am I missing here? I want the output to be a smooth curve connecting the points.
I like to use ggplot2 for this because it's usually very intuitive to add layers of data.
library(ggplot2)
fit <- lm(mpg ~ hp + I(hp^2), data = mtcars)
prd <- data.frame(hp = seq(from = range(mtcars$hp)[1], to = range(mtcars$hp)[2], length.out = 100))
err <- predict(fit, newdata = prd, se.fit = TRUE)
prd$lci <- err$fit - 1.96 * err$se.fit
prd$fit <- err$fit
prd$uci <- err$fit + 1.96 * err$se.fit
ggplot(prd, aes(x = hp, y = fit)) +
  theme_bw() +
  geom_line() +
  geom_smooth(aes(ymin = lci, ymax = uci), stat = "identity") +
  geom_point(data = mtcars, aes(x = hp, y = mpg))
Try:
lines(sort(hp), fitted(fit)[order(hp)], col='red', type='b')
Your rows aren't sorted by hp, so lines() connects the points in the order they appear in the data frame, which produces a mess; sorting the x values and reordering the fitted values to match fixes it.
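To see the problem in isolation, here's a small self-contained sketch (toy data, not from the question): with unsorted x, lines() zig-zags; ordering both vectors the same way gives the smooth curve.
set.seed(1)
x <- sample(seq(0, 10, by = 0.5))   # deliberately shuffled x values
y <- x^2 + rnorm(length(x), sd = 5)
f <- lm(y ~ x + I(x^2))
plot(x, y)
lines(x, fitted(f), col = "grey")                   # messy: follows data order
lines(sort(x), fitted(f)[order(x)], col = "red")    # smooth: sorted by x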
Generally a good way to go is to use the predict() function. Pick some x values, use predict() to generate corresponding y values, and plot them. It can look something like this:
newdat = data.frame(hp = seq(min(mtcars$hp), max(mtcars$hp), length.out = 100))
newdat$pred = predict(fit, newdata = newdat)
plot(mpg ~ hp, data = mtcars)
with(newdat, lines(x = hp, y = pred))
See Roman's answer for a fancier version of this method, where confidence intervals are calculated too. In both cases the actual plotting of the solution is incidental - you can use base graphics or ggplot2 or anything else you'd like - the key is just to use the predict function to generate the proper y values. It's a good method because it extends to all sorts of fits, not just polynomial linear models. You can use it with non-linear models, GLMs, smoothing splines, etc. - anything with a predict method.
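For example, the same recipe works unchanged for a GLM; here's a sketch with a logistic regression on mtcars (the model is just for illustration):
gfit <- glm(am ~ hp, data = mtcars, family = binomial)
newdat <- data.frame(hp = seq(min(mtcars$hp), max(mtcars$hp), length.out = 100))
newdat$pred <- predict(gfit, newdata = newdat, type = "response")  # probabilities, not log-odds
plot(am ~ hp, data = mtcars)
with(newdat, lines(x = hp, y = pred))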
After variable selection I usually end up with a model containing a numeric covariate (2nd or 3rd degree polynomial). What I want to do is plot it, preferably using the emmeans package. Is there a way of doing that?
I can do it using predict:
m1 <- lm(mpg ~ poly(disp,2), data = mtcars)
df <- cbind(disp = mtcars$disp, predict.lm(m1, interval = "confidence"))
df <- as.data.frame(df)
library(ggplot2)
ggplot(data = df, aes(x = disp, y = fit)) +
  geom_line() +
  geom_ribbon(aes(ymin = lwr, ymax = upr), alpha = 0.2)
I haven't figured out a way of doing it using either emmip or emtrends.
For illustration purposes, how could I do it using mixed models via lme?
library(nlme)
m1 <- lme(mpg ~ poly(disp, 2), random = ~ 1 | factor(am), data = mtcars)
I suspect that your issue is due to the fact that, by default, covariates are reduced to their means in emmeans. You can use the at or cov.reduce arguments to specify a larger number of values. See the documentation for ref_grid and vignette("basics", "emmeans"), or the index of vignette topics.
Using sjPlot:
plot_model(m1, terms = "disp [all]", type = "pred")
gives the same graphic.
Using emmeans:
em1 <- ref_grid(m1, at = list(disp = seq(min(mtcars$disp), max(mtcars$disp), 1)))
emmip(em1, ~disp, CIs = T)
returns a graphic with a small difference in layout. An alternative is to save the result to an object and plot it the way I want:
d1 <- emmip(em1, ~disp, CIs = T, plotit = F)
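d1 is then an ordinary data frame, so you can plot it however you like. A minimal sketch (assuming the column names emmip uses for its plotit = FALSE output: yvar for the prediction and LCL/UCL for the confidence limits):
ggplot(data = d1, aes(x = disp, y = yvar)) +
  geom_line() +
  geom_ribbon(aes(ymin = LCL, ymax = UCL), alpha = 0.2)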
I am trying to create a quadratic prediction line for a quadratic model. I am using the Auto dataset from the ISLR package. I had no trouble creating the prediction line for a linear model. However, the quadratic model yields crazy-looking lines. Here is my code.
# Linear Model
plot(Auto$horsepower, Auto$mpg,
     main = "MPG versus Horsepower",
     pch = 20)
lin_mod = lm(mpg ~ horsepower, data = Auto)
lin_pred = predict(lin_mod)
lines(Auto$horsepower, lin_pred, col = "blue", lwd = 2)
# The Quadratic model
Auto$horsepower2 = Auto$horsepower^2
quad_model = lm(mpg ~ horsepower2, data = Auto)
quad_pred = predict(quad_model)
lines(Auto$horsepower, quad_pred, col = "red", lwd = 2)
I am 99% sure that the issue is the prediction step. Why can't I produce a neat-looking quadratic prediction curve? The following code I tried does not work; could it be related?
quad_pred = predict(quad_model, data.frame(horsepower = Auto$horsepower))
Thanks!
The issue is that the x-axis values aren't sorted. It wouldn't matter for a linear model, but it becomes noticeable with a polynomial. I created a new, sorted data set and it works fine:
library(ISLR) # To load data Auto
# Linear Model
plot(Auto$horsepower, Auto$mpg,
     main = "MPG versus Horsepower",
     pch = 20)
lin_mod = lm(mpg ~ horsepower, data = Auto)
lin_pred = predict(lin_mod)
lines(Auto$horsepower, lin_pred, col = "blue", lwd = 2)
# The Quadratic model
Auto$horsepower2 = Auto$horsepower^2
# Sorting Auto by horsepower2
Auto2 <- Auto[order(Auto$horsepower2), ]
quad_model = lm(mpg ~ horsepower2, data = Auto2)
quad_pred = predict(quad_model)
lines(Auto2$horsepower, quad_pred, col = "red", lwd = 2)
One option is to create the sequence of x-values for which you would like to plot the fitted lines. This can be useful if your data has a "gap" or if you wish to plot the fitted lines outside of the range of the x-variables.
# load dataset; if necessary run install.packages("ISLR")
data(Auto, package = "ISLR")
# since only 2 variables at issue, use short names
mpg <- Auto$mpg
hp <- Auto$horsepower
# fit linear and quadratic models
lmod <- lm(mpg ~ hp)
qmod <- lm(mpg ~ hp + I(hp^2))
# plot the data
plot(x=hp, y=mpg, pch=20)
# use predict() to find coordinates of points to plot
x_coords <- seq(from=floor(min(hp)), to=ceiling(max(hp)), by=1)
y_coords_lmod <- predict(lmod, newdata=data.frame(hp=x_coords))
y_coords_qmod <- predict(qmod, newdata=data.frame(hp=x_coords))
# alternatively, calculate this manually using the fitted coefficients
y_coords_lmod <- coef(lmod)[1] + coef(lmod)[2]*x_coords
y_coords_qmod <- coef(qmod)[1] + coef(qmod)[2]*x_coords + coef(qmod)[3]*x_coords^2
# add the fitted lines to the plot
points(x=x_coords, y=y_coords_lmod, type="l", col="blue")
points(x=x_coords, y=y_coords_qmod, type="l", col="red")
Alternatively, using ggplot2:
library(ggplot2)
ggplot(Auto, aes(x = horsepower, y = mpg)) +
  geom_point() +
  stat_smooth(method = "lm", formula = y ~ x, colour = "red") +
  stat_smooth(method = "lm", formula = y ~ poly(x, 2), colour = "blue")