I need to plot the relationship between x and y, where polynomials of x predict y. This is done with the poly() function, which ensures the polynomial terms are orthogonal.
How do I plot this relationship with the linear, quadratic and cubic terms considered together? The issue is that the coefficients for the different terms are not on the same scale as x.
I provide some example code below. I have tried reassigning the contrast values of each polynomial back to x, but that approach gives impossible predicted values.
Thank you in advance for your help!
Best wishes,
Eric
Here is some example code:
x = sample(0:6, 100, replace = TRUE)
y = (x * 0.2) + (x^2 * 0.05) + (x^3 * 0.001)
y = y + rnorm(100)
x = poly(x, 3)   # x is replaced by its orthogonal polynomial basis
m = lm(y ~ x)
TAB = summary(m)$coefficients
### Reassigning the corresponding contrast values to each polynomial of x:
eq = function(x, TAB, start) {
  # 'start' gives the row of the linear coefficient; quadratic and cubic follow
  pols = poly(x, 3)
  x1 = pols[, 1]; x2 = pols[, 2]; x3 = pols[, 3]
  TAB[1, 1] + x1[x] * TAB[start, 1] + x2[x] * TAB[start + 1, 1] + x3[x] * TAB[start + 2, 1]
}
plot(eq(0:7,TAB,2))
Actually, you can use poly() directly in the formula passed to lm(): y ~ poly(x, 3) is probably what you want.
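For example, here is a minimal sketch (assuming x and y are still the raw simulated vectors, i.e. before x is overwritten with poly(x, 3)): fit with poly() inside the formula and let predict() construct the matching orthogonal basis for new x values, so there is no need to reassign contrast values by hand.
dat <- data.frame(x = x, y = y)
m2 <- lm(y ~ poly(x, 3), data = dat)
newdat <- data.frame(x = seq(0, 6, length.out = 100))
newdat$fit <- predict(m2, newdata = newdat)  # predict() rebuilds the orthogonal basis for new x
plot(dat$x, dat$y)
lines(newdat$x, newdat$fit)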
For the plot, I'll use the ggplot2 package, whose geom_smooth() function can draw the fitted curve. You need to specify the method = "lm" argument and the formula.
library(tidyverse)
x <- sample(0:6,100,replace = TRUE)
y <- (x*0.2) + (x^2*.05) + (x^3*0.001)
eps <- rnorm(100)
(df <- data_frame(y = y + eps, x = x))
#> # A tibble: 100 x 2
#> y x
#> <dbl> <int>
#> 1 3.34 4
#> 2 1.23 5
#> 3 1.38 3
#> 4 -0.115 2
#> 5 1.94 5
#> 6 3.87 6
#> 7 -0.707 3
#> 8 0.954 3
#> 9 1.19 3
#> 10 -1.34 0
#> # ... with 90 more rows
Using your simulated data set,
df %>%
  ggplot() +                                              # start from the data set
  aes(x, y) +                                             # aesthetics
  geom_point() +                                          # data points
  geom_smooth(method = "lm", formula = y ~ poly(x, 3))    # lm fit
If you want to remove the points, just drop geom_point():
df %>%
ggplot() +
aes(x, y) +
geom_smooth(method = "lm", formula = y ~ poly(x, 3))
For transparency, set alpha to a value below 1:
df %>%
ggplot() +
aes(x, y) +
geom_point(alpha = .3) +
geom_smooth(method = "lm", formula = y ~ poly(x, 3))
I am trying to understand how linear modelling can be used as an alternative to the t-test when analysing gene expression data. For a single gene, I have a data frame of 20 expression values in total: group 1 (n = 10) and group 2 (n = 10).
gexp = data.frame(expression = c(2.7,0.4,1.8,0.8,1.9,5.4,5.7,2.8,2.0,4.0,3.9,2.8,3.1,2.1,1.9,6.4,7.5,3.6,6.6,5.4),
group = c(rep(1, 10), rep(2, 10)))
The data can be (box)plotted using ggplot as shown below:
plot <- gexp %>%
ggplot(aes(x = group, y = expression)) +
geom_boxplot() +
geom_point()
plot
I wish to model the expression in groups 1 and 2 using the regression formula:
Y = Beta0 + Beta1 * X + e, where Y is the expression I want to model and X represents the two groups, encoded as 0 and 1 respectively. Therefore, the expression in group 1 (X = 0) equals Beta0, and the expression in group 2 (X = 1) equals Beta0 + Beta1.
If this is modelled with:
mod1 <- lm(expression ~ group, data = gexp)
mod1
The above code outputs an intercept of 2.75 and a slope of 1.58. It is the visualisation of the linear model that I don't understand. I would be grateful for a clear explanation of the code below:
plot +
geom_point(data = data.frame(x = c(1, 2), y = c(2.75, 4.33)),
aes(x = x, y = y),
colour = "red", size = 5) +
geom_abline(intercept = coefficients(mod1)[1] - coefficients(mod1)[2],
slope = coefficients(mod1)[2])
I get why the data.frame values are the ones chosen (the value of 4.33 is the sum of the intercept, Beta0, and the slope, Beta1), but it is the geom_abline arguments I do not understand. Why is the intercept calculated as shown? The text I am using states, '...we need to subtract the slope from the intercept when plotting the linear model because groups 1 and 2 are encoded as 0 and 1 in the model, but plotted as 1 and 2 on the figure.' I don't follow this point and would be grateful for an explanation, without getting too technical.
I believe your code would be correct if the group variable were encoded as a factor.
library(ggplot2)
gexp = data.frame(expression = c(2.7,0.4,1.8,0.8,1.9,5.4,5.7,2.8,2.0,4.0,3.9,2.8,3.1,2.1,1.9,6.4,7.5,3.6,6.6,5.4),
group = factor(c(rep(1, 10), rep(2, 10))))
plot <-
ggplot(gexp, aes(x = group, y = expression)) +
geom_boxplot() +
geom_point()
mod1 <- lm(expression ~ group, data = gexp)
plot +
geom_point(data = data.frame(x = c(1, 2), y = c(2.75, 4.33)),
aes(x = x, y = y),
colour = "red", size = 5) +
geom_abline(intercept = coefficients(mod1)[1] - coefficients(mod1)[2],
slope = coefficients(mod1)[2])
Created on 2022-03-30 by the reprex package (v2.0.1)
To understand the difference between factors and integers in specifying linear models, you can have a look at the model matrix.
model.matrix(y ~ f, data = data.frame(f = 1:3, y = 1))
#> (Intercept) f
#> 1 1 1
#> 2 1 2
#> 3 1 3
#> attr(,"assign")
#> [1] 0 1
model.matrix(y ~ f, data = data.frame(f = factor(1:3), y = 1))
#> (Intercept) f2 f3
#> 1 1 0 0
#> 2 1 1 0
#> 3 1 0 1
#> attr(,"assign")
#> [1] 0 1 1
#> attr(,"contrasts")
#> attr(,"contrasts")$f
#> [1] "contr.treatment"
Created on 2022-03-30 by the reprex package (v2.0.1)
In the first model matrix, what you specify is what you get: you're modelling something as a function of the intercept and the f variable. In this model, you assert that f = 2 is twice as much as f = 1.
This works a little differently when f is a factor. A k-level factor is split up into k - 1 dummy variables, where each dummy variable encodes with 1 or 0 whether an observation deviates from the reference level (the first factor level). Modelled this way, the 2nd factor level is not assumed to be twice the 1st.
Because ggplot2 displays the first factor level at position 1 and not at position 0 (how it is modelled), your calculated intercept is off. You need to subtract 1 * slope from the calculated intercept to get it to display correctly in ggplot2.
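As a small check (a sketch reusing gexp and mod1 from the factor-coded model above), the fitted group means are 2.75 and 4.33; they sit at x = 0 and x = 1 in the model, but at positions 1 and 2 on the plot, which is why one slope is subtracted from the intercept:
tapply(gexp$expression, gexp$group, mean)                  # group means: 2.75 and 4.33
coef(mod1)                                                 # intercept = group-1 mean, slope = difference
coef(mod1)[1] + coef(mod1)[2] * c(0, 1)                    # group means at model positions 0 and 1
(coef(mod1)[1] - coef(mod1)[2]) + coef(mod1)[2] * c(1, 2)  # same means at plot positions 1 and 2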
For a very basic demonstration, I'm trying to show that a log-transformed linear model is the best one for a given set of data. To demonstrate this, I want to compare it against a standard lm, a square-root transform, etc., and show graphically that the log-transformed model fits best. The question is: how do I create multiple overlapping lm lines in one plot? If I could label them, that would also be great.
Here is some sample data with a starter ggplot:
library(tidyverse)
p <- runif(100, 1, 100)
q <- 6 + 3 * log(p) + rnorm(100)
sample <- data.frame(p, q)
ggplot(data = sample) +
  geom_point(mapping = aes(x = p, y = q))
You could compute the lines yourself, e.g. like this:
# Make a tibble containing the name of each transform and the actual function
transforms <- tibble(Transform = c("log", "sqrt", "linear"),
                     Function = list(log, sqrt, function(x) x))
# Compute the regression coefficients and turn them into a tidy table
lm_df <- transforms %>%
  group_by(Transform) %>%
  group_modify(~ {
    lm(q ~ .x$Function[[1]](p), data = sample) %>%
      broom::tidy() %>%
      select(term, estimate) %>%
      pivot_longer(estimate) %>%
      mutate(Function = .x$Function)
  })
> lm_df
# A tibble: 6 x 5
# Groups: Transform [3]
Transform term name value Function
<chr> <chr> <chr> <dbl> <list>
1 linear (Intercept) estimate 12.6 <fn>
2 linear .x$Function[[1]](p) estimate 0.0834 <fn>
3 log (Intercept) estimate 5.89 <fn>
4 log .x$Function[[1]](p) estimate 2.99 <fn>
5 sqrt (Intercept) estimate 9.35 <fn>
6 sqrt .x$Function[[1]](p) estimate 1.11 <fn>
# Evaluate the functions at different x values
lm_df <- lm_df %>%
  pivot_wider(names_from = term, values_from = value) %>%
  rename("Intercept" = `(Intercept)`, "Slope" = `.x$Function[[1]](p)`) %>%
  group_modify(~ {
    tibble(
      y = .x$Intercept + .x$Slope * .x$Function[[1]](seq(0, max(sample$p))),
      x = seq(0, max(sample$p))
    )
  })
> lm_df
# A tibble: 300 x 3
# Groups: Transform [3]
Transform y x
<chr> <dbl> <int>
1 linear 12.6 0
2 linear 12.7 1
3 linear 12.8 2
4 linear 12.9 3
5 linear 12.9 4
6 linear 13.0 5
7 linear 13.1 6
8 linear 13.2 7
9 linear 13.3 8
10 linear 13.4 9
# ... with 290 more rows
# Plot the functions
ggplot() +
  geom_point(data = sample, mapping = aes(x = p, y = q)) +
  geom_line(data = lm_df, aes(x = x, y = y, color = Transform))
This doesn't handle the labeling (you could use annotate() to add labels manually), but:
gg0 <- ggplot(data = sample, aes(x=p, y=q)) + geom_point()
gg0 + geom_smooth(method="lm", formula=y~x) +
geom_smooth(method="lm", formula=y~log(x), colour="red") +
geom_smooth(method="lm", formula=y~sqrt(x), colour="purple")
ggplot2 provides various "smoothing methods" and "formulas" that determine the form of the trend line. However, it is unclear to me how the parameters of the formula are specified and how I can get an exponential formula to fit my data. In other words, how do I tell ggplot that it should fit the parameter inside the exp()?
df <- data.frame(x = c(65,53,41,32,28,26,23,19))
df$y <- c(4,3,2,8,12,8,20,15)
x y
1 65 4
2 53 3
3 41 2
4 32 8
5 28 12
6 26 8
7 23 20
8 19 15
p <- ggplot(data = df, aes(x = x, y = y)) +
geom_smooth(method = "glm", se=FALSE, color="black", formula = y ~ exp(x)) +
geom_point()
p
Problematic fit:
However, if a suitable parameter is hard-coded inside the exponential, the shape of the trend line becomes reasonable:
p <- ggplot(data = df, aes(x = x, y = y)) +
geom_smooth(method = "glm", se=FALSE, color="black", formula = y ~ exp(-0.09 * x)) +
geom_point()
p
Here is an approach with method nls instead of glm.
You can pass additional parameters to nls with a list supplied to method.args =. Here we define starting values for the a and r coefficients to be fitted.
library(ggplot2)
ggplot(data = df, aes(x = x, y = y)) +
geom_smooth(method = "nls", se = FALSE,
formula = y ~ a * exp(r * x),
method.args = list(start = c(a = 10, r = -0.01)),
color = "black") +
geom_point()
As discussed in the comments, the best way to get the coefficients on the graph is by fitting the model outside the ggplot call.
model.coeff <- coef(nls( y ~ a * exp(r * x), data = df, start = c(a = 50, r = -0.04)))
ggplot(data = df, aes(x = x, y = y)) +
geom_smooth(method = "nls", se = FALSE,
formula = y ~ a * exp(r * x),
method.args = list(start = c(a = 50, r = -0.04)),
color = "black") +
geom_point() +
geom_text(x = 40, y = 15,
label = as.expression(substitute(italic(y) == a %.% italic(e)^(r %.% x),
list(a = format(unname(model.coeff["a"]),digits = 3),
r = format(unname(model.coeff["r"]),digits = 3)))),
parse = TRUE)
Firstly, to pass additional parameters to the function passed to the method param of geom_smooth, you can pass a list of named parameters to method.args.
Secondly, the problem you're seeing is that glm is placing the coefficient in front of the whole term: y ~ coef * exp(x) instead of inside: y ~ exp(coef * x) like you want. You could use optimization to solve the latter outside of glm, but you can fit it into the GLM paradigm by a transformation: a log link. This works because it's like taking the equation you want to fit, y = exp(coef * x), and taking the log of both sides, so you're now fitting log(y) = coef * x, which is equivalent to what you want to fit and works with the GLM paradigm. (This ignores the intercept. It also ends up in transformed link units, but it's easy enough to convert back if you like.)
You can run this outside of ggplot to see what the models look like:
df <- data.frame(
  x = c(65,53,41,32,28,26,23,19),
  y = c(4,3,2,8,12,8,20,15)
)
bad_model <- glm(y ~ exp(x), family = gaussian(link = 'identity'), data = df)
good_model <- glm(y ~ x, family = gaussian(link = 'log'), data = df)
# this is bad
summary(bad_model)
#>
#> Call:
#> glm(formula = y ~ exp(x), family = gaussian(link = "identity"),
#> data = df)
#>
#> Deviance Residuals:
#> Min 1Q Median 3Q Max
#> -7.7143 -2.9643 -0.8571 3.0357 10.2857
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 9.714e+00 2.437e+00 3.986 0.00723 **
#> exp(x) -3.372e-28 4.067e-28 -0.829 0.43881
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> (Dispersion parameter for gaussian family taken to be 41.57135)
#>
#> Null deviance: 278.00 on 7 degrees of freedom
#> Residual deviance: 249.43 on 6 degrees of freedom
#> AIC: 56.221
#>
#> Number of Fisher Scoring iterations: 2
# this is better
summary(good_model)
#>
#> Call:
#> glm(formula = y ~ x, family = gaussian(link = "log"), data = df)
#>
#> Deviance Residuals:
#> Min 1Q Median 3Q Max
#> -3.745 -2.600 0.046 1.812 6.080
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 3.93579 0.51361 7.663 0.000258 ***
#> x -0.05663 0.02054 -2.757 0.032997 *
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> (Dispersion parameter for gaussian family taken to be 12.6906)
#>
#> Null deviance: 278.000 on 7 degrees of freedom
#> Residual deviance: 76.143 on 6 degrees of freedom
#> AIC: 46.728
#>
#> Number of Fisher Scoring iterations: 6
From here, you can reproduce what geom_smooth is going to do: make a sequence of x values across the domain and use the predictions as the y values for the line:
# new data is a sequence across the domain of the model
new_df <- data.frame(x = seq(min(df$x), max(df$x), length = 501))
# `type = 'response'` because we want values for y back in y units
new_df$bad_pred <- predict(bad_model, newdata = new_df, type = 'response')
new_df$good_pred <- predict(good_model, newdata = new_df, type = 'response')
library(tidyr)
library(ggplot2)
new_df %>%
# reshape to long form for ggplot
gather(model, y, contains('pred')) %>%
ggplot(aes(x, y)) +
geom_line(aes(color = model)) +
# plot original points on top
geom_point(data = df)
Of course, it's a lot easier to let ggplot handle all that for you:
ggplot(df, aes(x, y)) +
geom_smooth(
method = 'glm',
formula = y ~ x,
method.args = list(family = gaussian(link = 'log'))
) +
geom_point()
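If you also want the fitted coefficients back on the original y = a * exp(r * x) scale (a quick sketch, reusing good_model from above, following the "convert back" remark earlier): since the log-link model fits E[y] = exp(b0 + b1 * x), we have a = exp(b0) and r = b1.
b <- coef(good_model)
a <- exp(b[1])  # multiplicative constant: exp(intercept on the log scale)
r <- b[2]       # rate inside the exponential
c(a = unname(a), r = unname(r))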
Consider this simple example:
dataframe <- data_frame(x = c(1,2,3,4,5,6),
y = c(12,24,24,34,12,15))
> dataframe
# A tibble: 6 x 2
x y
<dbl> <dbl>
1 1 12
2 2 24
3 3 24
4 4 34
5 5 12
6 6 15
dataframe %>% ggplot(., aes(x = x, y = y)) +
geom_point() +
geom_smooth(method = 'lm', formula = y~x)
Here the standard errors are computed with the default option. However, I would like to use the robust variance-covariance matrix available in the sandwich and lmtest packages, that is, using vcovHC(mymodel, "HC3").
Is there a simple way to get that using the geom_smooth() function?
UPDATE (2021-03-17): It was recently pointed out to me that the ggeffects package handles different VCOVs automatically, including the trickier HAC case that I originally demonstrated below. A quick example of the latter:
library(ggeffects)
library(sandwich) ## For HAC and other robust VCOVs
d <- data.frame(x = c(1,2,3,4,5,6),
y = c(12,24,24,34,12,15))
reg1 <- lm(y ~ x, data = d)
plot(ggpredict(reg1, "x", vcov.fun = "vcovHAC"))
#> Loading required namespace: ggplot2
## This gives you a regular ggplot2 object. So you can add layers as you
## normally would. E.g. If you'd like to compare with the original data...
library(ggplot2)
last_plot() +
geom_point(data = d, aes(x, y)) +
labs(caption = 'Shaded region indicates HAC 95% CI.')
Created on 2021-03-17 by the reprex package (v1.0.0)
My original answer follows below...
HC robust SEs (simple)
This is easily done now thanks to the estimatr package and its family of lm_robust functions. E.g.
library(tidyverse)
library(estimatr)
d <- data.frame(x = c(1,2,3,4,5,6),
y = c(12,24,24,34,12,15))
d %>%
ggplot(aes(x = x, y = y)) +
geom_point() +
geom_smooth(method = 'lm_robust', formula = y~x, fill="#E41A1C") + ## Robust (HC) SEs
geom_smooth(method = 'lm', formula = y~x, col = "grey50") + ## Just for comparison
labs(
title = "Plotting HC robust SEs in ggplot2",
subtitle = "Regular SEs in grey for comparison"
) +
theme_minimal()
Created on 2020-03-08 by the reprex package (v0.3.0)
HAC robust SEs (a bit more legwork)
The one caveat is that estimatr does not yet offer support for HAC (i.e. heteroscedasticity and autocorrelation consistent) SEs a la Newey-West. However, it is possible to obtain these manually with the sandwich package... which is kind of what the original question was asking anyway. You can then plot them using geom_ribbon().
I'll say for the record that HAC SEs don't make much sense for this particular data set. But here's an example of how you could do it, riffing off this excellent SO answer on a related topic.
library(tidyverse)
library(sandwich)
d <- data.frame(x = c(1,2,3,4,5,6),
y = c(12,24,24,34,12,15))
reg1 <- lm(y~x, data = d)
## Generate a prediction DF
pred_df <- data.frame(fit = predict(reg1))
## Get the design matrix
X_mat <- model.matrix(reg1)
## Get HAC VCOV matrix and calculate SEs
v_hac <- NeweyWest(reg1, prewhite = FALSE, adjust = TRUE) ## HAC VCOV (adjusted for small data sample)
#> Warning in meatHAC(x, order.by = order.by, prewhite = prewhite, weights =
#> weights, : more weights than observations, only first n used
var_fit_hac <- rowSums((X_mat %*% v_hac) * X_mat) ## Point-wise variance for predicted mean
se_fit_hac <- sqrt(var_fit_hac) ## SEs
## Add these to pred_df and calculate the 95% CI
pred_df <-
pred_df %>%
mutate(se_fit_hac = se_fit_hac) %>%
mutate(
lwr_hac = fit - qt(0.975, df=reg1$df.residual)*se_fit_hac,
upr_hac = fit + qt(0.975, df=reg1$df.residual)*se_fit_hac
)
pred_df
#> fit se_fit_hac lwr_hac upr_hac
#> 1 20.95238 4.250961 9.149822 32.75494
#> 2 20.63810 2.945392 12.460377 28.81581
#> 3 20.32381 1.986900 14.807291 25.84033
#> 4 20.00952 1.971797 14.534936 25.48411
#> 5 19.69524 2.914785 11.602497 27.78798
#> 6 19.38095 4.215654 7.676421 31.08548
## Plot it
bind_cols(
d,
pred_df
) %>%
ggplot(aes(x = x, y = y, ymin=lwr_hac, ymax=upr_hac)) +
geom_point() +
geom_ribbon(fill="#E41A1C", alpha=0.3, col=NA) + ## Robust (HAC) SEs
geom_smooth(method = 'lm', formula = y~x, col = "grey50") + ## Just for comparison
labs(
title = "Plotting HAC SEs in ggplot2",
subtitle = "Regular SEs in grey for comparison",
caption = "Note: Do HAC SEs make sense for this dataset? Definitely not!"
) +
theme_minimal()
Created on 2020-03-08 by the reprex package (v0.3.0)
Note that you could also use this approach to manually calculate and plot other robust SE predictions (e.g. HC1, HC2, etc.) if you so wished. All you would need to do is use the relevant sandwich estimator. For instance, using vcovHC(reg1, type = "HC2") instead of NeweyWest(reg1, prewhite = FALSE, adjust = TRUE) will give you an HC-robust CI identical to the first example that uses the estimatr package.
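Concretely, a minimal sketch of that substitution, reusing the reg1, X_mat and pred_df objects defined above:
v_hc2 <- vcovHC(reg1, type = "HC2")                      ## HC2 VCOV instead of Newey-West
se_fit_hc2 <- sqrt(rowSums((X_mat %*% v_hc2) * X_mat))   ## point-wise SEs for the fitted mean
pred_df$lwr_hc2 <- pred_df$fit - qt(0.975, df = reg1$df.residual) * se_fit_hc2
pred_df$upr_hc2 <- pred_df$fit + qt(0.975, df = reg1$df.residual) * se_fit_hc2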
I am very new to this whole robust SE thing, but I was able to generate the following:
zz = '
x y
1 1 12
2 2 24
3 3 24
4 4 34
5 5 12
6 6 15
'
df <- read.table(text = zz, header = TRUE)
df
library(sandwich)
library(lmtest)
lm.model<-lm(y ~ x, data = df)
coef(lm.model)
## Point-wise SEs for the fitted line, from the HC3 variance-covariance matrix
X <- model.matrix(lm.model)
se <- sqrt(rowSums((X %*% vcovHC(lm.model, type = "HC3")) * X))
fit <- predict(lm.model)
predframe <- with(df, data.frame(x,
                                 y = fit,
                                 lwr = fit - 1.96 * se,
                                 upr = fit + 1.96 * se))
library(ggplot2)
ggplot(df, aes(x = x, y = y))+
geom_point()+
geom_line(data = predframe)+
geom_ribbon(data = predframe, aes(ymin = lwr,ymax = upr), alpha = 0.3)
I have a data frame x1, which was generated with the following piece of code,
x <- c(1:10)
y <- x^3
z <- y-20
s <- z/3
t <- s*6
q <- s*y
x1 <- cbind(x,y,z,s,t,q)
x1 <- data.frame(x1)
I would like to extract the y-axis intercept and the slope of the linear regression fit for the data,
x y z s t q
1 1 1 -19 -6.333333 -38 -6.333333
2 2 8 -12 -4.000000 -24 -32.000000
3 3 27 7 2.333333 14 63.000000
4 4 64 44 14.666667 88 938.666667
5 5 125 105 35.000000 210 4375.000000
6 6 216 196 65.333333 392 14112.000000
7 7 343 323 107.666667 646 36929.666667
8 8 512 492 164.000000 984 83968.000000
9 9 729 709 236.333333 1418 172287.000000
10 10 1000 980 326.666667 1960 326666.666667
I use the following code (with the reshape2 and ggplot2 packages) to melt and plot three columns of the data:
library(reshape2)
library(ggplot2)
xm <- melt(x1, id = names(x1)[1], measure = names(x1)[c(2, 4, 5)], variable = "cols")
plt <- ggplot(xm) +
  geom_point(aes(x = x, y = value, color = cols), size = 3) +
  labs(x = "x", y = "y")
Now what I need is a separate linear least-squares fit for each of these columns, with the resulting intercepts and slopes stored in a new data frame.
I have tried plt + geom_abline(), but I don't get the desired result. Could someone let me know how to resolve this?
I suppose you're looking for geom_smooth. If you call this function with the argument method = "lm", it will calculate a separate linear fit for each group:
ggplot(xm, aes(x = x, y = value, color = cols)) +
geom_point(size = 3) +
labs(x = "x", y = "y") +
geom_smooth(method = "lm", se = FALSE)
You can also specify a quadratic fit with the poly function and the formula argument:
ggplot(xm, aes(x = x, y = value, color=cols)) +
geom_point(size = 3) +
labs(x = "x", y = "y") +
geom_smooth(method = "lm", se = FALSE, formula = y ~ poly(x, 2))
To extract the corresponding regression coefficients, you can use this approach:
# create a list of coefficients
fits <- by(xm[-2], xm$cols, function(i) coef(lm(value ~ x, i)))
# create a data frame
data.frame(cols = names(fits), do.call(rbind, fits))
# cols X.Intercept. x
# y y -277.20000 105.40000
# s s -99.06667 35.13333
# t t -594.40000 210.80000
If you want a quadratic fit, just replace value ~ x with value ~ poly(x, 2).
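For example, a small sketch of that substitution (using raw = TRUE so the quadratic coefficients are easier to read than the default orthogonal ones):
fits2 <- by(xm[-2], xm$cols, function(i) coef(lm(value ~ poly(x, 2, raw = TRUE), i)))
data.frame(cols = names(fits2), do.call(rbind, fits2))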