Calculating odds ratio from glm output - r

It is my first time doing logistic regression and I am currently trying to teach myself how to find the odds ratio. I got my coefficients from R as shown below.
(Intercept) totalmins
0.2239254 1.2424020
To exponentiate the regression coefficient I did the following:
exp(1.242) / (exp(1.242) + 1) = 0.77
Really not sure if this is the correct process or not.
Any advice on how I would go about calculating the odds ratio would be greatly appreciated.
detection - 1/0 indicating whether the animal was detected at the site
totalmins - time in minutes the animal spent at the site
Here's the full output:
glm(formula = detection ~ totalmins, family = binomial(link = "logit"),
data = data)
Deviance Residuals:
Min 1Q Median 3Q Max
-2.81040 -0.63571 0.00972 0.37355 1.16771
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -1.49644 0.81818 -1.829 0.0674 .
totalmins 0.21705 0.08565 2.534 0.0113
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for binomial family taken to be 1)
Null deviance: 41.194 on 33 degrees of freedom
Residual deviance: 21.831 on 32 degrees of freedom
(1 observation deleted due to missingness)
AIC: 25.831
Number of Fisher Scoring iterations: 8

This model evaluates the log odds of detecting an animal at the site based on the time in minutes that the animal spent on the site. The model output indicates:
log odds(animal detected | time on site) = -1.49644 + 0.21705 * minutes animal on site
To convert to odds ratios, we exponentiate the coefficients:
odds(animal detected) = exp(-1.49644) * exp(0.21705 * minutes animal on site)
Therefore, the odds of detection if the animal spends 0 minutes on site are exp(-1.49644) = 0.2239, which corresponds to a probability of 0.2239 / (1 + 0.2239) = 0.183. The odds ratio per additional minute on site is exp(0.21705) = 1.2424, i.e. each extra minute multiplies the odds of detection by about 1.24. The odds of detection for an animal on site for X minutes are calculated as follows; we'll compute the odds for minutes 0 through 10 and the associated probability of detection.
# odds of detection if animal on site for X minutes
coef_df <- data.frame(intercept     = rep(-1.49644, 11),
                      slopeMinutes  = rep(0.21705, 11),
                      minutesOnSite = 0:10)
coef_df$minuteValue   <- coef_df$minutesOnSite * coef_df$slopeMinutes
coef_df$intercept_exp <- exp(coef_df$intercept)
coef_df$slope_exp     <- exp(coef_df$minuteValue)
coef_df$odds          <- coef_df$intercept_exp * coef_df$slope_exp
coef_df$probability   <- coef_df$odds / (1 + coef_df$odds)
...and the output:
> coef_df[, c(3, 5:8)]
minutesOnSite intercept_exp slope_exp odds probability
1 0 0.2239 1.000 0.2239 0.1830
2 1 0.2239 1.242 0.2782 0.2177
3 2 0.2239 1.544 0.3456 0.2569
4 3 0.2239 1.918 0.4294 0.3004
5 4 0.2239 2.383 0.5335 0.3479
6 5 0.2239 2.960 0.6629 0.3986
7 6 0.2239 3.678 0.8235 0.4516
8 7 0.2239 4.569 1.0232 0.5057
9 8 0.2239 5.677 1.2712 0.5597
10 9 0.2239 7.053 1.5793 0.6123
11 10 0.2239 8.763 1.9622 0.6624
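For comparison, the odds ratios, their confidence intervals, and the predicted probabilities can also be pulled straight from the fitted model object. A minimal sketch, assuming the fitted model from the call above is stored in an object called fit:
# fit <- glm(detection ~ totalmins, family = binomial(link = "logit"), data = data)
exp(coef(fit))                       # odds at 0 minutes and odds ratio per extra minute
exp(confint(fit))                    # profile-likelihood 95% CIs on the odds-ratio scale
newdat <- data.frame(totalmins = 0:10)
predict(fit, newdata = newdat, type = "response")   # predicted probability of detection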
See also How to get probability from GLM output for another example using space shuttle autolander data from the MASS package.

Related

Why such large degrees of freedom on some levels in a linear mixed effects model?

I have activity budget data from wild orangutans, for which I am investigating whether there is a difference in the time they spend feeding, resting and travelling before a forest fire event and after the fire event. I am running a linear mixed effects model with the minutes spent feeding on a particular day as my response variable (with the number of minutes the orangutan is awake as an offset). Fire period and age/sex class are fixed effects, and orangutan ID is the random effect.
I have 2 levels of the fire_time factor ('pre' and 'post'), 4 levels of the Age_Sex factor ('SAF', 'FM', 'UFM', 'Adolescent'), 47 orangutans for the random effect and a total of 817 datapoints in this dataset.
My dataframe looks like this:
head(F)
Follow_num Ou_name Date Month fire_time Age_Sex Primary_Act AP_obs minutesin24hr Perc_of_waking_day Perc_of_24hr
1 2029 Teresia 2011-10-04 Oct-11 pre SAF Feeding 625 310 49.60 21.53
5 2030 Teresia 2011-10-05 Oct-11 pre SAF Feeding 610 285 46.72 19.79
9 2032 Teresia 2011-10-09 Oct-11 pre SAF Feeding 620 340 54.84 23.61
13 2034 Teresia 2011-10-11 Oct-11 pre SAF Feeding 670 405 60.45 28.13
17 2038 Victor 2011-10-27 Oct-11 pre FM Feeding 675 155 22.96 10.76
21 2040 Nero 2011-11-03 Nov-11 pre FM Feeding 640 295 46.09 20.49
The code for my model is as follows:
library(lme4)
lmer(minutesin24hr ~ Age_Sex + fire_time + (1|Ou_name), data = F, offset = AP_obs, REML = TRUE, na.action = "na.fail")
When I run this model using the lmerTest package to check degrees of freedom and p-values, it seems I have very large degrees of freedom for the levels that are significant (see Age_SexSAF and fire_timepre).
lmerTestmodel <- lmerTest::lmer(minutesin24hr ~ Age_Sex + fire_time + (1|Ou_name), data = F, offset = AP_obs, REML = TRUE, na.action = "na.fail")
summary(lmerTestmodel)
REML criterion at convergence: 9370.7
Scaled residuals:
Min 1Q Median 3Q Max
-3.8955 -0.6304 0.1006 0.7141 2.3109
Random effects:
Groups Name Variance Std.Dev.
Ou_name (Intercept) 1636 40.44
Residual 5460 73.89
Number of obs: 817, groups: Ou_name, 47
Fixed effects:
Estimate Std. Error df t value Pr(>|t|)
(Intercept) -188.614 14.711 26.765 -12.821 6.14e-13 ***
Age_SexFM -20.297 17.978 24.696 -1.129 0.2698
Age_SexSAF -25.670 11.799 318.473 -2.176 0.0303 *
Age_SexUFM 12.925 22.806 27.319 0.567 0.5755
fire_timepre -29.558 6.214 709.117 -4.757 2.38e-06 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Correlation of Fixed Effects:
(Intr) Ag_SFM A_SSAF A_SUFM
Age_SexFM -0.741
Age_SexSAF -0.505 0.374
Age_SexUFM -0.598 0.480 0.302
fire_timepr -0.298 -0.015 0.149 0.034
I imagine these large degrees of freedom are what make the p-values significant, so I am sceptical about the model. Why am I getting such large degrees of freedom for just these two levels? There are more data points in the Age_SexSAF and fire_timepre levels, but the values still don't seem normal to me.
I am planning on reporting the estimates, confidence intervals and p-values in my thesis, but am concerned about reporting them if these degrees of freedom are wrong.
Apologies if this is a naïve question; it is the first time I have ventured into mixed effects models. Any advice is greatly appreciated, thanks!
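As a sanity check on the Satterthwaite denominator degrees of freedom shown above, lmerTest can also report Kenward-Roger degrees of freedom via its documented ddf argument (the pbkrtest package must be installed). A minimal sketch, reusing lmerTestmodel from above:
# compare the default Satterthwaite df with the Kenward-Roger approximation
summary(lmerTestmodel, ddf = "Satterthwaite")   # default, as shown above
summary(lmerTestmodel, ddf = "Kenward-Roger")   # alternative df approximation
anova(lmerTestmodel, ddf = "Kenward-Roger")     # F-tests for whole factors with KR df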

How is Pr(>|t|) in a linear regression in R calculated?

What formula is used to calculate the value of Pr(>|t|) that is output when linear regression is performed by R?
I understand that Pr(>|t|) is a p-value, but I do not understand how it is calculated.
For example, the value of Pr(>|t|) for x1 is displayed as 0.021 in the output below, and I want to know how this value was calculated:
x1 <- c(10,20,30,40,50,60,70,80,90,100)
x2 <- c(20,30,60,70,100,110,140,150,180,190)
y <- c(100,120,150,180,210,220,250,280,310,330)
summary(lm(y ~ x1+x2))
Call:
lm(formula = y ~ x1 + x2)
Residuals:
Min 1Q Median 3Q Max
-6 -2 0 2 6
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 74.0000 3.4226 21.621 1.14e-07 ***
x1 1.8000 0.6071 2.965 0.021 *
x2 0.4000 0.3071 1.303 0.234
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 4.781 on 7 degrees of freedom
Multiple R-squared: 0.9971, Adjusted R-squared: 0.9963
F-statistic: 1209 on 2 and 7 DF, p-value: 1.291e-09
Basically, the values in the column t-value are obtained by dividing the coefficient estimate (which is in the Estimate column) by the standard error.
For example in your case in the second row we get that:
tval = 1.8000 / 0.6071 = 2.965
The column you are interested in is the p-value. It is the probability that the absolute value of a t-distributed random variable (with the residual degrees of freedom) is greater than 2.965. Using the symmetry of the t-distribution, this probability is:
2 * pt(abs(tval), rdf, lower.tail = FALSE)
Here rdf denotes the residual degrees of freedom, which in our case is equal to 7:
rdf = number of observations minus number of estimated coefficients = 10 - 3 = 7
And a simple check shows that this is indeed what R does:
2 * pt(2.965, 7, lower.tail = FALSE)
[1] 0.02095584
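To see this for every coefficient at once, here is a minimal sketch that recomputes the t values and p-values from the estimates and standard errors and compares them with what summary() reports:
x1  <- c(10,20,30,40,50,60,70,80,90,100)
x2  <- c(20,30,60,70,100,110,140,150,180,190)
y   <- c(100,120,150,180,210,220,250,280,310,330)
fit <- lm(y ~ x1 + x2)
cf  <- summary(fit)$coefficients                # Estimate, Std. Error, t value, Pr(>|t|)
tval <- cf[, "Estimate"] / cf[, "Std. Error"]   # t value = estimate / standard error
pval <- 2 * pt(abs(tval), df.residual(fit), lower.tail = FALSE)   # two-sided p-value
cbind(t_by_hand = tval, p_by_hand = pval, p_from_summary = cf[, "Pr(>|t|)"])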

LME4 GLMMs are different when constructed as success | trials vs raw data?

Why are these GLMMs so different?
Both are made with lme4, both use the same data, but one is framed in terms of successes and trials (m1bin) while one just uses the raw accuracy data (m1). Have I been completely mistaken thinking that lme4 figures out the binomial structure from the raw data this whole time? (BRMS does it just fine.) I'm scared, now, that some of my analyses will change.
d:
uniqueid dim incorrectlabel accuracy
1 A10LVHTF26QHQC:3X4MXAO0BGONT6U9HL2TG8P9YNBRW8 incidental marginal 0
2 A10LVHTF26QHQC:3X4MXAO0BGONT6U9HL2TG8P9YNBRW8 incidental extreme 1
3 A10LVHTF26QHQC:3X4MXAO0BGONT6U9HL2TG8P9YNBRW8 relevant marginal 1
4 A10LVHTF26QHQC:3X4MXAO0BGONT6U9HL2TG8P9YNBRW8 incidental marginal 1
5 A10LVHTF26QHQC:3X4MXAO0BGONT6U9HL2TG8P9YNBRW8 relevant marginal 0
6 A10LVHTF26QHQC:3X4MXAO0BGONT6U9HL2TG8P9YNBRW8 incidental marginal 0
dbin:
uniqueid dim incorrectlabel right count
<fctr> <fctr> <fctr> <int> <int>
1 A10LVHTF26QHQC:3X4MXAO0BGONT6U9HL2TG8P9YNBRW8 incidental extreme 3 3
2 A10LVHTF26QHQC:3X4MXAO0BGONT6U9HL2TG8P9YNBRW8 incidental marginal 1 5
3 A10LVHTF26QHQC:3X4MXAO0BGONT6U9HL2TG8P9YNBRW8 relevant extreme 3 4
4 A10LVHTF26QHQC:3X4MXAO0BGONT6U9HL2TG8P9YNBRW8 relevant marginal 3 4
5 A16HSMUJ7C7QA7:3DY46V3X3PI4B0HROD2HN770M46557 incidental extreme 3 4
6 A16HSMUJ7C7QA7:3DY46V3X3PI4B0HROD2HN770M46557 incidental marginal 2 4
> summary(m1bin)
Generalized linear mixed model fit by maximum likelihood (Laplace Approximation) ['glmerMod']
Family: binomial ( logit )
Formula: cbind(right, count) ~ dim * incorrectlabel + (1 | uniqueid)
Data: dbin
AIC BIC logLik deviance df.resid
398.2 413.5 -194.1 388.2 151
Scaled residuals:
Min 1Q Median 3Q Max
-1.50329 -0.53743 0.08671 0.38922 1.28887
Random effects:
Groups Name Variance Std.Dev.
uniqueid (Intercept) 0 0
Number of obs: 156, groups: uniqueid, 39
Fixed effects:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.48460 0.13788 -3.515 0.00044 ***
dimrelevant -0.13021 0.20029 -0.650 0.51562
incorrectlabelmarginal -0.15266 0.18875 -0.809 0.41863
dimrelevant:incorrectlabelmarginal -0.02664 0.27365 -0.097 0.92244
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Correlation of Fixed Effects:
(Intr) dmrlvn incrrc
dimrelevant -0.688
incrrctlblm -0.730 0.503
dmrlvnt:ncr 0.504 -0.732 -0.690
> summary(m1)
Generalized linear mixed model fit by maximum likelihood (Laplace Approximation) ['glmerMod']
Family: binomial ( logit )
Formula: accuracy ~ dim * incorrectlabel + (1 | uniqueid)
Data: d
AIC BIC logLik deviance df.resid
864.0 886.2 -427.0 854.0 619
Scaled residuals:
Min 1Q Median 3Q Max
-1.3532 -1.0336 0.7524 0.9350 1.1514
Random effects:
Groups Name Variance Std.Dev.
uniqueid (Intercept) 0.04163 0.204
Number of obs: 624, groups: uniqueid, 39
Fixed effects:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 0.140946 0.088242 1.597 0.1102
dim1 0.155923 0.081987 1.902 0.0572 .
incorrectlabel1 0.180156 0.081994 2.197 0.0280 *
dim1:incorrectlabel1 0.001397 0.082042 0.017 0.9864
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Correlation of Fixed Effects:
(Intr) dim1 incrr1
dim1 0.010
incrrctlbl1 0.128 0.006
dm1:ncrrct1 0.005 0.138 0.010
I figured they'd be the same. Modeling both in BRMS gives the same models with the same estimates.
They should be the same (up to small numerical differences: see below), except for the log-likelihoods and the metrics based on them (although differences in log-likelihood/AIC/etc. among a series of models should be the same). I think your problem is using cbind(right, count) rather than cbind(right, count - right): from ?glm,
For binomial ... families the response can also be specified as ... a two-column matrix with the columns giving the numbers of successes and failures.
(emphasis added to point out this is not number of successes and total, but successes and failures).
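Applied to the model in the question, the aggregated fit would therefore put the number of failures, count - right, in the second column; a sketch using the column names from the question:
# second column must be the number of FAILURES, not the number of trials
m1bin_fixed <- glmer(cbind(right, count - right) ~ dim * incorrectlabel + (1 | uniqueid),
                     family = binomial, data = dbin)
# the fixed effects and random-effect variance should now agree with the raw-data model m1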
Here's an example with one of the built-in data sets, comparing fits to an aggregated and a disaggregated data set:
library(lme4)
library(dplyr)
## disaggregate
cbpp_disagg <- cbpp %>%
  mutate(obs = seq(nrow(cbpp))) %>%
  group_by(obs, herd, period, incidence) %>%
  do(data.frame(disease = rep(c(0, 1), c(.$size - .$incidence, .$incidence))))
nrow(cbpp_disagg) == sum(cbpp$size)  ## check
g1 <- glmer(cbind(incidence, size - incidence) ~ period + (1 | herd),
            family = binomial, data = cbpp)
g2 <- glmer(disease ~ period + (1 | herd),
            family = binomial, data = cbpp_disagg)
## compare results
all.equal(fixef(g1), fixef(g2), tol = 1e-5)
all.equal(VarCorr(g1), VarCorr(g2), tol = 1e-6)

How do I use the glm() function?

I'm trying to fit a generalized linear model (GLM) to my data using R. I have a continuous variable Y and two categorical factors, A and B. Each factor is coded as 0 or 1 (absence or presence).
Even though just looking at the data I see a clear interaction between A and B, the GLM says the p-value is far above 0.05. Am I doing something wrong?
First of all I create the data frame containing my data for the GLM, which consists of a dependent variable Y and two factors, A and B. These are two-level factors (0 and 1). There are 3 replicates per combination.
A<-c(0,0,0,1,1,1,0,0,0,1,1,1)
B<-c(0,0,0,0,0,0,1,1,1,1,1,1)
Y<-c(0.90,0.87,0.93,0.85,0.98,0.96,0.56,0.58,0.59,0.02,0.03,0.04)
my_data<-data.frame(A,B,Y)
Let's see what it looks like:
my_data
## A B Y
## 1 0 0 0.90
## 2 0 0 0.87
## 3 0 0 0.93
## 4 1 0 0.85
## 5 1 0 0.98
## 6 1 0 0.96
## 7 0 1 0.56
## 8 0 1 0.58
## 9 0 1 0.59
## 10 1 1 0.02
## 11 1 1 0.03
## 12 1 1 0.04
As we can see just by looking at the data, there is a clear interaction between factor A and factor B, as the value of Y dramatically decreases when both A and B are present (that is, A=1 and B=1). However, using the glm function I get no significant interaction between A and B, as the p-value is far above 0.05:
attach(my_data)
## The following objects are masked _by_ .GlobalEnv:
##
## A, B, Y
my_glm<-glm(Y~A+B+A*B,data=my_data,family=binomial)
## Warning: non-integer #successes in a binomial glm!
summary(my_glm)
##
## Call:
## glm(formula = Y ~ A + B + A * B, family = binomial, data = my_data)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -0.275191 -0.040838 0.003374 0.068165 0.229196
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) 2.1972 1.9245 1.142 0.254
## A 0.3895 2.9705 0.131 0.896
## B -1.8881 2.2515 -0.839 0.402
## A:B -4.1747 4.6523 -0.897 0.370
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 7.86365 on 11 degrees of freedom
## Residual deviance: 0.17364 on 8 degrees of freedom
## AIC: 12.553
##
## Number of Fisher Scoring iterations: 6
While you state that Y is continuous, the data show that Y is actually a fraction, which is probably the reason you tried to apply a GLM in the first place.
Fractions (i.e. continuous values bounded by 0 and 1) can be modelled with logistic regression if certain assumptions are fulfilled. See the following Cross Validated post for details: https://stats.stackexchange.com/questions/26762/how-to-do-logistic-regression-in-r-when-outcome-is-fractional. However, from the data description it is not clear that those assumptions are fulfilled.
Alternatives for modelling fractions are beta regression and fractional response models.
See below how to apply those methods to your data. The results of both methods are consistent in terms of signs and significance.
# Beta regression
install.packages("betareg")
library("betareg")
result.betareg <-betareg(Y~A+B+A*B,data=my_data)
summary(result.betareg)
# Call:
# betareg(formula = Y ~ A + B + A * B, data = my_data)
#
# Standardized weighted residuals 2:
# Min 1Q Median 3Q Max
# -2.7073 -0.4227 0.0682 0.5574 2.1586
#
# Coefficients (mean model with logit link):
# Estimate Std. Error z value Pr(>|z|)
# (Intercept) 2.1666 0.2192 9.885 < 2e-16 ***
# A 0.6471 0.3541 1.828 0.0676 .
# B -1.8617 0.2583 -7.206 5.76e-13 ***
# A:B -4.2632 0.5156 -8.268 < 2e-16 ***
#
# Phi coefficients (precision model with identity link):
# Estimate Std. Error z value Pr(>|z|)
# (phi) 71.57 29.50 2.426 0.0153 *
# ---
# Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#
# Type of estimator: ML (maximum likelihood)
# Log-likelihood: 24.56 on 5 Df
# Pseudo R-squared: 0.9626
# Number of iterations: 62 (BFGS) + 2 (Fisher scoring)
# ----------------------------------------------------------
# Fractional response model
install.packages("frm")
library("frm")
frm(Y,cbind(A, B, AB=A*B),linkfrac="logit")
*** Fractional logit regression model ***
# Estimate Std. Error t value Pr(>|t|)
# INTERCEPT 2.197225 0.157135 13.983 0.000 ***
# A 0.389465 0.530684 0.734 0.463
# B -1.888120 0.159879 -11.810 0.000 ***
# AB -4.174668 0.555642 -7.513 0.000 ***
#
# Note: robust standard errors
#
# Number of observations: 12
# R-squared: 0.992
Specifying family=binomial implies logit (logistic) regression, which models a binary outcome.
From Quick-R
Logistic Regression
Logistic regression is useful when you are predicting a binary outcome from a set of continuous predictor variables. It is frequently preferred over discriminant function analysis because of its less restrictive assumptions.
The data show an interaction. Try fitting a different model; logistic regression is not appropriate here.
with(my_data, interaction.plot(A, B, Y, fixed = TRUE, col = 2:3, type = "l"))
An analysis of variance shows clear significance for both factors and their interaction.
fit <- aov(Y~(A*B),data=my_data)
summary(fit)
Df Sum Sq Mean Sq F value Pr(>F)
A 1 0.2002 0.2002 130.6 3.11e-06 ***
B 1 1.1224 1.1224 732.0 3.75e-09 ***
A:B 1 0.2494 0.2494 162.7 1.35e-06 ***
Residuals 8 0.0123 0.0015
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

constrained multiple linear regression in R

Suppose I have to estimate coefficients a,b in regression:
y=a*x+b*z+c
I know in advance that y is always in the range 0 <= y <= x, but the regression model sometimes produces y outside this range.
Sample data:
mydata<-data.frame(y=c(0,1,3,4,9,11),x=c(1,3,4,7,10,11),z=c(1,1,1,9,6,7))
round(predict(lm(y~x+z,data=mydata)),2)
1 2 3 4 5 6
-0.87 1.79 3.12 4.30 9.34 10.32
The first predicted value is < 0.
I tried a model without an intercept: all predictions are > 0, but the third prediction of y is > x (4.03 > 4).
round(predict(lm(y~x+z-1,data=mydata)),2)
1 2 3 4 5 6
0.76 2.94 4.03 4.67 8.92 9.68
I also considered modelling the proportion y/x instead of y:
mydata$y2x<-mydata$y/mydata$x
round(predict(lm(y2x~x+z,data=mydata)),2)
1 2 3 4 5 6
0.15 0.39 0.50 0.49 0.97 1.04
round(predict(lm(y2x~x+z-1,data=mydata)),2)
1 2 3 4 5 6
0.08 0.33 0.46 0.47 0.99 1.07
But now the sixth prediction is > 1, whereas a proportion should be in the range [0, 1].
I also tried the approach where glm is used with an offset (see Regression for a Rate variable in R and
http://en.wikipedia.org/wiki/Poisson_regression#.22Exposure.22_and_offset),
but this was not successful.
Please note that in my data the dependent variable, the proportion y/x, is both zero-inflated and one-inflated.
Any idea what a suitable approach would be to build such a model in R ('glm', 'lm')?
You're on the right track: if 0 ≤ y ≤ x then 0 ≤ (y/x) ≤ 1. This suggests fitting y/x to a logistic model in glm(...). Details are below, but considering that you've only got 6 points, this is a pretty good fit.
The major concern is that the model is not valid unless the error in (y/x) is Normal with constant variance (or, equivalently, the error in y increases with x). If this is true then we should get a (more or less) linear Q-Q plot, which we do.
One nuance: the interface to the glm logistic model wants two columns for y: "number of successes (S)" and "number of failures (F)". It then calculates the probability as S/(S+F). So we have to provide two columns which mimic this: y and x-y. Then glm(...) will calculate y/(y+(x-y)) = y/x.
Finally, the fit summary suggests that x is important and z may or may not be. You might want to try a model that excludes z and see if that improves AIC.
fit = glm(cbind(y,x-y)~x+z, data=mydata, family=binomial(logit))
summary(fit)
# Call:
# glm(formula = cbind(y, x - y) ~ x + z, family = binomial(logit),
# data = mydata)
# Deviance Residuals:
# 1 2 3 4 5 6
# -0.59942 -0.35394 0.62705 0.08405 -0.75590 0.81160
# Coefficients:
# Estimate Std. Error z value Pr(>|z|)
# (Intercept) -2.0264 1.2177 -1.664 0.0961 .
# x 0.6786 0.2695 2.518 0.0118 *
# z -0.2778 0.1933 -1.437 0.1507
# ---
# Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
# (Dispersion parameter for binomial family taken to be 1)
# Null deviance: 13.7587 on 5 degrees of freedom
# Residual deviance: 2.1149 on 3 degrees of freedom
# AIC: 15.809
par(mfrow=c(2,2))
plot(fit) # residuals, Q-Q, Scale-Location, and Leverage Plots
mydata$pred <- predict(fit, type="response")
par(mfrow=c(1,1))
plot(mydata$y/mydata$x,mydata$pred,xlim=c(0,1),ylim=c(0,1), xlab="Actual", ylab="Predicted")
abline(0,1, lty=2, col="blue")
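As a follow-up sketch (reusing fit and mydata from above), the fitted proportions can be converted back to predictions for y itself, which by construction satisfy 0 <= y_hat <= x; the model without z suggested above can be compared via AIC in the same way:
# back-transform the fitted proportions (mydata$pred from above) to the scale of y
mydata$pred_y <- mydata$pred * mydata$x   # implied prediction for y, always in [0, x]
mydata
# model without z, as suggested above, compared by AIC
fit2 <- update(fit, . ~ . - z)
AIC(fit, fit2)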
