Extract correlation coefficient (r) from summary()

I am using lm() on a large data set in R. With summary() one can get a lot of detail about the linear regression between two variables.
The part I am confused about is: which value in the Coefficients: section of the summary is the correct one to use as the correlation coefficient?
Sample Data
c1 <- c(1:10)
c2 <- c(10:19)
output <- summary(lm(c1 ~ c2))
Summary
Call:
lm(formula = c1 ~ c2)
Residuals:
Min 1Q Median 3Q Max
-2.280e-15 -8.925e-16 -2.144e-16 4.221e-16 4.051e-15
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -9.000e+00 2.902e-15 -3.101e+15 <2e-16 ***
c2 1.000e+00 1.963e-16 5.093e+15 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 1.783e-15 on 8 degrees of freedom
Multiple R-squared: 1, Adjusted R-squared: 1
F-statistic: 2.594e+31 on 1 and 8 DF, p-value: < 2.2e-16
Is this the correlation coefficient I should use?
output$coefficients[2,1]
1
Please suggest, thanks.
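If what is needed is the correlation between the two variables themselves, note that it does not appear in the Coefficients: table at all; for a single-predictor model it is simply cor() of the two variables, or equivalently the square root of the reported Multiple R-squared with the sign of the slope. A quick sketch using the sample data above:
# Correlation coefficient r between the two variables
cor(c1, c2)
# [1] 1
# Equivalent for a one-predictor model: signed square root of Multiple R-squared
sign(output$coefficients[2, 1]) * sqrt(output$r.squared)
# [1] 1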

The full variance covariance matrix of the coefficient estimates is:
fm <- lm(c1 ~ c2)
vcov(fm)
and in particular sqrt(diag(vcov(fm))) equals coef(summary(fm))[, 2]
The corresponding correlation matrix is:
cov2cor(vcov(fm))
The correlation between the coefficient estimates is:
cov2cor(vcov(fm))[1, 2]
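Put together as a runnable sketch with the sample data from the question:
c1 <- 1:10
c2 <- 10:19
fm <- lm(c1 ~ c2)
vcov(fm)                 # variance-covariance matrix of the coefficient estimates
sqrt(diag(vcov(fm)))     # equals coef(summary(fm))[, 2], the reported Std. Errors
cov2cor(vcov(fm))        # corresponding correlation matrix
cov2cor(vcov(fm))[1, 2]  # correlation between the intercept and slope estimates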

Related

R: Same values for confidence and prediction intervals using predict()

When I try to compute prediction and confidence intervals around a linear regression model with two continuous variables and two categorical variables (which act as dummy variables), the results for the two intervals are exactly the same. I'm using the predict() function.
I've already tried other datasets that have continuous and discrete variables, but no categorical or dichotomous ones, and there the intervals differ. I tried removing some variables from the regression model, and the intervals are still the same. I've also compared my data.frame with the ones used as examples in the R documentation, and I don't think the problem is there.
#linear regression model: modeloReducido
summary(modeloReducido)
Call:
lm(formula = V ~ T * W + P * G, data = Datos)
Residuals:
Min 1Q Median 3Q Max
-7.5579 -1.6222 0.3286 1.6175 10.4773
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.937674 3.710133 0.253 0.800922
T -12.864441 2.955519 -4.353 2.91e-05 ***
W 0.013926 0.001432 9.722 < 2e-16 ***
P 12.142109 1.431102 8.484 8.14e-14 ***
GBaja 15.953421 4.513963 3.534 0.000588 ***
GMedia 0.597568 4.546935 0.131 0.895669
T:W 0.014283 0.001994 7.162 7.82e-11 ***
P:GBaja -3.249681 2.194803 -1.481 0.141418
P:GMedia -5.093860 2.147673 -2.372 0.019348 *
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 3.237 on 116 degrees of freedom
Multiple R-squared: 0.9354, Adjusted R-squared: 0.931
F-statistic: 210 on 8 and 116 DF, p-value: < 2.2e-16
#Prediction Interval
newdata1.2 <- data.frame(T=1,W=1040,P=10000,G="Media")
#EP
opt1.PI <- predict.lm(modeloReducido, newdata1.2,
interval="prediction", level=.95)
#Confidence interval
newdata1.1 <- data.frame(T=1,W=1040,P=10000,G="Media")
#EP
opt1.CI <- predict(modeloReducido, newdata1.1,
interval="confidence", level=.95)
opt1.CI
#fit lwr upr
#1 70500.51 38260.24 102740.8
opt1.PI
# fit lwr upr
# 1 70500.51 38260.24 102740.8
opt1.PI and opt1.CI should be different.
The Excel file I was given is at the following link:
https://www.filehosting.org/file/details/830581/Datos%20Tarea%204.xlsx
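For comparison, here is a minimal sketch (with made-up data, not the dataset from the question) showing how the two interval types normally differ, with the prediction interval being the wider of the two:
set.seed(1)
d <- data.frame(x = 1:50)
d$y <- 2 + 3 * d$x + rnorm(50, sd = 5)
fit <- lm(y ~ x, data = d)
nd <- data.frame(x = 25)
predict(fit, nd, interval = "confidence", level = 0.95)  # uncertainty in the mean response
predict(fit, nd, interval = "prediction", level = 0.95)  # also includes residual variance, so wider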

Function to determine if F-statistic is significant

Is there a function in R to calculate the critical value of the F-statistic and compare it to the observed F-statistic to determine whether it is significant? I have to fit thousands of linear models and at the end create a data frame with the R-squared values, p-values, F-statistics, coefficients, etc. for each model.
> summary(mod)
Call:
lm(formula = log2umi ~ Age + Sex, data = df)
Residuals:
Min 1Q Median 3Q Max
-0.01173 -0.01173 -0.01173 -0.01152 0.98848
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.0115203 0.0018178 6.337 2.47e-10 ***
Age -0.0002679 0.0006053 -0.443 0.658
SexM 0.0002059 0.0024710 0.083 0.934
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.1071 on 7579 degrees of freedom
Multiple R-squared: 2.644e-05, Adjusted R-squared: -0.0002374
F-statistic: 0.1002 on 2 and 7579 DF, p-value: 0.9047
I am aware of this question: How do I get R to spit out the critical value for F-statistic based on ANOVA?
But is there one function on its own that will compare the two values and spit out True or False?
EDIT:
I wrote this, but out of curiosity: if anyone knows a better way, please let me know.
f_sig is a named vector that I will later add to the data frame:
model <- lm(log2umi ~ Age + Sex, df)
s <- summary(model)
f_crit <- qf(1 - 0.05, s$fstatistic[2], s$fstatistic[3])
f <- s$fstatistic[1]
if (f > f_crit) {
  f_sig[gen] <- 0  # significant ("True")
} else {
  f_sig[gen] <- 1  # not significant ("False")
}
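For what it's worth, a more compact alternative (a sketch along the same lines, not a built-in function) is to turn the F-statistic into the overall p-value with pf() and compare that to the chosen alpha, which gives TRUE/FALSE directly:
# TRUE if the overall F-test of an lm fit is significant at level alpha
f_significant <- function(fit, alpha = 0.05) {
  fstat <- summary(fit)$fstatistic  # value, numdf, dendf
  p_val <- pf(fstat[1], fstat[2], fstat[3], lower.tail = FALSE)
  unname(p_val < alpha)
}
f_significant(model)  # FALSE for the model above (p-value 0.9047)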

Swap x and y variables in lm() function in R

I am trying to get the summary output of a linear model created with the lm() function in R, but no matter which way I set it up, my desired y variable ends up as the x input. My desired output is a summary of the model where Winnings is the y (response) and averagedist is the input. This is my current output:
Call:
lm(formula = Winnings ~ averagedist, data = combineddata)
Residuals:
Min 1Q Median 3Q Max
-20.4978 -5.2992 -0.3824 6.0887 23.4764
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 2.882e+02 7.577e-01 380.281 < 2e-16 ***
Winnings 1.293e-06 2.023e-07 6.391 8.97e-10 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 8.343 on 232 degrees of freedom
Multiple R-squared: 0.1497, Adjusted R-squared: 0.146
F-statistic: 40.84 on 1 and 232 DF, p-value: 8.967e-10
I have tried flipping the order and defining the variables using y = Winnings, x = averagedist, but I always get the same output.
Using summary(lm(Winnings ~ averagedist, combineddata)) as an alternative way to set it up seemed to do the trick, as opposed to the two-step method:
str <- lm(Winnings ~ averagedist, combineddata)
summary(str)
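For reference, the two approaches should be equivalent: whatever is on the left-hand side of ~ is treated as the response. A minimal sketch, assuming combineddata contains Winnings and averagedist columns:
fit <- lm(Winnings ~ averagedist, data = combineddata)  # Winnings is the response (y)
summary(fit)   # same as summary(lm(Winnings ~ averagedist, combineddata))
formula(fit)   # Winnings ~ averagedist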

Select regression coefs by name

After running a regression, how can I select the variable name and corresponding parameter estimate?
For example, after running the following regression, I obtain:
set.seed(1)
n=1000
x=rnorm(n,0,1)
y=.6*x+rnorm(n,0,sqrt(1-.6)^2)
(reg1=summary(lm(y~x)))
Call:
lm(formula = y ~ x)
Residuals:
Min 1Q Median 3Q Max
-1.2994 -0.2688 -0.0055 0.3022 1.4577
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.006475 0.013162 -0.492 0.623
x 0.602573 0.012723 47.359 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.4162 on 998 degrees of freedom
Multiple R-squared: 0.6921, Adjusted R-squared: 0.6918
F-statistic: 2243 on 1 and 998 DF, p-value: < 2.2e-16
I would like to be able to select a coefficient by its variable name (e.g., (Intercept) -0.006475).
I have tried the following, but nothing works:
attr(reg1$coefficients,"terms")
names(reg1$coefficients)
Note: reg1$coefficients[1,1] works, but I want to be able to call the value by name rather than by row/column.
The package broom tidies a lot of regression models very nicely.
require(broom)
set.seed(1)
n=1000
x=rnorm(n,0,1)
y=.6*x+rnorm(n,0,sqrt(1-.6)^2)
model = lm(y~x)
tt <- tidy(model, conf.int=TRUE)
subset(tt,term=="x")
## term estimate std.error statistic p.value conf.low conf.high
## 2 x 0.602573 0.01272349 47.35908 1.687125e-257 0.5776051 0.6275409
with(tt,tt[term=="(Intercept)","estimate"])
## [1] -0.006474794
So, your code doesn't run the way you have it. I changed it a bit:
set.seed(1)
n=1000
x=rnorm(n,0,1)
y=.6*x+rnorm(n,0,sqrt(1-.6)^2)
model = lm(y~x)
Now, I can call coef(model)["x"] or coef(model)["(Intercept)"] and get the values.
> coef(model)["x"]
x
0.602573
> coef(model)["(Intercept)"]
(Intercept)
-0.006474794
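If the standard error, t value, or p-value is also wanted by name, the coefficient matrix returned by coef(summary()) has named rows and columns; a short follow-up using the model fitted above:
cm <- coef(summary(model))     # matrix with rows "(Intercept)" and "x"
cm["x", "Estimate"]            # 0.602573
cm["x", "Std. Error"]          # 0.01272349
cm["(Intercept)", "Pr(>|t|)"]  # p-value for the intercept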

Polynomial Regression nonsense Predictions

Suppose I want to fit a linear regression model with a degree-two (orthogonal) polynomial and then predict the response. Here is the code for the first model (m1):
x=1:100
y=-2+3*x-5*x^2+rnorm(100)
m1=lm(y~poly(x,2))
prd.1=predict(m1,newdata=data.frame(x=105:110))
Now let's try the same model, but instead of using poly(x,2) directly, I will use its columns:
m2=lm(y~poly(x,2)[,1]+poly(x,2)[,2])
prd.2=predict(m2,newdata=data.frame(x=105:110))
Let's look at the summaries of m1 and m2.
> summary(m1)
Call:
lm(formula = y ~ poly(x, 2))
Residuals:
Min 1Q Median 3Q Max
-2.50347 -0.48752 -0.07085 0.53624 2.96516
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.677e+04 9.912e-02 -169168 <2e-16 ***
poly(x, 2)1 -1.449e+05 9.912e-01 -146195 <2e-16 ***
poly(x, 2)2 -3.726e+04 9.912e-01 -37588 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.9912 on 97 degrees of freedom
Multiple R-squared: 1, Adjusted R-squared: 1
F-statistic: 1.139e+10 on 2 and 97 DF, p-value: < 2.2e-16
> summary(m2)
Call:
lm(formula = y ~ poly(x, 2)[, 1] + poly(x, 2)[, 2])
Residuals:
Min 1Q Median 3Q Max
-2.50347 -0.48752 -0.07085 0.53624 2.96516
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.677e+04 9.912e-02 -169168 <2e-16 ***
poly(x, 2)[, 1] -1.449e+05 9.912e-01 -146195 <2e-16 ***
poly(x, 2)[, 2] -3.726e+04 9.912e-01 -37588 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.9912 on 97 degrees of freedom
Multiple R-squared: 1, Adjusted R-squared: 1
F-statistic: 1.139e+10 on 2 and 97 DF, p-value: < 2.2e-16
So m1 and m2 are basically the same. Now let's look at the predictions prd.1 and prd.2
> prd.1
1 2 3 4 5 6
-54811.60 -55863.58 -56925.56 -57997.54 -59079.52 -60171.50
> prd.2
1 2 3 4 5 6
49505.92 39256.72 16812.28 -17827.42 -64662.35 -123692.53
Q1: Why is prd.2 so different from prd.1?
Q2: How can I obtain prd.1 using the model m2?
m1 is the right way to do this. m2 is entering a whole world of pain...
To do predictions from m2, the model needs to know it was fitted to an orthogonal set of basis functions, so that it uses the same basis functions for the extrapolated new data values. Compare: poly(1:10,2)[,2] with poly(1:12,2)[,2] - the first ten values are not the same. If you fit the model explicitly with poly(x,2) then predict understands all that and does the right thing.
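A quick sketch of that point:
# The orthogonal basis depends on the full set of x values it was built from,
# so the "same" column changes when the range of x changes:
head(poly(1:10, 2)[, 2])
head(poly(1:12, 2)[, 2])  # not equal to the values above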
What you have to do is make sure your predicted locations are transformed using the same set of basis functions as were used to create the model in the first place. You can use predict.poly for this (note I call my explanatory variables x1 and x2 so that it's easy to match the names up):
px = poly(x,2)
x1 = px[,1]
x2 = px[,2]
m3 = lm(y~x1+x2)
newx = 90:110
pnew = predict(px,newx) # px is the previous poly object, so this calls predict.poly
prd.3 = predict(m3, newdata=data.frame(x1=pnew[,1],x2=pnew[,2]))
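As a quick sanity check (using the objects defined above), the last six elements of prd.3 correspond to x = 105:110 and should agree with prd.1:
# newx = 90:110, so elements 16:21 of prd.3 are the predictions at x = 105:110
all.equal(unname(prd.1), unname(prd.3[16:21]))  # TRUE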
