When I compute prediction and confidence intervals around a linear regression model with two continuous variables and two categorical variables (entered as dummy variables), the results for the two intervals are exactly the same. I'm using the predict() function.
I've already tried other datasets with continuous and discrete variables, but without categorical or dichotomous variables, and there the intervals differ. I tried removing some variables from the regression model, and the intervals are still identical. I've also compared my data.frame against the examples in the R documentation, and I don't think the problem is there.
#linear regression model: modeloReducido
summary(modeloReducido)
Call:
lm(formula = V ~ T * W + P * G, data = Datos)

Residuals:
    Min      1Q  Median      3Q     Max 
-7.5579 -1.6222  0.3286  1.6175 10.4773 

Coefficients:
              Estimate Std. Error t value Pr(>|t|)    
(Intercept)   0.937674   3.710133   0.253 0.800922    
T           -12.864441   2.955519  -4.353 2.91e-05 ***
W             0.013926   0.001432   9.722  < 2e-16 ***
P            12.142109   1.431102   8.484 8.14e-14 ***
GBaja        15.953421   4.513963   3.534 0.000588 ***
GMedia        0.597568   4.546935   0.131 0.895669    
T:W           0.014283   0.001994   7.162 7.82e-11 ***
P:GBaja      -3.249681   2.194803  -1.481 0.141418    
P:GMedia     -5.093860   2.147673  -2.372 0.019348 *  
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 3.237 on 116 degrees of freedom
Multiple R-squared:  0.9354,    Adjusted R-squared:  0.931 
F-statistic:   210 on 8 and 116 DF,  p-value: < 2.2e-16
#Prediction Interval
newdata1.2 <- data.frame(T=1,W=1040,P=10000,G="Media")
#EP
opt1.PI <- predict.lm(modeloReducido, newdata1.2,
interval="prediction", level=.95)
#Confidence interval
newdata1.1 <- data.frame(T=1,W=1040,P=10000,G="Media")
#EP
opt1.CI <- predict(modeloReducido, newdata1.1,
interval="confidence", level=.95)
opt1.CI
#        fit      lwr      upr
# 1 70500.51 38260.24 102740.8
opt1.PI
#        fit      lwr      upr
# 1 70500.51 38260.24 102740.8
opt1.PI and opt1.CI should be different: the prediction interval also accounts for the residual variance of a single new observation, so it should always be wider than the confidence interval for the mean response.
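For contrast, here is a minimal sketch (simulated data, not the Datos data frame from the question) where predict() returns clearly different confidence and prediction intervals for the same new observation:
# Sketch with simulated data: the prediction interval is wider than the confidence interval
set.seed(1)
d <- data.frame(x = 1:30, g = factor(rep(c("A", "B"), each = 15)))
d$y <- 2 + 3 * d$x + ifelse(d$g == "B", 5, 0) + rnorm(30, sd = 4)
fit <- lm(y ~ x + g, data = d)
nd <- data.frame(x = 15, g = "B")
predict(fit, nd, interval = "confidence", level = 0.95)  # narrower: uncertainty in the mean
predict(fit, nd, interval = "prediction", level = 0.95)  # wider: adds residual variance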
The Excel file I was given is at the following link:
https://www.filehosting.org/file/details/830581/Datos%20Tarea%204.xlsx
Related
I usually use SAS, but I am trying to use R more. I am trying to show how categorizing a continuous independent variable messes up regressions. So I created some data:
set.seed(1234) #sets a seed. It is good to use the same seed all the time.
x <- rnorm(100) #x is normally distributed with mean 0 and sd 1, N = 100
y <- 3*x + rnorm(100,0,10) #y is related to x, but with some noise
x2 <- cut(x, 2) #Cuts x into 2 parts
Then I ran a regression on x2:
m2 <- lm(y~as.factor(x2)) #A model with the cut variable
summary(m2)
and the summary was what I expected: a coefficient for the intercept and one for the dummy variable:
Call:
lm(formula = y ~ as.factor(x2))
Residuals:
Min 1Q Median 3Q Max
-30.4646 -6.5614 0.4409 5.4936 29.6696
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.403 1.290 -1.088 0.2795
as.factor(x2)(0.102,2.55] 4.075 2.245 1.815 0.0726 .
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 10.56 on 98 degrees of freedom
Multiple R-squared: 0.03253, Adjusted R-squared: 0.02265
F-statistic: 3.295 on 1 and 98 DF, p-value: 0.07257
But when I graphed x vs. y and added a line for the regression from m2, the line was smooth. I would have expected a jump where the dummy for x2 goes from 0 to 1.
plot(x,y)
abline(reg = m2)
What am I doing wrong? Or am I missing something basic?
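For reference, a minimal sketch (reusing x, y, and m2 from above) of drawing the fitted group means so the jump at the cut point is visible; abline() only uses the first two coefficients as an intercept and slope, which is why it draws a single straight line:
# Sketch: plot the fitted values instead of abline's straight line
plot(x, y)
ord <- order(x)
lines(x[ord], fitted(m2)[ord], col = "red", lwd = 2)  # two flat segments with a jump at the cut point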
Is there a function in R that calculates the critical value of the F-statistic and compares it to the observed F-statistic to determine whether it is significant? I have to fit thousands of linear models and at the end create a data frame with the R-squared values, p-values, F-statistics, coefficients, etc. for each model.
> summary(mod)
Call:
lm(formula = log2umi ~ Age + Sex, data = df)
Residuals:
Min 1Q Median 3Q Max
-0.01173 -0.01173 -0.01173 -0.01152 0.98848
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.0115203 0.0018178 6.337 2.47e-10 ***
Age -0.0002679 0.0006053 -0.443 0.658
SexM 0.0002059 0.0024710 0.083 0.934
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.1071 on 7579 degrees of freedom
Multiple R-squared: 2.644e-05, Adjusted R-squared: -0.0002374
F-statistic: 0.1002 on 2 and 7579 DF, p-value: 0.9047
I am aware of this question: How do I get R to spit out the critical value for F-statistic based on ANOVA?
But is there one function on its own that will compare the two values and spit out True or False?
EDIT:
I wrote this, but out of curiosity, if anyone knows a better way please let me know.
(f_sig is a named vector that I will later add to the data frame.)
model <- lm(log2umi ~ Age + Sex, df)
f_crit <- qf(1 - 0.05, summary(model)$fstatistic[2], summary(model)$fstatistic[3])
f <- summary(model)$fstatistic[1]
if (f > f_crit) {
  f_sig[gen] <- 0  # f > f_crit is TRUE: significant
} else {
  f_sig[gen] <- 1  # f > f_crit is FALSE: not significant
}
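A shorter alternative (a sketch along the same lines, not from the original post) is to skip the critical value and compare the p-value of the overall F-test with the significance level; pf() converts the stored F-statistic to its p-value directly:
fstat <- summary(model)$fstatistic                       # value, numdf, dendf
p_val <- pf(fstat[1], fstat[2], fstat[3], lower.tail = FALSE)
f_sig[gen] <- unname(p_val < 0.05)                       # TRUE if the overall F-test is significant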
I need to build a model that predicts a response from 2 predictor variables. I am using R as the software.
I have tried the methods below, with the resulting R-squared values:
1. Linear regression - 0.556
2. Decision tree regression - 0.608
3. Linear regression (after removing outliers using the Cook's distance method) - 0.6068
4. Polynomial regression (power of 3) on data without outliers - 0.608
When I check the assumptions with the diagnostic plots, none of them seems to be fulfilled.
Is there some different regression model I should use? I have confirmed that the data I am working with is clean.
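For reference, a minimal sketch (using the formula from the summary below) of how those diagnostic plots are typically produced:
fit <- lm(Freight ~ TotalWeight + distance, data = data)
par(mfrow = c(2, 2))   # residuals vs fitted, normal Q-Q, scale-location, residuals vs leverage
plot(fit)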
The summary output for the linear regression is below:
Call:
lm(formula = Freight ~ TotalWeight + distance, data = data)
Residuals:
Min 1Q Median 3Q Max
-1104.56 -60.39 -17.69 28.99 2076.90
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 3.286e+01 7.141e+00 4.601 4.49e-06 ***
TotalWeight 9.666e-02 2.246e-03 43.042 < 2e-16 ***
distance 5.235e-05 2.884e-06 18.152 < 2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 165.1 on 1790 degrees of freedom
(3 observations deleted due to missingness)
Multiple R-squared: 0.5556, Adjusted R-squared: 0.5551
F-statistic: 1119 on 2 and 1790 DF, p-value: < 2.2e-16
As we can see, both independent variables have extremely small p-values, i.e. they are highly significant.
The 95% confidence intervals for the coefficients are:
2.5 % 97.5 %
(Intercept) 1.885358e+01 4.686585e+01
TotalWeight 9.225246e-02 1.010612e-01
distance 4.669026e-05 5.800235e-05
Is there any method I can use to better fit the data?
I am using lm() on a large data set in R. Using summary() one can get a lot of detail about the linear regression between the two parameters.
The part I am confused about is: which value in the Coefficients section of the summary should I use as the correlation coefficient?
Sample Data
c1 <- c(1:10)
c2 <- c(10:19)
output <- summary(lm(c1 ~ c2))
Summary
Call:
lm(formula = c1 ~ c2)
Residuals:
Min 1Q Median 3Q Max
-2.280e-15 -8.925e-16 -2.144e-16 4.221e-16 4.051e-15
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -9.000e+00 2.902e-15 -3.101e+15 <2e-16 ***
c2 1.000e+00 1.963e-16 5.093e+15 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 1.783e-15 on 8 degrees of freedom
Multiple R-squared: 1, Adjusted R-squared: 1
F-statistic: 2.594e+31 on 1 and 8 DF, p-value: < 2.2e-16
Is this the correlation coefficient I should use?
output$coefficients[2,1]
1
Please suggest, thanks.
The full variance-covariance matrix of the coefficient estimates is:
fm <- lm(c1 ~ c2)
vcov(fm)
In particular, sqrt(diag(vcov(fm))) equals coef(summary(fm))[, 2], i.e. the coefficient standard errors.
The corresponding correlation matrix is:
cov2cor(vcov(fm))
The correlation between the coefficient estimates is:
cov2cor(vcov(fm))[1, 2]
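If the question is instead after the correlation between c1 and c2 themselves (rather than between the coefficient estimates), a minimal sketch:
cor(c1, c2)                                       # Pearson correlation of the data, here 1
sign(coef(fm)[2]) * sqrt(summary(fm)$r.squared)   # the same value recovered from the simple regression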
I'm using R and I want to translate the results from lm() to an equation.
My model is:
Residuals:
Min 1Q Median 3Q Max
-0.048110 -0.023948 -0.000376 0.024511 0.044190
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 3.17691 0.00909 349.50 < 2e-16 ***
poly(QPB2_REF1, 2)1 0.64947 0.03015 21.54 2.66e-14 ***
poly(QPB2_REF1, 2)2 0.10824 0.03015 3.59 0.00209 **
B2DBSA_REF1DONSON -0.20959 0.01286 -16.30 3.17e-12 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.03015 on 18 degrees of freedom
Multiple R-squared: 0.9763, Adjusted R-squared: 0.9724
F-statistic: 247.6 on 3 and 18 DF, p-value: 8.098e-15
Do you have any idea?
I tried something like
f <- function(x) {3.17691 + 0.64947*x + 0.10824*x^2 - 0.20959*1 + 0.03015^2}
but when I plug in a value for x, f(x) is incorrect.
Your output indicates that the model uses the poly function, which by default orthogonalizes the polynomials (this includes centering the x's, among other things). Your hand-written formula does no orthogonalization, and that is the likely difference. You can refit the model using raw=TRUE in the call to poly to get raw coefficients that can be multiplied directly by $x$ and $x^2$, as sketched below.
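A minimal sketch of that refit (the response name y, the data frame name dat, and the exact predictor coding are assumptions based on the summary above):
# Sketch: raw (non-orthogonal) polynomial fit, so coefficients map onto x and x^2 directly
fit_raw <- lm(y ~ poly(QPB2_REF1, 2, raw = TRUE) + B2DBSA_REF1, data = dat)
cf <- coef(fit_raw)                                   # intercept, x, x^2, dummy coefficients
f <- function(x, donson = 1) cf[1] + cf[2]*x + cf[3]*x^2 + cf[4]*donson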
You may also be interested in the Function function in the rms package, which automates creating functions from fitted models.
Edit
Here is an example:
library(rms)
xx <- 1:25
yy <- 5 - 1.5*xx + 0.1*xx^2 + rnorm(25)  # quadratic trend plus noise
plot(xx, yy)
fit <- ols(yy ~ pol(xx, 2))              # rms equivalents of lm() and poly()
mypred <- Function(fit)                  # turn the fitted model into an R function
curve(mypred, add = TRUE)
mypred(c(1, 25, 3, 3.5))
You need to use the rms functions for fitting (ols and pol for this example instead of lm and poly).
If you want to calculate y-hat based on the model, you can just use predict!
Example:
set.seed(123)
my_dat <- data.frame(x=1:10, e=rnorm(10))
my_dat$y <- with(my_dat, x*2 + e)
my_lm <- lm(y~x, data=my_dat)
summary(my_lm)
Result:
Call:
lm(formula = y ~ x, data = my_dat)
Residuals:
Min 1Q Median 3Q Max
-1.1348 -0.5624 -0.1393 0.3854 1.6814
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.5255 0.6673 0.787 0.454
x 1.9180 0.1075 17.835 1e-07 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.9768 on 8 degrees of freedom
Multiple R-squared: 0.9755, Adjusted R-squared: 0.9724
F-statistic: 318.1 on 1 and 8 DF, p-value: 1e-07
Now, instead of making a function like 0.5255 + x * 1.9180 manually, I just call predict for my_lm:
predict(my_lm, data.frame(x=11:20))
Same result as this (not counting minor errors from rounding the slope/intercept estimates):
0.5255 + (11:20) * 1.9180
If you want to actually visualize or write out a complex equation (e.g. one with restricted cubic spline transformations), I recommend using the rms package: fit your model with it and use the latex function to see the equation in LaTeX:
my_lm <- ols(y~x, data=my_dat)
latex(my_lm)
Note that you will need to render the LaTeX code to see your equation. There are websites that will render it for you, and on a Mac the MacTeX software will do it.