I am trying to model data in which the response variable lies between 0 and 1, so I have decided to use a fractional response model in R. From my current understanding, the fractional response model is similar to logistic regression, but it uses a quasi-likelihood method to estimate the parameters. I am not sure I understand this correctly.
So far I have tried frm() from the frm package and glm() on the following data, which is the same as in this OP:
library(foreign)
mydata <- read.dta("k401.dta")
I also followed the procedure in this OP, in which glm is used. However, on the same dataset, frm returns different standard errors.
library(frm)
y <- mydata$prate
x <- mydata[,c('mrate', 'age', 'sole', 'totemp1')]
myfrm <- frm(y, x, linkfrac = 'logit')
frm returns:
*** Fractional logit regression model ***
Estimate Std. Error t value Pr(>|t|)
INTERCEPT 1.074062 0.048902 21.963 0.000 ***
mrate 0.573443 0.079917 7.175 0.000 ***
age 0.030895 0.002788 11.082 0.000 ***
sole 0.363596 0.047595 7.639 0.000 ***
totemp1 -0.057799 0.011466 -5.041 0.000 ***
Note: robust standard errors
Number of observations: 4734
R-squared: 0.124
With glm, I use
myglm <- glm(prate ~ mrate + totemp1 + age + sole, data = mydata, family = quasibinomial('logit'))
summary(myglm)
Call:
glm(formula = prate ~ mrate + totemp1 + age + sole, family = quasibinomial("logit"),
data = mydata)
Deviance Residuals:
Min 1Q Median 3Q Max
-3.1214 -0.1979 0.2059 0.4486 0.9146
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 1.074062 0.047875 22.435 < 2e-16 ***
mrate 0.573443 0.048642 11.789 < 2e-16 ***
totemp1 -0.057799 0.011912 -4.852 1.26e-06 ***
age 0.030895 0.003148 9.814 < 2e-16 ***
sole 0.363596 0.051233 7.097 1.46e-12 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for quasibinomial family taken to be 0.2913876)
Null deviance: 1166.6 on 4733 degrees of freedom
Residual deviance: 1023.7 on 4729 degrees of freedom
AIC: NA
Number of Fisher Scoring iterations: 6
Which one should I rely on? Is it better to use glm instead of frm, given that, as seen in the linked OP, the estimated standard errors can differ?
The differences between the two approaches stem from different degrees-of-freedom corrections in the computation of the robust standard errors. With matching settings, the results are identical. See the following example:
library(foreign)
library(frm)
library(sandwich)
library(lmtest)
df <- read.dta("http://fmwww.bc.edu/ec-p/data/wooldridge/401k.dta")
df$prate <- df$prate/100
y <- df$prate
x <- df[,c('mrate', 'age', 'sole', 'totemp')]
myfrm <- frm(y, x, linkfrac = 'logit')
*** Fractional logit regression model ***
Estimate Std. Error t value Pr(>|t|)
INTERCEPT 0.931699 0.084077 11.081 0.000 ***
mrate 0.952872 0.137079 6.951 0.000 ***
age 0.027934 0.004879 5.726 0.000 ***
sole 0.340332 0.080658 4.219 0.000 ***
totemp -0.000008 0.000003 -2.701 0.007 ***
Now the GLM:
myglm <- glm(prate ~ mrate + totemp + age + sole,
data = df, family = quasibinomial('logit'))
coeftest(myglm, vcov.=vcovHC(myglm, type="HC0"))
z test of coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 0.9316994257 0.0840772572 11.0815 < 0.00000000000000022 ***
mrate 0.9528723652 0.1370808798 6.9512 0.000000000003623 ***
totemp -0.0000082352 0.0000030489 -2.7011 0.006912 **
age 0.0279338963 0.0048785491 5.7259 0.000000010291017 ***
sole 0.3403324262 0.0806576852 4.2195 0.000024488075931 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
With HC0, the standard errors are identical. That is, frm uses HC0 by default. See this post for an extensive discussion. The defaults used by sandwich are probably better in some situations, though I would suspect that it does not matter much in general. You can see this already from your results: the differences are numerically very small.
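For comparison, here is a small sketch (reusing the myglm object from above) of what the sandwich default, HC3, produces; the standard errors should differ from the HC0 ones only marginally:
# HC3 is the default type in vcovHC(); compare with the HC0 results above
coeftest(myglm, vcov. = vcovHC(myglm, type = "HC3"))
# ratio of HC3 to HC0 standard errors -- close to 1 in large samples
se_hc0 <- sqrt(diag(vcovHC(myglm, type = "HC0")))
se_hc3 <- sqrt(diag(vcovHC(myglm, type = "HC3")))
round(se_hc3 / se_hc0, 3)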
You need to divide the prate variable by 100. You might also have to upgrade your version of frm.
Related
I want to extract selected output from the lm function. This is the code I have:
fastfood <- openintro::fastfood
L1 = lm(formula = calories~sat_fat +fiber+sugar, fastfood)
summary(L1)
This is the output
Call:
lm(formula = calories ~ sat_fat + fiber + sugar, data = fastfood)
Residuals:
Min 1Q Median 3Q Max
-680.18 -88.97 -24.65 57.46 1501.07
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 113.334 15.760 7.191 2.36e-12 ***
sat_fat 30.839 1.180 26.132 < 2e-16 ***
fiber 24.396 2.444 9.983 < 2e-16 ***
sugar 8.890 1.120 7.938 1.37e-14 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 160.8 on 499 degrees of freedom
(12 observations deleted due to missingness)
Multiple R-squared: 0.6726, Adjusted R-squared: 0.6707
F-statistic: 341.7 on 3 and 499 DF, p-value: < 2.2e-16
I need to extract only the following from the above output. How do I get to this?
sat_fat 30.839
Most commonly, coef() is used to return the coefficients, e.g.
coef(L1)
coef(L1)['sat_fat']
You may also want to look at tidy() in the broom package, which returns a nice summary as a data frame, with the coefficients in the estimate column.
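For example, a quick sketch using the L1 model from above:
library(broom)
tidy(L1)                              # one row per term: term, estimate, std.error, statistic, p.value
subset(tidy(L1), term == "sat_fat")   # just the row for sat_fat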
I have data from an experiment with two conditions (dichotomous IV: 'condition'). I also want to make use of another IV, which is metric ('hh'). My DV is also metric ('attention.hh'). I have already run a multiple regression model with an interaction of my IVs; for that, I centered the metric IV:
hh.cen <- as.numeric(scale(data$hh, scale = FALSE))
With these variables I ran the following analysis:
model.hh <- lm(attention.hh ~ hh.cen * condition, data = data)
summary(model.hh)
The results are as follows:
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.04309 3.83335 0.011 0.991
hh.cen 4.97842 7.80610 0.638 0.525
condition 4.70662 5.63801 0.835 0.406
hh.cen:condition -13.83022 11.06636 -1.250 0.215
However, the theory behind my analysis tells me that I should expect a quadratic relation between my metric IV (hh) and the DV (but only in one condition).
Looking at the plot, one could at least infer such a relation.
Of course I want to test this statistically. However, I'm now struggling with how to specify the linear regression model.
I have two solutions that I think should be fine, but they lead to different outcomes, and unfortunately I don't know which one is right. I know that by including interactions (and 3-way interactions) in the model, I also have to include all simple/main effects.
Solution 1: including all terms on their own.
For that, I first center the DV and compute the squared IV:
attention.hh.cen <- scale(data$attention.hh, scale = FALSE)
hh.sqr <- hh.cen^2  # squared (centered) IV, used in the models below
Now I can fit the linear model:
sqr.model.1 <- lm(attention.hh.cen ~ condition + hh.cen + hh.sqr + condition:hh.cen + condition:hh.sqr, data = data)
summary(sqr.model.1)
This leads to the following outcome:
Call:
lm(formula = attention.hh.cen ~ condition + hh.cen + hh.sqr +
(condition:hh.cen) + (condition:hh.sqr), data = data)
Residuals:
Min 1Q Median 3Q Max
-53.798 -14.527 2.912 13.111 49.119
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.3475 3.5312 -0.382 0.7037
condition -9.2184 5.6590 -1.629 0.1069
hh.cen 4.0816 6.0200 0.678 0.4996
hh.sqr 5.0555 8.1614 0.619 0.5372
condition:hh.cen -0.3563 8.6864 -0.041 0.9674
condition:hh.sqr 33.5489 13.6448 2.459 0.0159 *
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 20.77 on 87 degrees of freedom
Multiple R-squared: 0.1335, Adjusted R-squared: 0.08365
F-statistic: 2.68 on 5 and 87 DF, p-value: 0.02664
Solution 2: R includes all main effects of an interaction when using *.
sqr.model.2 <- lm(attention.hh.cen ~ condition * I(hh.cen^2), data = data)
summary(sqr.model.2)
IMHO, this should also be fine -- however, the output is not the same as the one produced by the code above:
Call:
lm(formula = attention.hh.cen ~ condition * I(hh.cen^2), data = data)
Residuals:
Min 1Q Median 3Q Max
-52.297 -13.353 2.508 12.504 49.740
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.300 3.507 -0.371 0.7117
condition -8.672 5.532 -1.567 0.1206
I(hh.cen^2) 4.490 8.064 0.557 0.5791
condition:I(hh.cen^2) 32.315 13.190 2.450 0.0162 *
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 20.64 on 89 degrees of freedom
Multiple R-squared: 0.1254, Adjusted R-squared: 0.09587
F-statistic: 4.252 on 3 and 89 DF, p-value: 0.007431
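For reference, a quick way to see which terms each formula actually contains; note that the second formula never includes a plain linear hh.cen term, which is why the two models have different numbers of coefficients:
# a * b in a model formula is shorthand for a + b + a:b
attr(terms(~ condition * I(hh.cen^2)), "term.labels")
attr(terms(~ condition + hh.cen + hh.sqr + condition:hh.cen + condition:hh.sqr), "term.labels")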
I'd rather go with solution 1, but I'm not sure about that.
Maybe someone has a better solution or can help me out?
Once I've done an STL or decomposition on time series data, how do I extract the models for each component?
For example, how do I get the slope and intercept for the trend, the period for the seasonal data, and so on?
I can provide sample data if needed, but this is a generic question.
As a partial answer to your question, the trend can be extracted quite easily if it is linear. Here's an example:
library(forecast)
plot(decompose(AirPassengers))
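As an aside, the raw component series can be pulled straight out of the decomposition object; a minimal sketch on the same data:
dec <- decompose(AirPassengers)
head(dec$trend)      # centred moving-average trend component
head(dec$seasonal)   # repeating seasonal component
# for an stl() fit, the components are in the time.series matrix, e.g.
# stl(AirPassengers, s.window = "periodic")$time.series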
In the case of a linear trend, we can use the tslm() function to extract the intercept and the slope:
tslm(AirPassengers ~ trend)
Call:
lm(formula = formula, data = "AirPassengers", na.action = na.exclude)
Coefficients:
(Intercept) trend
87.653 2.657
To obtain a fit that includes the seasons, this can be extended like so:
fit <- tslm(AirPassengers ~ trend + season)
> summary(fit)
Call:
lm(formula = formula, data = "AirPassengers", na.action = na.exclude)
Residuals:
Min 1Q Median 3Q Max
-42.121 -18.564 -3.268 15.189 95.085
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 63.50794 8.38856 7.571 5.88e-12 ***
trend 2.66033 0.05297 50.225 < 2e-16 ***
season2 -9.41033 10.74941 -0.875 0.382944
season3 23.09601 10.74980 2.149 0.033513 *
season4 17.35235 10.75046 1.614 0.108911
season5 19.44202 10.75137 1.808 0.072849 .
season6 56.61502 10.75254 5.265 5.58e-07 ***
season7 93.62136 10.75398 8.706 1.17e-14 ***
season8 90.71103 10.75567 8.434 5.32e-14 ***
season9 39.38403 10.75763 3.661 0.000363 ***
season10 0.89037 10.75985 0.083 0.934177
season11 -35.51996 10.76232 -3.300 0.001244 **
season12 -9.18029 10.76506 -0.853 0.395335
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 26.33 on 131 degrees of freedom
Multiple R-squared: 0.9559, Adjusted R-squared: 0.9518
F-statistic: 236.5 on 12 and 131 DF, p-value: < 2.2e-16
If I interpret this result correctly, there is an average monthly increase of 2.66 passengers, and on average there are 9.4 fewer passengers in the second month than in the first, etc.
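These numbers can also be pulled out of the fitted object directly:
coef(fit)["trend"]     # average increase per time step (here: per month)
coef(fit)["season2"]   # month 2 relative to month 1, the reference season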
I'm using R and I want to translate the results from lm() to an equation.
My model is:
Residuals:
Min 1Q Median 3Q Max
-0.048110 -0.023948 -0.000376 0.024511 0.044190
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 3.17691 0.00909 349.50 < 2e-16 ***
poly(QPB2_REF1, 2)1 0.64947 0.03015 21.54 2.66e-14 ***
poly(QPB2_REF1, 2)2 0.10824 0.03015 3.59 0.00209 **
B2DBSA_REF1DONSON -0.20959 0.01286 -16.30 3.17e-12 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.03015 on 18 degrees of freedom
Multiple R-squared: 0.9763, Adjusted R-squared: 0.9724
F-statistic: 247.6 on 3 and 18 DF, p-value: 8.098e-15
Do you have any idea how to do this?
I tried something like
f <- function(x) {3.17691 + 0.64947*x +0.10824*x^2 -0.20959*1 + 0.03015^2}
but when I plug in an x, the value of f(x) is incorrect.
Your output indicates that the model uses the poly function, which by default orthogonalizes the polynomials (this includes centering the x's, among other things). Your hand-written function does no such orthogonalization, and that is the likely source of the difference. You can refit the model using raw = TRUE in the call to poly to obtain raw coefficients that can be multiplied by $x$ and $x^2$.
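A minimal sketch of such a refit; resp and dat below are placeholders for the response variable and data frame, which the question does not show, and B2DBSA_REF1 is assumed to be a factor with level DONSON:
# refit with raw (non-orthogonal) polynomial terms
fit.raw <- lm(resp ~ poly(QPB2_REF1, 2, raw = TRUE) + B2DBSA_REF1, data = dat)
coef(fit.raw)
# the fitted equation is then
#   resp = b0 + b1*QPB2_REF1 + b2*QPB2_REF1^2 + b3*(B2DBSA_REF1 == "DONSON")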
You may also be interested in the Function function in the rms package, which automates creating functions from fitted models.
Edit
Here is an example:
library(rms)
xx <- 1:25
yy <- 5 - 1.5*xx + 0.1*xx^2 + rnorm(25)
plot(xx,yy)
fit <- ols(yy ~ pol(xx,2))
mypred <- Function(fit)
curve(mypred, add=TRUE)
mypred(c(1, 25, 3, 3.5))
You need to use the rms functions for fitting (ols and pol for this example instead of lm and poly).
If you want to calculate y-hat based on the model, you can just use predict!
Example:
set.seed(123)
my_dat <- data.frame(x=1:10, e=rnorm(10))
my_dat$y <- with(my_dat, x*2 + e)
my_lm <- lm(y~x, data=my_dat)
summary(my_lm)
Result:
Call:
lm(formula = y ~ x, data = my_dat)
Residuals:
Min 1Q Median 3Q Max
-1.1348 -0.5624 -0.1393 0.3854 1.6814
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.5255 0.6673 0.787 0.454
x 1.9180 0.1075 17.835 1e-07 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.9768 on 8 degrees of freedom
Multiple R-squared: 0.9755, Adjusted R-squared: 0.9724
F-statistic: 318.1 on 1 and 8 DF, p-value: 1e-07
Now, instead of making a function like 0.5255 + x * 1.9180 manually, I just call predict for my_lm:
predict(my_lm, data.frame(x=11:20))
Same result as this (not counting minor errors from rounding the slope/intercept estimates):
0.5255 + (11:20) * 1.9180
If you want to actually visualize or write out a complex equation (e.g. something with restricted cubic spline transformations), I recommend using the rms package, fitting your model, and using the latex function to see it in LaTeX:
my_lm <- ols(y~x, data=my_dat)
latex(my_lm)
Note that you will need to render the LaTeX code to see your equation. There are websites, and on a Mac the MacTeX software, that will render it for you.
Suppose I want to fit a linear regression model with a degree-two (orthogonal) polynomial and then predict the response. Here is the code for the first model (m1):
x=1:100
y=-2+3*x-5*x^2+rnorm(100)
m1=lm(y~poly(x,2))
prd.1=predict(m1,newdata=data.frame(x=105:110))
Now let's try the same model, but instead of using poly(x,2) directly, I will use its columns:
m2=lm(y~poly(x,2)[,1]+poly(x,2)[,2])
prd.2=predict(m2,newdata=data.frame(x=105:110))
Let's look at the summaries of m1 and m2.
> summary(m1)
Call:
lm(formula = y ~ poly(x, 2))
Residuals:
Min 1Q Median 3Q Max
-2.50347 -0.48752 -0.07085 0.53624 2.96516
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.677e+04 9.912e-02 -169168 <2e-16 ***
poly(x, 2)1 -1.449e+05 9.912e-01 -146195 <2e-16 ***
poly(x, 2)2 -3.726e+04 9.912e-01 -37588 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.9912 on 97 degrees of freedom
Multiple R-squared: 1, Adjusted R-squared: 1
F-statistic: 1.139e+10 on 2 and 97 DF, p-value: < 2.2e-16
> summary(m2)
Call:
lm(formula = y ~ poly(x, 2)[, 1] + poly(x, 2)[, 2])
Residuals:
Min 1Q Median 3Q Max
-2.50347 -0.48752 -0.07085 0.53624 2.96516
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.677e+04 9.912e-02 -169168 <2e-16 ***
poly(x, 2)[, 1] -1.449e+05 9.912e-01 -146195 <2e-16 ***
poly(x, 2)[, 2] -3.726e+04 9.912e-01 -37588 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.9912 on 97 degrees of freedom
Multiple R-squared: 1, Adjusted R-squared: 1
F-statistic: 1.139e+10 on 2 and 97 DF, p-value: < 2.2e-16
So m1 and m2 are basically the same. Now let's look at the predictions prd.1 and prd.2:
> prd.1
1 2 3 4 5 6
-54811.60 -55863.58 -56925.56 -57997.54 -59079.52 -60171.50
> prd.2
1 2 3 4 5 6
49505.92 39256.72 16812.28 -17827.42 -64662.35 -123692.53
Q1: Why is prd.2 so different from prd.1?
Q2: How can I obtain prd.1 using the model m2?
m1 is the right way to do this. m2 is entering a whole world of pain...
To do predictions from m2, the model needs to know it was fitted to an orthogonal set of basis functions, so that it uses the same basis functions for the extrapolated new data values. Compare poly(1:10,2)[,2] with poly(1:12,2)[,2]: the first ten values are not the same. If you fit the model explicitly with poly(x,2), then predict understands all that and does the right thing.
What you have to do is make sure your prediction locations are transformed using the same set of basis functions that was used to create the model in the first place. You can use predict.poly for this (note I call my explanatory variables x1 and x2 so that it's easy to match the names up):
px = poly(x,2)
x1 = px[,1]
x2 = px[,2]
m3 = lm(y~x1+x2)
newx = 90:110
pnew = predict(px,newx) # px is the previous poly object, so this calls predict.poly
prd.3 = predict(m3, newdata=data.frame(x1=pnew[,1],x2=pnew[,2]))
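As a quick sanity check with the objects above, these predictions should agree with the ones from m1 at the same locations, since m1 applies the same orthogonal basis internally:
all.equal(unname(predict(m1, newdata = data.frame(x = newx))), unname(prd.3))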