Why does xtnbreg, fe in Stata produce different findings than femlm and glm.nb (both with fixed effects) in R?

I have estimated the following negative binomial regression model with group fixed effects in Stata. The data are time-series cross-sectional; the panel variable is group and the time variable is time.
tsset group time
xtnbreg y x1 x2 x3 x4 x5, fe
I want to replicate these findings in R. To do this, I have tried these 4 models:
nb1 <- femlm(y ~ x1 + x2 + x3 + x4 + x5 | group, panel.id = ~group + time, family = "negbin", data = mydata)
nb2 <- fenegbin(y ~ x1 + x2 + x3 + x4 + x5 | group, panel.id = ~group + time, data = mydata)
nb3 <- glm.nb(y ~ x1 + x2 + x3 + x4 + x5 + factor(group), data = mydata)
nb4 <- glmmadmb(y ~ x1 + x2 + x3 + x4 + x5 + factor(group), data = mydata, family = "nbinom")
The results produced by nb1-4 are all identical, but different from the results produced by xtnbreg in Stata. The coefficients, standard errors, and p-values are all substantively different.
I have tried replicating a standard negative binomial regression in Stata and R and have been able to do so successfully.
Does anyone have any idea what's going on here? I have reviewed related posts on this forum (such as this one: is there an R function for Stata's xtnbreg?) and have not found any answers.

SOLVED (mostly): The R code that reproduces the results generated by xtnbreg, fe in Stata:
nb5 <- pglm(y ~ x1 + x2 + x3 + x4 + x5, family = negbin, data = mydata, effect = "individual", model = "within", index = "group")
I found the solution on RPubs: https://rpubs.com/cuborican/xtpoisson.
I still do not know why this works, only that it does. I suspect Ben is correct that it has something to do with conditional vs. unconditional maximum likelihood estimation: glm.nb with group dummies maximizes the unconditional likelihood, while xtnbreg, fe uses a conditional likelihood. If anyone knows for sure, please share.
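For illustration, here is a minimal sketch with simulated data (all variable names hypothetical) showing that the dummy-variable (unconditional ML) fit and the conditional within estimator are genuinely different:
library(MASS)  # glm.nb
library(pglm)  # negbin family with model = "within", as in nb5 above
set.seed(42)
n_g <- 50; n_t <- 5
sim <- data.frame(group = rep(1:n_g, each = n_t),
                  time = rep(1:n_t, times = n_g),
                  x1 = rnorm(n_g * n_t))
alpha <- rep(rnorm(n_g), each = n_t)  # group-specific effects
sim$y <- rnbinom(n_g * n_t, size = 1.5, mu = exp(0.5 * sim$x1 + alpha))
# Unconditional ML: group dummies estimated jointly with the slope
m_dummies <- glm.nb(y ~ x1 + factor(group), data = sim)
# Conditional estimator, the analogue of xtnbreg, fe
m_within <- pglm(y ~ x1, family = negbin, data = sim,
                 effect = "individual", model = "within",
                 index = c("group", "time"))
coef(m_dummies)["x1"]
summary(m_within)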

Related

Avoid repeatedly writing the model formula when fitting a number of linear regression models

I'd like to run a number of similar linear regression models in R, such as
lm(y ~ x1 + x2 + x3 + x4 + x5, data = df)
lm(y ~ x1 + x2 + x3 + x4 + x5 + x6, data = df)
lm(y ~ x1 + x2 + x3 + x4 + x5 + x6 + x7, data = df)
How can I assign part of this to a "base" formula, to avoid repeating it many times? This would be the base:
y ~ x1 + x2 + x3 + x4 + x5
Then how can I do something like the following (obviously not working)?
lm(base + x6, data = df)
Searching on Stack Overflow, I realized that I could make a data frame containing only the variables of interest and use . to shorten the model formula, but I wonder if this could be avoided.
You can update a model formula with update.formula. For example:
base <- y ~ x1 + x2 + x3 + x4 + x5
update.formula(base, . ~ . + x6)
#y ~ x1 + x2 + x3 + x4 + x5 + x6
Here is a string-based version if you want to provide the new variable name as a character string:
## `deparse` converts a model formula to a string
formula(paste(deparse(base), "x6", sep = " + "))
In fact, you can even update your model directly
fit <- lm(base, data = df); update.default(fit, . ~ . + x6)
This idea of updating the whole model worked the best. Only update() was needed in my case.
I wrote update.default and update.formula explicitly so that you know which function to look for when you use ? to pull up the documentation.
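For example, a short sketch (assuming df holds y and x1 through x7) that builds the three nested models from the question without repeating the base formula:
base <- y ~ x1 + x2 + x3 + x4 + x5
fits <- list(m1 = lm(base, data = df),
             m2 = lm(update(base, . ~ . + x6), data = df),
             m3 = lm(update(base, . ~ . + x6 + x7), data = df))
lapply(fits, coef)  # compare coefficients across the nested fits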

Heteroscedasticity tests in multivariate analysis in R

I have several models (linear (lm), gls, GARCH) that I would like to check for heteroscedasticity.
However, for the lm model it is very easy, both visually and with tests, as follows:
library(car)     # residualPlots(), ncvTest()
library(lmtest)  # bptest()
fit1 <- lm(X0 ~ X1 + X5 + X7 + X8 + X9 + X10 + X11 + X12, data = my_data)
residualPlots(fit1)
bptest(fit1)
ncvTest(fit1)
For the other models it is not so easy! Do you have any ideas?
For reference, I have a sample with 15 X variables and one dependent variable Y.
library(nlme)  # gls()
fit22 <- gls(Y ~ X1 + X2 + X3 + X4 + X5 + X6 + X7 + X8 + X9 + X10 + X11 + X12 + X13 + X14 + X15,
             data = my_data, correlation = corARMA(p = 0, q = 1, form = ~1))
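One generic approach (a hedged sketch, not a definitive recipe): any fitted model that exposes residuals() and fitted() can be checked with a hand-rolled Breusch-Pagan-style auxiliary regression. A minimal example with simulated data and hypothetical names:
library(nlme)
set.seed(1)
dd <- data.frame(X1 = rnorm(200))
dd$Y <- 2 * dd$X1 + rnorm(200, sd = 1 + abs(dd$X1))  # heteroscedastic by construction
fit <- gls(Y ~ X1, data = dd)
aux <- lm(residuals(fit)^2 ~ fitted(fit))  # regress squared residuals on fitted values
summary(aux)$coefficients  # a significant slope suggests heteroscedasticity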

How to plot ols with r.c. splines

I'd like to plot the predicted line of a regression that contains a restricted cubic spline (due to non-linearity in the model), along with its standard error bands. I can get the predicted points, but am not sure how to plot just the lines and error bands. ggplot is preferred, but base graphics is fine too. Thanks.
Here is an example from the documentation:
library(rms)
# Fit a complex model and approximate it with a simple one
x1 <- runif(200)
x2 <- runif(200)
x3 <- runif(200)
x4 <- runif(200)
y <- x1 + x2 + rnorm(200)
f <- ols(y ~ rcs(x1,4) + x2 + x3 + x4)
pred <- fitted(f) # or predict(f) or f$linear.predictors
f2 <- ols(pred ~ rcs(x1,4) + x2 + x3 + x4, sigma=1)
fastbw(f2, aics=100000)
options(datadist=NULL)
And a plot of the predicted values of the model:
plot(predict(f2))
The rms package has a number of helpful functions for this purpose. It is worth looking at http://biostat.mc.vanderbilt.edu/wiki/Main/RmS
In this instance, you can simply set datadist (which sets up distribution summaries for the predictor variables) appropriately and then use plot(Predict(f)) or ggplot(Predict(f)):
library(rms)
set.seed(5)
# Fit a complex model and approximate it with a simple one
x1 <- runif(200)
x2 <- runif(200)
x3 <- runif(200)
x4 <- runif(200)
y <- x1 + x2 + rnorm(200)
f <- ols(y ~ rcs(x1,4) + x2 + x3 + x4)
ddist <- datadist(x1,x2,x3,x4)
options(datadist='ddist')
plot(Predict(f))
ggplot(Predict(f))
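As a small usage note, if you only want the partial effect of the spline variable, you can restrict Predict() to it:
ggplot(Predict(f, x1))  # just the rcs(x1, 4) term with its confidence band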

Linear regression with the same outcome, shared covariates, and one unique covariate in each model

I want to run linear regressions with the same outcome, a shared set of covariates, and one unique covariate in each model. I have looked at the example on this page, but that did not provide what I wanted.
Sample data
a <- data.frame(y = c(30,12,18), x1 = c(7,6,9), x2 = c(6,8,5),
                x3 = c(4,-2,-3), x4 = c(8,3,-3), x5 = c(4,-4,-2))
m1 <- lm(y ~ x1 + x4 + x5, data = a)
m2 <- lm(y ~ x2 + x4 + x5, data = a)
m3 <- lm(y ~ x3 + x4 + x5, data = a)
How could I run these models in a short way, without repeating the same covariates again and again?
Following this example you could do this:
lapply(1:3, function(i) {
  lm(as.formula(sprintf("y ~ x%i + x4 + x5", i)), data = a)
})
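An equivalent sketch using reformulate() instead of pasting strings by hand:
lapply(paste0("x", 1:3), function(v) {
  lm(reformulate(c(v, "x4", "x5"), response = "y"), data = a)
})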

How to succinctly write a formula with many variables from a data frame?

Suppose I have a response variable and a data frame containing three covariates (as a toy example):
y = c(1,4,6)
d = data.frame(x1 = c(4,-1,3), x2 = c(3,9,8), x3 = c(4,-4,-2))
I want to fit a linear regression to the data:
fit = lm(y ~ d$x1 + d$x2 + d$x3)
Is there a way to write the formula, so that I don't have to write out each individual covariate? For example, something like
fit = lm(y ~ d)
(I want each variable in the data frame to be a covariate.) I'm asking because I actually have 50 variables in my data frame, so I want to avoid writing out x1 + x2 + x3 + etc.
There is a special identifier that one can use in a formula to refer to all the variables: the . identifier.
y <- c(1,4,6)
d <- data.frame(y = y, x1 = c(4,-1,3), x2 = c(3,9,8), x3 = c(4,-4,-2))
mod <- lm(y ~ ., data = d)
You can also do things like this, to use all variables but one (in this case x3 is excluded):
mod <- lm(y ~ . - x3, data = d)
Technically, . means all variables not already mentioned in the formula. For example
lm(y ~ x1 * x2 + ., data = d)
where . would only reference x3, as x1 and x2 are already in the formula.
A slightly different approach is to create your formula from a string. In the formula help page you will find the following example:
## Create a formula for a model with a large number of variables:
xnam <- paste("x", 1:25, sep="")
fmla <- as.formula(paste("y ~ ", paste(xnam, collapse= "+")))
Then if you look at the generated formula, you will get:
R> fmla
y ~ x1 + x2 + x3 + x4 + x5 + x6 + x7 + x8 + x9 + x10 + x11 +
x12 + x13 + x14 + x15 + x16 + x17 + x18 + x19 + x20 + x21 +
x22 + x23 + x24 + x25
Yes, of course: just add the response y as the first column of the data frame and call lm() on it:
d2 <- data.frame(y, d)
> d2
y x1 x2 x3
1 1 4 3 4
2 4 -1 9 -4
3 6 3 8 -2
> lm(d2)
Call:
lm(formula = d2)
Coefficients:
(Intercept) x1 x2 x3
-5.6316 0.7895 1.1579 NA
Also note that assignment with <- is generally recommended over = in R.
An extension of juba's method is to use reformulate, a function which is explicitly designed for such a task.
## Create a formula for a model with a large number of variables:
xnam <- paste("x", 1:25, sep="")
reformulate(xnam, "y")
y ~ x1 + x2 + x3 + x4 + x5 + x6 + x7 + x8 + x9 + x10 + x11 +
x12 + x13 + x14 + x15 + x16 + x17 + x18 + x19 + x20 + x21 +
x22 + x23 + x24 + x25
For the example in the OP, the easiest solution here would be
# add y variable to data.frame d
d <- cbind(y, d)
reformulate(names(d)[-1], names(d)[1])
y ~ x1 + x2 + x3
or
mod <- lm(reformulate(names(d)[-1], names(d)[1]), data = d)
Note that adding the dependent variable to the data.frame in d <- cbind(y, d) is preferred, not only because it allows the use of reformulate, but also because it allows future use of the lm object in functions like predict.
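For instance (with the three-row toy data the fit is rank-deficient, so predict() will warn, but the mechanics carry over to real data):
predict(mod, newdata = data.frame(x1 = 0, x2 = 1, x3 = 2))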
I built this solution because reformulate does not handle variable names that contain white space.
add_backticks <- function(x) {
  paste0("`", x, "`")
}
x_lm_formula <- function(x) {
  paste(add_backticks(x), collapse = " + ")
}
build_lm_formula <- function(x, y) {
  if (length(y) > 1) {
    stop("y needs to be just one variable")
  }
  as.formula(paste0("`", y, "`", " ~ ", x_lm_formula(x)))
}
# Example
df <- data.frame(
  y = c(1,4,6),
  x1 = c(4,-1,3),
  x2 = c(3,9,8),
  x3 = c(4,-4,-2)
)
# Model specification
columns <- colnames(df)
y_cols <- columns[1]
x_cols <- columns[2:length(columns)]
formula <- build_lm_formula(x_cols, y_cols)
formula
# output:
# `y` ~ `x1` + `x2` + `x3`
# Run Model
lm(formula = formula, data = df)
# output
Call:
lm(formula = formula, data = df)
Coefficients:
(Intercept) x1 x2 x3
-5.6316 0.7895 1.1579 NA
You can check the package leaps, and in particular the function regsubsets(), for model selection. As stated in the documentation:
Model selection by exhaustive search, forward or backward stepwise, or sequential replacement
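A minimal sketch of regsubsets() on a built-in dataset:
library(leaps)
best <- regsubsets(mpg ~ ., data = mtcars)  # exhaustive search is the default
summary(best)$which  # which predictors enter the best model of each size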
