I am attempting to run a model with county, year, and state:year fixed effects. The lm() approach looks like this:
lm <- lm(data = mydata, formula = y ~ x + county + year + state:year)
where county, year, and state:year are all factors.
Because I have a large number of counties, running the model with lm() is very slow. Worse, given the number of models I need to produce, lm() also returns a much larger object than plm(). The following plm() command yields the same coefficients and significance levels for my main variables:
plm <- plm(data = mydata, formula = y ~ x + year + state:year, index = "county", model = "within")
However, these produce substantially different R-squared, Adj. R-squared, etc. I thought I could solve the R-squared problem by calculating the R-squared for plm by hand:
SST <- sum((mydata$y - mean(mydata$y))^2)
fit <- (mydata$y - plm$residuals)
SSR <- sum((fit - mean(mydata$y))^2)
R2 <- SSR / SST
I tested the R-squared code with lm and got the same result reported by summary(lm). However, when I calculated R-squared for plm I got a different R-squared (and it was greater than 1).
At this point I checked the fixed-effect coefficients from plm and found they were different from the coefficients in lm.
Can someone please 1) help me understand why I'm getting these differing results and 2) suggest the most efficient way to construct the models I need and obtain correct R-squareds? Thanks!
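For the efficiency part of the question, one common approach (a sketch, not necessarily the setup intended here) is to absorb the fixed effects instead of estimating dummy coefficients, e.g. with lfe::felm. This assumes mydata has columns y, x, county, state, and year, and uses the fact that the state:year dummies nest the year dummies:

library(lfe)
# build the state-by-year interaction explicitly
mydata$state_year <- interaction(mydata$state, mydata$year)
# absorb county and state:year (state:year already absorbs year)
fe <- felm(y ~ x | county + state_year, data = mydata)
summary(fe)  # reports both the full and the projected (within) R-squared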
I have panel data on the external assets of 102 countries over roughly 20-40 years, depending on the country.
I am trying to predict the probability of a financial crisis from log(total_liabilities), to see whether an increase in foreign investment and other capital positions can help predict a crisis.
plm1 <- plm(crisis ~ log_total_liabilities + lag1_log_tot_lia + lag2_log_tot_lia +
              lag3_log_tot_lia + factor(year) + factor(country),
            data = dt2, index = c("year", "country"), model = "pooling")
summary(plm1)
I started by estimating a pooled plm model, regressing my crisis dummy on the liability terms.
To assess its predictive ability, I wanted to generate a ROC curve and an AUC value from the regression:
# Plot of the true positive rate against the false positive rate
library(ROCR)
pred1 <- predict(plm1)
pred2 <- prediction(pred1, as.numeric(plm1$crisis))
plot(performance(pred2, "tpr", "fpr"), las = 0, main = "plm1")
I get errors like "arguments/variables do not match" or "all arguments/variables need to have the same length" (both translated from German).
When I instead use pred1 <- predict(plm1, dt2) (dt2 is my data frame, which also contains some variables I did not use in the plm1 regression), the error changes:
"The format of predictions is invalid. It couldn't be coerced to a list."
Are plm models simply not suited for ROC calculations? If so, how come the attached paper presents AUROC values for a linear probability model with fixed effects (see the second-to-last row)? And if not, what am I doing wrong?
I attached the screenshot of the paper and my dataset.
CSV file with dataset
Screenshot of paper with OLS AUROC value
ROC/AUC applies only to binary classification problems in the sense that the true labels must be binary; the scores may be continuous. The fitted values produced from plm1 (pred1) are continuous, which is fine as scores, and is exactly how the paper reports an AUROC for its linear probability model. The actual problem is alignment: plm drops the observations lost to the lags, and plm1$crisis is not a component of a plm object, so the labels and predictions end up with different lengths.
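A minimal sketch of the alignment fix, assuming the pooling model above and that pmodel.response() returns the binary response for the rows plm actually used:

library(ROCR)
library(plm)
scores <- as.numeric(predict(plm1))          # continuous LPM fitted values as scores
labels <- as.numeric(pmodel.response(plm1))  # crisis dummy for the estimation sample
pred <- prediction(scores, labels)
plot(performance(pred, "tpr", "fpr"), main = "plm1")
performance(pred, "auc")@y.values[[1]]       # AUROC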
I want to balance my data using the WeightIt package in R (method = "ebal"). I have used code similar to the one below:
#Balancing covariates between treatment groups (binary)
W1 <- weightit(treat ~ age + educ + married + nodegree + re74, data = lalonde, method = "ebal", estimand = "ATT")
match.data(W1)
The outcome is my data table with an additional column called weights. What do those weights mean, and how do I go on from here? (My next step would be to run a logit regression on the balanced dataset.)
Thank you so much for helping!
weightit() estimates weights that, when applied to a dataset, yield balance in the treatment groups. To estimate effects in the weighted sample, include the weights in a regression of the outcome on the treatment. This is demonstrated in the WeightIt vignette.
You should not use match.data() with WeightIt. I'm not sure where you found the code to do that. match.data() is for use with MatchIt, which is a different package with its own functions. The fact that match.data() happened to work with WeightIt is unintended behavior and should not be relied on.
To estimate the effect of the treatment on a binary outcome (which I'll denote as Y in the code below and assume is in the lalonde dataset, even though in reality it is not), you would run the following after running the first line in your code above:
fit <- glm(Y ~ treat, data = lalonde, weights = W1$weights, family = binomial)
lmtest::coeftest(fit, vcov. = sandwich::vcovHC)
The coefficient on treat is the log odds ratio of the outcome.
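For instance, a minimal sketch of turning that coefficient into an odds ratio with a robust confidence interval (coefci() is the lmtest companion to coeftest()):

exp(coef(fit)["treat"])                                        # odds ratio
exp(lmtest::coefci(fit, vcov. = sandwich::vcovHC)["treat", ])  # robust 95% CI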
I'm interested in performing a GAM regression on data from a nationwide survey that provides sample weights. I read this post with interest.
I selected my variables of interest, generating a data frame:
library(dplyr)
nhanesAnalysis <- nhanesDemo %>%
  select(fpl, age, gender, persWeight, psu, strata)
Then, from what I understood, I generated a survey design object with the following code:
library(survey)
nhanesDesign <- svydesign( id = ~psu,
strata = ~strata,
weights = ~persWeight,
nest = TRUE,
data = nhanesAnalysis)
Let's say that I would select only subjects with age ≥ 30:
ageDesign <- subset(nhanesDesign, age >= 30)
Now I would like to fit a GAM model (fpl ~ s(age) + gender) with the mgcv package. Is it possible to do so with the weights argument, or using the svydesign object ageDesign?
EDIT
I was wondering whether it is correct to extract the computed weights from an svyglm object and use them for the weights argument in gam().
This is more difficult than it looks. There are two issues:
1. You want to get the right amount of smoothing.
2. You want valid standard errors.
Just giving the sampling weights to mgcv::gam() won't do either of these: gam() treats the weights as frequency weights and so will think it has a lot more data than it actually has. You will get undersmoothing and underestimated standard errors because of the weights, and you will also likely get underestimated standard errors because of the cluster sampling.
The simple work-around is to use regression splines (splines package) instead. These aren't quite as good as the penalised splines used by mgcv, but the difference usually isn't a big deal, and they work straightforwardly with svyglm. You do need to choose how many degrees of freedom to assign.
library(splines)
svyglm(fpl ~ ns(age, 4) + gender, design = nhanesDesign)
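As for picking the degrees of freedom, one hedged sketch is to fit a few candidates and compare them, assuming a survey version recent enough to provide a design-based AIC for svyglm fits:

library(survey)
library(splines)
# candidate spline complexities, all on the same design
f3 <- svyglm(fpl ~ ns(age, 3) + gender, design = nhanesDesign)
f4 <- svyglm(fpl ~ ns(age, 4) + gender, design = nhanesDesign)
f5 <- svyglm(fpl ~ ns(age, 5) + gender, design = nhanesDesign)
AIC(f3, f4, f5)  # design-based AIC; smaller is better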
Hello (first timer here),
I would like to estimate a "two-way" cluster-robust variance-covariance matrix in R. I am using the canned cluster.vcov routine from the multiwayvcov package. My question relates solely to the setup of the cluster.vcov function. I have panel data on various crime outcomes. My cross-sectional unit is the "precinct" (over 40 precincts), and I observe crime in those precincts over several "months" (i.e., 24 months). I am evaluating an intervention that 'turns on' (dummy coded) for only a few months throughout the year.
I include "precinct" and "month" fixed effects (i.e., a full set of precinct and month dummies enter the model). I have only one independent variable I am assessing. I want to cluster on "both" dimensions but I am unsure how to set it up.
Do I estimate all the fixed effects with lm() first? Or do I simply run a model regressing crime on the independent variable (excluding the fixed effects) and then use cluster.vcov() with ~ precinct + month_year?
That seems like it would produce the wrong standard errors, though. Right? I hope this was clear; sorry for any confusion. See my setup below.
library(multiwayvcov)
model <- lm(crime ~ as.factor(precinct) + as.factor(month_year) + policy, data = DATASET_full)
boot_both <- cluster.vcov(model, ~ precinct + month_year)
coeftest(model, boot_both)
### What the documentation offers as an example
### https://cran.r-project.org/web/packages/multiwayvcov/multiwayvcov.pdf
library(lmtest)
data(petersen)
m1 <- lm(y ~ x, data = petersen)
### Double cluster by firm and year using a formula
vcov_both_formula <- cluster.vcov(m1, ~ firmid + year)
coeftest(m1, vcov_both_formula)
Is it appropriate to first estimate a model that ignores the fixed effects?
First the answer: you should estimate your lm model including the fixed effects. This will give you asymptotically correct parameter estimates. The standard errors are incorrect because they are calculated from a vcov matrix that assumes i.i.d. errors.
To replace the i.i.d. covariance matrix with a cluster-robust vcov matrix, you can use cluster.vcov, i.e. my_new_vcov_matrix <- cluster.vcov(model, ~ precinct + month_year).
Then a recommendation: I warmly recommend the function felm from the lfe package for both multi-way fixed effects and cluster-robust standard errors.
The syntax is as follows:
library(multiwayvcov)
library(lfe)
data(petersen)
my_fe_model <- felm(y ~ x | firmid + year | 0 | firmid + year, data = petersen)
summary(my_fe_model)
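Applied to the question's own setting, a sketch (assuming the column names from the lm() call above): the second part of the formula absorbs the fixed effects, the third is for instrumental variables (none here), and the fourth gives the clustering dimensions.

fe_model <- felm(crime ~ policy | precinct + month_year | 0 | precinct + month_year,
                 data = DATASET_full)
summary(fe_model)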
I am currently working through Andy Field's book, Discovering Statistics Using R. Chapter 14 is on Mixed Modelling and he uses the lme function from the nlme package.
The model he creates, using speed dating data, is such:
library(nlme)
speedDateModel <- lme(dateRating ~ looks + personality +
                        gender + looks:gender + personality:gender +
                        looks:personality,
                      random = ~1|participant/looks/personality,
                      data = speedData)
I tried to recreate a similar model using the lmer function from the lme4 package; however, my results are different. I thought I had the proper syntax, but maybe not?
library(lme4)
speedDateModel.2 <- lmer(dateRating ~ looks + personality + gender +
                           looks:gender + personality:gender +
                           (1|participant) + (1|looks) + (1|personality),
                         data = speedData, REML = FALSE)
Also, when I look at the coefficients of these models, I notice that they only produce random intercepts for each participant. I was trying to build a model with both random intercepts and slopes, but I can't seem to get the syntax right in either function. Any help would be greatly appreciated.
The only difference between the lme and the corresponding lmer formula should be that the random and fixed components are aggregated into a single formula:
dateRating ~ looks + personality +
gender + looks:gender + personality:gender +
looks:personality + (1|participant/looks/personality)
Using (1|participant) + (1|looks) + (1|personality) is only equivalent if looks and personality have unique values at each nested level; the shorthand (1|participant/looks/personality) expands to (1|participant) + (1|participant:looks) + (1|participant:looks:personality).
It's not clear what continuous variable you want to define your slopes over: if you have a continuous variable x and groups g, then (x|g), or equivalently (1+x|g), will give you a random-slopes model (x should also be included in the fixed-effects part of the model, i.e. the full formula should be y ~ x + (x|g) ...).
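For concreteness, a sketch using lme4's built-in sleepstudy data rather than the speed-dating data:

library(lme4)
# random intercept and random slope of Days, varying by Subject
fit <- lmer(Reaction ~ Days + (1 + Days | Subject), data = sleepstudy)
summary(fit)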
Update: I got the data, or rather a script file that allows one to reconstruct the data, from here. Field makes a common mistake in his book, which I have made several times in the past: since there is only a single observation in the data set for each participant/looks/personality combination, the three-way interaction has one level per observation. In a linear mixed model, this means the variance at the lowest level of nesting will be confounded with the residual variance.
You can see this in two ways:
First, lme appears to fit the model just fine, but if you try to calculate confidence intervals via intervals(), you get:
intervals(speedDateModel)
## Error in intervals.lme(speedDateModel) :
## cannot get confidence intervals on var-cov components:
## Non-positive definite approximate variance-covariance
Second, if you try this with lmer, you get:
## Error: number of levels of each grouping factor
## must be < number of observations
In both cases, this is a clue that something's wrong. (You can overcome this in lmer if you really want to: see ?lmerControl.)
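A sketch of that override, using lmerControl's check flags (usually inadvisable, because the lowest variance component is confounded with the residual variance):

library(lme4)
sd3 <- lmer(dateRating ~ looks + personality + gender +
              looks:gender + personality:gender + looks:personality +
              (1|participant/looks/personality),
            data = speedData,
            control = lmerControl(check.nobs.vs.nlev = "ignore",
                                  check.nobs.vs.nRE = "ignore"))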
If we leave out the lowest grouping level, everything works fine:
sd2 <- lmer(dateRating ~ looks + personality +
              gender + looks:gender + personality:gender +
              looks:personality +
              (1|participant/looks),
            data = speedData)
Compare lmer and lme fixed effects:
all.equal(fixef(sd2), fixef(speedDateModel))  ## TRUE
The starling example here gives another example and further explanation of this issue.