Weights with the plm package in R

My data frame looks something like the following:
unique.groups <- letters[1:5]
unique_timez <- 1:20
groups <- rep(unique.groups, each = 20)
my.times <- rep(unique_timez, 5)
play.data <- data.frame(groups, my.times, y = rnorm(100), x = rnorm(100), POP = 1:100)
I would like to run the following weighted regression:
plm(y ~ x + factor(my.times),
    data = play.data,
    index = c('groups', 'my.times'), model = 'within', weights = POP)
But I do not believe the plm package allows for weights. The answer I'm looking for is the coefficient from the model below:
fit.regular <- lm(y ~ x + factor(my.times) + factor(groups),
                  weights = POP, data = play.data)
desired.answer <- coefficients(fit.regular)
However, I am looking for an answer using the plm package because, with larger datasets and many groups, it is much faster to obtain the within-estimator coefficient with plm.

Edit: This problem does not exist anymore, since plm now supports a weights argument (see @Helix123's comment above).
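With a recent plm version, the call from the question should therefore work as written, e.g. (a sketch assuming a current plm):
library(plm)
fit.plm <- plm(y ~ x + factor(my.times), data = play.data,
               index = c('groups', 'my.times'), model = 'within',
               weights = POP)
coefficients(fit.plm)  # should match the weighted lm within estimate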
Even though I know of no solution with the plm package, the felm function in the lfe package handles weights correctly in the context of fixed effects (which seems to be what you need, judging from the syntax of your example code). It is written with a particular focus on speed in the presence of many observations and groups.
The lfe package focuses on fixed effects only, so if you need random effects the lme4 package might be more suited to your needs.
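For example, a minimal sketch with the toy data above (felm's interface as I understand it: group fixed effects go after the |, and weights takes a numeric vector):
library(lfe)
fit.felm <- felm(y ~ x + factor(my.times) | groups,
                 data = play.data, weights = play.data$POP)
coef(fit.felm)  # within estimate of x, weighted by POP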

I am looking for exactly this information. I found this answer http://r.789695.n4.nabble.com/Longitudinal-Weights-in-PLM-package-td3298823.html by one of the authors of the package, which seems to suggest there is no way of using weights directly within the plm package.

Related

Heteroscedastic model of mixed effects via lmer function

I am fitting a mixed effects model in which, due to the observed heteroscedasticity, it was necessary to include an effect to accommodate it. Using the lme function of the nlme package, this was easy to solve; see the code below:
library(nlme)
library(lme4)
Model1 <- lme(log(Var1) ~ log(Var2) + log(Var3) +
                (Var4) + (Var5),
              random = ~ 1 | Var6, data = Data1, method = "REML",
              weights = varIdent(form = ~ 1 | Var7))
# Var6: a factor with several levels.
# Var7: a dummy variable.
However, I need to refit the model described above using the lme4 package, that is, using the lmer function. It is well known, and documented in many places, that lme4 has some limitations, such as modeling heteroscedasticity. What motivated me to refit this model is that I want to use a specific package that, for mixed models, only accepts fits produced by the lmer function. How can I resolve this situation? Below is the model fitted with the lmer function; however, it does not yet include the effect that models the observed heteroscedasticity.
Model2 <- lmer(log(Var1) ~ log(Var2) + log(Var3) +
                 (Var4) + (Var5) + (1 | Var6),
               data = Data1, REML = TRUE)
Regarding the choice of the random effect (Var6) and the inclusion of the effect to account for heterogeneity across the levels of Var7, both were carefully analyzed; however, I will not reproduce the whole procedure here, to keep the post concise.
This is hackable. You need to add an observation-level random effect that is applied only to the group with the larger residual variance (you need to know which group this is in advance!), via (0+dummy(Var7,"1")|obs); this has the effect of multiplying each observation-level random effect value by 1 if the observation is in group "1" of Var7 and by 0 otherwise. You also need to use lmerControl() to override a few checks that lmer does to try to make sure you are not adding redundant random effects.
Data1$obs <- factor(seq(nrow(Data1)))  # one level per observation
Model2 <- lmer(log(Var1) ~ log(Var2) + log(Var3) +
                 (Var4) + (Var5) + (1 | Var6) +
                 (0 + dummy(Var7, "1") | obs),
               data = Data1, REML = TRUE,
               control = lmerControl(check.nobs.vs.nlev = "ignore",
                                     check.nobs.vs.nRE = "ignore"))
all.equal(REMLcrit(Model2), c(-2*logLik(Model1))) ## TRUE
all.equal(fixef(Model1), fixef(Model2), tolerance=1e-7)
If you want to use this model with hnp you need to work around the fact that hnp doesn't pass the lmerControl option properly.
library(hnp)
d <- function(obj) resid(obj, type = "pearson")  # diagnostic: Pearson residuals
s <- function(n, obj) simulate(obj)[[1]]         # simulate one response vector
f <- function(y.) refit(Model2, y.)              # refit the model to simulated response
hnp(Model2, newclass = TRUE, diagfun = d, simfun = s, fitfun = f)
You might also be interested in the DHARMa package, which does similar simulation-based diagnostics.
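For instance, a minimal sketch with the model fitted above:
library(DHARMa)
sim <- simulateResiduals(Model2)  # simulation-based scaled residuals
plot(sim)                         # QQ plot and residual-vs-predicted diagnostics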

R: how to extract r squared from lmekin in coxme package

I'm trying to find out how well my mixed model with a family effect fits the data. Is it possible to extract r-squared values from lmekin fits? And if so, is it possible to extract partial r-squared values for each of the covariates?
Example:
model <- lmekin(formula = height ~ score + sex + age + (1 | IID),
                data = phenotype_df, varlist = kinship_matrix)
I have tried the MuMIn package but it doesn't seem to work with lmekin models. Thanks.
I am able to use the r.squaredLR() function:
library(coxme)
library(MuMIn)
data(ergoStool, package="nlme") # use a data set from nlme
fit1 <- lmekin(effort ~ Type + (1|Subject), data=ergoStool)
r.squaredLR(fit1)
(I am fairly sure that works, but one great thing to do is create a reproducible example so others can run your code to double-check; for example, I am not exactly sure what phenotype_df looks like, and I am not able to run your code as posted. A great resource for this is the reprex package.)

Variable selection methods

I have been doing variable selection for a modeling problem.
I have used trial and error for the selection (adding or removing a variable and checking for a decrease in error). However, as the number of variables grows into the hundreds, manual variable selection becomes impractical: the model takes half an hour to compute, rendering the task impossible.
Would you happen to know of any packages other than regsubsets from the leaps package? (When tested with the same trial-and-error variables, it produced a higher error: it did not include some variables which were linearly dependent, thereby excluding some valuable variables.)
You need a better (i.e. not flawed) approach to model selection. There are plenty of options, but one that should be easy to adapt to your situation would be using some form of regularization, such as the Lasso or the elastic net. These apply shrinkage to the sizes of the coefficients; if a coefficient is shrunk from its least squares solution to zero, that variable is removed from the model. The resulting model coefficients are slightly biased but they have lower variance than the selected OLS terms.
Take a look at the lars, glmnet, and penalized packages.
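For example, a minimal lasso sketch with glmnet (X and y are hypothetical stand-ins for your design matrix and response):
library(glmnet)
set.seed(1)
X <- matrix(rnorm(100 * 20), nrow = 100)  # hypothetical predictors
y <- rnorm(100)                           # hypothetical response
cvfit <- cv.glmnet(X, y, alpha = 1)       # alpha = 1 is the lasso penalty
coef(cvfit, s = "lambda.min")             # coefficients shrunk to zero drop out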
Try using the stepAIC function of the MASS package.
Here is a really minimal example:
library(MASS)
data(swiss)
str(swiss)
fit <- lm(Fertility ~ ., data = swiss)  # avoid masking the lm() function
fit$coefficients
## (Intercept) Agriculture Examination   Education    Catholic
##  66.9151817  -0.1721140  -0.2580082  -0.8709401   0.1041153
## Infant.Mortality
##        1.0770481
st1 <- stepAIC(fit, direction = "both")
st2 <- stepAIC(fit, direction = "forward")
st3 <- stepAIC(fit, direction = "backward")
summary(st1)
summary(st2)
summary(st3)
You should try the three directions and check which model works better with your test data.
Read ?stepAIC and take a look at the examples.
EDIT
It's true that stepwise regression isn't the greatest method. As mentioned in Gavin Simpson's answer, lasso regression is a better and much more efficient method. It's much faster than stepwise regression and will work with large datasets.
Check out the glmnet package vignette:
http://www.stanford.edu/~hastie/glmnet/glmnet_alpha.html

Running predict() after tobit() in package AER

I am doing a tobit analysis on a dataset where the dependent variable (let's call it y) is left-censored at 0. So this is what I do:
library(AER)
fit <- tobit(data = mydata, formula = y ~ a + b + c)
This is fine. Now I want to run the "predict" function to get the fitted values. Ideally I am interested in the predicted values of the unobserved latent variable "y*" and the observed censored variable "y" [See Reference 1].
I checked the documentation for predict.survreg [Reference 2] and I don't think I understood which option gives me the predicted censored variables (or the latent variable).
Most examples I found online advise the following:
predict(fit, type = "response")
Again, it's not clear what kind of predictions these are.
My guess is that the "type" option in the predict function is the key here, with type="response" meant for the censored variable predictions and type="linear" meant for latent variable predictions.
Can someone with experience here shed some light on this for me, please?
Many thanks!
References:
http://en.wikipedia.org/wiki/Tobit_model
http://astrostatistics.psu.edu/datasets/2006tutorial/html/survival/html/predict.survreg.html
Generally, "response" predictions have been back-transformed to the original scale of the data from whatever modeling transformations were used in the regression, whereas "linear" predictions are the linear predictors on the link-transformed scale. In the case of tobit, which has an identity link, they should be the same.
You can check my meta-prediction easily enough. I just checked it with the example on the ?tobit page:
plot(predict(fm.tobit2, type="response"), predict(fm.tobit2,type="linear"))
I posted a similar question on stats.stackexchange and I got an answer that could be useful for you:
https://stats.stackexchange.com/questions/149091/censored-regression-in-r
There, one of the authors of the package shows how to calculate the mean (i.e., the prediction) of $Y$ where $Y = \max(Y^*, 0)$. Using the AER package, this has to be done somewhat "by hand".
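As a hedged sketch of that "by hand" calculation for left-censoring at zero (using the fit object from the question; for a tobit, $E[Y \mid X] = \Phi(X\beta/\sigma)X\beta + \sigma\phi(X\beta/\sigma)$):
xb <- predict(fit, type = "linear")           # latent-scale linear predictor X*beta
s <- fit$scale                                # estimated sigma (tobit inherits from survreg)
ey <- pnorm(xb / s) * xb + s * dnorm(xb / s)  # E[Y | X] with left-censoring at 0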

Heteroscedasticity robust standard errors with the PLM package

I am trying to learn R after using Stata and I must say that I love it. But now I am having some trouble. I am about to do some multiple regressions with panel data, so I am using the plm package.
Now I want to get the same results with plm in R as with the lm function and with Stata when I perform a heteroscedasticity-robust, entity fixed-effects regression.
Let's say that I have a panel dataset with the variables Y, ENTITY, TIME, V1.
I get the same standard errors in R with this code
lm.model <- lm(Y ~ V1 + factor(ENTITY), data = data)
coeftest(lm.model, vcov. = vcovHC(lm.model, type = "HC1"))
as when I perform this regression in Stata
xi: reg Y V1 i.ENTITY, robust
But when I perform this regression with the plm package I get other standard errors
plm.model <- plm(Y ~ V1, index = c("ENTITY", "YEAR"), model = "within", effect = "individual", data = data)
coeftest(plm.model, vcov. = vcovHC(plm.model, type = "HC1"))
Have I missed setting some options?
Does the plm model use some other kind of estimation and if so how?
Can I somehow get the same standard errors with plm as in Stata with ", robust"?
By default, the plm package does not use the exact same small-sample correction for panel data as Stata. However, in version 1.5 of plm (on CRAN) you have an option that will emulate what Stata is doing.
plm.model <- plm(Y ~ V1, index = c("ENTITY", "YEAR"), model = "within",
                 effect = "individual", data = data)
coeftest(plm.model, vcov. = function(x) vcovHC(x, type = "sss"))
This should yield the same clustered-by-group standard errors as in Stata (but as mentioned in the comments, without a reproducible example and the results you expect, the question is harder to answer).
For more discussion on this and some benchmarks of R and Stata robust SEs see Fama-MacBeth and Cluster-Robust (by Firm and Time) Standard Errors in R.
See also:
Clustered standard errors in R using plm (with fixed effects)
Is it possible that your Stata code is different from what you are doing with plm?
plm's "within" option with "individual" effects means a model of the form:
y_it = a + X_it*B + c_i + e_it
What plm does is demean the data so that c_i drops from the equation:
(y_it - y_i.) = (X_it - X_i.)*B + (e_it - e_i.)
where y_i. denotes the mean of y for individual i, calculated over time; that is why the effect is "individual". You could also have a fixed time effect common to all individuals, in which case the demeaning would be done over the time dimension as well (that is irrelevant in this case, though).
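To see this equivalence concretely, here is a small sketch with the Grunfeld data shipped with plm; the within slopes match OLS with entity dummies:
library(plm)
data("Grunfeld", package = "plm")
fe <- plm(inv ~ value + capital, data = Grunfeld,
          index = c("firm", "year"), model = "within")
ols <- lm(inv ~ value + capital + factor(firm), data = Grunfeld)
coef(fe)                          # within (demeaned) estimates
coef(ols)[c("value", "capital")]  # identical slopes from the dummy-variable regression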
I am not sure what the "xi" command does in Stata, but I think it expands factor variables into dummies, right? Then it seems to me that you are trying to use a dummy variable per ENTITY, as was highlighted by @richardh.
For your Stata and plm codes to match you must be using the same model.
You have two options: (1) xtset your data in Stata and use xtreg with the fe option, or (2) use plm with the pooling model and one dummy per ENTITY.
Matching Stata to R:
xtset entity year
xtreg y v1, fe robust
Matching plm to Stata:
plm(Y ~ V1 + as.factor(ENTITY), index = c("ENTITY", "YEAR"), model = "pooling", effect = "individual", data = data)
Then use vcovHC with one of the modifiers. Make sure to check this paper, which has a nice review of all the mechanics behind the "HC" options and the way they affect the variance-covariance matrix.
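For instance, a hedged sketch of clustering by group with the Arellano estimator (argument names as in plm's vcovHC, applied to the model above):
coeftest(plm.model, vcov. = function(x)
  vcovHC(x, method = "arellano", type = "HC1", cluster = "group"))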
Hope this helps.
