I'm not sure how to make this example reproducible, but I'm having an issue with my fixest implementation of a simple event-study plot based on estimated coefficients. I'm wondering if anyone has suggestions for how to fix it.
I have longitudinal data in which I'm looking at the impact of a policy on mothers' employment. Since the treatment (a reduction in benefits) hit all affected observations at once, there is no need to worry about staggered difference-in-differences treatment effects.
Treated observations are those that made use of the benefit before it was taken away; the control group is everyone else. Treatment periods are normalized so that 0 is the quarter of the event.
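For context, roughly how the event-time variables are constructed (a simplified sketch; used_benefit, qnum, and event_qnum are placeholder names, since the real construction isn't shown here):
# 'used_benefit' flags prior benefit use, 'qnum' is a sequential quarter
# counter, 'event_qnum' is the quarter the benefit was reduced
dta$treated_group <- as.integer(dta$used_benefit == 1)
dta$time_to_treat <- ifelse(dta$treated_group == 1,
                            dta$qnum - event_qnum, 0)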
I'm using the specification below:
employ_mother <- feols(paid_emp ~ i(time_to_treat, treated_group, ref = -1) +
                         age_dv + I(age_dv^2) +  # I() keeps the quadratic term; a bare ^2 is dropped by R's formula parser
                         nkids_dv + marital_status + regions | quarter,
                       data = dta, cluster = ~pidp)
iplot(employ_mother,
      xlab = 'Time to treatment',
      main = 'Mother in Employment')
The output graph looks like this:
[Figure: 'Mother in Employment' event-study coefficient plot]
I'm trying to understand why all of the pre-treatment coefficients are below 0 and rising. When I run the same specification in Stata, my results look normal: pre-treatment coefficients hover around 0, with a positive effect after the treatment period begins.
Would really appreciate any help with this.
Thanks!
How to run Latent Class Growth Modelling (LCGM) with a multinomial response variable in R (using the flexmix package)?
And how to stratify each class by a binary/categorical dependent variable?
The idea is to let gender shape the growth curve by cluster (cf. Mikolai and Lyons-Amos (2017, p. 194/3), where the stratification is done by education; they used Mplus).
I think I might have come close with the following syntax:
lcgm_formula <- as.formula(rel_stat ~ age + I(age^2) + gender + gender:age)
lcgm <- flexmix::stepFlexmix(. ~ . | id,
                             data = d,
                             k = nr_of_classes, # would be 1:12 in real analysis
                             nrep = 1,          # would be 50 in real analysis to avoid local maxima
                             control = list(iter.max = 500, minprior = 0),
                             model = flexmix::FLXMRmultinom(lcgm_formula, varFix = TRUE, fixed = ~0))
This is close to what Wardenaar (2020, p. 10) suggests in his methodological paper for a continuous outcome:
stepFlexmix(. ~ . | ID, k = 1:4, nrep = 50,
            model = FLXMRglmfix(y ~ time, varFix = TRUE),
            data = mydata, control = list(iter.max = 500, minprior = 0))
The only difference is that FLXMRmultinom probably does not support the varFix and fixed parameters, although adding them does produce different results. The binomial equivalent of FLXMRmultinom in flexmix might be FLXMRglm (with family = "binomial") as opposed to FLXMRglmfix, so I suspect that the restrictions of the LCGM (e.g. fixed slope and intercept per class) are not specified the way they should be.
The results are otherwise sensible, but the model fails to put men and women with similar trajectories in the same classes (below are the fitted probabilities for each relationship status in each class, by gender):
We should have the following matches by cluster and gender...
1<->1
2<->2
3<->3
...but instead we have
1<->3
2<->1
3<->2
That is, if, for example, the men in class one and the women in class three were forced into the same group, the resulting group would be more homogeneous than the current first row of the plot grid.
Here is the full MVE to reproduce the code.
I got similar results with another dataset with a different number of classes and up to 50 iterations per class. I have tried two alternative ways to predict the probabilities, with identical results. I conclude that the problem is most likely in the model specification (stepFlexmix(..., model = FLXMRmultinom(...))), or that this is some sort of label-switching issue.
If the model is specified correctly and the issue is that similar trajectories for men/women end up in different classes, is there a way to fix that, for example by restricting the parameters?
Any assistance will be highly appreciated.
This seems to be an identifiability issue that is apparently common in mixture modelling. In other words, the labels are switched: while there might not be a problem with the modelling as such, men and women end up in different groups, and that has to be dealt with one way or another.
In the newly linked code, I have swapped the order manually and calculated the predictions by hand.
I will be happy to hear if someone has an alternative approach to dealing with the label-switching issue (like restricting parameters or switching labels algorithmically). I am also curious whether the model could or should be specified in some other way.
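One algorithmic way to switch the labels, sketched under the assumption that fitted_m and fitted_f are (class x age) matrices of fitted probabilities for men and women (hypothetical names), is to search the label permutations for the best match:
# Relabel the women's classes to best match the men's trajectories;
# 'fitted_m' and 'fitted_f' are hypothetical (class x age) probability matrices
perms <- combinat::permn(seq_len(nrow(fitted_m)))  # all label permutations
cost  <- sapply(perms, function(p) sum((fitted_m - fitted_f[p, ])^2))
best  <- perms[[which.min(cost)]]
fitted_f_relabelled <- fitted_f[best, ]            # classes now aligned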
A few remarks:
I believe that this is indeed performing an LCGM, as we do not specify random effects for the slopes or intercepts. Therefore I assume that intercepts and slopes are fixed within classes for both sexes, which would mean the model performs LCGM as intended. By the same token, it seems that running a GMM with a random intercept, slope, or both is not possible.
Since we are calculating the predictions by hand, we need to be able to separate the parameters between the sexes. Therefore I also added an interaction term gender x age^2. The calculations seem to slow down somewhat, but the estimates are similar to the original. It also makes conceptual sense to include the interaction for age^2 if we already have it for age. (A sketch of the by-hand prediction follows the new model specification below.)
varFix = TRUE, fixed = ~0 seem to be redundant: specifying them does not change anything. The subsampling procedure (of my real data) was unaffected by the set.seed() command, for some reason.
The new model specification becomes:
lcgm_formula <- as.formula(rel_stat ~ age + I(age^2) + gender + age:gender + I(age^2):gender)
lcgm <- flexmix::flexmix(. ~ . | id,
                         data = d,
                         k = nr_of_classes, # would be 1:12 in real analysis
                         # nrep = 1,        # would be 50 in real analysis to avoid local maxima
                         #                  # (and we would use stepFlexmix instead)
                         control = list(iter.max = 500, minprior = 0),
                         model = flexmix::FLXMRmultinom(lcgm_formula))
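For what it's worth, this is the kind of by-hand prediction I mean, as a sketch; the coefficient layout returned by parameters() is an assumption (nnet-style multinomial with the first response category as the baseline), so it should be checked against the actual fit:
# Fitted probabilities for one class, for a 30-year-old man; the design
# vector must match lcgm_formula: intercept, age, age^2, genderMale,
# age:genderMale, I(age^2):genderMale (layout assumed, verify with parameters())
beta  <- flexmix::parameters(lcgm, component = 1)  # predictors x (K - 1) categories, assumed
x_new <- c(1, 30, 30^2, 1, 30, 30^2)
eta   <- c(0, as.vector(x_new %*% beta))           # baseline category has eta = 0
probs <- exp(eta) / sum(exp(eta))                  # softmax over relationship statuses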
And the plots: [fitted-probability plots by class and gender omitted]
I am confused about what cox.zph is showing. I came across this test in the documentation for the finalfit package; there was a section under the heading "Testing for Proportional Hazards", about halfway down, which suggested testing the assumption that the risk associated with a particular variable does not change over time.
I ran it using the code below, but the documentation seems to imply that I want a flat line at zero (which I have in the graph) and that the hypothesis test should show no variables significantly different from zero (which is not what I get). This seems like a contradiction: does anyone have any insight into where I may be going wrong here?
matt_sfit1 <- coxph(Surv(matt_tmove_cen, matt_moved_cen) ~
                      matt_ncdem + flood_risk_simple + pre_matt.yr + CurrentAge +
                      distance_bi + percap.inc.k + employment + rentership +
                      pop.change + pop.den.k,
                    data = matt_timeadd)
matt_sfit1 %>% cox.zph()
                     chisq df       p
matt_ncdem         39.2206  1 3.8e-10
flood_risk_simple  28.5628  1 9.1e-08
pre_matt.yr         7.9631  1  0.0048
CurrentAge          5.8361  1  0.0157
distance_bi       141.7576  1 < 2e-16
percap.inc.k       58.8092  1 1.7e-14
employment         30.1621  1 4.0e-08
rentership          8.6946  1  0.0032
pop.change         36.1301  1 1.8e-09
pop.den.k           9.5611  1  0.0020
GLOBAL            281.4299 10 < 2e-16
zph_result <- cox.zph(matt_sfit1)
plot(zph_result, var = 5)
Testing for proportionality is very important. If the proportional hazards assumption is rejected, it means that the effect of interest varies over time, and that the 'pooled' coefficient you are looking at is actually an average of different underlying values.
The first test you reported gives an overview of whether the PH assumption holds, i.e. of whether the effect of interest is constant over time. A graphical inspection can be informative in detecting 'when' this variation happens (for example, a covariate may have a stronger effect earlier/later on; this can sometimes be expected from a theoretical point of view). I think that the chosen y-scale is hiding a non-horizontal line. I would try to isolate the smoothed curve by removing the observation points. You have to specify the resid=FALSE argument in plot.
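For instance, something along these lines should isolate the smooth (using the fit from the question):
# Re-draw the Schoenfeld smooth for the 5th covariate without the
# residual points, so the scale of the curve itself becomes visible
zph <- cox.zph(matt_sfit1)
plot(zph, var = 5, resid = FALSE)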
The two tests should give you a coherent outcome.
Past threads (among the others, here and here) offer excellent guidance on how to address the issue.
Background
I am trying to test for differences in wind speed data among different groups. For the purpose of my question, I am looking only at side wind (wind whose direction is 90 degrees from the individual), and I only care about the strength of the wind, so I use absolute values. The range of wind speeds is 0.0004-6.8 m/s, and because I use absolute values, a Gamma distribution describes the data much better than a normal distribution.
My data contain 734 samples from 68 individuals, with each individual having between 1 and 30 repeats. However, even if I reduce my samples to include only individuals with at least 10 repeats (which leaves me with 26 individuals and a total of 466 samples), I still get the problematic error.
The model
The full model is Wind ~ a*b + c*d + (1|individual), but for the purpose of this question, the simple model of Wind ~ 1 + (1|individual) gives the same singularity error, so I do not think that the explanatory variables are the problem.
The complete code line is glmer(Wind ~ 1 + (1|individual), data = X, family = Gamma(log))
The problem and the strange part
When running the model, I get the boundary (singular) fit: see ?isSingular message, although, as you can see, I use a very simple model and random structure. The strange part is that I can avoid this by adding 0.1 to the Wind variable (i.e. glmer(Wind + 0.1 ~ 1 + (1|Tag), data = X, family = Gamma(log)) does not give any warning). I honestly do not remember why I added 0.1 the first time I did it, but I was surprised to see that it made the message go away.
The question
Is this a problem with lme4? Am I missing something? Any ideas what might cause this, and why does adding 0.1 to the variable solve it?
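For reference, the collapse can be inspected directly; a minimal sketch:
library(lme4)
# Fit the intercept-only model and check whether the variance of
# (1 | individual) has collapsed to (near) zero
m <- glmer(Wind ~ 1 + (1 | individual), data = X, family = Gamma(link = "log"))
VarCorr(m)                 # estimated random-effect standard deviation
isSingular(m, tol = 1e-4)  # TRUE if the fit is on the boundary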
Edit following questions
I am not sure what's the best way to share data, so here is a link to a CSV file on Google Drive.
Using glmmTMB does not produce any warnings with the basic formula glmmTMB(Wind ~ 1 + (1|Tag), data = X, family = Gamma(log)), but it gives convergence warnings ('non-positive-definite Hessian matrix') with the full model (i.e., Wind ~ a*b + c*d + (1|individual)). Those are solved if I scale the continuous variables, as in the sketch below.
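A minimal sketch of that, with a, b, c, d standing in for the real covariate names used above and assumed to be continuous:
library(glmmTMB)
# Full model with the continuous covariates scaled; 'a', 'b', 'c', 'd'
# are the placeholder names from the formula above
m_full <- glmmTMB(Wind ~ scale(a) * scale(b) + scale(c) * scale(d) +
                    (1 | individual),
                  data = X, family = Gamma(link = "log"))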
Until recently I used SPSS for my statistics, but since I am no longer at university, I am changing to R. Things are going well, but I can't seem to replicate the results I obtained for a repeated-effects LMM in SPSS. I did find some threads here that seemed relevant, but they didn't solve my issues.
This is the SPSS script I am trying to replicate in R
MIXED TriDen_L BY Campaign Watering Heating
  /CRITERIA=CIN(95) MXITER(100) MXSTEP(10) SCORING(1)
    SINGULAR(0.000000000001) HCONVERGE(0, ABSOLUTE) LCONVERGE(0, ABSOLUTE)
    PCONVERGE(0.000001, ABSOLUTE)
  /FIXED=Campaign Watering Heating Campaign*Watering Campaign*Heating
    Watering*Heating Campaign*Watering*Heating | SSTYPE(3)
  /METHOD=REML
  /PRINT=TESTCOV
  /RANDOM=Genotype | SUBJECT(Plant_id) COVTYPE(AD1)
  /REPEATED=Week | SUBJECT(Plant_id) COVTYPE(AD1)
  /SAVE=PRED RESID
Using the lme4 package in R I have tried:
lmm <- lmer(lnTriNU ~ Campaign + Watering + Heating + Campaign*Watering +
              Campaign*Heating + Watering*Heating + Campaign*Watering*Heating +
              (1 | Genotype) + (1 | Week:Plant_id), data = pg)
But this, and the other options I have tried for the random part, keep producing an error:
Error: number of levels of each grouping factor must be < number of observations
Obviously in SPSS everything is fine. I suspect I am not correctly modelling the repeated effect? Also, saving predicted and residual values is not yet straightforward for me...
I hope anyone can point me in the right direction.
You probably need to take out either Week or Plant_id, as I think you have as many values for either variable as you have cases. You can nest observations within a larger unit if you add a variable to model time. I am not familiar with SPSS, but if your time variable is Week (i.e., if Week has a value of 1 for the first observation, 2 for the second, etc.), then it should not be a grouping factor but a random effect in the model: something like <snip> week + (1 + week|Plant_id), as sketched below.
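A sketch of that suggestion, assuming Week is (or can be coerced to) a numeric time variable:
library(lme4)
# Random intercept and random slope for Week within Plant_id;
# Campaign*Watering*Heating expands to all main effects and interactions
lmm <- lmer(lnTriNU ~ Campaign * Watering * Heating + Week +
              (1 + Week | Plant_id), data = pg)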
Is Plant_id nested within Genotype, and does Week indicate different measurement points? If so, I assume the following formula leads to the required result:
lmm <- lmer(lnTriNU ~ Campaign + Watering + Heating + Campaign*Watering +
              Campaign*Heating + Watering*Heating + Campaign*Watering*Heating +
              (1 + Week | Genotype/Plant_id), data = pg)
Also saving predicted and residual values is not yet straightforward for me...
Do you mean "computing" by "saving"? In R, all the relevant information is in the returned object and accessible through functions like residuals() or predict(), called on the saved object (in your case, residuals(lmm)). For example:
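fitted_vals <- predict(lmm)     # predicted value for each observation
resid_vals  <- residuals(lmm)   # corresponding residuals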
Note that, by default, lmer does not use an AD1 covariance type.
I am a complete novice when it comes to survival analysis. I am working on a project that requires I use the coxph function in the "survival" package, but I am running into trouble because I do not understand what is required by the formula object.
Most descriptions I can find about the function are as follows:
"a formula object, with the response on the left of a ~ operator, and the terms on the right. The response must be a survival object as returned by the Surv function. "
I know what needs to be on the left of the operator; the issue is what the function expects from the right-hand side.
Here is a link showing what my data look like (the actual data set is much larger; I'm only displaying the first 20 data points for brevity):
Short explanation of the data:
- Row 1 is the header.
- Each row after that is a separate patient.
- The first column is the age of the patient at the time of the study.
- Columns 2 through 14 (headed by x2-x13), and 19 (x18) and 20 (x19), are covariates such as race, relationship status, and medical conditions that take on either true (1) or false (0) values.
- Columns 15 (x14) through 18 (x17) are covariates such as tumor size, which take on whole-number values greater than 0.
- The second-to-last column "sur" is the number of months survived, and "index" is whether or not that time is right-censored (1 for true, 0 for false).
Given these data I need to plot a Cox proportional hazards curve, but I end up with an incorrect plot because the right-hand side of the formula object is wrong.
Here is my code; "temp4" is the name I gave to the data table:
library("survival")
temp4 <- read.table("~/data.txt", header=TRUE)
seerCox <- coxph(Surv(sur, index)~ temp4$x1 + temp4$x2 + temp4$x3 + temp4$x4 + temp4$x5 + temp4$x6 + temp4$x7 + temp4$x8 + temp4$x9 + temp4$x10 + temp4$x11 + temp4$x12 + temp4$x13 + temp4$x14 + temp4$x15 + temp4$x16 + temp4$x17 + temp4$x18 + temp4$x19, data=temp4, singular.ok=TRUE)
plot(survfit(seerCox), main= "Cox Estimate", mark.time=FALSE, ylab="Probability", xlab="Survival Time in Months", col=c("blue", "red", "green"))
I should also note that I have tried replacing the right-hand side with the number 1, with a period, and with leaving it blank. Each of these produces a Kaplan-Meier curve.
[Console output omitted.] Each line of the output is an example of the error produced, depending on how I filter the data (e.g., if I only include patients with ages greater than 85).
If someone could explain how it works, it would be greatly appreciated.
PS: I have searched for over a week for a solution, and I am asking for help here as a last resort.
You should not be using the prefix temp4$ if you are also using a data argument. The whole purpose of supplying a data argument is to allow you to drop those prefixes in the formula.
seerCox <- coxph( Surv(sur, index) ~ . , data=temp4, singular.ok=TRUE)
The above would use all of the x-variables in your temp4 data.frame. This will use just the first 3:
seerCox <- coxph( Surv(sur, index) ~ x1+x2+x3 , data=temp4)
Exactly what the warnings signify depends on the data (as you have, in one sense, already demonstrated by producing different sorts of collinearity with different subsets). If you have collinear columns, you get singularities in the inversion of the model matrix, and the software will attempt to drop aliased columns with a warning. This is really telling you that you do not have enough data to support the large models you are attempting. Exploring that possibility with table() calls is often informative, for example:
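# Cross-tabulate pairs of binary covariates to spot empty or near-empty
# cells that produce aliased (collinear) terms, and check the event count
with(temp4, table(x2, x3))
table(temp4$index)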
Bottom line: this is not so much a problem with your formula construction as a problem of not understanding the limitations of the chosen method with the dataset you have assembled. You need to be more careful about defining your goals. What is the highest priority in this research? Do you really need every variable? Is it possible to aggregate some of these anonymous variables into clinically meaningful categories, such as diagnostic categories or comorbidities?