I am currently working on univariate GARCH models with different specifications and got stuck on including the exponential term in the variance equation:
mean model (setting ω4 = 0)
variance model
I am using the rugarch package in R and (unsuccessfully) tried the 'eGARCH' model type and external regressor option for the recession dummy INBER to get the estimates. Is this generally the correct way for including the exponential part or am I completely off?
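For what it's worth, the general pattern in rugarch is to pass the dummy via `external.regressors` inside `variance.model` when it belongs in the variance equation. A minimal sketch, assuming a returns series `returns` and a dummy vector `INBER` of the same length (both hypothetical names from the question):

```r
library(rugarch)

# eGARCH(1,1) with the recession dummy INBER entering the (log-)variance
# equation as an external regressor; the mean model here is a plain ARMA(1,1)
# stand-in, since the exact mean specification is not shown.
spec <- ugarchspec(
  variance.model = list(model = "eGARCH", garchOrder = c(1, 1),
                        external.regressors = as.matrix(INBER)),
  mean.model = list(armaOrder = c(1, 1), include.mean = TRUE),
  distribution.model = "norm")

fit <- ugarchfit(spec, data = returns)
show(fit)
```

Because eGARCH models log-variance, an external regressor already enters multiplicatively (i.e. exponentially) in the variance itself, which may be exactly the exponential term in question.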
I'm developing a prediction model (binary outcome) in R and I have the following question:
After fitting my model with fit.mult.impute from the Hmisc package (I dealt with some missing values in my data), I used the validate function from the rms package to assess optimism with bootstrapping and obtained my optimism-corrected performance measures. I then applied post-estimation shrinkage, using the index.corrected calibration slope as the shrinkage factor, to obtain shrunken coefficients. My question concerns correct model presentation.
It is common practice to report apparent performance measures (with CIs), then optimism, and then optimism-corrected measures (with CIs). But what about the final post-shrinkage model? Should performance measures be reported for this final model as well, or is the final model only to be used for presenting the model equation or a nomogram? The same question applies to calibration curves, for which I intended to use the CalibrationCurves package or the calibrate function from rms, but I don't know whether I have to plot the performance of the shrunken model as well.
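For concreteness, the workflow described above can be sketched as follows. This is a simplified single-fit illustration (no multiple imputation) with hypothetical variables `y`, `x1`, `x2` in a data frame `df`; with imputation the same steps apply to the fit.mult.impute object:

```r
library(rms)

# Fit a logistic model, keeping the design matrix and response (x=TRUE, y=TRUE)
# so that validate() can bootstrap and we can re-estimate the intercept later.
f <- lrm(y ~ x1 + x2, data = df, x = TRUE, y = TRUE)

v <- validate(f, B = 200)                 # bootstrap optimism correction
shrink <- v["Slope", "index.corrected"]   # calibration slope as shrinkage factor

beta_shrunk <- shrink * coef(f)[-1]       # shrink all slope coefficients

# Re-estimate the intercept with the shrunken linear predictor as an offset,
# so predicted probabilities remain calibrated-in-the-large.
lp <- as.vector(f$x %*% beta_shrunk)
new_intercept <- coef(glm(f$y ~ offset(lp), family = binomial))[1]
```

The intercept re-estimation step is one common convention after slope shrinkage; the source question does not specify how the author handled it.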
I'm trying to run a weighted Cox regression model but none of the resources I found were actually useful to figure out how to do that.
Basically, I just want to run the model specified below, but weighted.
coxph(Surv(outcome) ~ treatment, data = df)
Apparently the coxphw function is the way to do this, but I cannot figure out the necessary specifications to actually make that function run.
The 'template' option of coxphw determines which type of (weighted) estimation of the Cox regression is requested. You can choose "AHR" for estimation of average hazard ratios, "ARE" for estimation of average regression effects, or "PH" for unweighted Cox proportional hazards regression. Check the reference manual for more details.
Hope it helps
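A minimal sketch of the call, assuming a data frame `df` with hypothetical `time`, `status`, and `treatment` columns (the original model omitted the event indicator, which `Surv` normally expects):

```r
library(survival)
library(coxphw)

# Weighted Cox regression estimating average hazard ratios ("AHR" template)
fit <- coxphw(Surv(time, status) ~ treatment, data = df, template = "AHR")
summary(fit)
```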
I have a dataset with data left censored and I wanted to apply a multilevel mixed-effects tobit regression, but I only find information about how to do it in Stata. Is it possible to do it in R?
I found the packages 'VGAM' and 'censReg', but I can't figure out how to add fixed and random effects.
Also, my data are log-normally distributed; is there a way to incorporate this into the model?
Thanks!
According to Section 3.5 of a vignette, the censReg package can handle a mixed model if the data are prepared properly via the plm package.
This Cross Validated page shows an example.
I don't have experience with this; it might only work with formal panel data rather than more general random-effects structures.
If your data are truly log-normal, you could take logs first and set the lower censoring limit on the log scale. Note that an apparent log-normal distribution of outcomes might just represent a corresponding distribution of predictor values with an underlying normal error distribution around the predictions. Don't jump blindly into a log-normal assumption.
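Putting those two points together, a minimal sketch of the panel-tobit route, assuming hypothetical panel identifiers `id` and `time`, a left-censored outcome `y` with detection limit 0.5, and predictors `x1`, `x2`:

```r
library(plm)
library(censReg)

# censReg detects a panel structure when given a pdata.frame (vignette Sec. 3.5)
pdat <- pdata.frame(df, index = c("id", "time"))

# Log-transform the outcome first and censor on the log scale, per the advice
# above; left = log(0.5) is the detection limit on that scale.
m <- censReg(log(y) ~ x1 + x2, left = log(0.5), data = pdat)
summary(m)
```

This fits a random-effects (random-intercept) panel tobit; as noted, more general random-effects structures may not be supported.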
I'm considering several models for count data: GLM, GLMM, zero-inflated, and zero-inflated mixed models.
All my work was done in R.
Prior studies confirmed that excess zeros and over-dispersion are issues to consider in count data analysis.
So I tried the following tests.
1. Excess zeros
The Vuong test was performed comparing the zero-inflated model against the corresponding GLM, using vuong() from the pscl package:
ZIP vs. GLM Poisson
ZINB vs. GLM NB
Both tests gave significant results (p < 0.05).
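The two comparisons can be sketched as follows, with hypothetical variables (`accidents`, `aadt`) in a data frame `df` standing in for the real predictors:

```r
library(pscl)
library(MASS)

# ZIP vs. GLM Poisson
m_pois <- glm(accidents ~ aadt, family = poisson, data = df)
m_zip  <- zeroinfl(accidents ~ aadt, dist = "poisson", data = df)
vuong(m_zip, m_pois)   # significant result favors the zero-inflated model

# ZINB vs. GLM NB
m_nb   <- glm.nb(accidents ~ aadt, data = df)
m_zinb <- zeroinfl(accidents ~ aadt, dist = "negbin", data = df)
vuong(m_zinb, m_nb)
```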
2. Over-dispersion
A dispersion test was performed on the Poisson model to show why over-dispersion must be considered in the real data; dispersiontest() from the AER package was used (Cameron and Trivedi, 1990).
The test rejects the null hypothesis (p < 0.05).
In addition, the dispersion parameter (1/theta) was about 0.39.
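A minimal sketch of that test, with the same hypothetical Poisson fit as above:

```r
library(AER)

# H0: equidispersion (variance = mean); rejection indicates over-dispersion
m_pois <- glm(accidents ~ aadt, family = poisson, data = df)
dispersiontest(m_pois)
```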
However, I have not yet found a method for justifying why random effects should be considered.
My data are traffic accident counts per road per year, i.e. longitudinal count data.
I was told by a professor of statistics that a mixed model should be used considering road heterogeneity.
Therefore, I fitted GLMM Poisson/NB and zero-inflated mixed Poisson/NB models with random effects by road and examined the results.
The GLMMs used glmer() from lme4, and glmmTMB() from the glmmTMB package was used for the zero-inflated mixed models.
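The four mixed fits described above can be sketched as follows, again with hypothetical variable names (`accidents`, `aadt`, `road`):

```r
library(lme4)
library(glmmTMB)

# GLMM with a random intercept per road
g_pois <- glmer(accidents ~ aadt + (1 | road), family = poisson, data = df)
g_nb   <- glmmTMB(accidents ~ aadt + (1 | road), family = nbinom2, data = df)

# Zero-inflated mixed NB: ziformula = ~1 adds a constant zero-inflation part
z_zinb <- glmmTMB(accidents ~ aadt + (1 | road), ziformula = ~ 1,
                  family = nbinom2, data = df)

sapply(list(g_pois, g_nb, z_zinb), AIC)  # AIC comparison, as in the text
```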
I tried the Hausman test at first. However, this test compares a fixed-effects model with a random-effects model, and it seemed inappropriate for count data (not a linear model).
Crucially, I found no previous study that applied the Hausman test to the random effects of a mixed model for count data.
Therefore, my question is as follows:
1. I would like to know if there is a previous study that identifies the reason for considering random effects when modeling longitudinal data.
2. Is there a validation method to verify the significant effects of random effects in the mixed model?
The AIC and BIC comparison has already been carried out.
3. If such a method exists, which R package implements it, and how is it used?
I'm working with a generic log-return series, which I made independent by modeling it as an ARMA(1,1) process. The Ljung-Box test on the squared residuals is not significant, i.e. the squared residuals do not show any heteroskedasticity. However, since I could not find a way to model the residuals with a t-distribution (arima() does not allow a t error setting), I tried modeling the residuals as an ARCH(1) process, with all coefficients significant at the 5% level (this also allowed me to model the error as t-distributed).
Is it correct to go ahead with ARCH model in this case?
Is there any reason why this is happening?
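For reference, the ARMA(1,1)+ARCH(1) combination with t errors can be fitted in one step, which avoids the two-stage residual modeling described above. A minimal sketch, assuming a hypothetical returns series `returns`:

```r
library(rugarch)

# sGARCH with garchOrder c(1, 0) is a plain ARCH(1); "std" gives Student-t errors
spec <- ugarchspec(
  mean.model = list(armaOrder = c(1, 1)),
  variance.model = list(model = "sGARCH", garchOrder = c(1, 0)),
  distribution.model = "std")

fit <- ugarchfit(spec, data = returns)
show(fit)
```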