glm with gamma family with NA / 0 values in R

I would like to use a generalized linear mixed-effects model. My data follows a gamma distribution but contains NA and 0 values. However, the gamma family does not allow me to fit these models if I have 0 values. Does anyone know of a way to get around this problem?
I heard that the glmmTMB package allows the use of gamma distributions with zero values, but I work on a Mac, and it seems that I cannot install this package.
When I try, I get an error stating "clang: error: unsupported option '-fopenmp'".
It would be great if any of you had an idea.

The Gamma distribution has no support on the non-positive real numbers, so you are asking it to model data it could never produce, and the software therefore throws an error.
Similarly, the missing data cannot be modelled, because the model you specify does not itself model or marginalize out the missing values. You will need to either replace the missing values with some number (impute them deterministically or probabilistically) or drop the observations with missing values.
In short, you will need an alternative model. You could use a zero-inflated gamma model or a gamma hurdle model; see here for an example, and the sketch below. There is no single "correct" alternative: these are models, and you will need to think about their relative strengths and weaknesses (assumptions, etc.).
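As a rough illustration, here is a minimal sketch of a gamma hurdle fitted with lme4; the variable names y, x, and group and the data frame mydata are placeholders, and a zero-inflated Gamma via glmmTMB's ziGamma family would be an alternative if that package installs on your machine:
library(lme4)
dat <- na.omit(mydata)  # or impute the NAs instead of dropping them
# Part 1: model whether the response is zero or positive
dat$positive <- as.numeric(dat$y > 0)
m_zero <- glmer(positive ~ x + (1 | group), family = binomial, data = dat)
# Part 2: Gamma GLMM on the strictly positive observations only
m_pos <- glmer(y ~ x + (1 | group), family = Gamma(link = "log"),
               data = subset(dat, y > 0))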

Imputation of missing data for MLM in R

Maybe someone can help me with this question. I conducted a follow-up study and now have to deal with missing data. I am considering how best to impute the missing data for MLM in R (e.g. participants completed the follow-up 2 survey but not the follow-up 1 survey, so I am missing L1 predictors for my longitudinal analysis).
I read about multiple imputation of multilevel data using the pan package (Schafer & Yucel, 2002) and came across the following code:
library(mitml)  # panImpute() is provided by the mitml package, which wraps pan
imp <- panImpute(data, formula = fml, n.burn = 1000, n.iter = 100, m = 5)
Yet, I have trouble understanding it completely. Is there maybe another way to impute missing data in R? Or could somebody illustrate the imputation process in a bit more detail? That would be so great! Do I have to conduct the imputation for every model I build in my MLM? (e.g. when I compare whether a random-intercept model or a random-intercept-and-random-slope model fits my data better, do I have to run the imputation code for every model, or do I run it once at the beginning of all my calculations?)
Thank you in advance
Is there maybe another way to impute missing data in R?
There are other packages. mice is the one that I normally use, and it does support multilevel data.
Do I have to conduct the imputation for every model I build in my MLM? (e.g. when I compare whether a random-intercept model or a random-intercept-and-random-slope model fits my data better, do I have to run the imputation code for every model, or do I run it once at the beginning of all my calculations?)
You have to specify the imputation model. Basically, that means you have to tell the software which variables are predicted by which other variables. Since you are comparing models with the same fixed effects and only changing the random effects (in particular, comparing models with and without random slopes), the imputation model should be the same in both cases. So the workflow is:
perform the imputations;
run the model on all the imputed datasets;
pool the results (typically using Rubin's rules).
So you will need to do this twice, to end up with two sets of pooled results, one for each model. The software should provide functionality for doing all of this; a minimal sketch with mice is shown below.
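As a rough sketch, assuming mice and lme4, a cluster variable called group, and placeholder variables y and x in a data frame mydata (the two-level method "2l.pan" additionally requires the pan package, and pooling mixed models may require broom.mixed):
library(mice)
library(lme4)
# 1. perform the imputations (m = 5 imputed datasets)
pred <- make.predictorMatrix(mydata)
pred[, "group"] <- -2   # mark group as the cluster (class) variable
imp <- mice(mydata, m = 5, method = "2l.pan", predictorMatrix = pred)
# 2. run each model on every imputed dataset
fit_intercept <- with(imp, lmer(y ~ x + (1 | group)))
fit_slope     <- with(imp, lmer(y ~ x + (1 + x | group)))
# 3. pool the results (Rubin's rules)
summary(pool(fit_intercept))
summary(pool(fit_slope))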
Having said all of that, I would advise against choosing your model based on fit statistics; use expert knowledge instead. If you have strong theoretical reasons for expecting slopes to vary by group, then include random slopes. If not, then don't include them.

XGBoost with monotonic constraints for explainability

I am building in Python a credit scorecard using this public dataset: https://www.kaggle.com/sivakrishna3311/delinquency-telecom-dataset
It's a binary classification problem:
Target = 1 -> Good applicant
Target = 0 -> Bad applicant
I only have numeric continuous predictive characteristics.
In the credit industry it is a legal requirement to explain why an applicant got rejected (or didn't even get the maximum score): to meet that requirement, Adverse Codes are produced.
In a classic logistic regression approach, one would do this:
1. Calculate the Weight-of-Evidence (WoE) for each predictive characteristic, forcing a monotonic relationship between the feature values and the WoE (or log(odds)); for example, the higher the Network Age, the higher the WoE.
2. Replace the data values with the corresponding WoE. For example, a value of 250 for Network Age would be replaced by 0.04 (its WoE).
3. Train a logistic regression.
After some linear transformations you'd end up with the final scorecard, and it'd then be straightforward to assign the Adverse Codes so that the bin with the maximum score doesn't return an Adverse Code.
Now, I want to train an XGBoost model (which typically outperforms a logistic regression on imbalanced, low-noise data). XGBoost models are very predictive but need to be explained (typically via SHAP).
What I have read is that in order to make the model decision explainable you must ensure that monotonic constraints are applied.
Question 1. Does it mean that I need to train the XGBoost on the Weight-of-Evidence-transformed data, as is done with the logistic regression (see point 2 above)?
Question 2. In Python, the XGBoost package offers the option to set monotonic constraints (via the monotone_constraints option). If I don't transform the data by replacing the values with their Weight-of-Evidence (and therefore don't enforce monotonicity through the transformation), does it still make sense to use monotone_constraints in XGBoost for a binary problem? I mean, does it make sense to use monotone_constraints with an XGBClassifier at all?
Thanks.
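For what it's worth, here is a minimal sketch of the second option, using R's xgboost interface (which exposes the same monotone_constraints parameter as the Python package); the feature names network_age and recharge_amount and the data frame train are placeholders:
library(xgboost)
X <- as.matrix(train[, c("network_age", "recharge_amount")])  # raw, untransformed features
y <- train$target                                             # 1 = good applicant, 0 = bad applicant
dtrain <- xgb.DMatrix(data = X, label = y)
params <- list(
  objective = "binary:logistic",
  monotone_constraints = "(1,0)"  # +1: score non-decreasing in network_age; 0: recharge_amount unconstrained
)
bst <- xgb.train(params = params, data = dtrain, nrounds = 200)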

Not able to predict with a gamma-distributed GLM in H2O

I have just fitted a gamma GLM with the h2o package in R.
When I try to predict on the test set I get this error:
Illegal argument(s) for GLM model: GLM_model_R_1644680218230_95. Details: ERRR on field: _family: Response value for gamma distribution must be greater than 0.
I understand that a gamma model cannot be trained on data with a zero response, but one should be able to predict on data with a true value of 0 (this is used a lot in actuarial work).
Does anyone know an h2o solution? I know I can simply fit the model with glm() or something similar, but I'm relying on mean-encoded categorical variables (which is really convenient in h2o).
Thanks!
Based on the description in the comments, this seems like a bug and a fix will be needed.
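In the meantime, one possible (untested) workaround, assuming the check is only triggered by the response column being present in the scoring frame, would be to drop that column before predicting; the frame test_h2o, the response name "claim_amount", and the model object are placeholders:
library(h2o)
h2o.init()
# drop the hypothetical response column from the test frame before scoring
test_no_response <- test_h2o[, setdiff(colnames(test_h2o), "claim_amount")]
pred <- h2o.predict(gamma_glm_model, newdata = test_no_response)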

How to run a truncated and inflated Poisson model in R?

My data doesn't contain any zeros. The minimum value of my outcome, y, is 1, and that is the value that is inflated. My objective is to run a truncated and inflated Poisson regression model in R.
I already know how to run each regression separately (zero-truncated and zero-inflated). I want to know how to combine the two conditions into one model.
Thanks for your help.
For zero-inflated or zero-hurdle models, the standard approach is to use the pscl package. I also wrote a package that fits these kinds of models here, but it is not yet mature and fully tested. Unless you have voluminous data, I still recommend pscl, which is more flexible, robust, and documented.
For zero-truncated models, you can have a look at the VGAM::vglm function. You might find useful information here.
Note that the two approaches do not make the same distributional assumption, so they are not estimated in the same way. Given the description of your dataset, I think you are looking for a zero-truncated model (since you do not observe zeros). With zero-inflated models, you decompose the observed pattern into zeros generated by a selection model and counts generated by a count-data model; this does not look consistent with your dataset.
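For reference, a minimal sketch of both models (y, x, and mydata are placeholders):
library(pscl)   # zero-inflated and hurdle count models
library(VGAM)   # zero-truncated (positive) Poisson
# Zero-inflated Poisson: a logit model for the excess zeros plus a Poisson count model
zi_fit <- zeroinfl(y ~ x | x, data = mydata, dist = "poisson")
# Zero-truncated Poisson: only strictly positive counts are observed
zt_fit <- vglm(y ~ x, family = pospoisson(), data = mydata)
summary(zi_fit)
summary(zt_fit)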

R: Estimating model variance

In the demo for ROC, there are models that, when plotted, have a spread, like hiv.svm$predictions, which contains 10 estimates of the response. Can someone remind me how to calculate N estimates of a model? I'm using rpart and a neural network to estimate a single output (true/false). How can I run 10 different samplings of the training data to get 10 different model responses to the input? I think the technique is called bootstrapping, but I don't know how to implement it.
I need to do this outside of caret, because when I use caret I keep getting the message "Error in tab[1:m, 1:m] : subscript out of bounds". Is there a "simple" bootstrap function?
Obviously the answer is too late, but you could have used caret simply by renaming the levels of your factor, because caret doesn't work if your binary response is of type logical. For example:
# rename the levels of the logical response so caret accepts it
factor(responseWithTrueFalseLevel,
       levels = c(TRUE, FALSE),
       labels = c("myTrueLevel", "myFalseLevel"))
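To answer the actual bootstrap question, a minimal sketch of a manual bootstrap outside caret (mydata, y, and newdata are placeholders) might look like this:
library(rpart)
set.seed(1)
preds <- replicate(10, {
  boot_idx <- sample(nrow(mydata), replace = TRUE)       # resample rows with replacement
  fit <- rpart(y ~ ., data = mydata[boot_idx, ], method = "class")
  predict(fit, newdata = newdata, type = "prob")[, 2]    # predicted probability of the second class
})
# preds is a matrix with one column per bootstrap model and one row per new observation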
