R code to get log-likelihood for binary logistic regression

I have developed a binomial logistic regression using the glm function in R. I need three outputs, which are:
Log likelihood (no coefficients)
Log likelihood (constants only)
Log likelihood (at optimal)
What functions or packages do I need to obtain these outputs?

Say we have a fitted model m.
Log-likelihood of the full model (i.e., at the MLE): logLik(m).
Log-likelihood of the intercept-only model: logLik(update(m, . ~ 1)).
The latter can also be recovered without refitting the model from the deviance() and $null.deviance components, since both are defined with respect to the saturated model, as the sketch below shows.
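For concreteness, here is a minimal sketch with simulated data (the data, sample size, and coefficients are illustrative, not from the question):
set.seed(1)
d <- data.frame(x = rnorm(100))
d$y <- rbinom(100, 1, plogis(0.5 + d$x))
m <- glm(y ~ x, family = binomial, data = d)
logLik(m)                 # log-likelihood at the MLE (full model)
logLik(update(m, . ~ 1))  # intercept-only ("constants only") model
# Recovering the intercept-only value without refitting: deviance is
# 2 * (logLik(saturated) - logLik(model)), so the two deviances differ
# by twice the difference in log-likelihoods:
as.numeric(logLik(m)) + (deviance(m) - m$null.deviance) / 2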

Comparison of goodness-of-fit under robust circumstances

I have fitted a zero-knot, a one-knot, and a two-knot linear spline to my data, and I need some index of goodness of fit for model selection. The crucial point is that the splines are fitted with robust linear regression (function rlm in the R package MASS), specifically with the Huber and Tukey's bisquare estimators, which makes the usual estimators of prediction error, such as AIC, inappropriate. (A sketch of this setup follows my questions below.)
So my problem is:
What criterion should I use to perform model selection on my zero, one and two-knot splines? Can I use SSE?
I also need to compare between a model using Huber estimation and a model using Tukey's bisquare estimation. What criterion should I use?
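To make the setup concrete, the one-knot fits might look like this (a hedged sketch: the data, knot placement, and variable names are illustrative, not from the question):
library(MASS)  # for rlm and the psi functions
set.seed(1)
x <- runif(100)
y <- sin(2 * x) + rnorm(100, sd = 0.1)
knot <- 0.5                        # illustrative knot location
# one-knot linear spline basis: x plus the hinge term (x - knot)_+
fit_huber <- rlm(y ~ x + pmax(x - knot, 0), psi = psi.huber)
fit_tukey <- rlm(y ~ x + pmax(x - knot, 0), psi = psi.bisquare)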

Does the function multinom() from R's nnet package fit a multinomial logistic regression, or a Poisson regression?

The documentation for the multinom() function from the nnet package in R says that it "[f]its multinomial log-linear models via neural networks" and that "[t]he response should be a factor or a matrix with K columns, which will be interpreted as counts for each of K classes." Even when I go to add a tag for nnet on this question, the description says that it is software for fitting "multinomial log-linear models."
Granting that statistics has wildly inconsistent jargon that is rarely operationally defined by whoever is using it, the documentation for the function even mentions a count response, and so seems to indicate that this function is designed to model count data. Yet virtually every resource I've seen treats it exclusively as if it were fitting a multinomial logistic regression: everyone interprets the results in terms of logged odds relative to the reference category (as in logistic regression), not in terms of logged expected counts (as in what is typically referred to as a log-linear model).
Can someone clarify what this function is actually doing and what the fitted coefficients actually mean?
nnet::multinom is fitting a multinomial logistic regression as I understand...
If you check the source code of the package (https://github.com/cran/nnet/blob/master/R/multinom.R and https://github.com/cran/nnet/blob/master/R/nnet.R), you will see that the multinom function does indeed use counts (a common input format for a multinomial regression model; see also the MGLM or mclogit packages, for example), and that it fits the multinomial regression model using a softmax transform to go from predictions on the additive log-ratio scale to predicted probabilities. The softmax transform is the inverse link of a multinomial regression model. The way the multinom predictions are obtained (cf. the question "Predictions from nnet::multinom") is also exactly what you would expect for a multinomial regression model: an additive log-ratio parameterization, with one outcome category used as the baseline.
That is, the coefficients predict the logged odds relative to the reference baseline category (i.e. it is doing a logistic regression), not the logged expected counts (like a log-linear model).
This is shown by the fact that model predictions are calculated as:
fit <- nnet::multinom(...)
X <- model.matrix(fit)             # covariate matrix / design matrix
betahat <- t(rbind(0, coef(fit)))  # coefficients, with an explicit zero row added for the reference category, then transposed
preds <- mclustAddons::softmax(X %*% betahat)
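As a concrete check (a hedged sketch with simulated data; the data and the hand-rolled softmax are mine, not from nnet), the reconstruction above should reproduce predict():
library(nnet)
set.seed(1)
d <- data.frame(x = rnorm(200))
d$y <- factor(sample(c("a", "b", "c"), 200, replace = TRUE))
fit <- multinom(y ~ x, data = d, trace = FALSE)
X <- model.matrix(~ x, data = d)        # same design matrix as above
betahat <- t(rbind(0, coef(fit)))       # zero row for the reference category
eta <- X %*% betahat                    # additive log-ratio scale
preds <- exp(eta) / rowSums(exp(eta))   # softmax computed by hand
all.equal(unname(preds), unname(predict(fit, type = "probs")))  # TRUE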
Furthermore, I verified that the vcov matrix returned by nnet::multinom matches the one given by the standard formula for the covariance matrix of a multinomial regression model; see "Faster way to calculate the Hessian / Fisher Information Matrix of a nnet::multinom multinomial regression in R using Rcpp & Kronecker products".
Is it not the case that a multinomial regression model can always be reformulated as a Poisson log-linear model (i.e., as a Poisson GLM) using the Poisson trick? (glmnet, for example, uses the Poisson trick to fit multinomial regression models as a Poisson GLM.) A sketch of this equivalence is below.
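For what it's worth, here is a hedged sketch of that equivalence (simulated data; the long-format construction and variable names are mine): expand to one row per observation-category pair with a per-observation intercept, and the Poisson GLM's category coefficients reproduce the multinomial ones.
library(nnet)
set.seed(1)
n <- 200
d <- data.frame(x = rnorm(n))
d$y <- factor(sample(c("a", "b", "c"), n, replace = TRUE))
fit_mn <- multinom(y ~ x, data = d, trace = FALSE)
# long format: one row per (observation, category) with a 0/1 count
long <- expand.grid(id = seq_len(n), cat = c("a", "b", "c"))
long$x <- d$x[long$id]
long$count <- as.integer(as.character(d$y[long$id]) == long$cat)
# category-specific slopes coded by hand so "a" stays the baseline
long$xb <- long$x * (long$cat == "b")
long$xc <- long$x * (long$cat == "c")
fit_pois <- glm(count ~ factor(id) + cat + xb + xc, family = poisson, data = long)
coef(fit_pois)[c("catb", "catc", "xb", "xc")]  # should closely match...
coef(fit_mn)                                   # ...the multinom coefficients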

Can I test autocorrelation from the generalized least squares model?

I am trying to use a generalized least squares model (gls in R) on my panel data to deal with an autocorrelation problem.
I do not want to include lags of any variables.
I am trying to use the Durbin-Watson test (dwtest in R, from the lmtest package) to check for autocorrelation in my generalized least squares (gls) model.
However, I find that dwtest does not work on gls fits, while it works on other model objects such as lm.
Is there a way to check for autocorrelation in my gls model?
The Durbin-Watson test is designed to check for the presence of autocorrelation in standard least-squares models (such as those fitted by lm). If autocorrelation is detected, one can then capture it explicitly in the model using, for example, generalized least squares (gls in R). My understanding is that Durbin-Watson is not appropriate for then testing "goodness of fit" in the resulting models, as gls residuals may no longer follow the same distribution as residuals from a standard lm model. (Somebody with deeper knowledge of statistics should correct me if I'm wrong.)
With that said, function durbinWatsonTest from the car package will accept arbitrary residuals and return the associated test statistic. You can therefore do something like this:
v <- gls( ... )$residuals  # extract the residual vector from the fitted model
attr(v, "std") <- NULL     # get rid of the additional attribute gls attaches
car::durbinWatsonTest(v)
Note that durbinWatsonTest will compute p-values only for lm models (likely due to the considerations described above), but you can estimate them empirically by permuting your data/residuals, as sketched below.
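For example (a hedged sketch, assuming v holds the residual vector extracted above; this permutation scheme is one simple option, not the only one):
set.seed(42)
dw_obs <- car::durbinWatsonTest(v)   # observed DW statistic
# permuting the residuals destroys any serial ordering, giving a null reference
dw_null <- replicate(999, car::durbinWatsonTest(sample(v)))
# two-sided empirical p-value (DW is centered near 2 under no autocorrelation)
mean(abs(dw_null - 2) >= abs(dw_obs - 2))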

coxme proportional hazard assumption

I am running mixed-effects Cox models using the coxme function {coxme} in R, and I would like to check the proportional hazards assumption.
I know that the PH assumption can be checked with the cox.zph function {survival} on a coxph model.
However, I cannot find the equivalent for coxme models.
A similar question was posted here in 2015, but received no answer.
My questions are:
1) How can I test the PH assumption for a mixed-effects Cox model fitted with coxme?
2) If there is no equivalent of cox.zph for coxme models, is it valid for publication in a scientific article to fit a mixed-effects coxme model but test the PH assumption on a coxph model identical to the coxme model except without the random effect?
You can use the frailty option in the coxph function. Say your random-effect variable is B and your fixed-effect variable is A. Then you fit your model as below:
myfit <- coxph(Surv(Time, Censor) ~ A + frailty(B), data = mydata)
Now, you can use cox.zph(myfit) to test the proportional hazard assumption.
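A runnable version of the same idea, using the survival package's built-in lung data (the variable mapping is illustrative: ph.ecog as the fixed effect A, inst as the grouping variable B):
library(survival)
myfit <- coxph(Surv(time, status) ~ ph.ecog + frailty(inst), data = lung)
cox.zph(myfit)   # Schoenfeld-residual test of the PH assumption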
I don't have enough reputation to comment, but I don't think using the frailty option in the coxph function will work. The cox.zph documentation says:
Random effects terms such a frailty or random effects in a coxme model are not checked for proportional hazards, rather they are treated as a fixed offset in model.
Thus, it's not taking the random effects into account when testing the proportional hazards assumption.
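Note that the quoted passage comes from the cox.zph help and mentions coxme models directly, which suggests that recent versions of survival will accept a coxme fit (checking only the fixed effects). A hedged sketch, again using the built-in lung data:
library(survival)
library(coxme)
fit_me <- coxme(Surv(time, status) ~ ph.ecog + (1 | inst), data = lung)
cox.zph(fit_me)   # fixed effects checked; the random effect is treated as a fixed offset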

How do I designate a negative binomial error distribution in a GLM using R?

I'm constructing a model using the glm() function in R. Let's say that I know my data have an error distribution that fits a negative binomial distribution.
When I search the R manual for the various families, family = binomial is offered as an option, but negative binomial is not.
In the same section of the R manual (family), NegBinomial is linked in the "See also" section, but it is presented in the context of binomial coefficients (and I'm not even sure what that is referring to).
So, to summarize, I'm hoping to find syntax analogous to glm(y ~ x, family = negbinomial, data = d, na.action = na.omit).
With an unknown overdispersion parameter, the negative binomial is not part of the (one-parameter) exponential family, so it can't be fitted as a standard GLM (or by glm()). There is a glm.nb() function in the MASS package that can help you ...
library(MASS)
glm.nb(y ~ x, ...)
If you happen to have a known/fixed overdispersion parameter (e.g., if you want to fit a geometric distribution model, which has theta = 1), you can use the negative.binomial family from MASS:
glm(y ~ x, family = negative.binomial(theta = 1), ...)
It might not hurt if MASS::glm.nb were in the "See Also" section of ?glm ...
I don't believe theta is the overdispersion parameter. Theta is a shape parameter of the distribution, and the overdispersion is the same as k, as discussed in The R Book (Crawley 2007). The output from a glm.nb() model shows that theta does not equal the dispersion parameter:
Dispersion parameter for Negative Binomial(0.493) family taken to be 0.4623841
Here the dispersion parameter (0.4623841) is a different value from theta (0.493).
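To see the two quantities side by side (a minimal sketch with simulated data; the true theta of 2 is arbitrary):
library(MASS)
set.seed(1)
d <- data.frame(x = rnorm(500))
d$y <- rnbinom(500, mu = exp(1 + 0.5 * d$x), size = 2)  # true theta = 2
fit <- glm.nb(y ~ x, data = d)
fit$theta                                                   # estimated shape parameter theta
sum(residuals(fit, type = "pearson")^2) / df.residual(fit)  # Pearson dispersion statistic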
