Comparison of goodness-of-fit under robust circumstances [migrated]

This question was migrated from Stack Overflow because it can be answered on Cross Validated.
I have fitted a zero-knot, a one-knot and a two-knot linear spline to my data, and I need some index of goodness of fit for model selection. The crucial point is that the splines are fitted with robust linear regression (function rlm in the R package MASS), specifically with Huber estimation and Tukey's bisquare estimation, which makes the usual estimators of prediction error, such as AIC, inappropriate.
So my problem is:
What criterion should I use to perform model selection on my zero, one and two-knot splines? Can I use SSE?
I also need to compare between a model using Huber estimation and a model using Tukey's bisquare estimation. What criterion should I use?
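For concreteness, the setup described above can be sketched as follows. This is a minimal sketch with simulated data; the knot locations 3, 5 and 7 are arbitrary placeholders, not anything from the question.

```r
library(MASS)  # for rlm, psi.huber, psi.bisquare

set.seed(1)
# Hypothetical data: a piecewise-linear trend plus a few gross outliers
x <- seq(0, 10, length.out = 100)
y <- 1 + 0.5 * x - 0.8 * pmax(x - 5, 0) + rnorm(100, sd = 0.3)
y[c(10, 50, 90)] <- y[c(10, 50, 90)] + 5   # outliers

# zero-, one- and two-knot linear splines via a truncated power basis
fit0 <- rlm(y ~ x,                                   psi = psi.huber)
fit1 <- rlm(y ~ x + pmax(x - 5, 0),                  psi = psi.huber)
fit2 <- rlm(y ~ x + pmax(x - 3, 0) + pmax(x - 7, 0), psi = psi.huber)

# the same one-knot model with Tukey's bisquare instead of Huber
fit1_bw <- rlm(y ~ x + pmax(x - 5, 0), psi = psi.bisquare)
```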

Related

Does the function multinom() from R's nnet package fit a multinomial logistic regression, or a Poisson regression?

The documentation for the multinom() function from the nnet package in R says that it "[f]its multinomial log-linear models via neural networks" and that "[t]he response should be a factor or a matrix with K columns, which will be interpreted as counts for each of K classes." Even when I go to add a tag for nnet on this question, the description says that it is software for fitting "multinomial log-linear models."
Granting that statistics has wildly inconsistent jargon that is rarely operationally defined by whoever is using it, the documentation for the function even mentions having a count response and so seems to indicate that this function is designed to model count data. Yet virtually every resource I've seen treats it exclusively as if it were fitting a multinomial logistic regression. In short, everyone interprets the results in terms of logged odds relative to the reference (as in logistic regression), not in terms of logged expected count (as in what is typically referred to as a log-linear model).
Can someone clarify what this function is actually doing and what the fitted coefficients actually mean?
nnet::multinom is fitting a multinomial logistic regression as I understand...
If you check the source code of the package (https://github.com/cran/nnet/blob/master/R/multinom.R and https://github.com/cran/nnet/blob/master/R/nnet.R), you will see that the multinom function does indeed use counts (a common input format for a multinomial regression model; see also the MGLM or mclogit packages, for example), and that it fits the multinomial regression model using a softmax transform to go from predictions on the additive log-ratio scale to predicted probabilities. The softmax transform is indeed the inverse link function of a multinomial regression model. The way the multinom model predictions are obtained (cf. predictions from nnet::multinom) is also exactly what you would expect for a multinomial regression model using an additive log-ratio parameterization, i.e. using one outcome category as a baseline.
That is, the coefficients predict the logged odds relative to the reference baseline category (i.e. it is doing a logistic regression), not the logged expected counts (like a log-linear model).
This is shown by the fact that model predictions are calculated as
fit <- nnet::multinom(...)
X <- model.matrix(fit) # covariate matrix / design matrix
betahat <- t(rbind(0, coef(fit))) # model coefficients, with explicit zero row added for reference category, then transposed
preds <- mclustAddons::softmax(X %*% betahat)
Furthermore, I verified that the vcov matrix returned by nnet::multinom matches the one obtained from the analytical formula for the vcov matrix of a multinomial regression model (see: Faster way to calculate the Hessian / Fisher Information Matrix of a nnet::multinom multinomial regression in R using Rcpp & Kronecker products).
Is it not the case that a multinomial regression model can always be reformulated as a Poisson loglinear model (i.e. as a Poisson GLM) using the Poisson trick (glmnet e.g. uses the Poisson trick to fit multinomial regression models as a Poisson GLM)?
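The "Poisson trick" mentioned above can be sketched in a few lines. This is a toy example, not the glmnet implementation: each multinomial observation gets its own nuisance intercept (id), and the category-specific terms then reproduce the multinomial log-odds coefficients relative to the baseline category. The slope dummies xb, xc are coded by hand to avoid aliasing with the id dummies.

```r
set.seed(2)
n <- 30
x <- rnorm(n)
# true multinomial log-odds vs. baseline "a": b has (0.5 + x), c has (-0.5 - x)
counts <- t(sapply(x, function(xi) {
  eta <- c(0, 0.5 + xi, -0.5 - xi)
  rmultinom(1, size = 50, prob = exp(eta) / sum(exp(eta)))
}))
d <- data.frame(id    = factor(rep(1:n, each = 3)),
                cat   = factor(rep(c("a", "b", "c"), times = n)),
                x     = rep(x, each = 3),
                count = as.vector(t(counts)))
# Poisson trick: one nuisance intercept per multinomial observation (id)
d$xb <- (d$cat == "b") * d$x
d$xc <- (d$cat == "c") * d$x
fit_pois <- glm(count ~ id + cat + xb + xc, family = poisson, data = d)
coef(fit_pois)[c("catb", "catc", "xb", "xc")]  # approx. 0.5, -0.5, 1, -1
```

The same coefficients (up to convergence tolerance) would come out of a direct multinomial fit such as nnet::multinom on the count matrix, which is the point of the equivalence.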

R code to get Log-likelihood for Binary logistic regression

I have developed a binomial logistic regression using the glm function in R. I need three outputs, which are:
Log likelihood (no coefficients)
Log likelihood (constants only)
Log likelihood (at optimal)
What functions or packages do I need to obtain these outputs?
Say we have a fitted model m.
log-likelihood of full model (i.e., at MLE): logLik(m)
log-likelihood of intercept-only model: logLik(update(m, . ~ 1))
although the latter can probably be retrieved without refitting the model if we think carefully enough about the deviance() and $null.deviance components (these are defined with respect to the saturated model)
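Putting the pieces above together, a minimal sketch using the built-in mtcars data (the model formula is just an illustration):

```r
# binary logistic regression on a built-in dataset
m <- glm(am ~ mpg + wt, family = binomial, data = mtcars)

logLik(m)                 # log-likelihood at the optimum (fitted MLE)
m0 <- update(m, . ~ 1)    # intercept-only ("constants only") model
logLik(m0)

# The intercept-only log-likelihood can also be recovered without refitting,
# since deviance() and $null.deviance are both defined against the saturated
# model: logLik(null) = logLik(full) + (deviance - null.deviance) / 2
as.numeric(logLik(m)) + (deviance(m) - m$null.deviance) / 2
```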

Automatic model creation, for model selection, in polynomial regression in R

Let's imagine that for a target value 'price', I have predictive variables of x, y, z, m, and n.
I have been able to analyse different models that I could fit through the following methods:
Forward, backward, and stepwise selection
Grid and Lasso
KNN (IBk)
For each, I got the RMSE and MSE of the predictions, so I can choose the best model.
All of these are helpful with linear models.
I'm just wondering if there is any chance to do the same for polynomial regressions (squared, cubic, ...) so I can fit and analyse them as well in the same dataset.
Have you seen the caret package? It's very powerful and wraps a lot of machine learning models. It can compare different models and also find the best hyperparameters.
http://topepo.github.io/caret/index.html
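In base R you can also generate the polynomial candidates directly with poly() and compare them with the same criteria you already use. A hedged sketch on simulated data (the variable names price, x, y, z, m, n come from the question; the data-generating process is made up):

```r
set.seed(42)
# Hypothetical data with the variable names from the question
mydata <- data.frame(x = runif(200), y = runif(200), z = runif(200),
                     m = runif(200), n = runif(200))
mydata$price <- 2 + 3 * mydata$x^2 + mydata$y + rnorm(200, sd = 0.2)

# Fit polynomial candidates of increasing degree in x and compare
fits <- lapply(1:4, function(d)
  lm(price ~ poly(x, d) + y + z + m + n, data = mydata))
sapply(fits, AIC)   # the quadratic and higher fits should score markedly better here
```

The same poly() terms can be dropped into a caret::train() formula, so the grid/Lasso/stepwise machinery you already have applies unchanged.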

Can I test autocorrelation from the generalized least squares model?

I am trying to use a generalized least squares model (gls in R) on my panel data to deal with an autocorrelation problem.
I do not want to include any lags of any variables.
I am trying to use the Durbin-Watson test (dwtest in R) to check for autocorrelation in my generalized least squares model (gls).
However, I find that dwtest is not applicable to models fitted with gls, while it is applicable to other functions such as lm.
Is there a way to check for autocorrelation in my gls model?
Durbin-Watson test is designed to check for presence of autocorrelation in standard least-squares models (such as one fitted by lm). If autocorrelation is detected, one can then capture it explicitly in the model using, for example, generalized least squares (gls in R). My understanding is that Durbin-Watson is not appropriate to then test for "goodness of fit" in the resulting models, as gls residuals may no longer follow the same distribution as residuals from the standard lm model. (Somebody with deeper knowledge of statistics should correct me, if I'm wrong).
With that said, function durbinWatsonTest from the car package will accept arbitrary residuals and return the associated test statistic. You can therefore do something like this:
v <- gls( ... )$residuals
attr(v,"std") <- NULL # get rid of the additional attribute
car::durbinWatsonTest( v )
Note that durbinWatsonTest will compute p-values only for lm models (likely due to the considerations described above), but you can estimate them empirically by permuting your data / residuals.
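The permutation idea at the end can be sketched as follows. This is a toy illustration: an AR(1) series stands in for the gls residuals you would extract as in the snippet above, and under the null of no autocorrelation every ordering of the residuals is equally likely, so shuffling gives a reference distribution for the statistic.

```r
set.seed(123)
# stand-in residual series with AR(1) autocorrelation (use your gls residuals here)
v <- as.numeric(arima.sim(list(ar = 0.5), n = 200))

dw_obs <- car::durbinWatsonTest(v)   # DW statistic for a plain numeric vector
# permutation null: shuffle the residuals and recompute the statistic
dw_perm <- replicate(999, car::durbinWatsonTest(sample(v)))
# two-sided empirical p-value (DW is centred near 2 under no autocorrelation)
p_emp <- mean(abs(dw_perm - 2) >= abs(dw_obs - 2))
p_emp
```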

What's the difference between ks test and bootstrap_p for power law fitting?

I want to know the goodness of fit when fitting a power-law distribution in R using the poweRlaw package.
After estimate_xmin(), I had a p-value of 0.04614726, but bootstrap_p() returns another p-value of 0.
So why do these two p-values differ? And how can I judge whether it is a power-law distribution?
Here is the plot produced when using poweRlaw for fitting: [plot: poweRlaw fitting result]
You're getting a bit confused. One of the statistics that estimate_xmin returns is the Kolmogorov-Smirnov statistic (as described in Clauset, Shalizi & Newman, 2009). This statistic is used to estimate the best cut-off value for your model, i.e. xmin. However, it doesn't tell you anything about the model fit.
Assessing model suitability is where the bootstrap_p function comes in.
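The distinction can be seen in a short sketch using the "moby" word-frequency data shipped with poweRlaw (100 bootstrap simulations is deliberately small to keep the sketch quick; use more in practice):

```r
library(poweRlaw)
data("moby", package = "poweRlaw")

m <- displ$new(moby)              # discrete power-law model object
est <- estimate_xmin(m)           # minimises the KS distance over xmin;
m$setXmin(est)                    #   est$gof is that distance, NOT a p-value

bs <- bootstrap_p(m, no_of_sims = 100, threads = 1)
bs$p   # goodness-of-fit p-value: small values argue against the power law
```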
