Subject-specific prediction from a heterogeneous linear mixed-effects model (lcmm package) in R

I am fitting a heterogeneous linear mixed-effects model using the lcmm package in R. Currently, I am only getting class-specific and weighted-average predictions from the predictY function, but I want subject-specific predictions. Is there any way to construct subject-specific predictions with this package? Any help is appreciated.

I have found the answer. It looks like predictY gives the mean class-specific predictions; adding to them the product of each subject's estimated random effects (ranef(model)) and the design matrix of the random part of the model gives the subject-specific predictions.
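For concreteness, here is a minimal sketch of that calculation, assuming a 2-class hlme fit with a random intercept and slope on a variable called time, a subject identifier id, and a data frame dat; these names are placeholders, and the layout of ranef() and pprob should be checked on your own fit.
library(lcmm)

# 1-class fit used to provide starting values for the 2-class model
m1  <- hlme(y ~ time, random = ~ time, subject = "id", data = dat)
mod <- hlme(y ~ time, mixture = ~ time, random = ~ time,
            subject = "id", ng = 2, data = dat, B = m1)

# class-specific mean predictions at the observed covariate values (one column per class)
pc <- predictY(mod, newdata = dat, var.time = "time")$pred

# estimated subject-level random effects and posterior class assignments;
# assumed here to share the same subject ordering (one row per subject)
b  <- as.matrix(ranef(mod))
cl <- mod$pprob$class

# design matrix of the random part (~ time): intercept + time
Z <- cbind(1, dat$time)

idx <- match(dat$id, mod$pprob$id)           # subject row for each observation
dev <- rowSums(Z * b[idx, , drop = FALSE])   # subject-specific deviation Z_i b_i

# subject-specific prediction = mean prediction of the assigned class + deviation
pred_subject <- pc[cbind(seq_len(nrow(dat)), cl[idx])] + dev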

Related

Does the function multinom() from R's nnet package fit a multinomial logistic regression, or a Poisson regression?

The documentation for the multinom() function from the nnet package in R says that it "[f]its multinomial log-linear models via neural networks" and that "[t]he response should be a factor or a matrix with K columns, which will be interpreted as counts for each of K classes." Even when I go to add a tag for nnet on this question, the description says that it is software for fitting "multinomial log-linear models."
Granting that statistics has wildly inconsistent jargon that is rarely operationally defined by whoever is using it, the documentation for the function even mentions having a count response and so seems to indicate that this function is designed to model count data. Yet virtually every resource I've seen treats it exclusively as if it were fitting a multinomial logistic regression. In short, everyone interprets the results in terms of logged odds relative to the reference (as in logistic regression), not in terms of logged expected count (as in what is typically referred to as a log-linear model).
Can someone clarify what this function is actually doing and what the fitted coefficients actually mean?
nnet::multinom is fitting a multinomial logistic regression as I understand...
If you check the source code of the package (https://github.com/cran/nnet/blob/master/R/multinom.R and https://github.com/cran/nnet/blob/master/R/nnet.R), you will see that the multinom function does indeed use counts (a common form of input for a multinomial regression model; see also, e.g., the MGLM or mclogit packages), and that it fits the multinomial regression model using a softmax transform to go from predictions on the additive log-ratio scale to predicted probabilities. The softmax transform is the inverse link of a multinomial regression model. The way the multinom predictions are obtained (cf. Predictions from nnet::multinom) is also exactly what you would expect for a multinomial regression model using an additive log-ratio parameterization, i.e. with one outcome category as the baseline.
That is, the coefficients predict the logged odds relative to the reference baseline category (i.e. it is doing a logistic regression), not the logged expected counts (like a log-linear model).
This is shown by the fact that model predictions are calculated as
fit <- nnet::multinom(...)
X <- model.matrix(fit) # covariate matrix / design matrix
betahat <- t(rbind(0, coef(fit))) # model coefficients, with explicit zero row added for reference category & transposed
preds <- mclustAddons::softmax(X %*% betahat)
Furthermore, I verified that the vcov matrix returned by nnet::multinom matches the one obtained from the formula for the vcov matrix of a multinomial regression model (see Faster way to calculate the Hessian / Fisher Information Matrix of a nnet::multinom multinomial regression in R using Rcpp & Kronecker products).
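To make that check concrete, here is a hedged sketch (simulated data and my own variable names, not taken from the question) that builds the Fisher information of the multinomial logit by hand as a sum of Kronecker products and compares its inverse to vcov() from nnet::multinom; it assumes the parameter ordering in vcov() is category-major, e.g. b:(Intercept), b:x, c:(Intercept), c:x.
library(nnet)
set.seed(1)
d   <- data.frame(x = rnorm(200),
                  y = factor(sample(c("a", "b", "c"), 200, replace = TRUE)))
fit <- multinom(y ~ x, data = d, trace = FALSE)

X <- model.matrix(~ x, data = d)        # n x p design matrix
P <- fitted(fit)[, -1, drop = FALSE]    # fitted probabilities of the non-reference categories
K <- ncol(P); p <- ncol(X)

# Fisher information: sum over observations of (diag(p_i) - p_i p_i') %x% (x_i x_i')
info <- matrix(0, K * p, K * p)
for (i in seq_len(nrow(X))) {
  W    <- diag(P[i, ], nrow = K) - tcrossprod(P[i, ])
  info <- info + kronecker(W, tcrossprod(X[i, ]))
}

max(abs(solve(info) - vcov(fit)))       # should be numerically close to zero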
Is it not the case that a multinomial regression model can always be reformulated as a Poisson log-linear model (i.e. as a Poisson GLM) using the Poisson trick (glmnet, for example, uses the Poisson trick to fit multinomial regression models as a Poisson GLM)?

Is there a function to obtain pooled standardized coefficients from linear models fitted to a multiply imputed (MI) database?

I imputed missing data using the mice package.
I fitted the linear models using: summary(pool(with(imputed_base_finale, lm(...))))
I tried to obtain standardized coefficients with the lm.beta function, but it doesn't work:
lm.beta(with(imputed_base_finale, lm(...)))
Error in lm.beta(with(imputed_base_finale, lm(...))) :
  object has to be of class lm
How can I obtain these standardized coefficients?
Thank you for your help!
lm.beta works on lm objects and adds standardized coefficients; it was not built to work on mira objects.
Have you considered using scale on the data before you build the model, which effectively gives you standardized coefficients?
Instead of standardizing the data before imputation, you could also apply the scaling as post-processing during imputation (a related per-imputation approach is sketched after the example below).
I am not sure which of these would be the most robust option.
require(mice)
# non-standardized
imp <- mice(nhanes2)
pool(with(imp,lm(chl ~ bmi)))
# standardized
imp_scale <- mice(scale(nhanes2[,c('bmi','chl')]))
pool(with(imp_scale,lm(chl ~ bmi)))
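As a hedged alternative (a sketch, not necessarily the most robust option): standardize within each completed dataset after imputation and pool the refitted models by hand with as.mira(); the variables below are just the nhanes2 example from above.
library(mice)
imp <- mice(nhanes2, printFlag = FALSE)

fits <- lapply(seq_len(imp$m), function(i) {
  d <- complete(imp, i)                              # i-th completed dataset
  d[c("chl", "bmi")] <- scale(d[c("chl", "bmi")])    # standardize outcome and predictor
  lm(chl ~ bmi, data = d)
})

summary(pool(as.mira(fits)))                         # pooled coefficients on the standardized scale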

How to extract R squared from an ARIMA model

Is it possible to calculate an R squared value from an ARIMA model in R?
This is the output given by summary(model).
Edit: I am worried about the biases associated with MAPE and other percentage errors. The quantities I'm predicting are relatively small, so I feel that R-squared, correlation, or some other metric might be a better indicator.
Once you have ARMA errors, it is not a simple linear regression any more.
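If, as the edit suggests, you still want a rough correlation-based summary despite this caveat, one option (a sketch only, using AirPassengers as placeholder data) is the squared correlation between the one-step fitted values and the observed series; this is an in-sample measure, not a forecast-accuracy metric.
library(forecast)
fit <- auto.arima(AirPassengers)                         # placeholder series and model
cor(fitted(fit), AirPassengers, use = "complete.obs")^2  # rough in-sample pseudo-R-squared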

Can I test for autocorrelation in a generalized least squares model?

I am trying to use a generalized least squares model (gls in R) on my panel data to deal with an autocorrelation problem.
I do not want to include lags of any variables.
I am trying to use the Durbin-Watson test (dwtest in R) to check for autocorrelation in my generalized least squares (gls) model.
However, I find that dwtest does not work on gls fits, while it does work on other model objects such as those from lm.
Is there a way to check for autocorrelation in my gls model?
The Durbin-Watson test is designed to check for the presence of autocorrelation in standard least-squares models (such as one fitted by lm). If autocorrelation is detected, one can then capture it explicitly in the model using, for example, generalized least squares (gls in R). My understanding is that Durbin-Watson is not appropriate for then testing the "goodness of fit" of the resulting model, as gls residuals may no longer follow the same distribution as residuals from the standard lm model. (Somebody with deeper knowledge of statistics should correct me if I'm wrong.)
With that said, function durbinWatsonTest from the car package will accept arbitrary residuals and return the associated test statistic. You can therefore do something like this:
v <- gls( ... )$residuals    # residuals from the fitted gls model
attr(v, "std") <- NULL       # get rid of the additional "std" attribute
car::durbinWatsonTest(v)     # returns the DW statistic for an arbitrary residual vector
Note that durbinWatsonTest will compute p-values only for lm models (likely due to the considerations described above), but you can estimate them empirically by permuting your data / residuals.
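A sketch of that permutation idea, reusing the residual vector v prepared above: compare the observed DW statistic with its distribution under random re-orderings of the residuals.
set.seed(42)
dw_obs  <- car::durbinWatsonTest(v)                           # observed DW statistic
dw_perm <- replicate(2000, car::durbinWatsonTest(sample(v)))  # DW statistics under random ordering
mean(abs(dw_perm - 2) >= abs(dw_obs - 2))                     # empirical two-sided p-value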

Variance of a Time Series Fitted to an ARIMA Model

I think this is a basic question, but maybe I am confusing the concepts.
Suppose I fit an ARIMA model to a time series using, for example, the function auto.arima() in the R forecast package. The model assumes constant variance. How do I obtain that variance? Is it the variance of the residuals?
If I use the model for forecasting, I know that it gives me the conditional mean. I'd like to know the (constant) variance as well.
Thank you.
Bruno
From the arima() help, I see:
sigma2: the MLE of the innovations variance.
var.coef: the estimated variance matrix of the coefficients coef, which can be extracted by the vcov method.
Which one you want will depend on your model, but I am pretty sure you want sigma2.
To get sigma2, do:
?arima                       # see the help page quoted above
library(forecast)            # provides auto.arima()
x <- cumsum(rcauchy(1000))   # simulate an integrated (random-walk-like) series
aax <- auto.arima(x)         # fit an ARIMA model automatically
str(aax)                     # inspect the components of the fitted object
aax$sigma2                   # the MLE of the innovations variance
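For the other quantity mentioned in the help excerpt, var.coef (the estimated variance matrix of the coefficients), you can inspect the corresponding component of the fitted object; this is just the component named in the help above.
aax$var.coef                 # estimated variance matrix of the ARMA coefficients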
