There are two models fitted with lmer(). The homework asks me to compare them graphically and numerically, and I don't know how to do this beyond comparing the AIC, fixed effects, and random effects of the two models.
the first model:
child.mutil <- lmer(HIV$CD4PCT ~ HIV$time1 + (1 | HIV$newpid))
the second model:
child.mutil2 <- lmer(HIV2$CD4PCT ~ HIV2$time1 + treatment.group + visage.group + (1 | HIV2$newpid))
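For reference, here is a minimal sketch of the kind of comparison I have in mind. It assumes both models are refit on the same data frame (here HIV2) so they are directly comparable, and that treatment.group and visage.group are columns of HIV2; those are assumptions on my part.

```r
library(lme4)

# Assumption: both models refit on the same rows of HIV2, with ML (REML = FALSE)
# so the fixed-effects structures can be compared with a likelihood-ratio test.
m1 <- lmer(CD4PCT ~ time1 + (1 | newpid), data = HIV2, REML = FALSE)
m2 <- lmer(CD4PCT ~ time1 + treatment.group + visage.group + (1 | newpid),
           data = HIV2, REML = FALSE)

# Numerical comparison: information criteria and a likelihood-ratio test
AIC(m1, m2)
BIC(m1, m2)
anova(m1, m2)   # chi-square test of the added fixed effects

# Graphical comparison: residuals vs. fitted values for each model
par(mfrow = c(1, 2))
plot(fitted(m1), resid(m1), main = "Model 1", xlab = "Fitted", ylab = "Residuals")
plot(fitted(m2), resid(m2), main = "Model 2", xlab = "Fitted", ylab = "Residuals")
```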
I am performing mixed-effects modeling using lme4. As you would expect, the fixed and random effects can come out with either positive or negative coefficients. How do I put bounds on the final coefficients so that I get only positive ones?
I am also trying to use stan_lmer for Bayesian modeling.
Example: [image: the lmer model formula]
If I check coef(fm1), I get the following output: [image: coef(fm1) output]
I want to restrict the coefficients to be only positive. Please help.
When you run a multiple linear regression, you get the multiple R-squared in the summary output.
My question is whether I can get an R-squared for each independent variable, without having to fit a separate regression for each predictor.
For example, is it possible to get the R-squared for each predictor variable, next to its p-value in the coefficient table?
In regression models, individual variables do not have an R-squared; there is only an R-squared for the complete model. The variance explained by any single independent variable in a regression model depends on the other independent variables.
If you need the added value of an independent variable, that is, the variance this IV explains over and above all the others, you can fit two regression models: one with this IV and one without. The difference in R-squared is the variance this IV explains after all the others have explained their share. But if you do this for every variable, the differences won't add up to the total R-squared.
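A minimal sketch of this two-model approach, using a hypothetical data frame dat with outcome y and predictors x1, x2, x3:

```r
# Hypothetical data frame `dat` with outcome y and predictors x1, x2, x3
full    <- lm(y ~ x1 + x2 + x3, data = dat)
reduced <- lm(y ~ x2 + x3, data = dat)   # same model, but without x1

# Added value of x1: the R-squared it explains over and above x2 and x3
summary(full)$r.squared - summary(reduced)$r.squared
```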
Alternatively, you may use squared beta weights (standardized coefficients) to roughly estimate the effect size of a variable in a model, but this value is not directly comparable to R-squared.
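One way to get those squared beta weights (a sketch, assuming all variables are numeric and again using the hypothetical dat) is to refit the model on standardized variables:

```r
# Standardize outcome and predictors, refit, then square the slopes
dat_z   <- as.data.frame(scale(dat[, c("y", "x1", "x2", "x3")]))
std_fit <- lm(y ~ x1 + x2 + x3, data = dat_z)
coef(std_fit)[-1]^2   # squared beta weights: a rough effect-size guide
```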
That said, this question would be better posted on Cross Validated than on Stack Overflow.
I'm looking for a function to visualize interaction effects that works with ivreg or plm objects. My model is a 2SLS model with fixed effects, but there seem to be no packages available for computing interaction effects for such models in R.
I'd appreciate it if someone could help with this.
You might want to have a look at interplot(). You can use this function to visualize, e.g., the estimated coefficient of regressor X on outcome Y conditional on the values of Z by plugging in the fitted model from ivreg(). (The confidence intervals are trickier, but you are probably less interested in those in the first instance.)
https://cran.r-project.org/web/packages/interplot/vignettes/interplot-vignette.html
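A rough sketch of the interplot() interface, shown here with a plain lm() fit and hypothetical variables Y, X, Z in a hypothetical data frame mydata; whether interplot() accepts ivreg/plm objects directly is not guaranteed, so you may need to refit or adapt the model object:

```r
# install.packages("interplot")  # if not already installed
library(interplot)

# Hypothetical example: how the effect of X on Y changes with Z,
# estimated here with an ordinary lm() that includes the interaction term
fit <- lm(Y ~ X * Z, data = mydata)

# Plot the estimated coefficient of X across the observed values of Z
interplot(m = fit, var1 = "X", var2 = "Z")
```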
I'm fitting a multiple linear regression model with 6 predictors (3 continuous and 3 categorical). The residuals vs. fitted plot shows that there is heteroscedasticity, which is also confirmed by bptest().
[image: summary of sales_lm]
[image: residuals vs. fitted plot]
I also calculated the RMSE for my training and test data, as shown below:
sqrt(mean((sales_train_lm_pred - sales_train$SALES)^2))
[1] 3533.665
sqrt(mean((sales_test_lm_pred - sales_test$SALES)^2))
[1] 3556.036
I also tried fitting a glm() model, but it still didn't fix the heteroscedasticity.
glm.test3 <- glm(SALES ~ ., weights = 1/sales_fitted$.resid^2, family = gaussian(link = "identity"), data = sales_train)
The residuals vs. fitted plot for glm.test3 looks weird:
[image: residuals vs. fitted plot for glm.test3]
Could you please advise what I should do next?
Thanks in advance!
Observing heteroscedasticity in your data means that the error variance is not constant. You can try the following:
1) Apply the one-parameter Box-Cox transformation (of which the log transform is a special case) with a suitable lambda to one or more variables in the data set. The optimal lambda can be determined by looking at its log-likelihood function; take a look at MASS::boxcox. (A sketch follows this list.)
2) Play with your feature set (remove variables, add new ones).
3) Use weighted least squares regression (also sketched below).
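Here is a sketch of options 1) and 3), reusing the SALES ~ . specification from your question and assuming SALES is strictly positive (required for Box-Cox); the weighting scheme shown is one common heuristic, not the only one:

```r
library(MASS)

# 1) Box-Cox: estimate a lambda for transforming the response
sales_lm <- lm(SALES ~ ., data = sales_train)
bc       <- boxcox(sales_lm)            # plots the log-likelihood over lambda
lambda   <- bc$x[which.max(bc$y)]       # lambda with the highest log-likelihood
sales_train$SALES_bc <- (sales_train$SALES^lambda - 1) / lambda
sales_bc_lm <- lm(SALES_bc ~ . - SALES, data = sales_train)

# 3) Weighted least squares: model the spread of the residuals as a
#    function of the fitted values, then weight by its inverse square
abs_res_fit <- lm(abs(resid(sales_lm)) ~ fitted(sales_lm))
w           <- 1 / fitted(abs_res_fit)^2
sales_wls   <- lm(SALES ~ . - SALES_bc, data = sales_train, weights = w)
```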
I am trying to compare differences between coefficients in different regression equations.
Specifically, I have 2 regressions looking at the effect of Guilt, Perceived Responsibility, and Feeling on Importance to Donate:
aov_I <- aov(newdata_I$AV_importance_to_donate ~ newdata_I$AV_guilty + newdata_I$AV_percieved_resp + feeling_I)
summary(aov_I)
aov_S <- aov(newdata_S$AV_importance_to_donate ~ newdata_S$AV_guilty + newdata_S$AV_percieved_resp + feeling_S)
summary(aov_S)
I would like to compare the differences between the coefficients in these two different regression equations.
How can I do this?
Thank you so much in advance!
You can extract just the coefficients with aov_I$coefficients[2] and aov_S$coefficients[2], combine them into a data frame using cbind(), and then view them in a bar graph if you don't need a formal statistical comparison.
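A minimal sketch of that suggestion, using the aov_I and aov_S objects from the question:

```r
# Pull the coefficient on AV_guilty (the 2nd coefficient) from each model
coef_I <- coef(aov_I)[2]
coef_S <- coef(aov_S)[2]

# Combine them for a side-by-side look
cbind(I_group = coef_I, S_group = coef_S)

# Quick visual comparison
barplot(c(coef_I, coef_S),
        names.arg = c("I group", "S group"),
        ylab = "Coefficient of AV_guilty",
        main = "Coefficient comparison across regressions")
```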