ANOVA table by variable in R

I'm using gamlss() from the gamlss package (version 5.4-1) in R to fit a generalized additive model for location, scale and shape.
My model looks like this:
propvoc3 = gamlss(proporcion.voc ~ familiaridad * proporcion)
When I want to see the ANOVA table, I get this output:
Mu link function: identity
Mu Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 5.625e-01 9.476e-02 5.936 1.9e-06 ***
familiaridaddesconocido -1.094e-01 1.059e-01 -1.032 0.31042
proporcionmayor 4.375e-01 1.340e-01 3.265 0.00281 **
proporcionmenor 1.822e-17 1.340e-01 0.000 1.00000
familiaridaddesconocido:proporcionmayor -3.281e-01 1.708e-01 -1.921 0.06464 .
familiaridaddesconocido:proporcionmenor 5.469e-01 1.708e-01 3.201 0.00331 **
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
------------------------------------------------------------------
I just want to know if there is a way to get the values per variable rather than for every term, i.e. one test per factor instead of one row per dummy-coded level.
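One possible approach (a sketch, not from the original question): drop1() has a method for gamlss fits that reports a single likelihood-ratio test per term, so a factor is tested as a whole rather than level by level. An explicit data argument is assumed here; 'datos' is a stand-in name for the data frame holding the variables.

library(gamlss)

# refit with data= so drop1() can re-evaluate the model;
# 'datos' is a hypothetical data frame with proporcion.voc, familiaridad, proporcion
propvoc3 <- gamlss(proporcion.voc ~ familiaridad * proporcion, data = datos)

# one likelihood-ratio test per term in the mu formula,
# grouping all dummy-coded levels of each factor together
drop1(propvoc3)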

Regression output shows some variables twice, one with a capital letter

I am running a linear regression in R. The output shows some variables (equity & Equity, and loan & Loan) twice, once in lowercase and once with a capital letter. In the dataset they are always written in lowercase, yet they appear in two different ways when I run the regression. I could not find the answer online, so maybe some of you can help me out? Any ideas are highly appreciated!
Model1 <- lm(Lifetime_CO2 ~ signatory + as.factor(Finance_Type), data = Data_dup)
summary(Model1)
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 90.351 4.397 20.550 < 2e-16 ***
signatory 7.378 1.732 4.259 2.10e-05 ***
as.factor(Finance_Type)equity -29.059 4.640 -6.263 4.18e-10 ***
as.factor(Finance_Type)Equity 14.549 38.971 0.373 0.708914
as.factor(Finance_Type)government grant -81.284 22.784 -3.568 0.000365 ***
as.factor(Finance_Type)insurance -2.810 16.397 -0.171 0.863948
as.factor(Finance_Type)loan -25.183 4.422 -5.695 1.32e-08 ***
as.factor(Finance_Type)Loan 14.549 27.731 0.525 0.599852
as.factor(Finance_Type)refinancing bond -9.728 19.878 -0.489 0.624578
as.factor(Finance_Type)refinancing equity -40.601 27.731 -1.464 0.143252
as.factor(Finance_Type)refinancing loan -26.889 5.344 -5.031 5.09e-07 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
You can convert the upper-case values in the Finance_Type column to lower-case, or vice versa.
By the way, as.factor() is not needed here: lm() converts character variables to factors automatically (use factor() with a levels argument if you want to re-order the levels).
# collapse 'Equity'/'Loan' into 'equity'/'loan'
Data_dup$Finance_Type <- tolower(Data_dup$Finance_Type)
Model1 <- lm(Lifetime_CO2 ~ signatory + Finance_Type, data = Data_dup)
summary(Model1)
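Not part of the original answer, but a quick way to confirm the duplicate spellings before and after the fix:

# 'equity'/'Equity' and 'loan'/'Loan' are counted as separate values here
table(Data_dup$Finance_Type)

# after tolower() each type collapses into a single level
table(tolower(Data_dup$Finance_Type))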

Changing base category in latent class analysis

I'm using the glca package to run a latent class analysis. I want to see how covariates (other than the indicators used to construct the latent classes) affect the probability of class assignment. I understand this is a multinomial logistic regression, so my question is: is there a way to change the base (reference) latent class? For example, my model is currently a 4-class model, and the output shows the effect of covariates on class prevalence with respect to Class 4 (the default base category). I want to change this base category to, say, Class 2.
My code is as follows
fc <- item(intrst, respect, expert, inclu, contbt,secure,pay,bonus, benft, innov, learn, rspons, promote, wlb, flex) ~ atenure+super+sal+minority+female+age40+edu+d_bpw+d_skill
lca4_cov <- glca(fc, data = bpw, nclass = 4, seed = 1)
and I get the following output.
> coef(lca4_cov)
Class 1 / 4 :
Odds Ratio Coefficient Std. Error t value Pr(>|t|)
(Intercept) 1.507537 0.410477 0.356744 1.151 0.24991
atenure 0.790824 -0.234679 0.102322 -2.294 0.02183 *
super 1.191961 0.175600 0.028377 6.188 6.29e-10 ***
sal 0.937025 -0.065045 0.035490 -1.833 0.06686 .
minority 2.002172 0.694233 0.060412 11.492 < 2e-16 ***
female 1.210653 0.191160 0.059345 3.221 0.00128 **
age40 1.443603 0.367142 0.081002 4.533 5.89e-06 ***
edu 1.069771 0.067444 0.042374 1.592 0.11149
d_bpw 0.981104 -0.019077 0.004169 -4.576 4.78e-06 ***
d_skill 1.172218 0.158898 0.036155 4.395 1.12e-05 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Class 2 / 4 :
Odds Ratio Coefficient Std. Error t value Pr(>|t|)
(Intercept) 3.25282 1.17952 0.43949 2.684 0.00729 **
atenure 0.95131 -0.04992 0.12921 -0.386 0.69926
super 1.16835 0.15559 0.03381 4.602 4.22e-06 ***
sal 1.01261 0.01253 0.04373 0.287 0.77450
minority 0.72989 -0.31487 0.08012 -3.930 8.55e-05 ***
female 0.45397 -0.78971 0.07759 -10.178 < 2e-16 ***
age40 1.26221 0.23287 0.09979 2.333 0.01964 *
edu 1.29594 0.25924 0.05400 4.801 1.60e-06 ***
d_bpw 0.97317 -0.02720 0.00507 -5.365 8.26e-08 ***
d_skill 1.16223 0.15034 0.04514 3.330 0.00087 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Class 3 / 4 :
Odds Ratio Coefficient Std. Error t value Pr(>|t|)
(Intercept) 0.218153 -1.522557 0.442060 -3.444 0.000575 ***
atenure 0.625815 -0.468701 0.123004 -3.810 0.000139 ***
super 1.494112 0.401532 0.031909 12.584 < 2e-16 ***
sal 1.360924 0.308164 0.044526 6.921 4.72e-12 ***
minority 0.562590 -0.575205 0.081738 -7.037 2.07e-12 ***
female 0.860490 -0.150253 0.072121 -2.083 0.037242 *
age40 1.307940 0.268453 0.100376 2.674 0.007495 **
edu 1.804949 0.590532 0.054522 10.831 < 2e-16 ***
d_bpw 0.987353 -0.012727 0.004985 -2.553 0.010685 *
d_skill 1.073519 0.070942 0.045275 1.567 0.117163
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
I would appreciate it if anyone could point me to code or references that address this. Thanks in advance.
Try using the decreasing option:
lca4_cov <- glca(fc, data = bpw, nclass = 4, seed = 1, decreasing = TRUE)
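A side note (not from the thread): because all printed coefficients are log-odds against the same base class, any other contrast can be recovered by subtraction, e.g. Class 1 versus Class 2 equals (Class 1 vs Class 4) minus (Class 2 vs Class 4). A minimal sketch with values copied from the output above, showing only two covariates for brevity:

# log-odds of Class 1 vs Class 2 = (Class 1 vs 4) - (Class 2 vs 4)
b_1vs4 <- c(female = 0.191160, age40 = 0.367142)   # from the Class 1 / 4 table
b_2vs4 <- c(female = -0.789710, age40 = 0.232870)  # from the Class 2 / 4 table
exp(b_1vs4 - b_2vs4)                               # odds ratios, Class 1 vs Class 2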

Undefined columns error when performing TukeyHSD

I'm extremely new to R and need your help!
I performed an ANOVA/factorial ANOVA and wanted to do a Tukey test, but I got this error:
Error in `[.data.frame`(mf, mf.cols[[i]]) : undefined columns selected
Here is what I did for the ANOVA (I removed the section testing for normality):
> data.aov<- aov(`FREQUENCY OF INGESTION` ~ `HYDROLOGY REGIME`*`DEPTH ZONE`*`ST. LOCATION`)
> anova(data.aov)
Analysis of Variance Table
Response: FREQUENCY OF INGESTION
Df Sum Sq Mean Sq F value Pr(>F)
`HYDROLOGY REGIME` 1 0.0002 0.0001530 0.0218 0.88274
`DEPTH ZONE` 3 0.0147 0.0049134 0.6990 0.55288
`ST. LOCATION` 1 0.0202 0.0201579 2.8677 0.09085 .
`HYDROLOGY REGIME`:`DEPTH ZONE` 2 0.0229 0.0114514 1.6291 0.19691
`DEPTH ZONE`:`ST. LOCATION` 1 0.0018 0.0017877 0.2543 0.61422
Residuals 651 4.5761 0.0070293
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
> TukeyHSD(data.aov)
Error in `[.data.frame`(mf, mf.cols[[i]]) : undefined columns selected
> library(multcompView)
> multcompLetters(extract_p(TukeyHSD(aov(`FREQUENCY OF INGESTION`~`HYDROLOGY REGIME`*`DEPTH ZONE`*`ST. LOCATION`))))
Try using the TukeyC package. It offers several facilities that other packages lack for factorial experiments, split-plot designs, and so on. See the documentation: https://cran.r-project.org/web/packages/TukeyC/TukeyC.pdf
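Not part of the original answer, but this particular error often comes from the backticked column names with spaces: TukeyHSD() rebuilds the model frame and can fail to match non-syntactic names. A hedged sketch of a workaround, assuming the data live in a data frame called df:

# make the names syntactic and refit with an explicit data argument
names(df) <- make.names(names(df))   # "HYDROLOGY REGIME" becomes "HYDROLOGY.REGIME", etc.
data.aov <- aov(FREQUENCY.OF.INGESTION ~ HYDROLOGY.REGIME * DEPTH.ZONE * ST..LOCATION,
                data = df)
TukeyHSD(data.aov)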

How do I get the minimum model for a quasipoisson GLM

I have run a quasi-Poisson GLM with the following code:
Output3 <- glm(GCN ~ DHSI + N + P, data = PondsTask2, family = quasipoisson(link = "log"))
and received this output:
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.69713 0.56293 -3.015 0.00272 **
DHSI 3.44795 0.74749 4.613 0.00000519 ***
N -0.59648 0.36357 -1.641 0.10157
P -0.01964 0.37419 -0.052 0.95816
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
DHSI is statistically significant, but the other two variables are not. How do I go about dropping variables until I have the minimum model?
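The question is left unanswered above; one conventional route (a sketch, not a definitive recipe) is backward elimination with drop1(), using F tests because AIC is not defined for quasi-families:

# test each term by single-term deletion; P is the weakest term in the output above
drop1(Output3, test = "F")
Output3a <- update(Output3, . ~ . - P)    # drop P and refit

# reassess the remaining terms; drop N if it is still not significant
drop1(Output3a, test = "F")
Output3b <- update(Output3a, . ~ . - N)   # minimal model: GCN ~ DHSI
summary(Output3b)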

How to get R-squared for a glmmTMB zero-inflated negative binomial mixed model in R

I fitted a zero-inflated negative binomial model with glmmTMB as below:
M2<- glmmTMB(psychological100~ (1|ID) + time*MNM01, data=mnmlong,
ziformula=~ (1|ID) + time*MNM01, family=nbinom2())
summary(M2)
Here is the output
Family: nbinom2 ( log )
Formula: psychological100 ~ (1 | ID) + time * MNM01
Zero inflation: ~(1 | ID) + time * MNM01
Data: mnmlong
AIC BIC logLik deviance df.resid
3507.0 3557.5 -1742.5 3485.0 714
Random effects:
Conditional model:
Groups Name Variance Std.Dev.
ID (Intercept) 0.2862 0.535
Number of obs: 725, groups: ID, 337
Zero-inflation model:
Groups Name Variance Std.Dev.
ID (Intercept) 0.5403 0.7351
Number of obs: 725, groups: ID, 337
Overdispersion parameter for nbinom2 family (): 3.14
Conditional model:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 2.89772 0.09213 31.451 < 2e-16 ***
time -0.08724 0.01796 -4.858 1.18e-06 ***
MNM01 0.02094 0.12433 0.168 0.866
time:MNM01 -0.01193 0.02420 -0.493 0.622
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Zero-inflation model:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.29940 0.17298 -1.731 0.083478 .
time 0.12204 0.03338 3.656 0.000256 ***
MNM01 0.06771 0.24217 0.280 0.779790
time:MNM01 -0.02821 0.04462 -0.632 0.527282
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
I wanted to know the R-squared of the model and tried the following two approaches, without success:
MuMIn::r.squaredGLMM(M2)
Error in r.squaredGLMM.glmmTMB(M2) : r.squaredGLMM cannot (yet)
handle 'glmmTMB' object with zero-inflation
performance::r2_zeroinflated(M2)
Error in residuals.glmmTMB(model, type = "pearson") : pearson
residuals are not implemented for models with zero-inflation or
variable dispersion
What do you advise?
Try the pseudo-R^2 based on a likelihood ratio (MuMIn::r.squaredLR). You may need to supply a null model for comparison explicitly.
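A minimal sketch of that suggestion, assuming r.squaredLR can work from the glmmTMB fit's log-likelihood; the null model here keeps the random intercepts and an intercept-only zero-inflation part (one reasonable choice, not the only one):

library(MuMIn)
library(glmmTMB)

# intercept-only null model matching the random-effects and zero-inflation structure
M0 <- glmmTMB(psychological100 ~ (1 | ID), ziformula = ~ (1 | ID),
              data = mnmlong, family = nbinom2())

# likelihood-ratio-based pseudo R-squared, with the null model supplied explicitly
r.squaredLR(M2, null = M0)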
