How can I estimate the RER and its confidence intervals from an additive 5-year relative survival model?
I used the following syntax:
model3<- rsadd(Surv(durata_days,status_cat)~agediag_cat+sex+country,
ratetable=rt,data=nodco,
rmap=list(age=agediag*365.241, year=year(dtnewdiag),sex=sex, country=country), int = 5, method="glm.poi")
summary(model3)
Call:
rsadd(formula = Surv(durata_days, status_cat) ~ agediag_cat +
sex + country, data = nodco, ratetable = rt, int = 5, method = "glm.poi",
rmap = list(age = agediag * 365.241, year = year(dtnewdiag),
sex = sex, country = country))
Coefficients:
Estimate Std. Error z value Pr(>|z|)
agediag_cat55-69 0.08195 0.05134 1.596 0.11048
agediag_cat>=70 0.42853 0.05053 8.480 < 2e-16 ***
sexfemale -0.21838 0.04065 -5.372 7.77e-08 ***
countryEstonia 0.18457 0.06438 2.867 0.00415 **
countryPortugal 0.09580 0.05700 1.681 0.09283 .
countrySpain 0.16414 0.05742 2.859 0.00425 **
countrySwitzerland -0.19424 0.06686 -2.905 0.00367 **
fu [0,1] -0.26606 0.06715 -3.962 7.43e-05 ***
fu (1,2] -0.96752 0.07516 -12.873 < 2e-16 ***
fu (2,3] -1.44282 0.08988 -16.053 < 2e-16 ***
fu (3,4] -1.80198 0.12497 -14.419 < 2e-16 ***
fu (4,5] -2.20702 0.18353 -12.026 < 2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
I tried the following code:
round(exp(cbind(RR = coef(model3), confint(model3))),3)
Waiting for profiling to be done...
Error in `[<-`(`*tmp*`, , names(coef(fm)), value = coef(fm)) :
subscript out of bounds
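The error arises because `confint()` attempts profile-likelihood intervals through the `glm` machinery, which is not wired up for `rsadd` objects. A common workaround is Wald-type intervals, exp(estimate ± z·SE), built directly from the coefficient table. Below is a minimal sketch demonstrated on an ordinary `glm` so it is reproducible anywhere; for the `rsadd` fit you would substitute the estimates and standard errors printed by `summary(model3)` (check `str(summary(model3))` for where your relsurv version stores that table).

```r
# Wald-type relative-risk CIs: exp(beta +/- z * SE).
# For an rsadd fit, feed in the Estimate and Std. Error columns
# from the summary(model3) coefficient table instead.
wald_rr <- function(est, se, level = 0.95) {
  z <- qnorm(1 - (1 - level) / 2)
  round(exp(cbind(RR = est, lower = est - z * se, upper = est + z * se)), 3)
}

# reproducible demo on a toy Poisson glm
set.seed(1)
toy <- data.frame(y = rpois(200, 3), x = rnorm(200))
fit <- glm(y ~ x, family = poisson, data = toy)
sm  <- summary(fit)$coefficients
wald_rr(sm[, "Estimate"], sm[, "Std. Error"])
```

Wald intervals are symmetric on the log scale, so they can differ slightly from profile-likelihood intervals in small samples, but they are the standard choice when profiling is unavailable.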
I'm using the glca package to run a latent class analysis. I want to see how covariates (other than indicators used to construct latent classes) affect the probability of class assignment. I understand this is a multinomial logistic regression, and thus, my question is, is there a way I can change the base reference latent class? For example, my model is currently a 4-class model, and the output shows the effect of covariates on class prevalence with respect to Class-4 (base category) as default. I want to change this base category to, for example, Class-2.
My code is as follows
fc <- item(intrst, respect, expert, inclu, contbt,secure,pay,bonus, benft, innov, learn, rspons, promote, wlb, flex) ~ atenure+super+sal+minority+female+age40+edu+d_bpw+d_skill
lca4_cov <- glca(fc, data = bpw, nclass = 4, seed = 1)
and I get the following output.
> coef(lca4_cov)
Class 1 / 4 :
Odds Ratio Coefficient Std. Error t value Pr(>|t|)
(Intercept) 1.507537 0.410477 0.356744 1.151 0.24991
atenure 0.790824 -0.234679 0.102322 -2.294 0.02183 *
super 1.191961 0.175600 0.028377 6.188 6.29e-10 ***
sal 0.937025 -0.065045 0.035490 -1.833 0.06686 .
minority 2.002172 0.694233 0.060412 11.492 < 2e-16 ***
female 1.210653 0.191160 0.059345 3.221 0.00128 **
age40 1.443603 0.367142 0.081002 4.533 5.89e-06 ***
edu 1.069771 0.067444 0.042374 1.592 0.11149
d_bpw 0.981104 -0.019077 0.004169 -4.576 4.78e-06 ***
d_skill 1.172218 0.158898 0.036155 4.395 1.12e-05 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Class 2 / 4 :
Odds Ratio Coefficient Std. Error t value Pr(>|t|)
(Intercept) 3.25282 1.17952 0.43949 2.684 0.00729 **
atenure 0.95131 -0.04992 0.12921 -0.386 0.69926
super 1.16835 0.15559 0.03381 4.602 4.22e-06 ***
sal 1.01261 0.01253 0.04373 0.287 0.77450
minority 0.72989 -0.31487 0.08012 -3.930 8.55e-05 ***
female 0.45397 -0.78971 0.07759 -10.178 < 2e-16 ***
age40 1.26221 0.23287 0.09979 2.333 0.01964 *
edu 1.29594 0.25924 0.05400 4.801 1.60e-06 ***
d_bpw 0.97317 -0.02720 0.00507 -5.365 8.26e-08 ***
d_skill 1.16223 0.15034 0.04514 3.330 0.00087 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Class 3 / 4 :
Odds Ratio Coefficient Std. Error t value Pr(>|t|)
(Intercept) 0.218153 -1.522557 0.442060 -3.444 0.000575 ***
atenure 0.625815 -0.468701 0.123004 -3.810 0.000139 ***
super 1.494112 0.401532 0.031909 12.584 < 2e-16 ***
sal 1.360924 0.308164 0.044526 6.921 4.72e-12 ***
minority 0.562590 -0.575205 0.081738 -7.037 2.07e-12 ***
female 0.860490 -0.150253 0.072121 -2.083 0.037242 *
age40 1.307940 0.268453 0.100376 2.674 0.007495 **
edu 1.804949 0.590532 0.054522 10.831 < 2e-16 ***
d_bpw 0.987353 -0.012727 0.004985 -2.553 0.010685 *
d_skill 1.073519 0.070942 0.045275 1.567 0.117163
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
I would appreciate it if anyone could point me to code or references that address my problem. Thanks in advance.
Try the decreasing option, which re-orders the latent classes (by prevalence) and therefore changes which class ends up as the base category:
lca4_cov <- glca(fc, data = bpw, nclass = 4, seed = 1, decreasing = TRUE)
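If refitting changes more than you want, you can also re-express the reported coefficients against any baseline without refitting: with class 4 as the base, the log-odds of class k versus class 2 are just the difference of the two reported log-odds. A quick sketch using two values copied from the `coef(lca4_cov)` output above (note the standard errors do not subtract this way; you would need the coefficient covariance matrix for those):

```r
# Multinomial-logit identity: log OR(k vs 2) = log OR(k vs 4) - log OR(2 vs 4).
# Values copied from the coef(lca4_cov) output above (covariate: female).
b_female_1v4 <- 0.191160    # female, Class 1 / 4
b_female_2v4 <- -0.78971    # female, Class 2 / 4
b_female_1v2 <- b_female_1v4 - b_female_2v4
exp(b_female_1v2)           # odds ratio for female, Class 1 vs Class 2
```

This works for every covariate row, so you can rebuild the full "Class k / 2" table from the printed "Class k / 4" tables by vectorised subtraction.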
I am trying to figure out how to calculate the marginal effects of my model using the clogit function in the survival package. The margins package does not seem to work with this type of model, but it does work with multinom and mclogit. However, I am investigating the effects of choice characteristics, not individual characteristics, so it needs to be a conditional logit model. The mclogit function works with the margins package, but its results are wildly different from the results using the clogit function; why is that? Any help calculating the marginal effects from the clogit function would be greatly appreciated.
mclogit output:
Call:
mclogit(formula = cbind(selected, caseID) ~ SysTEM + OWN + cost +
ENVIRON + NEIGH + save, data = atl)
Estimate Std. Error z value Pr(>|z|)
SysTEM 0.139965 0.025758 5.434 5.51e-08 ***
OWN 0.008931 0.026375 0.339 0.735
cost -0.103012 0.004215 -24.439 < 2e-16 ***
ENVIRON 0.675341 0.037104 18.201 < 2e-16 ***
NEIGH 0.419054 0.031958 13.112 < 2e-16 ***
save 0.532825 0.023399 22.771 < 2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Null Deviance: 18380
Residual Deviance: 16670
Number of Fisher Scoring iterations: 4
Number of observations: 8364
clogit output:
Call:
coxph(formula = Surv(rep(1, 25092L), selected) ~ SysTEM + OWN +
cost + ENVIRON + NEIGH + save + strata(caseID), data = atl,
method = "exact")
n= 25092, number of events= 8364
coef exp(coef) se(coef) z Pr(>|z|)
SysTEM 0.133184 1.142461 0.034165 3.898 9.69e-05 ***
OWN -0.015884 0.984241 0.036346 -0.437 0.662
cost -0.179833 0.835410 0.005543 -32.442 < 2e-16 ***
ENVIRON 1.186329 3.275036 0.049558 23.938 < 2e-16 ***
NEIGH 0.658657 1.932195 0.042063 15.659 < 2e-16 ***
save 0.970051 2.638079 0.031352 30.941 < 2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
exp(coef) exp(-coef) lower .95 upper .95
SysTEM 1.1425 0.8753 1.0685 1.2216
OWN 0.9842 1.0160 0.9166 1.0569
cost 0.8354 1.1970 0.8264 0.8445
ENVIRON 3.2750 0.3053 2.9719 3.6091
NEIGH 1.9322 0.5175 1.7793 2.0982
save 2.6381 0.3791 2.4809 2.8053
Concordance= 0.701 (se = 0.004 )
Rsquare= 0.103 (max possible= 0.688 )
Likelihood ratio test= 2740 on 6 df, p=<2e-16
Wald test = 2465 on 6 df, p=<2e-16
Score (logrank) test = 2784 on 6 df, p=<2e-16
margins output for mclogit
margins(model2A)
SysTEM OWN cost ENVIRON NEIGH save
0.001944 0.000124 -0.001431 0.00938 0.00582 0.0074
margins output for clogit
margins(model2A)
Error in match.arg(type) :
'arg' should be one of “risk”, “expected”, “lp”
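That error comes from `margins` calling `predict(fit, type = "response")`, while `predict.coxph` (which backs `clogit`) only accepts the types `risk`, `expected`, and `lp`, so `margins` cannot handle `clogit` fits directly. For a conditional logit you can compute average marginal effects by hand: within each choice set P_i = exp(x_i'β)/Σ_j exp(x_j'β), and the own-derivative with respect to attribute k is β_k·P_i·(1 − P_i). A self-contained sketch with synthetic data and a made-up coefficient; with a real fit you would use `coef(fit)` and your own data in place of `beta` and `dat`:

```r
# Average marginal effect for a conditional logit, computed by hand.
# `beta` is a stand-in for a coefficient taken from coef(clogit_fit).
set.seed(2)
n_set <- 100                                   # 100 choice sets, 3 alternatives each
dat   <- data.frame(set = rep(seq_len(n_set), each = 3),
                    x   = rnorm(3 * n_set))
beta  <- 0.8                                   # hypothetical clogit coefficient
lp    <- beta * dat$x                          # linear predictor
# within-set choice probabilities (softmax), preserving row order
p     <- ave(lp, dat$set, FUN = function(v) exp(v) / sum(exp(v)))
ame   <- mean(beta * p * (1 - p))              # own-effect: dP_i/dx_i = b*P*(1-P)
ame
```

On the coefficient discrepancy itself: check that both calls really specify the same likelihood (same choice-set grouping and, for `clogit`, the `method` argument), since the two packages parameterise the grouping differently and a mis-specified grouping produces exactly this kind of large disagreement.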
I am doing some count data analysis. The data is in this link:
[1]: https://www.dropbox.com/s/q7fwqicw3ebvwlg/stackquestion.csv?dl=0
Column A is the count data, and other columns are the independent variables. At first I used Poisson regression to analyze it:
m0<-glm(A~.,data=d,family="poisson")
summary(m0)
#We see that the residual deviance is greater than the degrees of freedom so that we have over-dispersion.
Call:
glm(formula = A ~ ., family = "poisson", data = d)
Deviance Residuals:
Min 1Q Median 3Q Max
-28.8979 -4.5110 0.0384 5.4327 20.3809
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 8.7054842 0.9100882 9.566 < 2e-16 ***
B -0.1173783 0.0172330 -6.811 9.68e-12 ***
C 0.0864118 0.0182549 4.734 2.21e-06 ***
D 0.1169891 0.0301960 3.874 0.000107 ***
E 0.0738377 0.0098131 7.524 5.30e-14 ***
F 0.3814588 0.0093793 40.670 < 2e-16 ***
G -0.3712263 0.0274347 -13.531 < 2e-16 ***
H -0.0694672 0.0022137 -31.380 < 2e-16 ***
I -0.0634488 0.0034316 -18.490 < 2e-16 ***
J -0.0098852 0.0064538 -1.532 0.125602
K -0.1105270 0.0128016 -8.634 < 2e-16 ***
L -0.3304606 0.0155454 -21.258 < 2e-16 ***
M 0.2274175 0.0259872 8.751 < 2e-16 ***
N 0.2922063 0.0174406 16.754 < 2e-16 ***
O 0.1179708 0.0119332 9.886 < 2e-16 ***
P 0.0618776 0.0260646 2.374 0.017596 *
Q -0.0303909 0.0060060 -5.060 4.19e-07 ***
R -0.0018939 0.0037642 -0.503 0.614864
S 0.0383040 0.0065841 5.818 5.97e-09 ***
T 0.0318111 0.0116611 2.728 0.006373 **
U 0.2421129 0.0145502 16.640 < 2e-16 ***
V 0.1782144 0.0090858 19.615 < 2e-16 ***
W -0.5105135 0.0258136 -19.777 < 2e-16 ***
X -0.0583590 0.0043641 -13.373 < 2e-16 ***
Y -0.1554609 0.0042604 -36.489 < 2e-16 ***
Z 0.0064478 0.0001184 54.459 < 2e-16 ***
AA 0.3880479 0.0164929 23.528 < 2e-16 ***
AB 0.1511362 0.0050471 29.945 < 2e-16 ***
AC 0.0557880 0.0181129 3.080 0.002070 **
AD -0.6569099 0.0368771 -17.813 < 2e-16 ***
AE -0.0040679 0.0003960 -10.273 < 2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for poisson family taken to be 1)
Null deviance: 97109.0 on 56 degrees of freedom
Residual deviance: 5649.7 on 26 degrees of freedom
AIC: 6117.1
Number of Fisher Scoring iterations: 6
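The rule of thumb in the comment (residual deviance 5649.7 on only 26 df) can be condensed into a single number: the Pearson dispersion statistic, which should be near 1 for a well-specified Poisson model. For the fitted model above it is `sum(residuals(m0, type = "pearson")^2) / df.residual(m0)`; here is the same check on synthetic counts generated to be overdispersed, so the sketch runs standalone:

```r
# Pearson dispersion statistic: ~1 if the Poisson variance assumption holds,
# well above 1 under overdispersion. Demo with NB-generated counts.
set.seed(3)
x <- rnorm(300)
y <- rnbinom(300, mu = exp(1 + 0.5 * x), size = 1)   # overdispersed by design
m <- glm(y ~ x, family = poisson)
dispersion <- sum(residuals(m, type = "pearson")^2) / df.residual(m)
dispersion   # well above 1 here, as in the question's model
```

A dispersion statistic far above 1 is the usual justification for moving to quasi-Poisson or, as below, negative binomial regression.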
Then I think I should use negative binomial regression for the over-dispersed data. As you can see, I have many independent variables and I wanted to select the important ones, so I decided to use stepwise regression. First, I create a full model:
full.model <- glm.nb(A~., data=d,maxit=1000)
# without maxit (or with maxit=100), this gives: Warning messages: 1: glm.fit: algorithm did not converge; 2: In glm.nb(A ~ ., data = d, maxit = 100) : alternation limit reached
# with maxit=1000, the warnings disappear
summary(full.model)
Call:
glm.nb(formula = A ~ ., data = d, maxit = 1000, init.theta = 2.730327193,
link = log)
Deviance Residuals:
Min 1Q Median 3Q Max
-2.5816 -0.8893 -0.3177 0.4882 1.9073
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 11.8228596 8.3004322 1.424 0.15434
B -0.2592324 0.1732782 -1.496 0.13464
C 0.2890696 0.1928685 1.499 0.13393
D 0.3136262 0.3331182 0.941 0.34646
E 0.3764257 0.1313142 2.867 0.00415 **
F 0.3257785 0.1448082 2.250 0.02447 *
G -0.7585881 0.2343529 -3.237 0.00121 **
H -0.0714660 0.0343683 -2.079 0.03758 *
I -0.1050681 0.0357237 -2.941 0.00327 **
J 0.0810292 0.0566905 1.429 0.15291
K 0.2582978 0.1574582 1.640 0.10092
L -0.2009784 0.1543773 -1.302 0.19296
M -0.2359658 0.3216941 -0.734 0.46325
N -0.0689036 0.1910518 -0.361 0.71836
O 0.0514983 0.1383610 0.372 0.70974
P 0.1843138 0.3253483 0.567 0.57105
Q 0.0198326 0.0509651 0.389 0.69717
R 0.0892239 0.0459729 1.941 0.05228 .
S -0.0430981 0.0856391 -0.503 0.61479
T 0.2205653 0.1408009 1.567 0.11723
U 0.2450243 0.1838056 1.333 0.18251
V 0.1253683 0.0888411 1.411 0.15820
W -0.4636739 0.2348172 -1.975 0.04831 *
X -0.0623290 0.0508299 -1.226 0.22011
Y -0.0939878 0.0606831 -1.549 0.12142
Z 0.0019530 0.0015143 1.290 0.19716
AA -0.2888123 0.2449085 -1.179 0.23829
AB 0.1185890 0.0696343 1.703 0.08856 .
AC -0.3401963 0.2047698 -1.661 0.09664 .
AD -1.3409002 0.4858741 -2.760 0.00578 **
AE -0.0006299 0.0051338 -0.123 0.90234
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for Negative Binomial(2.7303) family taken to be 1)
Null deviance: 516.494 on 56 degrees of freedom
Residual deviance: 61.426 on 26 degrees of freedom
AIC: 790.8
Number of Fisher Scoring iterations: 1
Theta: 2.730
Std. Err.: 0.537
2 x log-likelihood: -726.803
Then I create a first model:
Then I create the starting (intercept-only) model:
first.model <- glm.nb(A ~ 1, data = d)
Then I tried the forward stepwise regression:
step.model <- step(first.model, direction="forward", scope=formula(full.model))
It gives me the error message: Error in glm.fit(X, y, wt, offset = offset, family = object$family, control = object$control) :
NA/NaN/Inf in 'x'
In addition: Warning message:
step size truncated due to divergence
I also tried the backward regression:
step.model2 <- step(full.model,direction="backward")
#the final step
Step: AIC=770.45
A ~ B + C + E + F + G + H + I + K + L + R + T + V + W + Y + AA +
AB + AD
Df Deviance AIC
<none> 62.375 770.45
- AB 1 64.859 770.93
- H 1 65.227 771.30
- V 1 65.240 771.31
- L 1 65.291 771.36
- Y 1 65.831 771.90
- B 1 66.051 772.12
- C 1 67.941 774.01
- AA 1 69.877 775.95
- K 1 70.411 776.48
- W 1 71.526 777.60
- I 1 71.863 777.94
- E 1 72.338 778.41
- G 1 73.344 779.42
- F 1 73.510 779.58
- AD 1 79.620 785.69
- R 1 80.358 786.43
- T 1 95.725 801.80
Warning messages:
1: glm.fit: algorithm did not converge
2: glm.fit: algorithm did not converge
3: glm.fit: algorithm did not converge
4: glm.fit: algorithm did not converge
My questions are: Why do forward and backward stepwise regression give different results? Why do I get the error message when performing forward selection? What exactly do these warning messages mean, and how should I deal with them?
I am not a stats person but need to conduct statistical analysis for my research data, so I am struggling to learn how to do different regression analyses on real data. I searched online for similar questions but still could not understand. Please also let me know if I did anything wrong in my regression analysis. I would really appreciate it if you could help me with these questions!
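On the forward-selection error: one common cause is that `step()` refits `glm.nb` along the search path and the theta estimation diverges for near-null models, producing the NA/NaN/Inf error and the non-convergence warnings. Forward and backward selection also differ simply because each greedy search visits a different sequence of models, so they can land on different local optima. `MASS::stepAIC` with `direction = "both"` is often more robust, and because it refits via `update()`, a generous `maxit` in the original call is reused by every candidate model. A small synthetic sketch (one informative predictor `x1`, one noise predictor `x2`; names are illustrative, not from the question's data):

```r
# stepAIC refits via update(), so maxit from the original call carries
# over to every candidate model along the search path.
library(MASS)
set.seed(4)
d2   <- data.frame(x1 = rnorm(200), x2 = rnorm(200))
d2$y <- rnbinom(200, mu = exp(1 + 0.7 * d2$x1), size = 2)
full <- glm.nb(y ~ x1 + x2, data = d2, maxit = 1000)
sel  <- stepAIC(full, direction = "both", trace = FALSE)
formula(sel)   # the informative predictor x1 should survive selection
```

With 31 predictors and only 57 observations, though, any stepwise procedure will be unstable; penalised regression or substantive pre-selection of variables is worth considering.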
This is the R code for the logistic regression model:
hrlogis1 <- glm(Attrition ~ . -Age -DailyRate -Department -Education
                -EducationField -HourlyRate -JobLevel
                -JobRole -MonthlyIncome -MonthlyRate
                -PercentSalaryHike -PerformanceRating
                -StandardHours -StockOptionLevel,
                family = binomial(link = "logit"), data = hrtrain)
where:
Attrition is the dependent variable and rest are all the independent variables.
Below is the summary of the model:
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 1.25573 0.84329 1.489 0.136464
BusinessTravelTravel_Frequently 1.86022 0.47410 3.924 8.72e-05 ***
BusinessTravelTravel_Rarely 1.28273 0.44368 2.891 0.003839 **
DistanceFromHome 0.03869 0.01138 3.400 0.000673 ***
EnvironmentSatisfaction -0.36484 0.08714 -4.187 2.83e-05 ***
GenderMale 0.52556 0.19656 2.674 0.007499 **
JobInvolvement -0.59407 0.13259 -4.480 7.45e-06 ***
JobSatisfaction -0.37315 0.08671 -4.303 1.68e-05 ***
MaritalStatusMarried 0.23408 0.26993 0.867 0.385848
MaritalStatusSingle 1.37647 0.27511 5.003 5.63e-07 ***
NumCompaniesWorked 0.16439 0.04034 4.075 4.59e-05 ***
OverTimeYes 1.67531 0.20054 8.354 < 2e-16 ***
RelationshipSatisfaction -0.23865 0.08726 -2.735 0.006240 **
TotalWorkingYears -0.12385 0.02360 -5.249 1.53e-07 ***
TrainingTimesLastYear -0.15522 0.07447 -2.084 0.037124 *
WorkLifeBalance -0.30969 0.13025 -2.378 0.017427 *
YearsAtCompany 0.06887 0.04169 1.652 0.098513 .
YearsInCurrentRole -0.10812 0.04880 -2.216 0.026713 *
YearsSinceLastPromotion 0.14006 0.04452 3.146 0.001657 **
YearsWithCurrManager -0.09343 0.04984 -1.875 0.060834 .
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Now I want to remove the terms that are not significant; in this case "MaritalStatusMarried" is not significant.
MaritalStatus is a variable (column) with two levels, "Married" and "Single".
How about:
data$MaritalStatus[data[, num] == "Married"] <- NA
(where num is the number of the MaritalStatus column in the data)
The "Married" values will be replaced with NAs, and then you can run the glm model again.
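A caveat on that approach: setting Married to NA silently drops those rows from the refit, which changes the sample rather than the model. If a level is merely not distinguishable from the baseline, the usual options are to leave the factor alone, change the reference level with `relevel()`, or collapse the non-significant level into the baseline. A sketch of both options on a toy factor (level names mirror the question; the toy data are made up):

```r
# Option 1: collapse "Married" into the baseline instead of discarding rows,
# turning the factor into a Single / not-Single contrast.
ms  <- factor(c("Divorced", "Married", "Single", "Married", "Single"))
ms2 <- factor(ifelse(ms == "Single", "Single", "Other"))
table(ms2)

# Option 2: just change which level is the reference.
ms3 <- relevel(ms, ref = "Married")
levels(ms3)   # "Married" first, so it is absorbed into the intercept
```

Either way the non-significant dummy disappears from the coefficient table without throwing away observations.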
I have a large dataset (24765 obs). I am trying to look at how cleaning method affects emergence success (ES).
I have several fixed factors: beach (4 levels) and cleaning method (3 levels).
I also have a few random variables: Zone (128 levels), Year (18 years) and Index (24765).
This is an OLRE model (observation-level random effect) to account for overdispersion.
My best fit model based on AIC scores is:
mod8a<-glmer(ES.test~beach+method+(1|Year)+(1|index),data=y5,weights=egg.total,family=binomial)
The summary showed:
summary(mod8a) # AIC=216732.9, same effect at every beach
Generalized linear mixed model fit by maximum likelihood (LaplaceApproximation) ['glmerMod']
Family: binomial ( logit )
Formula: ES.test ~ beach + method + (1 | Year) + (1 | index)
Data: y5
Weights: egg.total
AIC BIC logLik deviance df.resid
214834.2 214891.0 -107410.1 214820.2 24758
Scaled residuals:
Min 1Q Median 3Q Max
-1.92900 -0.09344 0.00957 0.14682 1.62327
Random effects:
Groups Name Variance Std.Dev.
index (Intercept) 1.6541 1.286
Year (Intercept) 0.6512 0.807
Number of obs: 24765, groups: index, 24765; Year, 19
Fixed effects:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 0.65518 0.18646 3.514 0.000442 ***
beachHillsboro -0.06770 0.02143 -3.159 0.001583 **
beachHO/HA 0.31927 0.03716 8.591 < 2e-16 ***
methodHTL only 0.18106 0.02526 7.169 7.58e-13 ***
methodno clean 0.05989 0.03170 1.889 0.058853 .
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Correlation of Fixed Effects:
(Intr) bchHll bHO/HA mtHTLo
beachHllsbr -0.002
beachHO/HA -0.054 0.047
mthdHTLonly -0.107 -0.242 0.355
methodnclen -0.084 -0.060 0.265 0.628
What is my "intercept" (as seen above)? I am missing levels of my fixed factors; is that because R could not compute them?
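On the intercept question: with R's default treatment contrasts nothing failed to compute. The first level of each factor is absorbed into the intercept, so the intercept is the predicted log-odds for the baseline beach and baseline cleaning method (apparently FTL/P and "HTL and SB", judging by the comparison labels in the glht output). A toy illustration of how the baseline dummy disappears from the design matrix (level names copied from the question):

```r
# With treatment contrasts, the first factor level gets no dummy column:
# it is the reference category absorbed into the intercept.
beach <- factor(c("FTL/P", "Hillsboro", "HO/HA"))
colnames(model.matrix(~ beach))   # no "beachFTL/P" column: it is the baseline
```

Each printed coefficient (e.g. beachHillsboro) is then the log-odds difference from that baseline, not an absolute effect.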
I tested for Overdispersion:
overdisp_fun <- function(mod8a) {
  ## number of variance parameters in
  ## an n-by-n variance-covariance matrix
  vpars <- function(m) {
    nrow(m) * (nrow(m) + 1) / 2
  }
  model8a.df <- sum(sapply(VarCorr(mod8a), vpars)) + length(fixef(mod8a))
  rdf <- nrow(model.frame(mod8a)) - model8a.df
  rp <- residuals(mod8a, type = "pearson")
  Pearson.chisq <- sum(rp^2)
  prat <- Pearson.chisq / rdf
  pval <- pchisq(Pearson.chisq, df = rdf, lower.tail = FALSE)
  c(chisq = Pearson.chisq, ratio = prat, rdf = rdf, p = pval)
}
overdisp_fun(mod8a)
chisq ratio rdf p
2.064765e+03 8.339790e-02 2.475800e+04 1.000000e+00
This shows the plot of mod8a
I would like to know why I am getting such a curve and what it means
Lastly I did a multicomparion analysis using multcomp
ls1<- glht(mod8a, mcp(beach = "Tukey"))$linfct
ls2 <- glht(mod8a, mcp(method= "Tukey"))$linfct
summary(glht(mod8a, linfct = rbind(ls1, ls2)))
Simultaneous Tests for General Linear Hypotheses
Fit: glmer(formula = ES.test ~ beach + method + (1 | Year) + (1 |
index), data = y5, family = binomial, weights = egg.total)
Linear Hypotheses:
Estimate Std. Error z value Pr(>|z|)
Hillsboro - FTL/P == 0 -0.06770 0.02143 -3.159 0.00821 **
HO/HA - FTL/P == 0 0.31927 0.03716 8.591 < 0.001 ***
HO/HA - Hillsboro == 0 0.38696 0.04201 9.211 < 0.001 ***
HTL only - HTL and SB == 0 0.18106 0.02526 7.169 < 0.001 ***
no clean - HTL and SB == 0 0.05989 0.03170 1.889 0.24469
no clean - HTL only == 0 -0.12117 0.02524 -4.800 < 0.001 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Adjusted p values reported -- single-step method)
At this point, help with interpreting the analysis would be greatly appreciated, especially that sigmoid curve in my residual plot.