NaNs produced when using probit analysis for LC50 toxicity analysis in R

I have a dose/response relationship from a chronic toxicity test and intended to use a probit analysis (the LC_probit function of the ecotox package) to calculate the LC50 of the substance. However, no confidence intervals were calculated and I got this warning message:
Warning message:
In sqrt(cl_part_2) : NaNs produced
Originally my data were binary (0 = alive, 1 = dead), as each replicate contained a single daphnid. To work around this I grouped the replicates and calculated a proportional mortality for each concentration, but the problem remained.
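The grouping step looked roughly like this (a sketch only; raw_dat and the dead column are hypothetical names, since that part of the code is not shown here):
library(dplyr)
rob1 <- raw_dat %>%                # one row per daphnid, with columns dose and dead (0/1)
  group_by(dose) %>%
  summarise(prop  = mean(dead),    # proportional mortality per concentration
            total = n())           # number of replicates per concentration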
Dose   Proportional mortality   Total replicates (n)
0      0.4                      10
0.6    0.5                      10
1.2    0.5                      10
2.4    0.888                     9
4.8    0.555                     9
9.6    0.5                      10
Here is the code I used for this calculation:
library(ecotox)
rob1 <- read.csv("robc14a.csv", stringsAsFactors = TRUE)  # the grouped data shown above (dose, prop, total)
str(rob1)
# probit fit on log10(dose), estimating the LC50 (p = 50); weights = replicates per concentration
rob_det <- LC_probit(prop ~ log10(dose), p = c(50), weights = total, data = rob1)
rob_det
I also tried the Spearman-Karber method to see whether the problem persisted, and it did, so the problem probably lies in the data itself, but I cannot tell whether it is a layout issue or something else entirely.

Related

zero-inflated overdispersed count data glmmTMB error in R

I am working with count data (available here) that are zero-inflated and overdispersed and have random effects. The package best suited to this sort of data is glmmTMB (details here and troubleshooting here).
Before modelling, I inspected the data for normality (they are zero-inflated), homogeneity of variance, correlations, and outliers. The data had two outliers, which I removed from the dataset linked above. There are 351 observations from 18 locations (prop_id).
The data looks like this:
euc0 ea_grass ep_grass np_grass np_other_grass month year precip season prop_id quad
3 5.7 0.0 16.7 4.0 7 2006 526 Winter Barlow 1
0 6.7 0.0 28.3 0.0 7 2006 525 Winter Barlow 2
0 2.3 0.0 3.3 0.0 7 2006 524 Winter Barlow 3
0 1.7 0.0 13.3 0.0 7 2006 845 Winter Blaber 4
0 5.7 0.0 45.0 0.0 7 2006 817 Winter Blaber 5
0 11.7 1.7 46.7 0.0 7 2006 607 Winter DClark 3
The response variable is euc0 and the random effects are prop_id and quad. The rest of the variables are fixed effects (all representing the percent cover of different plant species).
The model I want to run:
library(glmmTMB)
seed0 <- glmmTMB(euc0 ~ ea_grass + ep_grass + np_grass + np_other_grass +
                   month + year*precip + season*precip +
                   (1|prop_id) + (1|quad),
                 data = euc, family = poisson(link = identity))
fit_zinbinom <- update(seed0, family = nbinom2)  # allow the variance to increase quadratically with the mean
The error I get after running the seed0 code is:
Error in optimHess(par.fixed, obj$fn, obj$gr) :
  gradient in optim evaluated to length 1 not 15
In addition: There were 50 or more warnings (use warnings() to see the first 50)
warnings() gives:
1. In (function (start, objective, gradient = NULL, hessian = NULL, ... :
NA/NaN function evaluation
I normally mean-center and standardize my numeric variables, but doing so here only removes the first error and leaves the NA/NaN warnings. I also tried adding a glmmTMBControl statement like this OP, but that just opened up a whole new world of errors.
How can I fix this? What am I doing wrong?
A detailed explanation would be appreciated so that I can learn how to troubleshoot this better myself in the future. Alternatively, I am open to an MCMCglmm solution, as that function can also deal with this sort of data (despite taking longer to run).
An incomplete answer ...
identity-link models for limited-domain response distributions (e.g. Gamma or Poisson, where negative values are impossible) are computationally problematic; in my opinion they're often conceptually problematic as well, although there are some reasonable arguments in their favor. Do you have a good reason to do this?
This is a pretty small data set for the model you're trying to fit: 13 fixed-effect predictors and 2 random-effect predictors. The rule of thumb would be that you want about 10-20 times that many observations: that seems to fit in OK with your 345 or so observations, but ... only 40 of your observations are non-zero! That means your 'effective' number of observations/amount of information will be much smaller (see Frank Harrell's Regression Modeling Strategies for more discussion of this point).
That said, let me run through some of the things I tried and where I ended up.
GGally::ggpairs(euc, columns=2:10) doesn't detect anything obviously terrible about the data (I did throw out the data point with euc0==78)
In order to try to make the identity-link model work I added some code in glmmTMB. You should be able to install via remotes::install_github("glmmTMB/glmmTMB/glmmTMB#clamp") (note you will need compilers etc. installed to install this). This version takes negative predicted values and forces them to be non-negative, while adding a corresponding penalty to the negative log-likelihood.
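The idea, in rough R form (an illustration of the approach only, not glmmTMB's actual internal code):
clamp_penalize <- function(mu, eps = 1e-6, penalty_scale = 1e3) {
  violation <- pmax(eps - mu, 0)                    # how far each predicted mean falls below eps
  list(mu = pmax(mu, eps),                          # clamped means, safe to pass to dpois() etc.
       penalty = penalty_scale * sum(violation^2))  # added to the negative log-likelihood
}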
Using the new version of glmmTMB I don't get an error, but I do get these warnings:
Warning messages:
1: In fitTMB(TMBStruc) :
Model convergence problem; non-positive-definite Hessian matrix. See vignette('troubleshooting')
2: In fitTMB(TMBStruc) :
Model convergence problem; false convergence (8). See vignette('troubleshooting')
The Hessian (second-derivative) matrix being non-positive-definite means there are some (still hard-to-troubleshoot) problems. heatmap(vcov(f2)$cond,Rowv=NA,Colv=NA) lets me look at the covariance matrix. (I also like corrplot::corrplot.mixed(cov2cor(vcov(f2)$cond),"ellipse","number"), but that doesn't work when vcov(.)$cond is non-positive definite. In a pinch you can use sfsmisc::posdefify() to force it to be positive definite ...)
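Collecting those checks in one place (a sketch, assuming f2 is the fitted glmmTMB model):
V <- vcov(f2)$cond                       # covariance matrix of the fixed effects
heatmap(V, Rowv = NA, Colv = NA)         # quick visual inspection
eigen(V, only.values = TRUE)$values      # any negative values => not positive definite
Vpd <- sfsmisc::posdefify(V)             # force positive definiteness in a pinch
corrplot::corrplot.mixed(cov2cor(Vpd), "ellipse", "number")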
Tried scaling:
eucsc <- dplyr::mutate_at(euc1,dplyr::vars(c(ea_grass:precip)), ~c(scale(.)))
This will help some - right now we're still doing a few silly things like treating year as a numeric variable without centering it (so the 'intercept' of the model is at year 0 of the Gregorian calendar ...)
But that still doesn't fix the problem.
Looking more closely at the ggpairs plot, it looks like season and year are confounded: with(eucsc,table(season,year)) shows that observations occur in Spring and Winter in one year and Autumn in the other year. season and month are also confounded: if we know the month, then we automatically know the season.
At this point I decided to give up on the identity link and see what happened. update(<previous_model>, family=poisson) (i.e. using a Poisson with a standard log link) worked! So did using family=nbinom2, which was much better.
I looked at the results and discovered that the CIs for the precip X season coefficients were crazy, so dropped the interaction term (update(f2S_noyr_logNB, . ~ . - precip:season)) at which point the results look sensible.
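Collected as a sequence, those steps might look roughly like this (object names are illustrative, not taken from the original session, and you would probably start from the scaled, de-confounded model rather than seed0):
f_pois   <- update(seed0, family = poisson)      # Poisson with the standard log link
f_nb     <- update(f_pois, family = nbinom2)     # negative binomial (quadratic mean-variance), log link
f_nb_red <- update(f_nb, . ~ . - precip:season)  # drop the unstable precip:season interaction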
A few final notes:
the variance associated with quadrat is effectively zero
I don't think you necessarily need zero-inflation; low means and overdispersion (i.e. family=nbinom2) are probably sufficient.
the distribution of the residuals looks OK, but there still seems to be some model mis-fit (library(DHARMa); plot(simulateResiduals(f2S_noyr_logNB2))). I would spend some time plotting residuals and predicted values against various combinations of predictors to see whether you can localize the problem; one way to do this is sketched below.
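For example (a sketch; the predictor passed to plotResiduals is just an illustration, and f2S_noyr_logNB2 is the fitted model from above):
library(DHARMa)
sim <- simulateResiduals(f2S_noyr_logNB2)
plot(sim)                                # overall residual diagnostics
plotResiduals(sim, form = euc$ea_grass)  # residuals against a single predictor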
PS A quicker way to see that there's something wrong with the fixed effects (multicollinearity):
X <- model.matrix(~ ea_grass + ep_grass + np_grass + np_other_grass +
                    month + year*precip + season*precip,
                  data = euc)
ncol(X)                 ## 13
Matrix::rankMatrix(X)   ## 11
lme4 has tests like this, and machinery for automatically dropping aliased columns, but they aren't implemented in glmmTMB at present.
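If you want to see which columns of X are aliased, one standard trick (a sketch, using the pivoted QR decomposition that lm() itself relies on) is:
qrX <- qr(X)                                   # pivoted QR of the fixed-effect model matrix
aliased <- qrX$pivot[(qrX$rank + 1):ncol(X)]   # columns beyond the rank are linearly dependent
colnames(X)[aliased]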

How to use the predict() function in the R package "pscl" with categorical predictor variables

I'm fitting count data (the number of fledgling birds produced per territory) using zero-inflated Poisson models in R, and while the model fitting works fine, I'm having trouble using the predict() function to get estimates for multiple values of one category (Year) averaged over the values of another category (StudyArea). Both variables are dummy-coded (0, 1) and set up as factors. The data frame sent to the predict function looks like this:
Year_d StudyArea_d
1 0 0.5
2 1 0.5
However, I get the error message:
Error in `contrasts<-`(`*tmp*`, value = contr.funs[1 + isOF[nn]]) :
contrasts can be applied only to factors with 2 or more levels
If instead I use a data frame such as:
Year_d StudyArea_d
1 0 0
2 0 1
3 1 0
4 1 1
I get sensible estimates of fledgling counts per year and study site combination. However, I'm not really interested in the effect of study site (the effect is small and isn't involved in an interaction), and the year effect is really what the study was designed to examine.
I have previously used similar code to successfully get estimated counts from a model that had one categorical and one continuous predictor variable (averaging over the levels of the dummy-coded factor), using a data frame similar to:
VegHeight StudyArea_d
1 0 0.5
2 0.5 0.5
3 1 0.5
4 1.5 0.5
So I'm a little confused why the first attempt I describe above doesn't work.
I can work on constructing a reproducible example if it would help, but I have a hunch that I'm not understanding something basic about how the predict function works when dealing with factors. If anyone can help me understand what I need to do to get estimates at both levels of one factor, averaged over the levels of another factor, I would really appreciate it.
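For what it's worth, one workaround consistent with the second (all-factor-levels) data frame above is to predict at every StudyArea level and then average the predictions; a sketch, where zip_mod is an assumed name for the fitted zeroinfl model:
nd <- expand.grid(Year_d = factor(c(0, 1)), StudyArea_d = factor(c(0, 1)))
pr <- predict(zip_mod, newdata = nd, type = "response")  # expected counts for each combination
tapply(pr, nd$Year_d, mean)                              # average over study areas within each year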

warning messages in lme4 for survival analysis that did not arise 3 years ago

I am trying to fit a generalized linear mixed-effects model to my data, using the lme4 package.
The data can be described as follows (see example below): Survival data of fish over 28 days. Explanatory variables in the example data set are:
Region: the geographical region from which the larvae originated.
treatment: the temperature at which sub-samples of fish from each region were raised.
replicate: one of three replications of the entire experiment.
tub: random variable; 15 tubs in total (used to maintain experimental temperatures in the aquaria; 3 replicates for each of 5 temperature treatments). Each tub contained one aquarium for each Region (4 aquaria in total) and was located randomly in the lab.
Day: self-explanatory, the number of days from the start of the experiment.
stage: not used in the analysis; can be ignored.
Response variable
csns: cumulative survival, i.e. remaining fish / initial fish at day 0.
start: weights used to tell the model that the probability of survival is relative to this number of fish at the start of the experiment.
aquarium: second random variable; the unique ID for each individual aquarium, encoding the level of each factor it belongs to, e.g. N-14-1 means Region N, treatment 14, replicate 1.
My problem is unusual, in that I have fitted the following model before:
dat.asr3 <- glmer(csns ~ treatment + Day + Region +
                    treatment*Region + Day*Region + Day*treatment*Region +
                    (1|tub) + (1|aquarium),
                  weights = start, family = binomial, data = data2)
However, now that I am attempting to re-run the model to generate the analyses for publication, I am getting the following warnings with the same model structure and package. The output is listed below:
Warning messages:
1: In eval(expr, envir, enclos) : non-integer #successes in a binomial glm!
2: In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, :
  Model failed to converge with max|grad| = 1.59882 (tol = 0.001, component 1)
3: In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, :
  Model is nearly unidentifiable: very large eigenvalue
  - Rescale variables?; Model is nearly unidentifiable: large eigenvalue ratio
  - Rescale variables?
My understanding is the following:
Warning message 1.
The non-integer #successes in a binomial glm warning refers to the proportion format of the csns variable. I have consulted several sources (including this site, GitHub, R-help, etc.), and all of them suggested this. The research fellow who assisted me with this analysis 3 years ago is unreachable. Could it have to do with changes in the lme4 package over the last 3 years?
Warning message 2.
I understand this is a problem because there are insufficient data points to fit the model to, particularly at L-30-1, L-30-2 and L-30-3, where only two observations are made (Day 0 csns = 1.00 and Day 1 csns = 0.00 for all three aquaria), so there is neither enough variability nor enough data to fit the model.
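A quick way to pull out those sparse aquaria and confirm this (a sketch):
subset(data2, aquarium %in% c("L-30-1", "L-30-2", "L-30-3"),
       select = c(aquarium, Day, csns, start))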
Nevertheless, this model has worked in lme4 before, but it no longer runs without these warnings.
Warning message 3.
This one is entirely unfamiliar to me; I have never seen it before.
Sample data:
Region treatment replicate tub Day stage csns start aquarium
N 14 1 13 0 1 1.00 107 N-14-1
N 14 1 13 1 1 1.00 107 N-14-1
N 14 1 13 2 1 0.99 107 N-14-1
N 14 1 13 3 1 0.99 107 N-14-1
N 14 1 13 4 1 0.99 107 N-14-1
N 14 1 13 5 1 0.99 107 N-14-1
The data in question, 1005cs.csv, is available via WeTransfer here: http://we.tl/ObRKH0owZb
Any help with deciphering this problem would be greatly appreciated, as would any suggestions for alternative packages or methods suitable for analysing these data.
tl;dr the "non-integer successes" warning is accurate; it's up to you to decide whether fitting a binomial model to these data really makes sense. The other warnings suggest that the fit is a bit unstable, but scaling and centering some of the input variables can make the warnings go away. It's up to you, again, to decide whether the results from these different formulations are different enough for you to worry about ...
data2 <- read.csv("1005cs.csv")
library(lme4)
Fit the model (with a slightly more compact model formulation):
dat.asr3 <- glmer(csns ~ Day*Region*treatment +
                    (1|tub) + (1|aquarium),
                  weights = start, family = binomial, data = data2)
I do get the warnings you report.
Let's take a look at the data:
library(ggplot2); theme_set(theme_bw())
ggplot(data2, aes(Day, csns, colour = factor(treatment))) +
  geom_point(aes(size = start), alpha = 0.5) +
  facet_wrap(~Region)
Nothing obviously problematic here, although it does clearly show that the data are very close to 1 for some treatment combinations, and that the treatment values are far from zero. Let's try scaling & centering some of the input variables:
data2sc <- transform(data2,
Day=scale(Day),
treatment=scale(treatment))
dat.asr3sc <- update(dat.asr3,data=data2sc)
Now the "very large eigenvalue" warning is gone, but we still have the "non-integer # successes" warning, and a max|grad|=0.082. Let's try another optimizer:
dat.asr3scbobyqa <- update(dat.asr3sc,
control=glmerControl(optimizer="bobyqa"))
Now only the "non-integer #successes" warning remains.
d1 <- deviance(dat.asr3)
d2 <- deviance(dat.asr3sc)
d3 <- deviance(dat.asr3scbobyqa)
c(d1,d2,d3)
## [1] 12597.12 12597.31 12597.56
These deviances don't differ by very much (0.44 on the deviance scale is more than could be accounted for by round-off error, but not much difference in goodness of fit); actually, the first model gives the best (lowest) deviance, suggesting that the warnings are false positives ...
resp <- with(data2, csns*start)    # reconstruct the raw number of 'successes'
plot(table(resp - floor(resp)))    # non-zero fractional parts => non-integer responses
This makes it clear that there really are non-integer responses, so the warning is correct.

Convert mixed model with repeated measures from SAS to R

I have been trying to convert a repeated-measures model from SAS to R, since a collaborator will do the analysis but does not have SAS. We are dealing with 4 groups, 8 to 10 animals per group, and 5 time points for each animal. The mock data file is available here https://drive.google.com/file/d/0B-WfycVUQyhaVGU2MUpuQkg4Mk0/edit?usp=sharing as an .Rdata file and here https://drive.google.com/file/d/0B-WfycVUQyhaR0JtZ0V4VjRkTk0/edit?usp=sharing as an Excel file.
The original SAS code (1) is:
proc mixed data=essai.data_test method=reml;
class group time mice;
model param = group time group*time / ddfm=kr;
repeated time / type=un subject=mice group=group;
run;
Which gives:
Type 3 Tests of Fixed Effects
Effect        Num DF   Den DF   F Value   Pr > F
group              3     15.8      1.58   0.2344
time               4     25.2     10.11   <.0001
group*time        12     13.6      1.66   0.1852
I know that R does not handle degrees of freedom in the same way as SAS does, so I am first trying to obtain results similar to (2) :
proc mixed data=essai.data_test method=reml;
class group time mice;
model param = group time group*time;
repeated time / type=un subject=mice group=group;
run;
I have found some hints here (Converting Repeated Measures mixed model formula from SAS to R), and when specifying a compound-symmetry correlation matrix this works perfectly. However, I am not able to obtain the same thing for a general (unstructured) correlation matrix.
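For context, the compound-symmetry version that did match SAS presumably looked something like this (a sketch reconstructed from the description; the exact call is not in the original post):
library(nlme)
mod_cs <- gls(param ~ group * time,
              correlation = corCompSymm(form = ~ 1 | mice),
              na.action = na.exclude, data = data, method = "REML")
anova(mod_cs, type = "marginal")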
With (2) in SAS, I obtain the following results:
Type 3 Tests of Fixed Effects
Effect        Num DF   Den DF   F Value   Pr > F
group              3       32      1.71   0.1852
time               4      128     11.21   <.0001
group*time        12      128      2.73   0.0026
Using the following R code:
options(contrasts = c('contr.sum', 'contr.poly'))
mod <- lme(param ~ group*time, random = list(mice = pdDiag(form = ~ group - 1)),
           correlation = corSymm(form = ~ 1 | mice),
           weights = varIdent(form = ~ 1 | group),
           na.action = na.exclude, data = data, method = "REML")
anova(mod, type = "marginal")
I obtain:
numDF denDF F-value p-value
(Intercept) 1 128 1373.8471 <.0001
group 3 32 1.5571 0.2189
time 4 128 10.0628 <.0001
group:time 12 128 1.6416 0.0880
The degrees of freedom are similar, but the tests on the fixed effects are not, and I don't know where the difference comes from. Does anyone have any idea what I am doing wrong here?
Your R code differs from the SAS code in multiple ways. Some of them are fixable, but I was not able to fix all the aspects to reproduce the SAS analysis.
The R code fits a mixed effects model with a random mice effect, while the SAS code fits a generalized linear model that allows correlation between the residuals, but there are no random effects (because there is no RANDOM statement). In R you would have to use the gls function from the same nlme package.
In the R code all observations within the same group have the same variance, while in the SAS code you have an unstructured covariance matrix, that is each time-point within each group has its own variance. You can achieve the same effect by using weights=varIdent(form=~1|group*time).
In the R code the correlation matrix is the same for every mouse regardless of group. In the SAS code each group has its own correlation matrix. This is the part that I don't know how to reproduce in R.
I have to note that the R model seems to be more meaningful - SAS estimates way too many variances and correlations (which, by the way, you can see meaningfully arranged using the R and RCORR options to the repeated statement).
"In the R code the correlation matrix is the same for every mouse regardless of group. In the SAS code each group has its own correlation matrix. This is the part that I don't know how to reproduce in R." - Try: correlation=corSymm(~1|group*time)

How to get stepwise logistic regression to run faster

I'm using the standard glm function together with the step function on 100k rows and 107 variables. A plain glm finishes within a minute or two, but when I wrap it in step(glm(...)) it runs for hours.
I tried running it as a matrix instead, but it has been running for about half an hour and I'm not sure it will ever finish. When I ran it on 9 variables I got the answer in a few seconds, but with 9 warnings, all of them "glm.fit: fitted probabilities numerically 0 or 1 occurred".
I used the line of code below: is it wrong? What should I do in order to gain better running time?
logit1back <- step(glm(IsChurn ~ var1 + var2 + var3 + var4 +
                         var5 + var6 + var7 + var8 + var9,
                       data = tdata, family = 'binomial'))
