Fitting logistic growth curves to data - r

I've been attempting to fit logistic growth equations to data sets I have, with mixed results. I typically use a setup like this:
# Post PT
time <- 1:48
Diversity <- new8
plot(time, Diversity, log = "y", las = 1, pch = 16, type = "l")
logisticModel <- nls(Diversity ~ K/(1 + exp(Po + r*time)),
                     start = list(Po = 25, r = -1.6, K = 200),
                     control = list(maxiter = 1000, minFactor = 1e-11))
The goal here is to model Diversity over time logistically; this is a species diversity curve that asymptotes. However, for particular datasets, I cannot get the model to work and can't for the life of me figure out why. As an example, in one iteration, the Diversity (and therefore new8) values being used are:
[1] 25 22 68 72 126 141 82 61 97 126 101 110 173 164 160 137 122 113 104 104 109 102 107 122 149 127 137 146 185 188 114 91 102 132 147
[36] 148 151 154 165 215 216 206 205 207 207 220 200 204
# plot via this, and it is a nice species diversity curve beginning to level off
plot(Diversity,type="l")
This data is beginning to reach its limit, yet I cannot fit a logistic curve to it. If I try, I get an exceeded-max-iterations error, no matter how high I crank up the iterations. I've played with the starting parameters over and over with no luck. Currently, for example, the code looks like this:
# Post PT
time <- 1:48
Diversity <- new8
plot(time, Diversity, log = "y", las = 1, pch = 16, type = "l")
logisticModel <- nls(Diversity ~ K/(1 + exp(Po + r*time)),
                     start = list(Po = 25, r = -1.6, K = 200),
                     control = list(maxiter = 1000, minFactor = 1e-11))
Any help is more than appreciated. Spent all day sitting on my couch stuck on this. If someone has a better way to coerce a logistic growth curve out of data, I'd love to hear it! As a side note, I've used SSlogis for these datasets with no luck, either.

Numerical instability is often a problem with models involving exponential terms. Try evaluating your model at your starting parameters (shown here using the normalized time variable defined further down):
> 200/(1+exp(25-1.6*df$norm_time))
[1] 2.871735e-09 2.969073e-09 3.069710e-09 3.173759e-09 3.281333e-09 3.392555e-09 3.507546e-09 3.626434e-09 3.749353e-09
[10] 3.876437e-09 4.007830e-09 4.143676e-09 4.284126e-09 4.429337e-09 4.579470e-09 4.734691e-09 4.895174e-09 5.061097e-09
[19] 5.232643e-09 5.410004e-09 5.593377e-09 5.782965e-09 5.978979e-09 6.181637e-09 6.391165e-09 6.607794e-09 6.831766e-09
[28] 7.063329e-09 7.302742e-09 7.550269e-09 7.806186e-09 8.070778e-09 8.344338e-09 8.627170e-09 8.919589e-09 9.221919e-09
[37] 9.534497e-09 9.857670e-09 1.019180e-08 1.053725e-08 1.089441e-08 1.126368e-08 1.164546e-08 1.204019e-08 1.244829e-08
[46] 1.287023e-08 1.330646e-08 1.375749e-08
With predicted values this small, it's likely that any moderate change in the parameters, as required by nls() to estimate gradients, will produce changes in the predictions that are tiny, barely above, or even below, the minFactor threshold.
It's better to normalize your data so that its numerical range is within a nice friendly range, like 0 to 1.
require(stringr)
require(ggplot2)
new8 <- '25 22 68 72 126 141 82 61 97 126 101 110 173 164 160 137 122 113 104 104 109 102 107 122 149 127 137 146 185 188 114 91 102 132 147 148 151 154 165 215 216 206 205 207 207 220 200 204'
Diversity = as.numeric(str_split(new8, '[ ]+')[[1]])
time <- 1:48
df = data.frame(time=time, div=Diversity)
# normalize time
df$norm_time <- df$time / max(df$time)
# normalize diversity
df$norm_div <- (df$div - min(df$div)) / max(df$div)
With this way of normalizing diversity, your Po parameter can always be assumed to be 0. That means we can eliminate it from the model. The model now only has two degrees of freedom instead of three, which also makes fitting easier.
That leads us to the following model:
logisticModel <- nls(norm_div ~ K/(1 + exp(r*norm_time)), data = df,
                     start = list(K = 1, r = -1.6),
                     control = list(maxiter = 1000, minFactor = 1e-11))
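As an aside, since the question mentions SSlogis: once the data are normalized, a self-starting logistic is also worth another try, because it removes the need for hand-picked starting values entirely. A minimal sketch (whether it converges on this exact series isn't guaranteed):
ssModel <- nls(norm_div ~ SSlogis(norm_time, Asym, xmid, scal), data = df)  # Asym/(1 + exp((xmid - norm_time)/scal))
summary(ssModel)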
Your data doesn't look like that great a fit to the model to me, but I'm not an expert in your field:
ggplot(data=df, aes(x=norm_time, y=norm_div)) +
  geom_point() +  # note: geom_point() has no log argument; use scale_y_log10() if a log y-axis is wanted
  geom_line(aes(x=norm_time, y=predict(logisticModel)), color='red') +
  theme_bw()
quartz.save('~/Desktop/SO_31236153.png', type='png')  # quartz.save() is macOS-only; ggsave() works elsewhere
summary(logisticModel)
Formula: norm_div ~ K/(1 + exp(r * norm_time))
Parameters:
Estimate Std. Error t value Pr(>|t|)
K 0.6940 0.1454 4.772 1.88e-05 ***
r -2.6742 2.4222 -1.104 0.275
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.1693 on 46 degrees of freedom
Number of iterations to convergence: 20
Achieved convergence tolerance: 5.895e-06
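If you need fitted values back on the original Diversity scale, the normalization above can be inverted (a quick sketch using the same df):
# invert norm_div = (div - min(div)) / max(div)
df$fit_div <- predict(logisticModel) * max(df$div) + min(df$div)
plot(df$time, df$div, pch = 16, las = 1)
lines(df$time, df$fit_div, col = 'red')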

Related

Creating and plotting confidence intervals

I have fitted a Gaussian GLM model to my data, and I now wish to create 95% CIs and plot them with my data. I'm having a couple of issues when plotting, as I can't get the intervals to capture my data; they just seem to plot the same line as the model without capturing the data points. I'm also unsure that I've created my CIs the correct way here for the mean. I've entered my data and code below if anyone knows how to fix this.
data used
aids
cases quarter date
1 2 1 83.00
2 6 2 83.25
3 10 3 83.50
4 8 4 83.75
5 12 1 84.00
6 9 2 84.25
7 28 3 84.50
8 28 4 84.75
9 36 1 85.00
10 32 2 85.25
11 46 3 85.50
12 47 4 85.75
13 50 1 86.00
14 61 2 86.25
15 99 3 86.50
16 95 4 86.75
17 150 1 87.00
18 143 2 87.25
19 197 3 87.50
20 159 4 87.75
21 204 1 88.00
22 168 2 88.25
23 196 3 88.50
24 194 4 88.75
25 210 1 89.00
26 180 2 89.25
27 277 3 89.50
28 181 4 89.75
29 327 1 90.00
30 276 2 90.25
31 365 3 90.50
32 300 4 90.75
33 356 1 91.00
34 304 2 91.25
35 307 3 91.50
36 386 4 91.75
37 331 1 92.00
38 368 2 92.25
39 416 3 92.50
40 374 4 92.75
41 412 1 93.00
42 358 2 93.25
43 416 3 93.50
44 414 4 93.75
45 496 1 94.00
My code used to create the model and intervals before plotting:
#creating the model
model3 = glm(cases ~ date,
data = aids,
family = poisson(link='log'))
#now to add approx. 95% confidence envelope around this line
#predict again but at the linear predictor level along with standard errors
my_preds <- predict(model3, newdata=data.frame(aids), se.fit=T, type="link")
#calculate CI limit since linear predictor is approx. Gaussian
upper <- my_preds$fit+1.96*my_preds$se.fit #this might be logit not log
lower <- my_preds$fit-1.96*my_preds$se.fit
#transform the CI limit to get one at the level of the mean
upper <- exp(upper)/(1+exp(upper))
lower <- exp(lower)/(1+exp(lower))
#plotting data
plot(aids$date, aids$cases,
xlab = 'Date', ylab = 'Cases', pch = 20)
#adding CI lines
plot(aids$date, exp(my_preds$fit), type = "l",
xlab = 'Date', ylab = 'Cases') #add title
lines(aids$date,exp(my_preds$fit+1.96*my_preds$se.fit),lwd=2,lty=2)
lines(aids$date,exp(my_preds$fit-1.96*my_preds$se.fit),lwd=2,lty=2)
This is the outcome I currently get, with no data points; the model line is correct here, but the CIs aren't, so I think the CIs are being made incorrectly somewhere.
Edit: response after the OP provided the full data set.
This started out as a question about plotting data and models on the same graph, but has morphed considerably. You seem to have an answer to the original question. Below is one way to address the rest.
Looking at your (and my) plots, it seems clear that a Poisson GLM is just not a good model. To say it differently, the number of cases may vary with date, but it is also influenced by other things not in your model (external regressors).
Plotting just your data suggests strongly that you have at least two and perhaps more regimes: time frames where the growth in cases follows different models.
ggplot(aids, aes(x=date)) + geom_point(aes(y=cases))
This suggests segmented regression. As with most things in R, there is a package for that (more than one, actually). The code below uses the segmented package to fit a piecewise Poisson GLM with 1 breakpoint (two regimes).
library(data.table)
library(ggplot2)
library(segmented)
setDT(aids) # convert aids to a data.table
aids[, pred:=
predict(
segmented(glm(cases~date, .SD, family = poisson), seg.Z = ~date, npsi=1),
type='response', se.fit=TRUE)$fit]
ggplot(aids, aes(x=date))+ geom_line(aes(y=pred))+ geom_point(aes(y=cases))
Note that we need to tell segmented the count of breakpoints, but not where they are; the algorithm figures that out for you. So here, we see a regime prior to 3Q87 which is well modeled using a Poisson GLM, and a regime after that which is not. This is a fancy way of saying that "something happened" around 3Q87 which changed the course of the disease (at least in this data).
The code below does the same thing but for between 1 and 4 breakpoints.
get.pred <- \(p.n, p.DT) {
fit <- glm(cases~date, p.DT, family=poisson)
seg.fit <- segmented(fit, seg.Z = ~date, npsi=p.n)
predict(seg.fit, type='response', se.fit=TRUE)[c('fit', 'se.fit')]
}
gg.dt <- rbindlist(lapply(1:4, \(x) { copy(aids)[, c('pred', 'se'):=get.pred(x, .SD)][, npsi:=x] } ))
ggplot(gg.dt, aes(x=date))+
geom_ribbon(aes(ymin=pred-1.96*se, ymax=pred+1.96*se), fill='grey80')+
geom_line(aes(y=pred))+
geom_point(aes(y=cases))+
facet_wrap(~npsi)
Note that the location of the first breakpoint does not seem to change, and also that, notwithstanding the use of the Poisson GLM, the growth appears linear in all but the first regime.
There are goodness-of-fit metrics described in the package documentation which can help you decide how many break points are most consistent with your data.
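If it helps, one rough way to compare breakpoint counts is to refit the segmented models and compare AIC (a sketch; lower is better, and the segmented package also ships its own dedicated tests, so treat this only as a first pass):
base.fit <- glm(cases ~ date, aids, family = poisson)
sapply(1:4, \(n) AIC(segmented(base.fit, seg.Z = ~date, npsi = n)))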
Finally, there is also the mcp package which is a bit more powerful but also a bit more complex to use.
Original Response: Here is one way that builds the model predictions and standard errors in a data.table, then plots using ggplot.
library(data.table)
library(ggplot2)
setDT(aids) # convert aids to a data.table
aids[, c('pred', 'se', 'resid.scale'):=predict(glm(cases~date, data=.SD, family=poisson), type='response', se.fit=TRUE)]
ggplot(aids, aes(x=date))+
geom_ribbon(aes(ymin=pred-1.96*se, ymax=pred+1.96*se), fill='grey80')+
geom_line(aes(y=pred))+
geom_point(aes(y=cases))
Or, you could let ggplot do all the work for you.
ggplot(aids, aes(x=date, y=cases))+
stat_smooth(method = glm, method.args=list(family=poisson))+
geom_point()
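One more note on the back-transformation in the question: model3 uses a log link, so the inverse link is exp(), not the inverse logit. A base-R sketch of the envelope, reusing the question's objects:
my_preds <- predict(model3, se.fit = TRUE, type = "link")
upper <- exp(my_preds$fit + 1.96*my_preds$se.fit)  # exp() because link = 'log'
lower <- exp(my_preds$fit - 1.96*my_preds$se.fit)
plot(aids$date, aids$cases, xlab = 'Date', ylab = 'Cases', pch = 20)
lines(aids$date, exp(my_preds$fit), lwd = 2)
lines(aids$date, upper, lwd = 2, lty = 2)
lines(aids$date, lower, lwd = 2, lty = 2)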

In R, how can I compute the summary function in parallel?

I have a huge dataset. I computed the multinomial regression with multinom from the nnet package.
mylogit<- multinom(to ~ RealAge, mydata)
It takes 10 minutes. But when I use the summary function to compute the coefficients, it takes more than 1 day!
This is the code I used:
output <- summary(mylogit)
Coef<-t(as.matrix(output$coefficients))
I was wondering if anybody knows how I can compute this part of the code with parallel processing in R?
This is a small sample of the data:
mydata:
to RealAge
513 59.608
513 84.18
0 85.23
119 74.764
116 65.356
0 89.03
513 92.117
69 70.243
253 88.482
88 64.23
513 64
4 84.03
65 65.246
69 81.235
513 87.663
513 81.21
17 75.235
117 49.112
69 59.019
20 90.03
If you just want the coefficients, use only the coef() method, which does less computation.
Example:
mydata <- readr::read_table("to RealAge
513 59.608
513 84.18
0 85.23
119 74.764
116 65.356
0 89.03
513 92.117
69 70.243
253 88.482
88 64.23
513 64
4 84.03
65 65.246
69 81.235
513 87.663
513 81.21
17 75.235
117 49.112
69 59.019
20 90.03")[rep(1:20, 3000), ]
mylogit <- nnet::multinom(to ~ RealAge, mydata)
system.time(output <- summary(mylogit)) # 6 sec
all.equal(output$coefficients, coef(mylogit)) # TRUE & super fast
If you profile the summary() function, you'll see that most of the time is taken by the crossprod() function.
So, if you really want the output of the summary() function, you could use an optimized math library, such as the MKL provided by Microsoft R Open.
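For what it's worth, a minimal way to confirm where summary() spends its time, using base R's profiler on the mylogit fit above:
Rprof(tmp <- tempfile())          # start the sampling profiler
output <- summary(mylogit)
Rprof(NULL)                       # stop profiling
head(summaryRprof(tmp)$by.total)  # crossprod() should dominate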

Plotting estimated probabilities from binary logistic regression when one or more predictor variables are held constant

I am a biology grad student who has been spinning my wheels for about thirty hours on the following issue. In summary, I would like to plot a figure of estimated probabilities from a binary logistic regression GLM I produced. I have already gone through model selection, validation, etc., and am now simply trying to produce figures. I had no problem plotting probability curves for the model I selected, but what I am really interested in is producing a figure that shows probabilities of a binary outcome for one predictor variable when the other predictor variable is held constant.
I cannot figure out how to assign this constant value to only one of the predictor variables and plot the probability for the other variable. Ultimately I would like to produce figures similar to the crude example I attached (desired output). I admit I am a novice in R, and I certainly appreciate folks' time, but I have exhausted online searches and have yet to find the approach or a solution adequately explained. This is the closest information related to my question, but I found the explanation vague and it failed to provide an example of assigning one predictor a constant value while plotting the probability of the other predictor: https://stat.ethz.ch/pipermail/r-help/2010-September/253899.html
Below I have provided a simulated dataset and my progress. Thank you very much for your expertise; I believe a solution and code example would be helpful for other ecologists who use logistic regression.
The simulated dataset shows survival outcomes over the winter for lizards. The predictor variables are "mass" and "depth".
x<-read.csv('logreg_example_data.csv',header = T)
x
survival mass depth
1 0 4.294456 262
2 0 8.359857 261
3 0 10.740580 257
4 0 10.740580 257
5 0 6.384678 257
6 0 6.384678 257
7 0 11.596380 270
8 0 11.596380 270
9 0 4.294456 262
10 0 4.294456 262
11 0 8.359857 261
12 0 8.359857 261
13 0 8.359857 261
14 0 7.920406 258
15 0 7.920406 258
16 0 7.920406 261
17 0 10.740580 257
18 0 10.740580 258
19 0 38.824960 262
20 0 9.916840 239
21 1 6.384678 257
22 1 6.384678 257
23 1 11.596380 270
24 1 11.596380 270
25 1 11.596380 270
26 1 23.709520 288
27 1 23.709520 288
28 1 23.709520 288
29 1 38.568970 262
30 1 38.568970 262
31 1 6.581013 295
32 1 6.581013 298
33 1 0.766564 269
34 1 5.440803 262
35 1 5.440803 262
36 1 19.534710 252
37 1 19.534710 259
38 1 8.359857 263
39 1 10.740580 257
40 1 38.824960 264
41 1 38.824960 264
42 1 41.556970 239
#Dataset name is x
# time to run the glm model
model1<-glm(formula=survival ~ mass + depth, family = "binomial", data=x)
model1
summary(model1)
# OK, now here's how I predict the probability of a lizard "Bob" surviving the winter with a mass of 32.949 grams and a burrow depth of 264 mm
newdata <- data.frame(mass = 32.949, depth = 264)
predict(model1, newdata, type = "response")
# the lizard "Bob" has an 87.3% chance of surviving the winter
# Now let's assume the glm model was robust and the lizard was endangered;
# from all my research I know the average burrow depth is 263.9 mm at a national park.
# Let's say I am also interested in survival probabilities at burrow depths of 200 and 100 mm, respectively.
# How do I use the valuable glm model produced above to generate a plot
# showing the probability of lizards surviving at the burrow depths stated above,
# across a range of mass values from 0.0 to 100.0 grams?
# I know I need to use the plot and predict functions, but I cannot figure out how to tell R that I
# want to use the glm model I produced to predict "survival" based on "mass" when the other predictor "depth" is held at constant values of biological relevance.
# I would also like to add dashed lines for 95% CI.
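A minimal sketch of one common way to do this with model1 above (assumptions: depth held at the 263.9 mm average mentioned in the comments; CIs built on the link scale and then back-transformed with plogis()):
mass_grid <- seq(0, 100, by = 0.5)
nd <- data.frame(mass = mass_grid, depth = 263.9)   # depth held constant
pr <- predict(model1, nd, type = "link", se.fit = TRUE)
fit <- plogis(pr$fit)
upr <- plogis(pr$fit + 1.96*pr$se.fit)
lwr <- plogis(pr$fit - 1.96*pr$se.fit)
plot(mass_grid, fit, type = "l", ylim = c(0, 1), xlab = "Mass (g)", ylab = "P(survival)")
lines(mass_grid, upr, lty = 2)   # dashed approximate 95% CI
lines(mass_grid, lwr, lty = 2)
Repeating this with depth = 200 and depth = 100 (adding the extra curves with lines()) gives the family of curves described above.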

multidimensional data clustering

Problem: I have two groups of multidimensional heterogeneous data. I have concocted a simple illustrative example below. Notice that some columns are discrete (age) while some are binary (gender) and another is even an ordered pair (pant size).
Person Age gender height weight pant_size
Control_1 55 M 167.6 155 32,34
Control_2 68 F 154.1 137 28,28
Control_3 53 F 148.9 128 27,28
Control_4 57 M 167.6 165 38,34
Control_5 62 M 147.4 172 36,32
Control_6 44 M 157.6 159 32,32
Control_7 76 F 172.1 114 30,32
Control_8 49 M 161.8 146 34,34
Control_9 53 M 164.4 181 32,36
Person Age gender height weight pant_size
experiment_1 39 F 139.6 112 26,28
experiment_2 52 M 154.1 159 32,32
experiment_3 43 F 148.9 123 27,28
experiment_4 55 M 167.6 188 36,38
experiment_5 61 M 161.4 171 36,32
experiment_6 48 F 149.1 144 28,28
The question is: does the entire experimental group differ significantly from the entire control group?
Or, roughly speaking, do they form two distinct clusters in the space of [age, gender, height, weight, pant_size]?
The general idea of what I've tried so far is a metric that compares corresponding columns of the experimental group to those of the control; the metric then takes the sum of the column scores (see below). A somewhat arbitrary threshold is picked to decide if the two groups are different. This arbitrariness is compounded by the weighting of the columns, which is also somewhat arbitrary. Remarkably, this approach is performing well for the actual problem I have, but it needs to be formalized. I'm wondering if this approach is similar to any existing approach, or whether other, more widely accepted, well-established approaches exist?
Person        Age  gender  height  weight  pant_size  metric
experiment_1   39       F   139.6     112      26,28
experiment_2   52       M   154.1     159      32,32
experiment_3   43       F   148.9     123      27,28
experiment_4   55       M   167.6     188      36,38
experiment_5   61       M   161.4     171      36,32
experiment_6   48       F   149.1     144      28,28
column score    2       1       5       1          7      16
Treat this as a classification problem rather than a clustering problem, if you assume the results do "cluster".
You don't need to find these clusters; they are predefined classes.
The approach, restated, is as follows:
Train different classifiers to predict whether a point is from data A or data B. If you can get much better accuracy than 50% (assuming balanced data), then the groups do differ. If all your classifiers are only as good as random guessing (and you didn't make mistakes), then the two sets are probably just too similar.
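A rough sketch of that idea in R (assumptions: the two tables are stacked into one data frame dat with a factor column group marking control vs. experiment, gender stored as a factor, and pant_size split into numeric waist and inseam columns):
library(randomForest)
set.seed(1)
k <- 5
folds <- sample(rep(1:k, length.out = nrow(dat)))   # k-fold assignment
acc <- sapply(1:k, function(i) {
  fit <- randomForest(group ~ Age + gender + height + weight + waist + inseam,
                      data = dat[folds != i, ])
  mean(predict(fit, dat[folds == i, ]) == dat$group[folds == i])
})
mean(acc)   # accuracy well above 0.5 suggests the groups really do differ
Any classifier with out-of-sample evaluation works here; random forest is just a convenient default for mixed column types.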

Converting different maximum scores to percentage out of 100

I have three different datasets with 3 students and 3 subjects each, with different maximum scores (125, 150, 200). How do I calculate the mean percentage (out of 100) per subject for a standard (not a section) when all three maximum scores are different? The raw scores are not comparable at this point.
Class2:
section1.csv
english maths science
name score(125) score(125) score(125)
sam 114 112 111
erm 89 91 97
asd 101 107 118
section2.csv
english maths science
name score(150) score(150) score(150)
wer 141 127 143
rahul 134 119 145
rohit 149 135 139
section3.csv
english maths science
name score(200) score(200) score(200)
vinod 178 186 176
manoj 189 191 185
deepak 191 178 187
P.S.: Expected columns in the output:
class1 englishavg mathsavg scienceavg (the values are the combined mean percentages of all three sections)
Here is the piece of code I tried:
files <- list.files(pattern = ".csv") ## creates a vector with all file names in your folder
list_files <- lapply(files,read.csv,header=F,stringsAsFactors=F)
list_files <- lapply(list_files, function(x) x)
engav <- sapply(list_files,function(x) mean(as.numeric(x[,2]),na.rm=T)/2)
mathav <- sapply(list_files,function(x) mean(as.numeric(x[,3]),na.rm=T)/2)
scienceav <- sapply(list_files,function(x) mean(as.numeric(x[,4]),na.rm=T)/2)
result <- cbind(files,engav,mathav,scienceav)
Looking forward to any assistance.
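For what it's worth, a sketch of one way to do this, dividing each section by its own maximum score instead of by 2 (assumptions: the files are read in the order section1, section2, section3 and the maxima are 125, 150, 200 as shown):
files <- list.files(pattern = ".csv")
list_files <- lapply(files, read.csv, header = FALSE, stringsAsFactors = FALSE)
max_scores <- c(125, 150, 200)   # per-section maxima, same order as `files`
# mean percentage (out of 100) for one subject column of one section
col_pct <- function(x, col, max_score)
  mean(as.numeric(x[, col]), na.rm = TRUE) / max_score * 100
engav     <- mapply(col_pct, list_files, max_scores, MoreArgs = list(col = 2))
mathav    <- mapply(col_pct, list_files, max_scores, MoreArgs = list(col = 3))
scienceav <- mapply(col_pct, list_files, max_scores, MoreArgs = list(col = 4))
result <- data.frame(section = files, engav, mathav, scienceav)
# if a single class-level row is wanted, combine the three sections, e.g.:
class1 <- data.frame(class = "class1", t(colMeans(result[, -1])))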
