Gompertz Aging analysis in R

I have survival data from an experiment in flies which examines rates of aging in various genotypes. The data are available to me in several layouts, so use whichever suits the answer best.
One dataframe (wide.df) looks like this, where each genotype (Exp, of which there are ~640) has a row, and the days run in sequence horizontally from day 4 to day 98, with counts of new deaths every two days.
Exp Day4 Day6 Day8 Day10 Day12 Day14 ...
A 0 0 0 2 3 1 ...
I make the example using this:
wide.df2 <- data.frame("A", 0, 0, 0, 2, 3, 1, 3, 4, 5, 3, 4, 7, 8, 2, 10, 1, 2)
colnames(wide.df2) <- c("Exp", paste0("Day", seq(4, 36, by = 2)))
Another version is like this, where each day has a row for each 'Exp' and the number of deaths on that day are recorded.
Exp Deaths Day
A 0 4
A 0 6
A 0 8
A 2 10
A 3 12
.. .. ..
To make this example:
df2 <- data.frame(rep("A", 17),
                  c(0, 0, 0, 2, 3, 1, 3, 4, 5, 3, 4, 7, 8, 2, 10, 1, 2),
                  seq(4, 36, by = 2))
colnames(df2) <- c("Exp", "Deaths", "Day")
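(For reference, a small sketch of converting the wide layout above into this long layout with base reshape(); only an illustration, assuming every count column is named Day<k> as above:)
day_cols <- grep("^Day", names(wide.df2), value = TRUE)
long.df <- reshape(wide.df2, direction = "long",
                   varying = day_cols, v.names = "Deaths",
                   timevar = "Day",
                   times   = as.numeric(sub("Day", "", day_cols)),
                   idvar   = "Exp")
long.df <- long.df[order(long.df$Exp, long.df$Day), c("Exp", "Deaths", "Day")]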
What I would like to do is perform a Gompertz Analysis (See second paragraph of "the life table" here). The equation is:
μ_x = α * e^(β*x)
Where μ_x is the probability of death at a given time x, α is the initial mortality rate, and β is the rate of aging.
I would like to be able to get a dataframe which has α and β estimates for each of my ~640 genotypes for further analysis later.
I need help going from the above dataframes to an output of these values for each of my genotypes in R.
I have looked through the flexsurv package, which may hold the answer, but my attempts to find and implement it have failed.

This should get you started...
Firstly, for the flexsurvreg function to work, you need to specify your input data as a Surv object (from package:survival). This means one row per observation.
The first thing is to re-create the 'raw' data from the summary tables you provide.
(I know rbind is not efficient, but you can always switch to data.table for large sets).
### get rows with >1 death
df3 <- df2[df2$Deaths>1, 2:3]
### expand to give one row per death per time
df3 <- sapply(df3, FUN=function(x) rep(df3[, 2], df3[, 1]))
### each death is 1 (occurs once)
df3[, 1] <- 1
### add this to the rows with <=1 death
df3 <- rbind(df3, df2[!df2$Deaths>1, 2:3])
### convert to Surv object
library(survival)
s1 <- with(df3, Surv(Day, Deaths))
### get parameters for Gompertz distribution
library(flexsurv)
f1 <- flexsurvreg(s1 ~ 1, dist="gompertz")
giving
> f1$res
              est         L95%        U95%
shape 0.165351912 0.1281016481 0.202602176
rate  0.001767956 0.0006902161 0.004528537
Note that this is an intercept-only model as all your genotypes are A.
You can loop this over multiple survival objects once you have re-created the per-observation data as above.
From the flexsurv docs:
The Gompertz distribution with shape parameter a and rate parameter b has
hazard function
h(x | a, b) = b * e^(a*x)
So it appears your alpha is b, the rate, and beta is a, the shape.
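To go from here to the per-genotype data frame of α and β asked for above, a minimal sketch (one way of many) is to expand each genotype's rows of df2 to one row per death, fit an intercept-only Gompertz model, and collect the estimates. With the single-genotype example df2 this gives one row; with the full ~640-genotype long table it gives one row per genotype (assuming every genotype has at least a few deaths).
library(survival)
library(flexsurv)
fit_gompertz <- function(d) {
  ## one row per observed death: repeat each Day by its death count, status = 1
  events <- data.frame(Day = rep(d$Day, d$Deaths), status = 1)
  f <- flexsurvreg(Surv(Day, status) ~ 1, data = events, dist = "gompertz")
  data.frame(Exp   = d$Exp[1],
             beta  = f$res["shape", "est"],  # rate of aging (shape a)
             alpha = f$res["rate",  "est"])  # initial mortality rate (rate b)
}
gompertz_estimates <- do.call(rbind, lapply(split(df2, df2$Exp), fit_gompertz))
gompertz_estimates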

Related

Unmarked colext function: detection probability = 1

I'm building a single-species dynamic occupancy model with the R package "unmarked", using an unmarkedMultFrame and the colext() function, for pika den occupancy across 4 years. However, I want to specify that detection probability (p) = 1 (perfect detection). I want to do this because the detection probability for known dens is near-perfect (many other studies make this assumption too). In theory this is just a simpler model, and it somewhat defeats the purpose of an occupancy model, but I'm using it because I need to estimate colonization and extinction probabilities.
Another specification is that all dens we are monitoring were initially occupied the first year we have data for them (so occupancy = 1 for all dens the first year and I am monitoring extinction and re-colonization rates after the first year).
Does anyone know how to specify in Unmarked that p = 1 when using the colext function? This function estimates detection probability, but I'm not sure what it is basing it off of, so I don't know how to either eliminate it from the function entirely or force it to be 1. Here is an example of my data and code:
library(unmarked)
dets1 <- as.matrix(dets1) # detections (179 total dens sampled once per year for 4 years; lots of NAs)
year <- factor(rep(c(2018, 2019, 2020, 2021), 179)) # the 4 years we surveyed
UMFdets <- unmarkedMultFrame(y = dets1, numPrimary = 4)
m4 <- colext(psiformula     = ~1,     # First-year occupancy
             gammaformula   = ~ year, # Colonization
             epsilonformula = ~ year, # Extinction
             pformula       = ~1,     # Detection
             data = UMFdets)
*simply removing "pformula" doesn't work.
Any ideas or knowledge about this would be much appreciated! Thank you!

Cross-correlation of autocorrelated variables with annual time step

I am trying to analyze cross-correlations between pairs from a set of many variables with an annual time step. Currently, I have difficulties dealing with auto-correlated variables...
For each variable, I first test whether it is non-stationary (with both the Augmented Dickey–Fuller test and the Kwiatkowski–Phillips–Schmidt–Shin test), then I test whether its first-order difference is auto-correlated (using the estimated autocorrelation function).
When variables are auto-correlated, I try to prewhiten them, using an ARIMA model and then run the cross-correlation analysis.
Here is an example with 3 variables with an annual time step (31 years of data):
Auto-correlated:
V1 <- c(524.800, 764.449, 557.564, 615.880, 1055.453, 1290.122, 1709.981, 2113.651, 2958.380, 2541.366, 2640.039, 2787.651, 3580.520, 3907.812, 4369.220, 5003.527, 3672.161, 4394.090, 4214.062, 3671.846, 3351.458, 3036.077, 2704.647, 1854.365, 2543.996, 2106.863, 1960.762, 2036.323, 1996.526, 1940.905, 2120.696)
Not auto-correlated:
V2 <- c(21939,20081,23702,12908,16825,23794,17748,20251,24024,19822,15272, 12258,22539,30856,24090,23512,12588,13367,11501,9099,11411,11832,11341,10899,10049,10114,10667,9902,10639,10702,10066)
Auto-correlated:
V3 <- c(3361213, 3493958, 5924759, 5283699, 9477393, 13528459, 17153250, 14509681, 27052581, 27653986, 25618494, 24642924, 31252566, 40446538, 33833710, 55843471, 31572981, 51685769, 48218752, 46947448, 52243131, 68991783, 66494239, 44108474, 67089912, 61832249, 65841550, 73992509, 62093282, 55447568, 71151920)
In short, what I want to see is: ccf(diff(V1),diff(V2)) and ccf(diff(V1),diff(V3))
So, here is what I did:
library(tseries)
adf.test(V1)  # p-value > 0.05: non-stationary #OK
kpss.test(V1) # p-value < 0.05: non-stationary #OK
the same for V2 and V3
autocorrelation function:
acf(diff(V1))
Basically, if the acf drops off sharply after lag 0, the differenced V1 is not auto-correlated. I have also inspected the output values directly, just to be safe.
the same for V2 and V3
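For instance, one way to inspect those values against the approximate 95% band that plot.acf draws (a small sketch, not part of the original analysis):
a <- acf(diff(V1), plot = FALSE)
data.frame(lag   = drop(a$lag),
           acf   = drop(a$acf),
           bound = qnorm(0.975) / sqrt(a$n.used))  # +/- this value is the usual 95% band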
So now comes my problem:
To get ccf(diff(V1),diff(V2)), knowing that diff(V1) is auto-correlated, I tried:
V1ts <- ts(V1,frequency=1) #convert to a time series
I suppose frequency = 1 because we only have one value per year.
V2ts <- ts(V2,frequency=1) #convert to a time series
library(fpp) #load forecasting package
mod1 <- auto.arima(V1ts)
mod2 <- auto.arima(V1ts,D=0)
mod3 <- auto.arima(V1ts,D=1)
I also tried forcing D to 0 or 1.
library(TSA)
print(prewhiten(V1ts, V2ts, mod1))
with mod2 or mod3 I get the same results...
When comparing the standard ccf without dealing with autocorrelation...
ccf1 <- ccf(diff(V1ts),diff(V2ts))
ccf1
I got exactly the same results...
In conclusion, I am surely doing something wrong because I get exactly the same result with or without prewhitening...
In a nutshell, can someone help me analyze the cross-correlation between an auto-correlated variable (V1) and one that is not auto-correlated (V2)? And when both variables are auto-correlated, as with V1 and V3, do I have to fit two ARIMA models and prewhiten both series simultaneously? Thank you a lot in advance, I am a bit confused here...
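For reference, one common manual prewhitening recipe, shown here only as a sketch with the forecast package (whether it is the right treatment for these series is exactly what is being asked): fit an ARIMA model to the differenced V1, filter both differenced series with that same model, and cross-correlate the residuals.
library(forecast)
d1 <- diff(V1)
d2 <- diff(V2)
fit1 <- auto.arima(d1, seasonal = FALSE)    # model for the "input" series
res1 <- residuals(fit1)                     # prewhitened V1
res2 <- residuals(Arima(d2, model = fit1))  # V2 filtered with the same model
ccf(res1, res2)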

Generate a crude incidence rate table (stratified by a factor variable) from a Lexis Model

I am using the 'Epi' package in R to model follow-up data from a study.
I am having no issues with declaring the Lexis model or running Poisson and (combined with the survival package) Cox regressions.
As part of the initial data review I want to find a simple way to make a table of crude unadjusted incidence/event rates from the data in a Lexis model in R (before fitting any Poisson/Cox models).
I have found a coded approach which allows me to do this and to stratify by a variable as part of exploratory data analysis:
#Generic syntax example
#Add up the number of events and the amount of follow-up time within each level of the stratifying variable
totals <- cbind(tapply(lexis_model$lex.Xst, lexis_model$stratifying_var, sum),
                tapply(lexis_model$lex.dur, lexis_model$stratifying_var, sum))
#Rates per 1,000 person-years
rates <- tapply(lexis_model$lex.Xst, lexis_model$stratifying_var, sum) /
         tapply(lexis_model$lex.dur, lexis_model$stratifying_var, sum) * 10^3
ratetable <- cbind(totals, rates)
#Specific example based on the dataset
totals <- cbind(tapply(lexis_model$lex.Xst, lexis_model$grade, sum),
                tapply(lexis_model$lex.dur, lexis_model$grade, sum))
rates <- tapply(lexis_model$lex.Xst, lexis_model$grade, sum) /
         tapply(lexis_model$lex.dur, lexis_model$grade, sum) * 10^3
ratetable <- cbind(totals, rates)
ratetable
                  rates
1 90 20338.234 4.4251630
2 64  7265.065 8.8092811
#Shows the number of events, years of follow-up, and events per 1,000 years of follow-up, stratified by the stratifying variable
Note these are crude unadjusted/absolute rates - not the output of a Poisson model. Whilst I appreciate that the code above does indeed produce the desired output (and is pretty straightforward), I wanted to see if people were aware of a command which can take a Lexis dataset and output this. I've had a look at the available commands in the Epi and epitools packages - I may have missed something but could not see an obvious way to do this.
As this is quite a common thing to want to do, I wondered if anyone was aware of a package/function that could do this by simply specifying the Lexis dataset and the stratification variable (or indeed a single function that does the steps above in one go).
Ideally the output would look something like the below (which is taken from STATA which I am trying to move away from in favour of R!):
A copy of the first twenty rows or so of the actual data is here (the data has already been put in to a lexis model using Epi package so all relevant lexis variables are there):
https://www.dropbox.com/s/yjyz1kzusysz941/rate_table_data.xlsx?dl=0
I would do this simply using the tidyverse R package as such:
library(tidyverse)
lexis_model %>%
  group_by(grade) %>%
  summarise(sum_Xst = sum(lex.Xst), sum_dur = sum(lex.dur)) %>%
  mutate(rate = sum_Xst / sum_dur * 10^3) -> rateable
rateable
# A tibble: 2 x 4
# grade sum_Xst sum_dur rate
# <dbl> <int> <dbl> <dbl>
# 1 1 2 375.24709 5.329821
# 2 2 0 92.44079 0.000000
And you could wrap this into a function yourself:
rateFunc <- function(data, strat_var)
{
  data %>%
    group_by_(strat_var) %>%
    summarise(sum_Xst = sum(lex.Xst), sum_dur = sum(lex.dur)) %>%
    mutate(rate = sum_Xst / sum_dur * 10^3)
}
which you would then call:
rateFunc(lexis_model, "grade")
This is useful because, using the combination of tidyverse summarise() and mutate(), it is very easy to add more summary statistics to the table.
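For example, a hedged sketch of one such extension: adding an approximate 95% confidence interval for each crude rate using the usual normal approximation on the log-rate scale (an illustration only, not the calculation popEpi uses below):
lexis_model %>%
  group_by(grade) %>%
  summarise(sum_Xst = sum(lex.Xst), sum_dur = sum(lex.dur)) %>%
  mutate(rate    = sum_Xst / sum_dur * 10^3,
         se_log  = 1 / sqrt(sum_Xst),          # SE of log(rate); infinite when there are no events
         rate_lo = rate * exp(-1.96 * se_log),
         rate_hi = rate * exp( 1.96 * se_log))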
EDIT:
After clarification on the question, this can be done using the popEpi package using the rate command:
popEpi::rate(lexis_model, obs = lex.Xst, pyrs = lex.dur, print = grade)
# Crude rates and 95% confidence intervals:
# grade lex.Xst lex.dur rate SE.rate rate.lo rate.hi
# 1: 1 2 375.2472 0.00532982 0.003768752 0.001332942 0.0213115
# 2: 2 0 92.4408 0.00000000 0.000000000 0.000000000 NaN

Confusing p values with ANOVA on a big dataframe

I am trying to analyse the significant differences between different car company performance values across different countries. I am using ANOVA to do this.
Running ANOVA on my real dataset (30 countries, 1000 car companies and 90000 measurement scores) gave every car a zero p-value.
Confused by this, I created a reproducible example (below) with 30 groups, 3 car companies, and 90000 random scores. On purpose, I kept a constant score of 1 for the Benz company, where you shouldn't see any difference between countries. After running the ANOVA, I see a p-value of 0.46 instead of 1.
Does anyone know why this is?
Reproducible example
set.seed(100000)
qqq <- 90000
countries <- c("usa","usb","usc","usd","use","usf","usg","ush","usi","usj",
               "usk","usl","usm","usn","uso","usp","usq","usr","uss","ust",
               "usu","usv","usw","usx","usy","usz","usaa","usab","usac","usad")
df = data.frame(id      = 1:qqq,
                country = rep(countries, each = 3000),
                tesla   = runif(qqq),
                bmw     = runif(qqq),
                benz    = rep(1, qqq))
str(df)
out<-data.frame()
for(j in 3:ncol(df)){
  amod2 <- aov(df[, j] ~ df$country)
  out[(j-2), 1] <- colnames(df)[j]
  out[(j-2), 2] <- summary(amod2, test = adjusted("bonferroni"))[[1]][[1, "Pr(>F)"]]
}
colnames(out)<-c("cars","pvalue")
write.table(out,"df.output")
df.output
"cars" "pvalue"
"1" "tesla" 0.245931589754359
"2" "bmw" 0.382730335188437
"3" "benz" 0.465083026215268
With respect to the "benz" p-value in your reproducible example: an ANOVA analysis requires positive variance (i.e., non-constant data). If you violate this assumption, the model is degenerate. Technically, the p-value is based on an F-statistic that is a normalized ratio of the variance attributable to the "country" effect (for "benz" in your example, zero) to the residual variance (for "benz" in your example, also zero), so your F-statistic has "value" 0/0, or NaN.
Because of the approach R takes to calculating the F-statistic (using a QR matrix decomposition to improve numerical stability in "nearly" degenerate cases), it calculates an F-statistic equal to 1 (with 29 and 89970 degrees of freedom). This gives a p-value of:
> pf(1, 29, 89970, lower=FALSE)
[1] 0.465083
>
but it is, of course, largely meaningless.
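As a quick check (run before the modification below), you can pull the reported F statistic for the constant column directly:
summary(aov(benz ~ country, data = df))[[1]][1, "F value"]  # reported as 1, on 29 and 89970 df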
With respect to your original problem, with large datasets relatively small effects will yield very small p-values. For example, if you add the following after your df definition above to introduce a difference in country usa:
df = within(df, {
  o = country == "usa"
  tesla[o] = tesla[o] + .1
  bmw[o]   = bmw[o] + .1
  benz[o]  = benz[o] + .1
  rm(o)
})
you will find that out looks like this:
> out
cars pvalue
1 tesla 9.922166e-74
2 bmw 5.143542e-74
3 benz 0.000000e+00
>
Is this what you're seeing, or are you seeing all of them exactly zero?

Manually conduct leave-one-out cross validation for a GLMM using a for() loop in R

I am trying to build a for() loop to manually conduct leave-one-out cross-validation for a GLMM fit using the lmer() function from the lme4 package. I need to remove an individual, fit the model, use the beta coefficients to predict a response for the individual that was withheld, and repeat the process for all individuals.
I have created some test data to tackle the first step of simply leaving an individual out, fitting the model and repeating for all individuals in a for() loop.
The data have a binary (0,1) Response, an IndID that classifies 4 individuals, a Time variable, and a Binary variable. There are N=100 observations. The IndID is fit as a random effect.
require(lme4)
#Make data
Response <- round(runif(100, 0, 1))
IndID <- as.character(rep(c("AAA", "BBB", "CCC", "DDD"),25))
Time <- round(runif(100, 2,50))
Binary <- round(runif(100, 0, 1))
#Make data.frame
Data <- data.frame(Response, IndID, Time, Binary)
Data <- Data[with(Data, order(IndID)), ] # Edit: added code to sort by IndID
#Look at head()
head(Data)
Response IndID Time Binary
1 0 AAA 31 1
2 1 BBB 34 1
3 1 CCC 6 1
4 0 DDD 48 1
5 1 AAA 36 1
6 0 BBB 46 1
#Build model with all IndID's
fit <- lmer(Response ~ Time + Binary + (1|IndID), data = Data,
            family = binomial)
summary(fit)
As stated above, my hope is to get four model fits – one with each IndID left out in a for() loop. This is a new type of application of the for() command for me, and I quickly reached the limits of my coding abilities. My attempt is below.
fit <- list()
for (i in Data$IndID){
  fit[[i]] <- lmer(Response ~ Time + Binary + (1|IndID), data = Data[-i],
                   family = binomial)
}
I am not sure storing the model fits as a list is the best option, but I had seen it on a few other help pages. The above attempt results in the error:
Error in -i : invalid argument to unary operator
If I remove the [-i] subset from the data = Data argument, the code runs four fits, but the data for each individual are not removed.
Just as an FYI, I will need to further expand the loop to:
1) extract the beta coefficients, 2) apply them to the X matrix of the individual that was withheld and, lastly, 3) compare the predicted values (after a logit transformation) to the observed values. As all steps are needed for each IndID, I hope to build them into the loop. I am providing the extra details in case my planned future steps inform the more immediate question of leave-one-out model fits.
Thanks as always!
The problem you are having is because Data[-i] is expecting i to be an integer index. Instead, i is either AAA, BBB, CCC or DDD. To fix the loop, set
data = Data[Data$IndID != i, ]
in your model fit.
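Building on that fix, here is a minimal sketch of the fuller loop described in the question (extract the fixed-effect coefficients, predict for the withheld individual, and compare to the observed values). It loops over unique(Data$IndID) so each individual is left out exactly once, and it uses glmer(), since current lme4 fits binomial GLMMs with glmer() rather than lmer(..., family = binomial); the object names are only illustrative.
library(lme4)
ids   <- unique(as.character(Data$IndID))
fits  <- list()
preds <- list()
for (i in ids) {
  train <- Data[Data$IndID != i, ]
  test  <- Data[Data$IndID == i, ]
  fits[[i]] <- glmer(Response ~ Time + Binary + (1 | IndID),
                     data = train, family = binomial)
  ## fixed effects only: the withheld individual's random effect is unknown
  X   <- model.matrix(~ Time + Binary, data = test)
  eta <- drop(X %*% fixef(fits[[i]]))
  preds[[i]] <- data.frame(IndID     = i,
                           observed  = test$Response,
                           predicted = plogis(eta))  # inverse-logit of the linear predictor
}
pred_df <- do.call(rbind, preds)
head(pred_df)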
