Can I do Fine-Gray regression on a split survival dataset?

This is my first question here, so if I need to share more information please let me know.
I have done a Cox regression analysis in R in which I am interested in the effect of implant surface on reoperation over 36 months. Here's a reproducible example:
library(survival)
n <- 100
df <- data.frame(id = 1:n,
                 time = sample(1:36, n, replace = TRUE),
                 event = sample(0:2, n, replace = TRUE),  # 0 = censored, 1 = reoperation, 2 = competing event
                 implantsurface = sample(0:1, n, replace = TRUE),
                 covariate1 = sample(0:1, n, replace = TRUE),
                 covariate2 = sample(0:1, n, replace = TRUE))
df$time <- as.numeric(df$time)
I adjusted for a number of covariates, and testing showed that the proportional hazards (PH) assumption was violated for covariate1. I therefore split my dataset into 0-4 months and 4-36 months as follows (simplified code), so that the PH assumption was no longer violated:
fit1 <- survSplit(Surv(time, event == 1) ~
                    implantsurface + covariate1 + covariate2,
                  data = df, cut = c(4),
                  episode = "tgroup")
fit2 <- coxph(Surv(tstart, time, event) ~
                implantsurface + strata(tgroup):covariate1 + covariate2,
              data = fit1)
Now I would also like to adjust for competing risks with Fine-Gray regression, but I am unable to do this for the split dataset. I have tried the following:
FG <- finegray(Surv(time = time, event = event.competing, type = "mstate") ~
                 implantsurface + strata(tgroup):covariate1 + covariate2,
               data = fit1, etype = "event_of_interest")
FGfit <- coxph(Surv(fgstart, fgstop, fgstatus) ~
                 implantsurface + strata(tgroup):covariate1 + covariate2,
               weights = fgwt, data = FG)
Error in strata(tgroup) : object 'tgroup' not found
Does anyone know how/if Fine-Gray can be applied to a split survival dataset? Many thanks in advance for thinking along!
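One workaround, sketched here under assumptions rather than as a verified answer: reverse the order of operations, so that finegray() builds the weighted dataset from the unsplit multistate data first, and survSplit() then cuts the weighted (start, stop] records at 4 months. Note that fit1 was built from event == 1, so the competing-event information is no longer present in the split data; the multistate coding has to happen before finegray(). The sketch assumes that survSplit() in your version of survival accepts counting-process data, and the event labels are placeholders for your own coding:
library(survival)
# Code the event as a factor so finegray() recognises the competing states
# (assumed coding: 0 = censored, 1 = reoperation, 2 = competing event)
df$event <- factor(df$event, levels = 0:2,
                   labels = c("censor", "reoperation", "competing"))
# 1. Fine-Gray weighted dataset from the *unsplit* data
FG <- finegray(Surv(time, event) ~ implantsurface + covariate1 + covariate2,
               data = df, etype = "reoperation")
# 2. Split the weighted records at 4 months; tgroup is created here,
#    so strata(tgroup) exists in the data that coxph() actually sees
FGsplit <- survSplit(Surv(fgstart, fgstop, fgstatus) ~ .,
                     data = FG, cut = 4, episode = "tgroup",
                     start = "fgstart", end = "fgstop", event = "fgstatus")
# 3. Weighted Cox fit on the split Fine-Gray data
FGfit <- coxph(Surv(fgstart, fgstop, fgstatus) ~
                 implantsurface + strata(tgroup):covariate1 + covariate2,
               weights = fgwt, data = FGsplit)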

Related

How to loop over columns to evaluate different fixed effects in consecutive lme4 mixed models and extract the coefficients and P values?

I am new to R and am trying to loop a mixed model across 90 columns in a dataset.
My dataset looks like the following one, but has 90 predictors instead of 7 that I need to evaluate as fixed effects in consecutive models.
I then need to store the model output (coefficients and P values) to finally construct a figure summarizing the effect sizes of each predictor. I am aware of the debate around P value estimates from lme4 mixed models.
For example:
library(tidyverse)  # for tibble() and %>%
set.seed(101)
mydata <- tibble(id = rep(1:32, times = 25),
                 time = sample(1:800),
                 experiment = rep(1:4, times = 200),
                 Y = sample(1:800),
                 predictor_1 = runif(800),
                 predictor_2 = rnorm(800),
                 predictor_3 = sample(1:800),
                 predictor_4 = sample(1:800),
                 predictor_5 = seq(1:800),
                 predictor_6 = sample(1:800),
                 predictor_7 = runif(800)) %>%
  arrange(id, time)
The model to iterate across the N predictors is:
library(lme4)
library(lmerTest) # to obtain p-values for lmer models
mixed.model <- lmer(Y ~ predictor_1 + time + (1|id) + (1|experiment), data = mydata)
summary(mixed.model)
My coding skills fall far short of writing a loop that repeats the model across the N predictors in my dataset and stores the coefficients and P values in a dataframe.
I have managed to iterate across all the predictors fitting linear models instead of mixed models using lapply, but I have failed to apply this strategy to mixed models.
varlist <- names(mydata)[5:11]
lm_models <- lapply(varlist, function(x) {
  lm(substitute(Y ~ i, list(i = as.name(x))), data = mydata)
})
One option is to update the formula of a restricted model (without the predictor) in an lapply loop over the predictors, then summarize the resulting list and subset each coefficient matrix using a Vectorized extractor function.
library(lmerTest)
# restricted model without any predictor term
mixed.model <- lmer(Y ~ time + (1|id) + (1|experiment), data = mydata)
preds <- grep('pred', names(mydata), value = TRUE)
# refit the restricted model once per predictor
fits <- lapply(preds, \(x) update(mixed.model, paste('. ~ . + ', x)))
# pull the estimate and p-value of the added term (row 3 of the coefficient matrix)
extract_coef_p <- Vectorize(\(x) x |> summary() |> coef() |> {\(.) .[3, c(1, 5)]}())
res <- `rownames<-`(t(extract_coef_p(fits)), preds)
res
# Estimate Pr(>|t|)
# predictor_1 -7.177579138 0.8002737
# predictor_2 -5.010342111 0.5377551
# predictor_3 -0.013030513 0.7126500
# predictor_4 -0.041702039 0.2383835
# predictor_5 -0.001437124 0.9676346
# predictor_6 0.005259293 0.8818644
# predictor_7 31.304496255 0.2511275
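To get from res to the figure mentioned in the question, a minimal sketch (the plot layout is an assumption, not part of the original answer):
library(ggplot2)
res_df <- data.frame(predictor = rownames(res),
                     estimate = res[, "Estimate"],
                     p.value = res[, "Pr(>|t|)"])
# dot plot of the fixed-effect estimates, one row per predictor
ggplot(res_df, aes(x = estimate, y = reorder(predictor, estimate))) +
  geom_point() +
  labs(x = "Fixed-effect estimate", y = NULL)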

Non-parametric bootstrapping to generate 95% Confidence Intervals for fixed effect coefficients calculated by a glmer with nested random effects

I have an R coding question.
This is my first time asking a question here, so apologies if I am unclear or do something wrong.
I am trying to use a Generalized Linear Mixed Model (GLMM) with Poisson error family to test for any significant effect on a count response variable by three separate dichotomous variables (AGE = ADULT or JUVENILE, SEX = MALE or FEMALE and MEDICATION = NEW or OLD) and an interaction between AGE and MEDICATION (AGE:MEDICATION).
There is some dependency in my data: it was collected from a total of 22 different sites (coded as a SITE vector with 33 distinct levels) over a total of 21 different years (coded as a YEAR vector with 21 distinct levels, treated as a categorical variable). Unfortunately, not every SITE was sampled in every YEAR, with some sampled in more years than others.
The data is also quite sparse, in that I do not have many measurements of the response variable (coded as COUNT, an integer vector) per SITE per YEAR.
My Poisson GLMM is constructed using the following code:
model <- glmer(data = mydata,
               family = poisson(link = "log"),
               formula = COUNT ~ SEX + SEX:MEDICATION + AGE + AGE:SEX + MEDICATION + AGE:MEDICATION + (1|SITE/YEAR),
               offset = log(COUNT.SAMPLE.SIZE),
               nAGQ = 0)
In order to try and obtain more reliable estimates for the fixed effect coefficients (particularly given the sparse nature of my data), I am trying to obtain 95% confidence intervals for the fixed effect coefficients through non-parametric bootstrapping.
I have come across the "glmmboot" package, which can be used to conduct non-parametric bootstrapping of GLMMs. However, when I run the following code:
library(glmmboot)
bootstrap_model(base_model = model,
                base_data = mydata,
                resamples = 1000)
I receive the following message:
Performing case resampling (no random effects)
Naturally, though, my model does have random effects, namely (1|SITE/YEAR).
If I try to tell the function to resample from a specific block by adding the "resample_specific_blocks" argument, i.e.:
library(glmmboot)
bootstrap_model(base_model = model,
                base_data = mydata,
                resamples = 1000,
                resample_specific_blocks = "YEAR")
Then I get the following error message:
Performing block resampling, over SITE
Error: Invalid grouping factor specification, YEAR:SITE
I get a similar error message if I try to set 'resample_specific_blocks' to "SITE".
If I instead set 'resample_specific_blocks' to "SITE:YEAR" or "SITE/YEAR", I get the following error message:
Error in bootstrap_model(base_model = model, base_data = mydata, resamples = 1000, :
No random columns from formula found in resample_specific_blocks
I have tried explicitly nesting YEAR within SITE and then adapting the model accordingly using the code:
mydata <- within(mydata, SAMPLE <- factor(SITE:YEAR))
model.refit <- glmer(data = mydata,
                     family = poisson(link = "log"),
                     formula = COUNT ~ SEX + AGE + MEDICATION + AGE:MEDICATION + (1|SITE) + (1|SAMPLE),
                     offset = log(COUNT.SAMPLE.SIZE),
                     nAGQ = 0)
bootstrap_model(base_model = model.refit,
                base_data = mydata,
                resamples = 1000,
                resample_specific_blocks = "SAMPLE")
But unfortunately I just get this error message:
Error: Invalid grouping factor specification, SITE
The same error message comes up if I set the resample_specific_blocks argument to "SITE", or if I simply remove the resample_specific_blocks argument.
I believe the case_bootstrap() function in the lmeresampler package could be another option, but when I look at its help page it appears I would need to write my own function, and I unfortunately have no experience with writing functions in R.
If anyone has any suggestions on how to get the bootstrap_model() function in the glmmboot package to recognise the random effects in my model/dataframe, or suggestions for alternative methods of conducting non-parametric bootstrapping to create 95% confidence intervals for the fixed-effect coefficients in my model, it would be greatly appreciated! Many thanks in advance, and for reading such a lengthy question!
For reference, I include links to the RDocumentation and GitHub for the glmmboot package:
https://www.rdocumentation.org/packages/glmmboot/versions/0.6.0
https://github.com/ColmanHumphrey/glmmboot
The following code creates a reproducible example using the lme4::grouseticks data set:
#Load in required packages
library(tidyverse)
library(lme4)
library(glmmboot)
library(psych)
#Load in the grouseticks dataframe
data("grouseticks")
tibble(grouseticks)
#Create dummy vectors for SEX, AGE and MEDICATION
set.seed(1)
SEX <-sample(1:2, size = 403, replace = TRUE)
SEX <- as.factor(ifelse(SEX == 1, "MALE", "FEMALE"))
set.seed(2)
AGE <- sample(1:2, size = 403, replace = TRUE)
AGE <- as.factor(ifelse(AGE == 1, "ADULT", "JUVENILE"))
set.seed(3)
MEDICATION <- sample(1:2, size = 403, replace = TRUE)
MEDICATION <- as.factor(ifelse(MEDICATION == 1, "OLD", "NEW"))
grouseticks$SEX <- SEX
grouseticks$AGE <- AGE
grouseticks$MEDICATION <- MEDICATION
#Use the INDEX vector to create a vector of sample sizes per LOCATION
#per YEAR
grouseticks$INDEX <- 1
sample.sizes <- grouseticks %>%
  group_by(LOCATION, YEAR) %>%
  summarise(SAMPLE.SIZE = sum(INDEX))
#Combine the dataframes together into the dataframe to be used in the
#model
mydata <- left_join(grouseticks, sample.sizes, by = c("LOCATION", "YEAR"))
mydata$SAMPLE.SIZE <- as.integer(mydata$SAMPLE.SIZE)
#Create the Poisson GLMM model
model <- glmer(data = mydata,
               family = poisson(link = "log"),
               formula = TICKS ~ SEX + AGE + MEDICATION + AGE:MEDICATION + (1|LOCATION/YEAR),
               offset = log(SAMPLE.SIZE),
               nAGQ = 0)
#Attempt non-parametric bootstrapping on the model to get 95%
#confidence intervals for the coefficients of the fixed effects
set.seed(1)
Model.bootstrap <- bootstrap_model(base_model = model,
                                   base_data = mydata,
                                   resamples = 1000)
Model.bootstrap
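As for the lmeresampler route mentioned in the question: a minimal sketch, assuming lmeresampler's bootstrap() interface. The .f argument is the "function" its help page asks for, and .f = fixef simply extracts the fixed-effect coefficients, so nothing elaborate is needed:
library(lmeresampler)
# Case (non-parametric) bootstrap: resample the top-level groups (SITE, or
# LOCATION in this reproducible example) with replacement, keeping all
# observations within each resampled group intact; resample = c(TRUE, FALSE)
# follows the nesting order of the random effects
boot_fit <- bootstrap(model, .f = fixef, type = "case",
                      B = 1000, resample = c(TRUE, FALSE))
# Percentile 95% confidence intervals for the fixed-effect coefficients
confint(boot_fit, type = "perc")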

Permutation with random sampling (with replacement) in linear regression in R

I want to permute a linear regression (with random sampling with replacement, so as not to lose power).
I know how to randomly sample my dataset:
sampled_random <- df[sample(nrow(df), replace = TRUE),]
My regression is like this:
reg <- lm(DV ~ Iv1 + IV2 + IV3, data = df)
Is there a nice built-in function that I have overlooked which repeats this regression x times, each time with a different sampled_random? As output I want the average p-values and the other averaged quantities that summary(reg) gives.
I am not experienced enough to write my own function that does all of this. Is there an R package that does it? Or, better, can you recommend a good (handy) one?
You can write your own code:
res <- lapply(1:100, function(i) {
  sampled_random <- df[sample(nrow(df), replace = TRUE), ]
  reg <- lm(DV ~ Iv1 + IV2 + IV3, data = sampled_random)
  s <- summary(reg)
  # keep each resample's p-values and R-squared (the quantities asked for)
  c(s$coefficients[, "Pr(>|t|)"], r.squared = s$r.squared)
})
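To get the averages asked for, stack the per-resample vectors and take column means (a small addition, not part of the original answer):
colMeans(do.call(rbind, res))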

Plotting estimated HR from coxph object with time-dependent coefficient and splines

I want to plot the estimated hazard ratio as a function of time in the case of a coxph model with a time-dependent coefficient that is based on a spline term. I created the time-dependent coefficient using function tt, analogous to this example that comes straight from ?coxph:
# Fit a time transform model using current age
cox = coxph(Surv(time, status) ~ ph.ecog + tt(age), data = lung,
            tt = function(x, t, ...) pspline(x + t/365.25))
Calling survfit(cox) results in an error that survfit does not understand models with a tt term (as described in 2011 by Terry Therneau).
You can extract the linear predictor using cox$linear.predictors, but I would need to somehow extract ages and less trivially, times to go with each. Because tt splits the dataset on event times, I can't just match up the columns of the input dataframe with the coxph output. Additionally, I really would like to plot the estimated function itself, not just the predictions for the observed data points.
There is a related question involving splines here, but it does not involve tt.
Edit (7/7)
I'm still stuck on this. I've been looking in depth at this object:
spline.obj = pspline(lung$age)
str(spline.obj)
# something that looks very useful, but I am not sure what it is
# cbase appears to be the cardinal knots
attr(spline.obj, "printfun")
function (coef, var, var2, df, history, cbase = c(43.3, 47.6,
51.9, 56.2, 60.5, 64.8, 69.1, 73.4, 77.7, 82, 86.3, 90.6))
{
test1 <- coxph.wtest(var, coef)$test
xmat <- cbind(1, cbase)
xsig <- coxph.wtest(var, xmat)$solve
cmat <- coxph.wtest(t(xmat) %*% xsig, t(xsig))$solve[2, ]
linear <- sum(cmat * coef)
lvar1 <- c(cmat %*% var %*% cmat)
lvar2 <- c(cmat %*% var2 %*% cmat)
test2 <- linear^2/lvar1
cmat <- rbind(c(linear, sqrt(lvar1), sqrt(lvar2), test2,
1, 1 - pchisq(test2, 1)), c(NA, NA, NA, test1 - test2,
df - 1, 1 - pchisq(test1 - test2, max(0.5, df - 1))))
dimnames(cmat) <- list(c("linear", "nonlin"), NULL)
nn <- nrow(history$thetas)
if (length(nn))
theta <- history$thetas[nn, 1]
else theta <- history$theta
list(coef = cmat, history = paste("Theta=", format(theta)))
}
So, I have the knots, but I am still not sure how to combine the coxph coefficients with the knots in order to actually plot the function. Any leads much appreciated.
I think what you need can be obtained by generating an input matrix with pspline() and matrix-multiplying it by the relevant coefficients from the coxph output. To get the HR, you then take the exponent, i.e.:
output <- data.frame(Age = seq(min(lung$age) + min(lung$time) / 365.25,
                               max(lung$age + lung$time / 365.25),
                               0.01))
output$HR <- exp(pspline(output$Age) %*% cox$coefficients[-1] -
                   sum(cox$means[-1] * cox$coefficients[-1]))
library("ggplot2")
ggplot(output, aes(x = Age, y = HR)) + geom_line()
Note that the age here is the age at the time of interest (i.e. the sum of the baseline age and the elapsed time since study entry). The sequence has to use the range above so that the spline basis matches the parameters of the original model. The same values can also be calculated from the x matrix returned when fitting with x = TRUE, as shown:
cox <- coxph(Surv(time, status) ~ ph.ecog + tt(age), data = lung,
             tt = function(x, t, ...) pspline(x + t/365.25), x = TRUE)
index <- as.numeric(unlist(lapply(strsplit(rownames(cox$x), "\\."), "[", 1)))
ages <- lung$age[index]
output2 <- data.frame(Age = ages + cox$y[, 1] / 365.25,
                      HR = exp(cox$x[, -1] %*% cox$coefficients[-1] -
                                 sum(cox$means[-1] * cox$coefficients[-1])))
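One possible way to overlay the per-observation values from output2 on the fitted curve (an addition, not part of the original answer):
ggplot(output, aes(x = Age, y = HR)) +
  geom_line() +
  geom_point(data = output2, alpha = 0.3)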

Example of Time Series Prediction using Neural Networks in R

Does anyone have a quick, short, educational example of how to use neural networks (nnet in R) for prediction?
Here is an example, in R, of a time series
T = seq(0,20,length=200)
Y = 1 + 3*cos(4*T+2) +.2*T^2 + rnorm(200)
plot(T,Y,type="l")
Many thanks
David
I think you can use the caret package, and especially the train function:
This function sets up a grid of tuning parameters for a number of classification and regression routines.
require(quantmod)
require(nnet)
require(caret)
T = seq(0,20,length=200)
y = 1 + 3*cos(4*T+2) +.2*T^2 + rnorm(200)
dat <- data.frame( y, x1=Lag(y,1), x2=Lag(y,2))
names(dat) <- c('y','x1','x2')
dat <- dat[c(3:200),] #delete first 2 observations
#Fit model
model <- train(y ~ x1 + x2,
               dat,
               method = 'nnet',
               linout = TRUE,
               trace = FALSE)
ps <- predict(model, dat)
#Examine results
plot(T, y, type = "l", col = 2)
lines(T[-c(1:2)], ps, col = 3)
legend(5, 70, c("y", "pred"), cex = 1.5, fill = 2:3)
The solution proposed by @agstudy is useful, but in-sample fits are not a reliable guide to out-of-sample forecasting accuracy. The gold standard in forecasting accuracy measurement is to use a holdout sample: remove the last 5, 10 or 20 observations (depending on the length of the time series) from the training sample, fit your models to the rest of the data, use the fitted models to forecast the holdout sample, and simply compare accuracies on the holdout using Mean Absolute Deviations (MAD) or weighted Mean Absolute Percentage Errors (wMAPE).
So to do this you can change the code above in this way:
require(quantmod)
require(nnet)
require(caret)
t = seq(0,20,length=200)
y = 1 + 3*cos(4*t+2) +.2*t^2 + rnorm(200)
dat <- data.frame( y, x1=Lag(y,1), x2=Lag(y,2))
names(dat) <- c('y','x1','x2')
train_set <- dat[c(3:185),]
test_set <- dat[c(186:200),]
#Fit model
model <- train(y ~ x1 + x2,
               train_set,
               method = 'nnet',
               linout = TRUE,
               trace = FALSE)
ps <- predict(model, test_set)
#Examine results
plot(t, y, type = "l", col = 2)
lines(t[c(186:200)], ps, col = 3)
legend(5, 70, c("y", "pred"), cex = 1.5, fill = 2:3)
The following line outputs the wMAPE of the forecasts from the model:
sum(abs(ps - test_set$y)) / sum(test_set$y)
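For the MAD mentioned above, the analogous one-liner (an addition, not part of the original answer):
mean(abs(ps - test_set$y))  # Mean Absolute Deviation on the holdout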
