Making sense of gbm survival prediction model - r

I am a newbie at using and making sense of ML methods and am currently doing survival analysis with the gbm package in R.
I have difficulty understanding some of the output of the survival prediction model. I have checked this tutorial and this post but still have trouble making sense of the outputted survival prediction model.
Here is my code for analysis based on example data:
rm(list=ls(all=TRUE))
library(randomForestSRC)
library(gbm)
library(survival)
library(Hmisc)
data(pbc, package="randomForestSRC")
data <- na.omit(pbc)
set.seed(9512)
train <- sample(1:nrow(data), round(nrow(data)*0.7))
data.train <- data[train, ]
data.test <- data[-train, ]
set.seed(9741)
model <- gbm(Surv(days, status) ~ .,
             data = data.train,
             interaction.depth = 2,
             shrinkage = 0.01,
             n.trees = 500,
             distribution = "coxph",
             cv.folds = 5)
summary(model)
best.iter <- gbm.perf(model, plot.it = TRUE, method = 'cv',
overlay = TRUE) #to get the optimal number of Boosting iterations
best.iter
# Use the optimal number of trees to produce predicted values for each observation in newdata.
# predict returns a vector of predictions on the log hazard scale, i.e. f(x).
# By default the predictions are on the log hazard scale for distribution = "coxph":
# the proportional hazards model assumes h(t|x) = lambda(t)*exp(f(x)),
# so predict estimates the f(x) component of the hazard function.
pred.train <- predict(object=model, newdata=data.train, n.trees = best.iter)
pred.test <- predict(object=model, newdata=data.test, n.trees = best.iter)
# training set
Hmisc::rcorr.cens(-pred.train, Surv(data.train$days, data.train$status))
# test set
Hmisc::rcorr.cens(-pred.test, Surv(data.test$days, data.test$status))
# Estimate the cumulative baseline hazard function using training data
basehaz.cum <- basehaz.gbm(t = data.train$days,       # the survival times
                           delta = data.train$status, # the censoring indicator
                           f.x = pred.train,          # predicted values on the log hazard scale
                           t.eval = data.train$days,  # times at which the baseline hazard is evaluated
                           cumulative = TRUE,         # compute the cumulative baseline hazard
                           smooth = FALSE)            # no smoothing with Friedman's super smoother supsmu
basehaz.cum
# Estimated survival probability of each training observation at its own survival time:
surv.rate <- exp(-exp(pred.train)*basehaz.cum)
surv.rate
res_train <- data.train
# predicted outcome for train set
res_train$pred <- pred.train
res_train$survival_rate <- surv.rate
res_train
# Estimate the cumulative baseline hazard function using test data
basehaz.cum <- basehaz.gbm(t = data.test$days,        # the survival times
                           delta = data.test$status,  # the censoring indicator
                           f.x = pred.test,           # predicted values on the log hazard scale
                           t.eval = data.test$days,   # times at which the baseline hazard is evaluated
                           cumulative = TRUE,         # compute the cumulative baseline hazard
                           smooth = FALSE)            # no smoothing with Friedman's super smoother supsmu
basehaz.cum
# Estimated survival probability of each test observation at its own survival time:
surv.rate <- exp(-exp(pred.test)*basehaz.cum)
surv.rate
res_test <- data.test
# predicted outcome for test set
res_test$pred <- pred.test
res_test$survival_rate <- surv.rate
res_test
#--------------------------------------------------
#Estimate survival rate at time of interest
# Specify time of interest
time.interest <- sort(unique(data.train$days[data.train$status==1]))
# Estimate the cumulative baseline hazard function using training data
basehaz.cum <- basehaz.gbm(t = data.train$days,       # the survival times
                           delta = data.train$status, # the censoring indicator
                           f.x = pred.train,          # predicted values on the log hazard scale
                           t.eval = time.interest,    # times at which the baseline hazard is evaluated
                           cumulative = TRUE,         # compute the cumulative baseline hazard
                           smooth = FALSE)            # no smoothing with Friedman's super smoother supsmu
# For individual i in the test set, the estimated survival function is:
surv.i <- exp(-exp(pred.test[1])*basehaz.cum) # survival curve for the first test observation
# Estimated survival probability of all test observations at a specified time:
specif.time <- time.interest[10]
surv.rate <- exp(-exp(pred.test)*basehaz.cum[10])
cat("Survival Rate of all at time", specif.time, "\n")
print(surv.rate)
The output returned from the predict function represents the f(x) component of the hazard function (h(t|x) = lambda(t)*exp(f(x))).
My questions:
• I am a bit confused: can hazard ratios be calculated here?
• How can I divide the population into low-risk and high-risk groups? Can I rely on the estimated f(x) component of the hazard function as a risk score for the training set? My aim is a scoring system from which I can show KM plots for low- and high-risk groups in the training and test sets.
• How can I construct calibration curves where I plot observed survival vs. predicted survival for the training and test sets?

Amer, thanks for reading my tutorial!
As you mentioned, "The output returned from the predict function represents the f(x) component of the hazard function (h(t|x) = lambda(t)*exp(f(x)))", so perhaps we first need to understand the hazard function h(t|x).
Before that, please make sure you have a basic knowledge of survival analysis; if not, I recommend reading this great post. I think it will help you work through the questions.
Back to your questions:
Exactly, we can get the hazard ratio on the log scale by invoking the predict function, so the hazard ratio can be calculated with exp().
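For illustration, a minimal sketch of this, reusing pred.train and pred.test from your code (the comparison of the first two test observations is just an example):
# hazard ratio of each observation relative to the baseline hazard (f(x) = 0)
hr.train <- exp(pred.train)
hr.test <- exp(pred.test)
# the hazard ratio between two individuals i and j is exp(f(x_i) - f(x_j)),
# e.g. between the first two test observations:
exp(pred.test[1] - pred.test[2])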
Sure! Relying on the values of the hazard ratio (or equivalently on the predicted f(x)), we can divide the population into low-risk and high-risk groups; for example, you can use the median of the hazard ratios as the cutoff value. The cutoff value should be derived from the training set and then applied to the test set. If your model is effective, the KM plots for the low- and high-risk groups will show a significant difference (measured statistically by the log-rank test), as in the sketch below.
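A minimal sketch of this risk stratification, assuming the objects from your code (pred.train, pred.test, data.train, data.test) are still available; the colours and plotting details are just illustrative:
library(survival)
# cutoff derived from the training set (median of the predicted log hazard f(x))
cutoff <- median(pred.train)
data.train$risk <- ifelse(pred.train > cutoff, "high", "low")
data.test$risk <- ifelse(pred.test > cutoff, "high", "low")
# KM curves and log-rank test for the training set
km.train <- survfit(Surv(days, status) ~ risk, data = data.train)
plot(km.train, col = c("red", "blue"), xlab = "Days", ylab = "Survival probability")
legend("topright", legend = c("high risk", "low risk"), col = c("red", "blue"), lty = 1)
survdiff(Surv(days, status) ~ risk, data = data.train)
# same for the test set
km.test <- survfit(Surv(days, status) ~ risk, data = data.test)
plot(km.test, col = c("red", "blue"), xlab = "Days", ylab = "Survival probability")
survdiff(Surv(days, status) ~ risk, data = data.test)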
Calibration curve plots are often used to evaluate models that output probabilities or likelihoods in [0.0, 1.0]. We can calculate the survival function, then specify a time point of interest, e.g. 5 years, and finally compare the predicted survival probabilities with the actual survival status at that time, just as we would when evaluating a binary classification model. More details on obtaining the survival function are in my tutorial, and the principles can be found in the post mentioned above.
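A rough sketch of such a calibration plot, again assuming the objects from your code are available; the evaluation time and the grouping into deciles are illustrative assumptions, and ties in the predicted probabilities may require adjusting the breaks:
library(survival)
# baseline cumulative hazard from the training data, evaluated at a chosen time point
t.star <- 1825 # assumed time of interest, roughly 5 years
H0 <- basehaz.gbm(t = data.train$days, delta = data.train$status,
                  f.x = pred.train, t.eval = t.star,
                  cumulative = TRUE, smooth = FALSE)
# predicted survival probability at t.star for each test observation
pred.surv <- exp(-exp(pred.test) * H0)
# group by deciles of predicted survival and compare with the Kaplan-Meier
# estimate of observed survival at t.star within each group
decile <- cut(pred.surv, breaks = quantile(pred.surv, probs = seq(0, 1, 0.1)),
              include.lowest = TRUE, labels = FALSE)
observed <- sapply(split(data.test, decile), function(d) {
  km <- survfit(Surv(days, status) ~ 1, data = d)
  summary(km, times = t.star, extend = TRUE)$surv
})
predicted <- tapply(pred.surv, decile, mean)
plot(predicted, observed, xlim = c(0, 1), ylim = c(0, 1),
     xlab = "Predicted survival at t.star", ylab = "Observed (KM) survival at t.star")
abline(0, 1, lty = 2) # perfect calibration line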

Related

Is there an R function to obtain the minimal depth distribution from a conditional random forest estimated with the party package?

I ran a conditional random forest regression model using the cforest function from the party package because I have both categorical and continuous predictor variables that are correlated with each other, and a continuous outcome variable.
Here is my code to run the conditional random forest model, obtain out-of-bag estimates, and estimate the permutation variable importance.
# 1. fit the random forest
crf <- party::cforest(Y ~ ., data = df,
                      controls = party::cforest_unbiased(ntree = 10000, mtry = 7))
# 2. obtain out-of-bag estimates
pred_oob <- as.data.frame(predict(crf, OOB = TRUE, newdata = NULL))
# 3. estimate permutation variable importance
vi <- permimp::permimp(crf, condition = TRUE, threshold = 0.5, nperm = 1000, OOB = TRUE,
                       mincriterion = 0)
I would like to visualize the minimal depth distribution and calculate mean minimal depth similar to the output from the RandomForestExplainer package. However, the RandomForestExplainer package only takes in objects from the randomForest function in the randomForest package. It's not an option for me to use this function due to the nature of my data (described above).
I have been combing the internet and have not been able to find a solution. Can someone point me to a way to visualize the minimal depth distribution for all predictors and calculate the mean minimal depth?

How to predict survival probabilities of cox model in R using the estimated coefficients and baseline hazard

I built a coxph model with time varying covariates
fit <- coxph(Surv(time_mnth_1, time_mnth_2, default) ~ cust_score + bur_score + dep_score +
               MOB + Real_GDP_growth + Real_disposable_income_growth + Unemployment_rate +
               CPI_inflation_rate + Mortgage_rate + Market_Volatility_Index,
             data = data, cluster = APP_NUMBER)
The following are the coefficients of the cox model
Now for a new data, I am making survival probability predictions using the survfit function as follows
res = survfit(fit, newdata=oot_data[oot_data$APP_NUMBER==667259,], id=APP_NUMBER)
summary(res)
I get the following output
Now I want to reproduce the survival probability output by calculating it manually using the beta coefficients and the baseline hazard function:
bh=basehaz(fit,centered=FALSE)
I get a time series from t=3 to t=41 (I do not get t=1 and t=2, at which I do get survival probability predictions using survfit above; after more inspection I realized that the minimum value of the time_mnth_2 column in my dataset is 3, which is probably why).
Using the above baseline hazard time series, I calculated the survival probability with the formula below (oot_data is the new dataset):
LP <- fit$coef["cust_score"]*oot_data$cust_score+
fit$coef["bur_score"]*oot_data$bur_score+
fit$coef["dep_score"]*oot_data$dep_score+
fit$coef["Real_GDP_growth"]*oot_data$Real_GDP_growth+
fit$coef["Real_disposable_income_growth"]*oot_data$Real_disposable_income_growth+
fit$coef["Unemployment_rate"]*oot_data$Unemployment_rate+
fit$coef["CPI_inflation_rate"]*oot_data$CPI_inflation_rate+
fit$coef["Mortgage_rate"]*oot_data$Mortgage_rate+
fit$coef["Market_Volatility_Index"]*oot_data$Market_Volatility_Index+
fit$coef["MOB"]*oot_data$MOB
I get an LP (linear predictor) for every time interval (t1, t2) present in my oot_data (i.e. for each row).
I assume that the calculated LP is valid over the interval (t1, t2].
survival probability(t) = exp(-bh(t)*exp(LP))
This way I calculate the survival probability for every time t (starting from 3, as my baseline hazard starts from t=3).
BUT the calculated probabilities do not match what I get from survfit. (Weirdly enough, the survfit predictions for times 1, 2, 3 match my calculations for times 3, 4, 5, but after that they don't match at all.)
Can someone help me figure out what I am doing wrong?
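For reference, here is a minimal sketch of the manual computation S(t|x) = exp(-H0(t)*exp(LP)) on the built-in cancer data, without time-varying covariates, using basehaz(..., centered = FALSE) together with an uncentered linear predictor; it only illustrates the formula itself, not the time-varying setup above:
library(survival)
fit <- coxph(Surv(time, status) ~ age + sex, data = cancer)
# cumulative baseline hazard at covariate values of zero
bh <- basehaz(fit, centered = FALSE)
# uncentered linear predictor for a hypothetical new subject
newx <- c(age = 60, sex = 1)
lp <- sum(coef(fit) * newx)
# manual survival curve: S(t) = exp(-H0(t) * exp(lp))
manual_surv <- exp(-bh$hazard * exp(lp))
# compare with survfit on the same subject
sf <- survfit(fit, newdata = data.frame(age = 60, sex = 1))
head(cbind(time = bh$time, manual = manual_surv, survfit = sf$surv))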

Get predictions from coxph

# Create the simplest test data set
test1 <- list(time = c(4, 3, 1, 1, 2, 2, 3),
              status = c(1, 1, 1, 0, 1, 1, 0),
              x = c(0, 2, 1, 1, 1, 0, 0),
              sex = c(0, 0, 0, 0, 1, 1, 1))
# Fit a model with x and sex as covariates
m <- coxph(Surv(time, status) ~ x + sex, test1)
y <- predict(m, type = "survival", by = "sex")
Basically what I am doing is making fake data called test1, then fitting a simple coxph model and saving it as 'm'. What I aim to do is get the predicted survival probabilities and confidence bands, separately for each sex. My hoped-for dataset 'y' would include: time, survival probability, lower confidence band, upper confidence band, and sex (equal to '0' or '1').
This can be accomplished in two ways. The first is a slight modification of your code, using the predict() function to get predictions at specific times for specific combinations of covariates. The second is the survfit() function, which estimates the entire survival curve and is easy to plot. The confidence intervals don't agree exactly, as we'll see, but they should match fairly closely as long as the probabilities aren't too close to 1 or 0.
Below is code to make the predictions your code attempts. It uses the built-in cancer data. The important difference is creating a newdata containing the covariate values you're interested in. Because of the non-linear nature of survival probabilities it is generally a bad idea to try to make a prediction for the "average person". Because we want a survival probability we must also specify the time at which to evaluate it. I've taken time = 365, age = 60, and both sex = 1 and sex = 2, so this code predicts the 1-year survival probability for a 60-year-old male and a 60-year-old female. Note that we must also include status in the newdata, even though it doesn't affect the result.
library(survival)
mod <- coxph(Surv(time,status) ~ age + sex, data = cancer)
pred_dat <- data.frame(time = c(365, 365), status = c(2, 2),
                       age = c(60, 60), sex = c(1, 2))
preds <- predict(mod, newdata = pred_dat,
                 type = "survival", se.fit = TRUE)
pred_dat$prob <- preds$fit
pred_dat$lcl <- preds$fit - 1.96*preds$se.fit
pred_dat$ucl <- preds$fit + 1.96*preds$se.fit
pred_dat
#> time status age sex prob lcl ucl
#> 1 365 2 60 1 0.3552262 0.2703211 0.4401313
#> 2 365 2 60 2 0.5382048 0.4389833 0.6374264
We see that for a 60-year-old male the 1-year survival probability is estimated at 35.5%, while for a 60-year-old female it is 53.8%.
Below we estimate the entire survival curve using survfit(). I've saved time by reusing the pred_dat from above, and because the plot gets messy I've only plotted the male curve, which is the first row. I've also added some flair, but you only need the first 2 lines.
fit <- survfit(mod, newdata = pred_dat[1,])
plot(fit, conf.int = TRUE)
title("Estimated survival probability for age 60 male")
abline(v = 365, col = "blue")
abline(h = pred_dat[1,]$prob, col = "red")
abline(h = pred_dat[1,]$lcl, col = "orange")
abline(h = pred_dat[1,]$ucl, col = "orange")
Created on 2022-06-09 by the reprex package (v2.0.1)
I've overlaid lines corresponding to the predicted probabilities from part 1. The red line is the estimated survival probability at day 365 and the orange lines are the 95% confidence interval. The predicted survival probability matches, but if you squint closely you'll see the confidence interval doesn't match exactly. That's generally not a problem, but if it is a problem you should trust the ones from survfit() instead of the ones calculated from predict().
You can also dig into the values of fit to extract fitted probabilities and confidence bands, but the programming is a little more complicated because the desired time doesn't usually match exactly.
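For example, a minimal sketch of doing that with summary.survfit, which reports the curve at the requested time (reusing fit from above; the time point is just the same assumed example):
# extract the fitted probability and confidence band at day 365
s <- summary(fit, times = 365, extend = TRUE)
data.frame(time = s$time, prob = s$surv, lcl = s$lower, ucl = s$upper)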
Section 5 of this document by Dimitris Rizopoulos discusses how to estimate survival probabilities from a Cox model. Rizopoulos states:
the Cox model does not estimate the baseline hazard, and therefore we cannot directly obtain survival probabilities from it. To achieve that we need to combine it with a non-parametric estimator of the baseline hazard function. The most popular method to do that is to use the Breslow estimator. For a fitted Cox model from package survival these probabilities are calculated by function survfit(). As an illustration, we would like to derive survival probabilities from the following Cox model for the AIDS dataset:
He then goes on to provide R code that shows how to estimate Survival Probabilities at specific follow-up times.
I found this useful, it may help you too.

Confidence intervals for predicted probabilities from predict.lrm

I am trying to determine confidence intervals for predicted probabilities from a binomial logistic regression in R. The model is estimated using lrm (from the package rms) to allow for clustering standard errors on survey respondents (each respondent appears up to 3 times in the data):
library(rms)
model1 <- lrm(outcome ~ var1 + var2 + var3, data = mydata, x = TRUE, y = TRUE, se.fit = TRUE)
model.rob <- robcov(model1, cluster = respondent.id)
I am able to estimate a predicted probability for the outcome using predict.lrm:
predicted.prob <- predict(model.rob, newdata = data.frame(var1 = 1, var2 = .33, var3 = .5),
                          type = "fitted")
What I want to determine is a 95% confidence interval for this predicted probability. I have tried specifying se.fit=TRUE, but this is not permissible in predict.lrm when type="fitted".
I have spent the last few hours scouring the Internet for how to do this with lrm to no avail (obviously). Can anyone point me toward a method for determining this confidence interval? Alternatively, if it is impossible or difficult with lrm models, is there another way to estimate a logit with clustered standard errors for which confidence intervals would be more easily obtainable?
The help file for predict.lrm has a clear example. Here is a slight modification of it:
L <- predict(fit, newdata=data.frame(...), se.fit=TRUE)
plogis(with(L, linear.predictors + 1.96*cbind(- se.fit, se.fit)))
For some problems you may want to use the gendata or Predict functions, e.g.
L <- predict(fit, gendata(fit, var1=1), se.fit=TRUE) # leave other vars at median/mode
Predict(fit, var1=1:2, var2=3) # leave other vars at median/mode; gives CLs

GLM prediction in r

I split the data set into train and test as follows:
splitdata <- split(sb[1:nrow(sb), ], sample(rep(1:2, as.integer(nrow(sb)/2))))
test <- splitdata[[1]]
train <- splitdata[[2]]
sb is the name of the original data set, so it is a 50/50 train/test split.
Then I fitted a glm using the training set.
fitglm <- glm(num_claims ~ year + vt + va + public + pri_bil + persist + penalty_pts +
                num_veh + num_drivers + married + gender + driver_age + credit +
                col_ded + car_den,
              family = poisson, data = train)
Now I want to predict using this glm, say for the next 10 observations.
I have trouble specifying the newdata in predict().
I tried:
pred <- predict(fitglm, newdata = data.frame(train), type = "response", se.fit = TRUE)
This gives a number of predictions equal to the number of samples in the training set.
And finally, how do I plot these predictions with confidence intervals?
Thank you for the help
If you are asking how to construct predictions for the first 10 observations in the test set, then:
pred10 <- predict(fitglm, newdata = data.frame(test)[1:10, ], type = "response", se.fit = TRUE)
Edit 9 years later:
@carsten's comment is correct regarding how to construct a confidence interval. If one has a non-linear link function for a glm object, fitglm, then this is a reasonably general method to recover the inverse of the link function and construct a two-sided 95% CI on the response scale:
pred.fit <- predict(fitglm, newdata = newdata, se.fit = TRUE)
CI.pred.upper <- family(fitglm)$linkinv( # the inverse link function is stored in the model
  pred.fit$fit + 1.96*pred.fit$se.fit)
CI.pred.lower <- family(fitglm)$linkinv(
  pred.fit$fit - 1.96*pred.fit$se.fit)
