Writing syntax for bivariate censored survival data to fit copula models in R

library(Sunclarco)
library(MASS)
library(survival)
library(SPREDA)
library(SurvCorr)
library(doBy)
#Dataset
data("diabetes")   # loads the 'diabetes' data frame into the workspace
data1=subset(diabetes,select=c("LASER","TRT_EYE","AGE_DX","ADULT","TIME1","STATUS1"))
data2=subset(diabetes,select=c("LASER","TRT_EYE","AGE_DX","ADULT","TIME2","STATUS2"))
# Add a variable that identifies the cluster
data1$CLUSTER<- rep(1,197)
data2$CLUSTER<- rep(2,197)
# Rename the variables so that the common columns have the same names in both data sets
names(data1)[5] <- "TIME"
names(data1)[6] <- "STATUS"
names(data2)[5] <- "TIME"
names(data2)[6] <- "STATUS"
#merge the files
Total_data=rbind(data1,data2)
# Rearrange the data by sorting on the covariates
diabete_full=orderBy(~LASER+TRT_EYE+AGE_DX,data=Total_data)
diabete_full
# Using the Sunclarco package for the Clayton and Gumbel copulas
Clayton_1step <- SunclarcoModel(data=diabete_full,time="TIME",status="STATUS",
clusters="CLUSTER",covariates=c("LASER","TRT_EYE","ADULT"),
stage=1,copula="Clayton",marginal="Weibull")
summary(Clayton_1step)
# Estimates StandardErrors
#lambda 0.01072631 0.005818201
#rho 0.79887565 0.058942208
#theta 0.10224445 0.090585891
#beta_LASER 0.16780224 0.157652947
#beta_TRT_EYE 0.24580489 0.162333369
#beta_ADULT 0.09324001 0.158931463
# Estimate StandardError
#Kendall's Tau 0.04863585 0.04099436
Clayton_2step <- SunclarcoModel(data=diabete_full,time="TIME",status="STATUS",
clusters="CLUSTER",covariates=c("LASER","TRT_EYE","ADULT"),
stage=2,copula="Clayton",marginal="Weibull")
summary(Clayton_2step)
# Estimates StandardErrors
#lambda 0.01131751 0.003140733
#rho 0.79947406 0.012428824
#beta_LASER 0.14244235 0.041845100
#beta_TRT_EYE 0.27246433 0.298184235
#beta_ADULT 0.06151645 0.253617142
#theta 0.18393973 0.151048024
# Estimate StandardError
#Kendall's Tau 0.08422381 0.06333791
Gumbel_1step <- SunclarcoModel(data=diabete_full,time="TIME",status="STATUS",
clusters="CLUSTER",covariates=c("LASER","TRT_EYE","ADULT"),
stage=1,copula="GH",marginal="Weibull")
summary(Gumbel_1step)
# Estimates StandardErrors
#lambda 0.01794495 0.01594843
#rho 0.70636113 0.10313853
#theta 0.87030690 0.11085344
#beta_LASER 0.15191936 0.14187943
#beta_TRT_EYE 0.21469814 0.14736381
#beta_ADULT 0.08284557 0.14214373
# Estimate StandardError
#Kendall's Tau 0.1296931 0.1108534
Gumbel_2step <- SunclarcoModel(data=diabete_full,time="TIME",status="STATUS",
clusters="CLUSTER",covariates=c("LASER","TRT_EYE","ADULT"),
stage=2,copula="GH",marginal="Weibull")
summary(Gumbel_2step)
I am required to fit copula models in R for different copula classes, particularly the Gaussian, FGM, Plackett and possibly Frank (if I still have time). The data I am using are the diabetes data available in R through the packages survival and SurvCorr.
This is for my thesis, an exploratory study of how the choice of copula class affects the results: different classes can lead to different results on the same data. I found the Sunclarco package, with which I was able to fit the Clayton and Gumbel copula classes, but it is not yet available for the other classes.
The challenge I am facing is that the censoring has to be incorporated into the likelihood estimation, which makes the syntax hard for me to write since I don't have a strong programming background. In addition, I have to incorporate the covariates and see whether or not they have an impact on the association. Talking to my promoter, he gave me the following insights on how to approach writing the syntax for this puzzle:
• First of all, forget about the likelihood function; work only with the log-likelihood function. That way you do not need to take the product of the contributions over the observations, but can take the sum of the log-contributions.
• Next, since we have a balanced design, we can use a regular data frame structure with one row per cluster. The different variables, such as the lifetimes, the censoring indicators and all the covariates, are the columns of this data frame.
• Due to the bivariate setting, there are only 4 possible ways an observation can contribute to the log-likelihood: both uncensored, both censored, first uncensored and second censored, or first censored and second uncensored. To build the log-likelihood, create a new variable in your data frame containing the correct log-likelihood contribution for each couple, depending on which member is censored. The sum of this variable is the value of the log-likelihood function.
• Since this function depends on the parameters, you can use any optimizer, such as optim or nlm, to get the optimal values. Be careful here: optim and nlm look for the minimum of a function, not a maximum. This is easily solved, since minimising -f is the same as maximising f.
• Since you have, for each copula function, the expressions for the derivatives, it should be possible to write down the likelihood contributions now.
I am still struggling, because the likelihood changes with each copula class: the generator (copula) function is unique to the respective copula and has to be adapted in the estimation. Lastly, I should run the analysis with both one-stage and two-stage copula estimation, as I will use both to compare results.
If someone could help me figure this out, I will be eternally grateful. Even for just one copula class, e.g. the Gaussian, I can work out the rest from that one example; I have tried everything, still have nothing to show for it, and I feel time is running out to find the answers by myself.
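Following the promoter's outline, below is a minimal sketch of a one-stage Gaussian-copula log-likelihood with Weibull proportional-hazards margins for the wide diabetes data loaded above. This is not Sunclarco code: the function names, the shared Weibull baseline for both eyes (S(t|x) = exp(-lambda * exp(x'beta) * t^rho)), and the assumption that STATUS1/STATUS2 equal 1 for an event and 0 for censoring are choices made to keep the example short, so check them against your data before trusting any numbers.
library(survival)
library(mvtnorm)   # for the bivariate normal CDF used by the Gaussian copula
# Weibull PH margins: S(t | x) = exp(-lambda * exp(x'beta) * t^rho)
marg_surv <- function(t, lambda, rho, lp) exp(-lambda * exp(lp) * t^rho)
marg_dens <- function(t, lambda, rho, lp)
  lambda * rho * exp(lp) * t^(rho - 1) * marg_surv(t, lambda, rho, lp)
# Gaussian copula: CDF C(u,v), partial derivative dC/du and density c(u,v)
gauss_C <- function(u, v, r)
  mapply(function(a, b) pmvnorm(lower = c(-Inf, -Inf), upper = c(a, b),
                                corr = matrix(c(1, r, r, 1), 2)),
         qnorm(u), qnorm(v))
gauss_C1 <- function(u, v, r) pnorm((qnorm(v) - r * qnorm(u)) / sqrt(1 - r^2))
gauss_c <- function(u, v, r) {
  x <- qnorm(u); y <- qnorm(v)
  exp(-(r^2 * (x^2 + y^2) - 2 * r * x * y) / (2 * (1 - r^2))) / sqrt(1 - r^2)
}
X <- model.matrix(~ LASER + TRT_EYE + ADULT, data = diabetes)[, -1, drop = FALSE]
negloglik <- function(par) {
  lambda <- exp(par[1])      # baseline scale, kept positive via the log scale
  rho    <- exp(par[2])      # baseline shape
  r      <- tanh(par[3])     # copula correlation, kept inside (-1, 1)
  beta   <- par[-(1:3)]
  lp <- drop(X %*% beta)
  u  <- marg_surv(diabetes$TIME1, lambda, rho, lp)   # S1(t1)
  v  <- marg_surv(diabetes$TIME2, lambda, rho, lp)   # S2(t2)
  f1 <- marg_dens(diabetes$TIME1, lambda, rho, lp)
  f2 <- marg_dens(diabetes$TIME2, lambda, rho, lp)
  d1 <- diabetes$STATUS1; d2 <- diabetes$STATUS2     # 1 = event, 0 = censored (assumed)
  # Joint survival modelled as S(t1, t2) = C(S1(t1), S2(t2)); the four
  # censoring patterns give the four log-likelihood contributions:
  ll <- d1 * d2             * log(gauss_c(u, v, r) * f1 * f2) +   # both events
        d1 * (1 - d2)       * log(gauss_C1(u, v, r) * f1) +       # 1st event, 2nd censored
        (1 - d1) * d2       * log(gauss_C1(v, u, r) * f2) +       # 1st censored, 2nd event
        (1 - d1) * (1 - d2) * log(gauss_C(u, v, r))               # both censored
  -sum(ll)                                                        # optim/nlm minimise
}
fit <- optim(c(log(0.01), log(0.8), 0, rep(0, ncol(X))), negloglik, hessian = TRUE)
c(lambda = exp(fit$par[1]), rho = exp(fit$par[2]), corr = tanh(fit$par[3]))
fit$par[-(1:3)]                 # regression coefficients
sqrt(diag(solve(fit$hessian)))  # standard errors (on the transformed scale)
A two-stage version would first estimate the marginal survival models separately (e.g. with survreg) and plug the resulting survival probabilities into the copula part, maximising only over the copula parameter; for another copula class you only need to replace gauss_C, gauss_C1 and gauss_c by that copula's CDF, partial derivative and density.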

Related

How to estimate less conservative standard errors when using post-stratified weights without full information in the survey package?

I'm encountering (very) huge standard errors in my analysis of proportions with post-stratified data when using the survey package.
I'm working with a data set including (normalized) weights calculated via raking by another party. I don't know exactly how the strata were defined (e.g. "ageXgender" was used, but it's unclear with which categorization). Let's assume a simple random sample with a considerable amount of non-response.
Is there any way to estimate reduced standard errors due to post-stratification in survey without the exact information about the procedure? I could recalibrate the weights with rake() if I could exactly define the strata, but I don't have enough information for this.
I have tried to infer the strata by grouping all equal weights together, thinking I would at least get an upper bound on the reduction in standard errors this way, but using them only led to marginally reduced standard errors and sometimes even increased them:
# An example with the api datasets, pretending that pw are post-stratification weights of unknown origin
library(survey)
data(api)
apistrat$pw <-apistrat$pw/mean(apistrat$pw) #normalized weights
# Include some more extreme weights to simulate my data
mins <- which(apistrat$pw == min(apistrat$pw))
maxs <- which(apistrat$pw == max(apistrat$pw))
apistrat[mins[1:5], "pw"] <- 0.1
apistrat[maxs[1:5], "pw"] <- 10
apistrat[mins[6:10], "pw"] <- 0.2
apistrat[maxs[6:10], "pw"] <- 5
dclus1<-svydesign(id=~1, weights=~pw, data=apistrat)
# "Estimate" stratas from the weights
apistrat$ps_est <- as.factor(apistrat$pw)
dclus_ps_est <-svydesign(id=~1, strata=~ps_est, weights=~pw, data=apistrat)
svymean(~api00, dclus1)
svymean(~api00, dclus_ps_est)
#this actually increases the se instead of reducing it
My real weights are also much more complex with 700 unique values in 1000 cases.
Is it possible to somehow approximate the reduction in standard errors due to post-stratification without knowing the real variables, categories and, especially, the population values for rake()? Could I use rake() with some assumptions about the variables and categories used in the strata definitions, but without the population totals, in some way?
If your data are already raked, then you know the population totals exactly: raking makes the estimated population totals equal the true population totals for the raking variables. So, if you know the raking variables, you can estimate the population totals and then rake. The raking won't change the weights (because ex hypothesi these were already raked), but it will change the standard error estimates.
(The next version of the survey package will have an option in svydesign to do exactly this.)
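As a concrete sketch of that suggestion with the api example above, pretend (purely hypothetically) that stype and comp.imp were the raking variables; with real data you would use whichever variables you believe the weights were raked on. Estimate the margins from the weighted design and rake on them:
# Margins estimated from the weighted design; if the weights really were raked
# on these variables, these estimates reproduce the population totals
est.stype <- svytotal(~stype, dclus1)
est.comp  <- svytotal(~comp.imp, dclus1)
pop.stype <- data.frame(stype = levels(apistrat$stype), Freq = coef(est.stype))
pop.comp  <- data.frame(comp.imp = levels(apistrat$comp.imp), Freq = coef(est.comp))
# Re-rake: the weights should barely change, but the design now carries the calibration information
dclus_rake <- rake(dclus1, sample.margins = list(~stype, ~comp.imp),
                   population.margins = list(pop.stype, pop.comp))
svymean(~api00, dclus1)      # SEs ignoring the post-stratification
svymean(~api00, dclus_rake)  # SEs accounting for the assumed raking variables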

How to obtain Brier Score in Random Forest in R?

I am having trouble getting the Brier Score for my Machine Learning Predictive models. The outcome "y" was categorical (1 or 0). Predictors are a mix of continuous and categorical variables.
I have created four models with different predictors; I will call them "model_1" to "model_4" here (apart from the predictors, the other parameters are the same). Example code for my model is:
Model_1=rfsrc(y~ ., data=TrainTest, ntree=1000,
mtry=30, nodesize=1, nsplit=1,
na.action="na.impute", nimpute=3,seed=10,
importance=T)
When I run the "Model_1" code in R, I get the fitted-model summary (412 people in the data).
My question is: how can I get the predicted probability for each of those 412 people? And how do I find the observed probability for each person? Do I need to calculate it by hand? I found the function BrierScore() in the "DescTools" package,
but when I tried "BrierScore(Model_1)" it gave me no results.
Code I added:
library(scoring)
library(DescTools)
BrierScore(Raw_SB)
class(TrainTest$VL_supress03)
TrainTest$VL_supress03_nu<-as.numeric(as.character(TrainTest$VL_supress03))
class(TrainTest$VL_supress03_nu)
prediction_Raw_SB = predict(Raw_SB, TrainTest)
BrierScore(prediction_Raw_SB, as.numeric(TrainTest$VL_supress03) - 1)
BrierScore(prediction_Raw_SB, as.numeric(as.character(TrainTest$VL_supress03)) - 1)
BrierScore(prediction_Raw_SB, TrainTest$VL_supress03_nu - 1)
I tried the code above and got many error messages.
One assumption I am making about your approach is that you want to compute the BrierScore on the data you train your model on (which is usually not the correct approach, google train-test split if you need more info there).
You should therefore reflect on whether your approach is correct there.
The BrierScore function in DescTools only has a dedicated method for glm models; otherwise it expects as input a vector of predicted probabilities and a vector of true values (see ?BrierScore).
What you would need to do though is to predict on your data using:
prediction = predict(model_1, TrainTest, na.action="na.impute")
and then compute the brier score using
BrierScore(as.numeric(TrainTest$y) - 1, prediction$predicted[, 1L])
(Note that we transform TrainTest$y into a numeric vector of 0s and 1s in order to compute the Brier score.)
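As a quick cross-check, the Brier score is just the mean squared difference between the predicted probability and the 0/1 outcome, so with the same objects as above you can also compute it by hand:
p <- prediction$predicted[, 1L]   # predicted probabilities used above
y <- as.numeric(TrainTest$y) - 1  # outcome recoded to 0/1
mean((p - y)^2)                   # should match the BrierScore() call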
Note: The randomForestSRC package also prints a normalized brier score when you call print(prediction).
In general, using one of the available machine-learning workbenches in R (mlr3, tidymodels, caret) might simplify this for you; that is good practice, especially if you are less experienced in ML, as it prevents many errors.
See e.g. this chapter in the mlr3 book for more information.
For reference, here is some very similar code using the mlr3 package, which also automatically takes care of the train-test split.
data(breast, package = "randomForestSRC") # with target variable "status"
library(mlr3)
library(mlr3extralearners)
task = TaskClassif$new(id = "breast", backend = breast, target = "status")
algo = lrn("classif.rfsrc", na.action = "na.impute", predict_type = "prob")
resample(task, algo, rsmp("holdout", ratio = 0.8))$score(msr("classif.bbrier"))

Spark ML Logistic Regression with Categorical Features Returns Incorrect Model

I've been doing a head-to-head comparison of Spark 1.6.2 ML's LogisticRegression with R's glmnet package (the closest analog I could find based on other forum posts).
I'm specifically looking at these two fitting packages when using categorical features. When using continuous features, results for the two packages are comparable.
For my first attempt with Spark, I used the ML Pipeline API to transform my single 21-level categorical variable (called FAC for faculty) with StringIndexer followed by OneHotEncoder to get a binary vector representation.
When I fit my models in Spark and R, I get the following sets of results (that aren't even close):
SPARK 1.6.2 ML
lrModel.intercept
-3.1453838659926427
lrModel.weights
[0.37664264958084287,0.697784342445422,0.4269429071484017,0.3521764371898419,0.19233585960734872,0.6708049751689226,0.49342372792676115,0.5471489576300356,0.37650628365008465,1.0447861554914701,0.5371820187662734,0.4556833133252492,0.2873530144304645,0.09916227313130375,0.1378469333986134,0.20412095883272838,0.4494641670133712,0.4499625784826652,0.489912016708041,0.5433020878341336]
R (glmnet)
(Intercept) -2.79255253
facG -0.35292166
facU -0.16058275
facN 0.69187146
facY -0.06555711
facA 0.09655696
facI 0.02374558
facK -0.25373146
facX 0.31791765
facM 0.14054251
facC 0.02362977
facT 0.07407357
facP 0.09709607
facE 0.10282076
facH -0.21501281
facQ 0.19044412
facW 0.18432837
facF 0.34494177
facO 0.13707197
facV -0.14871580
facS 0.19431703
I've manually checked the glmnet results and they're correct (calculating the proportion of training samples with a particular level of the categorical feature and comparing that to the softmax prob. under the estimated model). These results do not change even when the max. no. of iterations in the optimization is set to 1000000 and the convergence tolerance is set to 1E-15. These results also do not change when the Spark LogisticRegression weights are initialized to the glmnet-estimated weights (Spark's optimizing a different cost function?).
I should say that the optimization problem is not different between these two approaches: both should be minimizing the logistic loss (a convex surface) and thereby arriving at nearly the same answer.
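For reference, one possible version of that manual check (interpreting it as comparing the observed incidence of the outcome within each level to the model's fitted probability) could look like the following sketch, with hypothetical object names: fit is the glmnet model, X the dummy-coded model matrix, fac the 21-level factor and y the 0/1 outcome.
empirical <- tapply(y, fac, mean)   # observed incidence of y = 1 per level
fitted_p  <- tapply(drop(predict(fit, newx = X, s = 0, type = "response")),
                    fac, mean)      # average fitted probability per level
round(cbind(empirical, fitted_p), 4)  # these should agree closely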
Now, when I manually recode the FAC feature as a binary vector in the data file and read those binary columns as "DoubleType" (using Spark's DataFrame schema), I get very comparable results. (The order of the coefficients for the following results is different from the above results. Also the reference levels are different--"B" in the first case, "A" in the second--so the coefficients for this test should not match those from the above test.)
SPARK 1.6.2 ML
lrModel.intercept
-2.9530485080391378
lrModel.weights
[-0.19233467682265934,0.8524505857034615,0.09501714540028124,0.25712829253044844,0.18430675058702053,0.09317325898819705,0.4784688407322236,0.3010877381053835,0.18417033887042242,0.2346069926274015,0.2576267066227656,0.2633474197307803,0.05448893119304087,0.35096612444193326,0.3448460751810199,0.505448794876487,0.29757609104571175,0.011785058030487976,0.3548130904832268,0.15984047288368383]
R (glmnet)
s0
(Intercept) -2.9419468179
FAC_B -0.2045928975
FAC_C 0.8402716731
FAC_E 0.0828962518
FAC_F 0.2450427806
FAC_G 0.1723424956
FAC_H -0.1051037449
FAC_I 0.4666239456
FAC_K 0.2893153021
FAC_M 0.1724536240
FAC_N 0.2229762780
FAC_O 0.2460295934
FAC_P 0.2517981380
FAC_Q -0.0660069035
FAC_S 0.3394729194
FAC_T 0.3334048723
FAC_U 0.4941379563
FAC_V 0.2863010635
FAC_W 0.0005482422
FAC_X 0.3436361348
FAC_Y 0.1487405173
Standardization is set to FALSE for both and no regularization is performed (you shouldn't perform it here since you're really just learning the incidence rate of each level of the feature, and the binary feature columns are completely uncorrelated with one another). Also, I should mention that the 21 levels of the categorical feature range in incidence counts from ~800 to ~3500, so the discrepancy is not due to a lack of data causing large errors in the estimates.
Has anyone experienced this? I'm one step away from reporting it to the Spark developers.
Thanks as always for the help.

How to get bootstrapped p-values and bootstrapped t-values and how does the function boot() work?

I would like to get the bootstrapped t-value and the bootstrapped p-value of a lm.
I have the following code (basically copied from a paper) which works.
# First of all you need the following packages
install.packages("car")
install.packages("MASS")
install.packages("boot")
library("car")
library("MASS")
library("boot")
boot.function <- function(data, indices){
data <- data[indices,]
mod <- lm(prestige ~ income + education, data=data) # the linear model
# the first element of the following vector contains the t-value
# and the second element is the p-value
c(summary(mod)[["coefficients"]][2,3], summary(mod)[["coefficients"]][2,4])
}
Now, I compute the bootstrapping model, which gives me the following:
duncan.boot <- boot(Duncan, boot.function, 1999)
duncan.boot
ORDINARY NONPARAMETRIC BOOTSTRAP
Call:
boot(data = Duncan, statistic = boot.function, R = 1999)
Bootstrap Statistics :
original bias std. error
t1* 5.003310e+00 0.288746545 1.71684664
t2* 1.053184e-05 0.002701685 0.01642399
I have two questions:
My understanding is that the bootstrapped value is the original plus the bias, which means that both bootstrapped values (the bootstrapped t-value as well as the bootstrapped p-value) are greater than the original values. This in turn is not possible, because if the t-value rises (which means more significance) the p-value MUST be lower, right? Therefore I think that I have not yet really understood the output of the boot function (here: duncan.boot). How do I compute the bootstrapped values?
I do not understand how boot() works. If you look at duncan.boot <- boot(Duncan, boot.function, 1999), you see that I have not passed any arguments to the function "boot.function". I suppose that R sets data <- Duncan. But since I have not passed anything for the argument "indices", I do not understand how the following line in "boot.function" works: data <- data[indices,]
I hope the questions make sense!
The boot function expects to get a function with two arguments: the first a data frame, and the second an "indices" vector used to select rows (possibly with duplicate or triplicate entries, and probably not using all the indices). boot() itself generates those index sets by sampling row numbers with replacement from the original data frame, R times with a different set each time, passes each set to the indices argument of boot.function, and then collects the results of the R function applications.
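In other words, each bootstrap replicate is conceptually equivalent to something like this (a sketch, not boot()'s actual source):
set.seed(1)
idx <- sample(nrow(Duncan), replace = TRUE)  # row numbers, with duplicates and omissions
boot.function(Duncan, idx)                   # the statistic for one bootstrap resample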
Regarding what is reported by the print method for boot objects, take a look at this (done after examining the returned object with str()):
> duncan.boot$t0
[1] 5.003310e+00 1.053184e-05
> apply(duncan.boot$t, 2, mean)
[1] 5.342895220 0.002607943
> apply(duncan.boot$t, 2, mean) - duncan.boot$t0
[1] 0.339585441 0.002597411
It becomes more obvious that the t0 value is from the original data, while the bias is the difference between the mean of the boot()-ed values and the t0 values. I don't think it makes a lot of sense to ask why p-values based on parametric considerations increase in association with an increase in the estimated t-statistics. You are really in two disparate regions of statistical thought when you do that. I would interpret the increase in p-values as an effect of the sampling process, which does not take the Normal distribution assumptions into account. It is simply saying something about the sampling distribution of the p-value (which is really just another sample statistic).
(Comment: the sourcebook used at the time of R's development was Davison and Hinkley's "Bootstrap Methods and Their Application". I'm not claiming any support for my answer above, but I thought to put it in as a reference after Hagen Brenner asked about sampling with two indices in the comments below. There are many unexpected aspects of bootstrapping that arise once one goes beyond simple parametric estimation, and I would first turn to that reference if I were tackling more complex sampling situations.)

Cox regression in MATLAB

I know there is the COXPHFIT function in MATLAB for doing Cox regression, but I have problems understanding how to apply it.
1) How do I compare two groups of samples with survival data in days (survdays), censoring (cens) and some predictor value (x)? The groups are defined by the logical variable groups and have different numbers of samples.
2) What is the baseline parameter in coxphfit? I did read the docs, but how should I choose the baseline properly?
It would be great if you knew a site with good, detailed examples on medical survival data. I found only the MathWorks demo, which does not even mention coxphfit.
Do you know of another third-party function for Cox regression?
UPDATE: The r tag added since the answer I've got is for R.
In survival analysis, the hazard function is the instantaneous death rate.
In these analyses, you are typically measuring what effect something has on this hazard function. For example, you may ask "does swallowing arsenic increase the rate at which people die?". A background hazard is the level at which people would die anyway (without swallowing arsenic, in this case).
If you read the docs for coxphfit carefully, you will notice that the function tries to calculate the baseline hazard; it is not something that you enter. The docs describe it as:
baseline: The X values at which to compute the baseline hazard.
EDIT: MATLAB's coxphfit function doesn't obviously work with grouped data. If you are happy to switch to R, then the analysis is a one-liner.
library(survival)
#Create some data
n <- 20;
dfr <- data.frame(
survdays = runif(n, 5, 15),
cens = runif(n) < .3,
x = rlnorm(n),
groups = rep(c("first", "second"), each = n / 2)
)
#The Cox ph analysis
summary(coxph(Surv(survdays, cens) ~ x / groups, dfr))
ANOTHER EDIT: That baseline parameter to MATLAB's coxphfit appears to be a normalising constant. R's coxph function doesn't have an equivalent parameter. I looked in Statistical Computing by Michael Crawley and it seems to suggest that the baseline hazard isn't important, since it cancels out when you calculate the likelihood of your individual dying. See Chapter 33, and p615-616 in particular. My knowledge of how the model works isn't deep enough to explain the discrepancy in the MATLAB and R implementations; perhaps you could ask on the Stack Exchange Stats Analysis site.
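(If you do want a baseline hazard estimate out of R after fitting, here is a small sketch using the data frame created above:)
fit <- coxph(Surv(survdays, cens) ~ x / groups, dfr)
head(basehaz(fit, centered = FALSE))  # estimated cumulative baseline hazard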
