I would like to develop a Cox proportional hazards model with R, use it to make predictions on new data, and evaluate the accuracy of the model. For the evaluation I would like to use the Brier score.
# import various packages, needed at some point of the script
library("survival")
library("survminer")
library("prodlim")
library("randomForestSRC")
library("pec")
library("rpart")
library("mlr")
library("Hmisc")
library("ipred")
# load lung cancer data
data("lung")
head(lung)
# recode status variable
lung$status <- lung$status-1
# Delete rows with missing values
lung <- na.omit(lung)
# split data into training and testing
## 80% of the sample size
smp_size <- floor(0.8 * nrow(lung))
## set the seed to make your partition reproducible
set.seed(123)
train_ind <- sample(seq_len(nrow(lung)), size = smp_size)
# training and testing data
train.lung <- lung[train_ind, ]
test.lung <- lung[-train_ind, ]
# time and failure event
s <- Surv(train.lung$time, train.lung$status)
# create model
cox.ph2 <- coxph(s~age+meal.cal+wt.loss, data=train.lung)
# predict
pred <- predict(cox.ph2, newdata = train.lung)
# evaluate
sbrier(s, pred)
As the outcome of the prediction I would expect a time (as in "when does this individual experience failure?"). Instead I get values like this:
[1] 0.017576359 -0.135928959 -0.347553969 0.112509137 -0.229301199 -0.131861582 0.044589175 0.002634008
[9] 0.345966978 0.209488560 0.002418358
What does that mean?
Furthermore, sbrier does not work. Apparently it cannot work with the prediction pred (no surprise there).
How do I solve this? How do I make a prediction with cox.ph2? How can I evaluate the model afterwards?
The predict() function won't return a time value; by default it returns the linear predictor. You have to specify the type argument in predict(), one of "lp", "risk", "expected", "terms" or "survival".
If you want to get the hazard ratios:
predict(cox.ph2, newdata = test.lung, type = "risk")
Note that you want to predict the values on the test set, not the training set.
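To address the evaluation part of the question: below is a minimal sketch of how the Brier score could be computed on the test set with the pec package that is already loaded above. It assumes the model is refitted with the Surv() call inside the formula and with x = TRUE (which pec needs), and the evaluation times are arbitrary examples:
library(pec)
# refit so that pec can access the design matrix and the Surv() terms
cox.ph3 <- coxph(Surv(time, status) ~ age + meal.cal + wt.loss,
                 data = train.lung, x = TRUE)
# predicted survival probabilities for the test set at chosen time points
surv.prob <- predictSurvProb(cox.ph3, newdata = test.lung, times = c(100, 300, 500))
# Brier score (prediction error curve) on the test data;
# cens.model = "marginal" uses a Kaplan-Meier estimate for the censoring weights
pe <- pec(object = list(Cox = cox.ph3),
          formula = Surv(time, status) ~ 1,
          data = test.lung,
          cens.model = "marginal")
print(pe)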
I have read that you can use AFT models in your case:
https://stats.stackexchange.com/questions/79362/how-to-get-predictions-in-terms-of-survival-time-from-a-cox-ph-model
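For illustration, a rough sketch of what an AFT model could look like with survreg() from the survival package (a Weibull model with the same covariates as the Cox fit above); type = "quantile" with p = 0.5 returns the predicted median survival time for each test case:
# accelerated failure time model fitted on the training data
aft <- survreg(Surv(time, status) ~ age + meal.cal + wt.loss,
               data = train.lung, dist = "weibull")
# predicted median survival time per test observation
pred.time <- predict(aft, newdata = test.lung, type = "quantile", p = 0.5)
head(pred.time)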
You can also read this post:
Calculate the Survival prediction using Cox Proportional Hazard model in R
Hope it helps.
I am working with the wine quality dataset.
I am studying regression trees that depend on different variables, as follows:
library(rpart)
library(rpart.plot)
library(rattle)
library(naniar)
library(dplyr)
library(ggplot2)
vinos <- read.csv(file = 'Wine.csv', header = T)
arbol0<-rpart(formula=quality~chlorides, data=vinos, method="anova")
fancyRpartPlot(arbol0)
arbol1<-rpart(formula=quality~chlorides+density, data=vinos, method="anova")
fancyRpartPlot(arbol1)
I want to calculate the mean squared error to see whether arbol1 is better than arbol0. I will use my own dataset since no more data is available. I have tried to do it as
aaa<-predict(object=arbol0, newdata=data.frame(chlorides=vinos$chlorides), type="anova")
bbb<-predict(object=arbol1, newdata=data.frame(chlorides=vinos$chlorides, density=vinos$density), type="anova")
and then manually subtract the last column of the dataframe from aaa and bbb. However, I am getting an error. Can someone please help me?
This website could be useful for you. It is very important to split your dataset into train and test subsets before training your models. In the following code I have done it with base functions, but there is another function, sample.split from the caTools package, that does the same thing. I also attach a website where you can see all the ways to split data in R.
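For reference, a short sketch of what the caTools alternative could look like (the SplitRatio value is just an example, and the quality column comes from your dataset):
library(caTools)
# sample.split() returns a logical vector marking the rows that go into the training set
split <- sample.split(vinos$quality, SplitRatio = 0.75)
train <- vinos[split, ]
test  <- vinos[!split, ]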
Remember that the Mean Squared Error (MSE) is defined as:
MSE = (1/n) * Σ (observed_i - predicted_i)^2
So it is very simple to apply it with R. You just have to compute the mean of the squared differences between the observed values (i.e., the response variable from your test subset) and the predicted values (i.e., the values you have predicted from the model with the predict function).
A solution for your wine dataset could be this one, based on the previous website.
library(rpart)
library(dplyr)
library(data.table)
vinos <- fread(file = 'Winequality-red.csv', header = TRUE)
# Split data into train and test subsets
sample_index <- sample(nrow(vinos), size = nrow(vinos)*0.75)
train <- vinos[sample_index, ]
test <- vinos[-sample_index, ]
# Train regression trees models
arbol0 <- rpart(formula = quality ~ chlorides, data = train, method = "anova")
arbol1 <- rpart(formula = quality ~ chlorides + density, data = train, method = "anova")
# Make predictions for each model
pred0 <- predict(arbol0, newdata = test)
pred1 <- predict(arbol1, newdata = test)
# Calculate MSE for each model
mean((pred0 - test$quality)^2)
mean((pred1 - test$quality)^2)
I have a response variable containing 100 observations and I wish to estimate it from 8 independent variables using Support Vector Regression.
I have searched a lot for a template to implement SVR with training and testing sets in R, but I could not find the approach I wanted.
I have used the following code to fit the model and calculate RMSE, but I want to check my model for unseen data and I do not know how to perform this in R.
My code is as follows:
data<-read.csv("Enzyme.csv",header = T)
Testset <- data[c(11:30),]
Trainset <- data[-c(11:30), ]
#attached dependent variable
Y<-Trainset$Urease
Trainset<-Trainset[,-c(1)]
SVMUr <- svm(Urease ~ ., data = Trainset, kernel = "radial",
             gamma = 1, epsilon = seq(0, 1, 0.1), cost = 10)
summary(SVMUr)
################### RMSE SVMUr ##########################
RMSE <- function(observed, predicted){
sqrt(mean((predicted - observed)^2, na.rm=TRUE))
}
# fitted values on the training data
predSVMUr <- predict(SVMUr, Trainset)
RMSE(observed = Y, predicted = predSVMUr)
######## Check the model for unseen data via using testset ######
predicted_test <- predict(SVMUr, Testset[,-1])
RMSE(Testset$Urease, predicted_test)
The way you want to go about testing your model is to:
First apply your model on unseen data using predict(SVMUr, Testset[,-1]) assuming the first variable is your target response Y. If it is the 15th variable for example, replace -1 with -15.
Now use the RMSE() function to get the RMSE of the model on your test dataset
Additional Recommendation:
I would not split the data the way you do because, as you've pointed out, you end up with too little training data in relation to the test data. If you want to split it 80%-20%, you can adapt my code below:
data<-read.csv("Enzyme.csv",header = T)
split_data <- sample(nrow(data), nrow(data)*0.8)
Trainset <- data[split_data, ]
Testset <- data[-split_data, ]
That would put 80% of your data in the train set and 20% in the test set.
The rest of the code:
# epsilon must be a single value in svm(); a grid of values belongs in tune.svm()
SVMUr <- svm(Urease ~ ., data = Trainset, kernel = "radial",
             gamma = 1, epsilon = 0.1, cost = 10)
summary(SVMUr)
################### RMSE SVMUr ##########################
RMSE <- function(observed, predicted){
  sqrt(mean((predicted - observed)^2, na.rm = TRUE))
}
# in-sample fit: predictions on the training data
predSVMUr <- predict(SVMUr, Trainset)
RMSE(observed = Trainset$Urease, predicted = predSVMUr)
######## Check the model for unseen data via using testset ######
predicted_test <- predict(SVMUr, Testset[,-1])
RMSE(Testset$Urease, predicted_test)
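As a side note, the epsilon grid in your original call suggests you wanted a parameter search; that is what tune.svm() from e1071 is for. A minimal sketch, where the grids are only examples:
library(e1071)
# grid search over gamma, cost and epsilon using the built-in cross-validation
tuned <- tune.svm(Urease ~ ., data = Trainset,
                  gamma = 2^(-2:2), cost = 2^(0:4), epsilon = seq(0, 1, 0.1))
summary(tuned)
best.SVMUr <- tuned$best.model
RMSE(Testset$Urease, predict(best.SVMUr, Testset[,-1]))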
I use caret a lot for my machine learning tasks in R and I like it a lot.
But I face the following problem:
I train a model in caret, say a linear regression with lm()
When I want to score new data, I do: predict(model, new_data)
When new_data contains missing values in my predictors, predict() returns no prediction for those rows, instead of, say, NA.
Is it possible to either:
return a prediction for all rows in new_data with a prediction of NA when it is not possible or
return predictions + the row number of the dataframe the prediction corresponds to?
E.g. like the mlr-package does with an id-column that shows which row the prediction corresponds to:
Here is the link to the mlr-predict page with more details:
mlr-package: predict with row-id
Any help greatly appreciated!
You can identify the cases with missing values prior to running caret::train() by creating a new column with the row names in your data set, since these default to the row numbers in the data frame.
Using the Sonar data set from the mlbench package as an illustration:
library(mlbench)
data(Sonar)
library(caret)
set.seed(95014)
# add row numbers
Sonar$rowId <- rownames(Sonar)
# create training & testing data sets
inTraining <- createDataPartition(Sonar$Class, p = .75, list=FALSE)
training <- Sonar[inTraining,]
testing <- Sonar[-inTraining,]
# set column 60 to NA for some values in test data
testing[48:51,60] <- NA
testing[!complete.cases(testing),"rowId"]
...and the output:
> testing[!complete.cases(testing),"rowId"]
[1] "193" "194" "200" "206"
You can then run predict() on the rows in the test data set that have complete cases. Again using the Sonar dataset, with a random forest model and 3-fold cross-validation to expedite processing:
fitControl <- trainControl(method = "cv",number = 3)
fit <- train(Class ~ . - rowId, data = training, method = "rf", trControl = fitControl)
predicted <- predict(fit,testing[complete.cases(testing),])
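If you also want the output to line up with every row of the test set (with NA where no prediction is possible), one way to stitch the predictions back onto the row ids could be the following sketch:
# rows that can actually be scored
ok <- complete.cases(testing)
# start with NA predictions for every test row, keyed by rowId
result <- data.frame(rowId = testing$rowId, prediction = NA_character_)
result$prediction[ok] <- as.character(predict(fit, testing[ok, ]))
head(result)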
Another way to handle this situation is to use an imputation strategy to eliminate the missing values for the independent variables in your model. My article on GitHub, Strategies for Handling Missing Values, links to a number of research papers on this topic.
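A hedged sketch of that imputation route within caret itself, median-imputing the predictors inside train() and keeping incomplete rows at prediction time (columns 1:60 are the numeric Sonar predictors):
# impute medians as part of the preprocessing pipeline
fit2 <- train(x = training[, 1:60], y = training$Class,
              method = "rf",
              preProcess = "medianImpute",
              trControl = fitControl)
# na.action = na.pass keeps the incomplete rows so the imputation can fill them
pred2 <- predict(fit2, newdata = testing[, 1:60], na.action = na.pass)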
I'm trying to use R's gbm regression model.
I want to compute the coefficient of determination (R squared) between the cross-validation predicted response values and the true response values. However, the cv.fitted values of the gbm.object only provide the predicted response values for the 1 - train.fraction part of the data. So in order to get what I want, I need to find which observations the cv.fitted values correspond to.
Any idea how to get that information?
You can use the predict function to easily get at model predictions, if I'm understanding your question correctly.
dat <- data.frame(y = runif(1000), x=rnorm(1000))
gbmMod <- gbm::gbm(y~x, data=dat, n.trees=5000, cv.folds=0)
summary(lm(predict(gbmMod, n.trees=5000) ~ dat$y))$adj.r.squared
But shouldn't we hold data to the side and assess model accuracy on test data? This would correspond to the following, where I partition the data into a training set (70%) and testing set (30%):
inds <- sample(1:nrow(dat), 0.7*nrow(dat))
train <- dat[inds, ]
test <- dat[-inds, ]
gbmMod2 <- gbm::gbm(y~x, data=train, n.trees=5000)
preds <- predict(gbmMod2, newdata = test, n.trees=5000)
summary(lm(preds ~ test[,1]))$adj.r.squared
It's also worth noting that the number of trees in the gbm can be tuned using the gbm.perf function and the cv.folds argument to the gbm function. This helps avoid overfitting.
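For example, a small sketch of that tuning step on the same toy data (the fold count and the tree count are arbitrary):
# fit with cross-validation folds and let gbm.perf() pick the number of trees
gbmMod3 <- gbm::gbm(y ~ x, data = train, n.trees = 5000, cv.folds = 5)
best.iter <- gbm::gbm.perf(gbmMod3, method = "cv")
# use the selected number of trees when predicting on the hold-out data
preds.cv <- predict(gbmMod3, newdata = test, n.trees = best.iter)
summary(lm(preds.cv ~ test$y))$adj.r.squared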
I have a dataset with two columns, where Column 1, timestamp, is a particular point in time for which Column.10 gives the total power usage at that instant. There are 81502 instances in this dataset.
I'm doing support vector regression on this data in R using the e1071 package to predict the future usage of power. The code is given below. I first divided the dataset into training and test data. Then I modeled the training data using the svm function and predicted the power usage for the test set.
library(e1071)
attach(data.csv)
index <- 1:nrow(data.csv)
testindex <- sample(index,trunc(length(index)/3))
testset <- na.omit(data.csv[testindex, ])
trainingset <- na.omit(data.csv[-testindex, ])
model <- svm(Column.10 ~ timestamp, data=trainingset)
prediction <- predict(model, testset[,-2])
tab <- table(pred = prediction, true = testset[,2])
However, when I try to make a confusion matrix from the prediction, I'm getting the error:
Error in table(pred = prediction, true = testset[, 2]) : all arguments must have the same length
So I checked the lengths of the two arguments and found that
length(prediction) is 81502
and length(testset[,2]) is 27167.
Since I did the prediction only for the test set, I don't know how predictions were produced for 81502 values. Why is the number of values different for the prediction and the test set? How is the power value for the entire dataset getting predicted even though I passed only the test set?
Change
prediction <- predict(model, testset[,-2])
to
prediction <- predict(model, testset)
However, you should not use table() (a confusion matrix) when doing regression; use the MSE instead.
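For instance, a minimal sketch of that evaluation, using the column name from the question:
prediction <- predict(model, testset)
# mean squared error and root mean squared error on the test set
mse  <- mean((prediction - testset$Column.10)^2)
rmse <- sqrt(mse)
mse
rmse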