Plot learning curves with the caret package and R

I would like to study the optimal bias/variance tradeoff for model tuning. I'm using caret for R, which lets me plot a performance metric (AUC, accuracy, ...) against the model's hyperparameters (mtry, lambda, etc.) and automatically picks the maximum. This typically returns a good model, but if I want to dig further and choose a different bias/variance tradeoff I need a learning curve, not a performance curve.
For the sake of simplicity, let's say my model is a random forest, which has just one hyperparameter, 'mtry'.
I would like to plot the learning curves of both training and test sets. Something like this:
(red curve is the test set)
On the y axis I put an error metric (number of misclassified examples or something like that); on the x axis 'mtry' or alternatively the training set size.
Questions:
Does caret have the functionality to iteratively train models on training sets of different sizes? If I have to code it by hand, how can I do that?
If I want to put the hyperparameter on the x axis, I need all the models trained by caret::train, not just the final model (the one with maximum performance obtained after CV). Are these "discarded" models still available after train?

Caret will iteratively fit and test lots of CV models for you if you set up resampling with the trainControl() function and the candidate parameter values (e.g. mtry) with a tuneGrid. Both of these are then passed as control options to the train() function. The specific tuneGrid parameters (e.g. mtry, ntree) differ for each model type.
Yes, the final trainFit model will contain the error rate (however you specified it) for all folds of your CV.
So you could specify e.g. a 10-fold CV times a grid with 10 values of mtry, which would be 100 iterations. You might want to go get a cup of tea, or possibly lunch.
If this sounds complicated... there is a very good example here - caret being one of the best documented packages around.
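For a random forest, that setup might look like the minimal sketch below (iris is used purely as a stand-in dataset, and the randomForest package is assumed to be installed):
library(caret)
# 10-fold CV, keeping the held-out predictions of the chosen model
ctrl <- trainControl(method = "cv", number = 10, savePredictions = "final")
# candidate values of mtry; one model is fit per value in every fold
grid <- expand.grid(mtry = c(1, 2, 3, 4))
set.seed(1)
rfFit <- train(Species ~ ., data = iris, method = "rf",
               trControl = ctrl, tuneGrid = grid)
rfFit$results   # performance for each mtry, averaged over the folds
plot(rfFit)     # performance metric vs. mtry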

Here's my code for how I approached this issue of plotting a learning curve in R while using the caret package to train the model. I use the Motor Trend Car Road Tests (mtcars) dataset in R for illustrative purposes. To begin, I randomize and split the mtcars dataset into training and test sets: roughly 21 records for training and 11 for the test set. The response feature is mpg in this example.
# load caret for createDataPartition(), trainControl(), train(), and postResample()
library(caret)
# set seed for reproducibility
set.seed(7)
# randomize mtcars
mtcars <- mtcars[sample(nrow(mtcars)),]
# split mtcars data into training and test sets
mtcarsIndex <- createDataPartition(mtcars$mpg, p = .625, list = F)
mtcarsTrain <- mtcars[mtcarsIndex,]
mtcarsTest <- mtcars[-mtcarsIndex,]
# create empty data frame
learnCurve <- data.frame(m = integer(21),
trainRMSE = integer(21),
cvRMSE = integer(21))
# test data response feature
testY <- mtcarsTest$mpg
# Run algorithms using 10-fold cross validation with 3 repeats
trainControl <- trainControl(method="repeatedcv", number=10, repeats=3)
metric <- "RMSE"
# loop over training set sizes
for (i in 3:21) {
learnCurve$m[i] <- i
# train learning algorithm with size i
fit.lm <- train(mpg~., data=mtcarsTrain[1:i,], method="lm", metric=metric,
preProc=c("center", "scale"), trControl=trainControl)
learnCurve$trainRMSE[i] <- fit.lm$results$RMSE
# use trained parameters to predict on test data
prediction <- predict(fit.lm, newdata = mtcarsTest[,-1])
rmse <- postResample(prediction, testY)
learnCurve$cvRMSE[i] <- rmse[1]
}
pdf("LinearRegressionLearningCurve.pdf", width = 7, height = 7, pointsize=12)
# plot learning curves of training set size vs. error measure
# for training set and test set
plot(log(learnCurve$trainRMSE), type = "o", col = "red", xlab = "Training set size",
     ylab = "log(RMSE)", main = "Linear Model Learning Curve")
lines(log(learnCurve$cvRMSE), type = "o", col = "blue")
legend('topright', c("Train error", "Test error"), lty = c(1,1), lwd = c(2.5, 2.5),
col = c("red", "blue"))
dev.off()
The output plot is as shown below:

At some point, probably after this question was asked, the caret package added the learning_curve_dat function which helps assess model performance across a range of training set sizes.
Here is the example from the function documentation:
library(caret)
set.seed(1412)
class_dat <- twoClassSim(1000)
set.seed(29510)
lda_data <- learning_curve_dat(dat = class_dat,
outcome = "Class",
test_prop = 1/4,
## `train` arguments:
method = "lda",
metric = "ROC",
trControl = trainControl(classProbs = TRUE,
summaryFunction = twoClassSummary))
ggplot(lda_data, aes(x = Training_Size, y = ROC, color = Data)) +
geom_smooth(method = loess, span = .8)
The performance metric(s) are found for each Training_Size and saved in lda_data along with the Data variable ("Resampling", "Training", and optionally "Testing").
Here is a link to the function documentation: https://rdrr.io/cran/caret/man/learning_curve_dat.html
To be clear, this answers the first part of the question but not the second part.
NOTE: Until at least August 2020 there was a typo in the caret package code and documentation: the function was called learing_curve_dat before it was corrected to learning_curve_dat. I've updated my answer to reflect this change. Make sure you are using a recent version of the caret package.
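A quick way to check what your installed version provides:
packageVersion("caret")        # the installed caret version
exists("learning_curve_dat")   # TRUE after library(caret) if the corrected name is available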

Related

Unusually high accuracy metrics for SVM in R - Have I made a mistake?

I have a dataset where I am trying to predict a binary outcome. I have built an SVM model but am concerned about the accuracy, sensitivity, and specificity values, as they seem too high to be plausible. I am relatively new to coding and machine learning. I am assuming I have made a mistake in my approach and was wondering if anyone could identify any potential issues with my code.
I am working with a dataset that has approximately 10,000 rows and 39 columns. My first step was to split into train, validate, and test dataframes since I will be comparing multiple models:
spec = c(train = .6, test = .2, validate = .2)
g = sample(cut(
seq(nrow(subset)),
nrow(subset)*cumsum(c(0,spec)),
labels = names(spec)
))
res = split(subset, g)
The outcome variable is split almost equally between 0 and 1 in the original dataset. I have confirmed that this same ratio is maintained in the train, validate, and test sets, so there is no class imbalance.
I have then tuned and built the SVM model using the e1071 package in R. I am using a linear kernel and type C-classification:
tune.out <- tune(method = 'svm', YNPGALL120~., data = res$train, ranges = list(cost = c(0.001, 0.01, 0.1, 1, 5, 10, 100)))
bestmod <- tune.out$best.model
svm_model <- svm(YNPGALL120 ~ ., data = res$train, type = 'C-classification', kernel = 'linear', C = 5, scaled = TRUE)
The bestmod results in a cost of 5 and 967 support vectors. I then use the best model to make predictions using the validation set which produces a confusion matrix with values of .9781 for accuracy, .9854 for sensitivity, and .9713 for specificity:
predicted_svm <- predict(tune.out$best.model, newdata = res$validate)
confusionMatrix(res$validate$YNPGALL120, predicted_svm)
The high performance values seem too good to be true especially since I have run a logistic regression model on the same dataset and came up with accuracy, specificity, and sensitivity metrics in the mid 50s.
Is there anything wrong I've done in my steps that could lead to such high performance metrics? Alternatively, are there any good resources anyone could point me to in regards to building and tuning SVM models in R that might help?

How to use Cross Validation to Determine a Final Model using Training, Validation, & Test Sets

I am having trouble understanding which datasets (training, validation, and test) should be used for the model selection phase vs. the final model testing phase. I try to explain as much as I can in detail below, with reproducible code posted at the bottom. Thank you for any and all advice / suggestions!
Let's say we use the open "Life Expectancy (WHO)" dataset available on Kaggle to create predictions on the feature Life expectancy while using RMSE as our measurement of error. (I am asking more so about the concepts behind CV here rather than targeting the lowest RMSE). We first partition a training and test set led_train and led_test from the original dataset led.
Next we create a linear model with y = Life expectancy and x = GDP with data = led_train and do the same for random forest and knn models using repeated cross validation using the Caret Package. We then run predictions with the newly created models and led_test. The RMSE can be calculated using a function of true vs predicted ratings.
I now have RMSEs of Linear Model = 9.81141, Random Forest = 9.828415, and kNN = 8.923281 on the test set. Based on these values, I would obviously select the kNN model as my "Final Model"; however, I am not sure how to test it on new "unseen" data to see how well it actually performs.
Do I need to split led into 3 sets (training, validation, and test), then use the validation set for the model selection phase, saving the test set for the final model? Additionally, if I choose the kNN model, would I change the data argument inside the train function from led_train to led so that it is run on ALL of the data, after which I use led_test for the prediction? For the final model, would I again set trControl and run cross validation, or is this no longer necessary because it was already done on the training data? Please find my reproducible code posted below (you will have to read in the .csv according to your wd) and thank you again for taking a look!
*The seed is set to 123 for reproducibility and I am running R 3.6.3.
library(pacman)
pacman::p_load(readr, caret, tidyverse, dplyr)
# Download the dataset:
download.file("https://raw.githubusercontent.com/christianmckinnon/StackQ/master/LifeExpectancyData.csv", "LifeExpectancyData.csv")
# Read in the data:
led <-read_csv("LifeExpectancyData.csv")
# Check for NAs
sum(is.na(led))
# Set all NAs to 0
led[is.na(led)] <- 0
# Rename `Life expectancy` to life_exp to avoid using spaces
led <-led %>% rename(life_exp = `Life expectancy`)
# Partition training and test sets
set.seed(123, sample.kind = "Rounding")
test_index <- createDataPartition(y = led$life_exp, times = 1, p = 0.2, list = F)
led_train <- led[-test_index,]
led_test <- led[test_index,]
# Add RMSE as unit of error measurement
RMSE <-function(true_ratings, predicted_ratings){
sqrt(mean((true_ratings - predicted_ratings)^2))
}
# Create a linear model
led_lm <- lm(life_exp ~ GDP, data = led_train)
# Create prediction
lm_preds <-predict(led_lm, led_test)
# Check RMSE
RMSE(led_test$life_exp, lm_preds)
# The linear Model achieves an RMSE of 9.81141
# Create a Random Forest Model with Repeated Cross Validation
led_cv <- trainControl(method = "repeatedcv", number = 5, repeats = 3,
search = "random")
# Set the seed for reproducibility:
set.seed(123, sample.kind = "Rounding")
train_rf <- train(life_exp ~ GDP, data = led_train,
method = "rf", ntree = 150, trControl = led_cv,
tuneLength = 5, nSamp = 1000,
preProcess = c("center","scale"))
# Create Prediction
rf_preds <-predict(train_rf, led_test)
# Check RMSE
RMSE(led_test$life_exp, rf_preds)
# The rf Model achieves an RMSE of 9.828415
# kNN Model:
knn_cv <-trainControl(method = "repeatedcv", repeats = 1)
# Set the seed for reproducibility:
set.seed(123, sample.kind = "Rounding")
train_knn <- train(life_exp ~ GDP, method = "knn", data = led_train,
tuneLength = 10, trControl = knn_cv,
preProcess = c("center","scale"))
# Create the Prediction:
knn_preds <-predict(train_knn, led_test)
# Check the RMSE:
RMSE(led_test$life_exp, knn_preds)
# The kNN model achieves the lowest RMSE of 8.923281
My approach would be the following: the final model should use all of the data. I am not sure what would motivate not including all the data in the final model; you are just throwing away predictive power.
For cross validation, just split the data into training and test data, choose the modelling method with the best performance, and then fit the complete model with that method on all the data.
The bigger problem with the current code is that the cross-validation setup is likely to produce two things: spurious accuracy and potentially spurious model comparisons. You need to deal with temporal autocorrelation in the cross validation. For example, if the training dataset has features for the UK for 2014 and 2016, you would expect something like a random forest to predict life expectancy for 2015 with high accuracy, and that is potentially all the current type of cross validation is measuring. It is better to create a segregated split so that the countries in training and test are different, or to split the data into clearly distinct time periods. The exact approach depends on exactly what you want the model to predict.
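As a minimal sketch of country-disjoint resampling (assuming the WHO data keeps its Country column; adjust the name if yours differs), caret's groupKFold() can build folds so that no country is split across the analysis and assessment sets of a resample:
library(caret)
# build 5 folds whose held-out sets never share a country
set.seed(123)
country_folds <- groupKFold(led_train$Country, k = 5)  # list of training-row indices, one per fold
grouped_cv <- trainControl(method = "cv", index = country_folds)
train_rf_grouped <- train(life_exp ~ GDP, data = led_train,
                          method = "rf", ntree = 150,
                          trControl = grouped_cv,
                          preProcess = c("center", "scale"))
The RMSE estimates from this kind of resampling are usually less optimistic, but they better reflect performance on countries the model has never seen.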

ROC curve from train/test set in caret R package

I am trying to plot a ROC curve for a model that uses a test/train split created with the caret R package. Either I am not passing the right data to plot, or I am missing something about the creation of my test/train set. Any insight?
*Edited with correct answer
library(caret)
library(mlbench)
set.seed(506)
data(whas)
inTrain <- createDataPartition(y = whas$bin.frail,
p = .75, list = FALSE)
str(inTrain)
training <- whas[ inTrain,]
testing <- whas[-inTrain,]
nrow(training)
nrow(testing)
tc <- trainControl("cv", 10, savePredictions=T) #"cv" = cross-validation, 10-fold
mod1 <- train(bin.frail ~ . ,
data = training ,
method = "glm" ,
family = binomial ,
trControl = tc)
library(pROC)
mod1pred <- predict(mod1, newdata = testing, type = "prob")
plot(roc(testing$bin.frail, mod1pred[[2]]), print.auc=TRUE, col="red",
xlim=c(0,1))
It's hard to know for sure without a reproducible example, but presumably your response variable bin.frail isn't numeric. For example, it might be coded using letters (e.g., "Y", "N"), or with numbers that are being stored as a factor. You could check this using is.numeric(whas$bin.frail).
As a side note, in your call to roc() it looks like mod1pred is being created from your training data whereas testing$bin.frail is from your test data. You could correct this by adding newdata = testing to your call to predict when creating mod1pred.
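Putting those two fixes together, the prediction and ROC steps would look roughly like this (a sketch, assuming bin.frail is a two-level factor and mod1 was fit as above):
library(pROC)
mod1pred <- predict(mod1, newdata = testing, type = "prob")  # class probabilities on the held-out test set
roc_obj <- roc(testing$bin.frail, mod1pred[[2]])             # use the probability of the second class level
plot(roc_obj, print.auc = TRUE, col = "red")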

How to create a learning curve (bias/variance) from the output of caret::train

I am new to the caret library. I would like to use the train function to run cross-validation on my dataset (using the rpart method for classification). My goal is to produce learning curves using the data returned from my call to train. The learning curve would plot the dataset size on the x-axis. The error of the predictions on the training and cross-validation sets would be plotted as a function of dataset size.
My question is, does caret make predictions on both the training and cv folds? If the answer is yes, how would I go about extracting that data?
Assuming the answer is yes, here is a simple code sample that you could build on to illustrate:
library(MASS)
data(biopsy)
biopsy <- biopsy[, -1]
names(biopsy) <- c("thick", "u.size", "u.shape", "adhsn", "s.size", "nucl", "chrom", "n.nuc", "mit", "class")
biopsy.v2 <- na.omit(biopsy)
set.seed(1)
ind <- sample(2, nrow(biopsy.v2), replace = TRUE, prob = c(0.7, 0.3))
biop.train <- biopsy.v2[ind == 1, ]
tr.model <- caret::train(class ~ ., data= biop.train, trControl = trainControl(method="cv", number=4, verboseIter = FALSE, savePredictions = "final"), method='rpart')
#Can I extract train and cv accuracies from tr.model?
Thanks.
note: I realize that I may need to call train repeatedly with different samples of my dataset (assuming caret doesn't also support this), and that is not reflected in the code sample here.
You can try this:
A data frame with predictions for each resample:
tr.model$pred
A data frame with columns for each performance metric. Each row corresponds to each resample:
tr.model$resample
A data frame with the final parameters:
tr.model$bestTune
A data frame with the cross-validated performance and the values of the tuning parameters:
tr.model$results
To specify repeated CV:
trainControl(method = "repeatedcv", ..., repeats = n)
where n is an integer (the number of complete sets of folds to compute)
EDIT: To determine which resamples were in the test folds, the relevant information is in the tr.model$pred data frame:
tr.model$pred[tr.model$pred$Resample=="Fold1",4:5]
tr.model$pred[tr.model$pred$Resample=="Fold2",4:5]
tr.model$pred[tr.model$pred$Resample=="Fold3",4:5]
tr.model$pred[tr.model$pred$Resample=="Fold4",4:5]
The observations that were not in a given test fold were in that fold's training data.
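To answer the comment in the code ("Can I extract train and cv accuracies from tr.model?") more directly, here is a minimal sketch building on the tr.model fit above (the training accuracy has to be computed manually, since train only stores resampled performance):
# cross-validated accuracy, averaged over the 4 folds
cv_acc <- mean(tr.model$resample$Accuracy)
# resubstitution (training) accuracy of the final model
train_acc <- mean(predict(tr.model, newdata = biop.train) == biop.train$class)
cv_acc
train_acc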

caret::train: specify training data parameters

I am designing a neural network model that predicts estimates of the van Genuchten water retention parameters (theta_r, theta_s, alpha, n) using inputs ranging from limited to more extended data, such as texture, bulk density, and one or two water retention points. While investigating neural networks in R I found the RSNNS package, and I create and train multiple multi-layer perceptrons (MLPs) with tuning on the number of hidden units and the learning rate. The general performance, characterized by training and testing RMSEs, is really poor and random for these models. In fact, I used log-transformed values of the alpha and n parameters to avoid bias and account for their approximately lognormal distributions, but this does not help much. I was recommended to work with the nnet and caret packages, but I've had trouble adapting the code and I don't know what I'm doing wrong. Any suggestions?
#input dataset
basic <- read.table(url("https://dl.dropboxusercontent.com/s/m8qe4k5swz1m3ij/basic.txt?dl=1&token_hash=AAH6Z3d6fWTLoQZYi04Ys72sdufdERE5gm4v7eF0cgMlkQ"), header=T, sep=" ")
#output dataset
fitted <- read.table(url("https://dl.dropboxusercontent.com/s/rjx745ej80osbbu/fitted.txt?dl=1&token_hash=AAHP1zcPQyw4uSe8rw8swVm3Buqe3TP7I1j-4_SOeeUTvw"), header=T, sep=" ")
# Use log-transformed values of alpha and n output parameters
fitted$alpha <- log(fitted$alpha)
fitted$n <- log(fitted$n)
#Fit model with caret package
library(caret)
model <- train(x = basic, y = fitted, method='nnet', linout=TRUE, trace = FALSE,
#Grid of tuning parameters to try:
tuneGrid=expand.grid(.size=c(1,5,10),.decay=c(0,0.001,0.1)))
caret is just a wrapper around the algorithms it is calling, so you can specify any parameter of the underlying algorithm even if it is not an option in caret's tuning grid. This is accomplished via the "..." in caret's train() function, which basically means you can pass any extra parameters through to the method you are calling. I'm not sure which parameters you want to adjust in your nnet call (and I'm getting errors accessing your dropbox data), so here is a trivial example passing in specific values for maxit and Hess:
> library(caret)
> m1 <- train(Species~.,data=iris, method='nnet', linout=TRUE, trace = FALSE,trControl=trainControl("cv"))
> #this time pass in values for maxit and Hess
> m2 <- train(Species~.,data=iris, method='nnet', linout=TRUE, trace = FALSE,trControl=trainControl("cv"),maxit=10,Hess=T)
> m1$finalModel$call
nnet.formula(formula = modFormula, data = data, size = tuneValue$.size,
decay = tuneValue$.decay, linout = TRUE, trace = FALSE)
> m2$finalModel$call
nnet.formula(formula = modFormula, data = data, size = tuneValue$.size,
decay = tuneValue$.decay, linout = TRUE, trace = FALSE, maxit = 10,
Hess = ..4)
