Let me start by saying that I have read many posts on Cross Validation and it seems there is much confusion out there. My understanding is that it is simply this:
Perform k-fold Cross Validation (e.g. 10 folds) to estimate the average error across the folds.
If that error is acceptable, then train the model on the complete data set.
I am attempting to build a decision tree using rpart in R and taking advantage of the caret package. Below is the code I am using.
# load libraries
library(caret)
library(rpart)
# define training control
train_control <- trainControl(method = "cv", number = 10)
# train the model
model <- train(resp ~ ., data = mydat, trControl = train_control, method = "rpart")
# make predictions
predictions <- predict(model, mydat)
# append predictions
mydat <- cbind(mydat, predictions)
# summarize results
confusionMatrix <- confusionMatrix(mydat$predictions, mydat$resp)
I have one question regarding the caret train application. I have read the train section of "A Short Introduction to the caret Package", which states that during the resampling process the "optimal parameter set" is determined.
In my example, have I coded it up correctly? Do I need to define the rpart parameters within my code, or is my code sufficient?
When you perform k-fold cross validation, you are already making a prediction for each sample, just over 10 different models (presuming k = 10).
There is no need to make predictions on the complete data, as you already have predictions from the k different models.
What you can do is the following:
train_control <- trainControl(method = "cv", number = 10, savePredictions = TRUE)
Then
model <- train(resp ~ ., data = mydat, trControl = train_control, method = "rpart")
If you want to see the observed values and the predictions in a nice format, simply type:
model$pred
Also, for the second part of your question, caret handles the parameter tuning for you. You can tune the parameters manually via tuneGrid if you desire, as in the sketch below.
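For illustration, a minimal sketch of manual tuning for rpart's complexity parameter cp (the grid values here are just an example, not a recommendation):
library(caret)
library(rpart)
# candidate values for cp, the tuning parameter caret exposes for method = "rpart"
grid <- expand.grid(cp = c(0.001, 0.01, 0.05, 0.1))
train_control <- trainControl(method = "cv", number = 10, savePredictions = TRUE)
# caret evaluates every cp value with 10-fold CV and keeps the best one
model <- train(resp ~ ., data = mydat,
               trControl = train_control,
               method = "rpart",
               tuneGrid = grid)
model$bestTune   # the cp value that was selected
model$results    # cross-validated performance for every cp value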
An important thing to note here is not to confuse model selection with model error estimation.
You can use cross-validation to estimate the model hyper-parameters (the regularization parameter, for example).
Usually that is done with 10-fold cross validation, because it is a good choice for the bias-variance trade-off (2-fold CV could produce models with high bias; leave-one-out CV can produce models with high variance/over-fitting).
After that, if you don't have an independent test set, you can estimate an empirical distribution of some performance metric using cross validation: once you have found the best hyper-parameters, you use them to estimate the CV error.
Note that in this step the hyper-parameters are fixed, but the model parameters may differ across the cross-validation models.
On the first page of the short introduction document for the caret package, it is mentioned that the optimal model is chosen across the parameters.
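To make this concrete with caret: after train() has selected the best hyper-parameters, the fold-by-fold performance at that fixed setting is stored in the fitted object, which gives you the empirical distribution described above. A minimal sketch, reusing the mydat/resp data from earlier:
library(caret)
train_control <- trainControl(method = "cv", number = 10, savePredictions = "final")
model <- train(resp ~ ., data = mydat,
               trControl = train_control,
               method = "rpart",
               tuneLength = 10)   # evaluate 10 candidate cp values
model$bestTune    # the selected hyper-parameter (cp)
model$resample    # per-fold performance at that cp; its mean and sd summarize the CV error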
As a starting point, one must understand that cross-validation is a procedure for selecting the best modeling approach rather than the model itself (see CV - Final model selection). caret provides a grid search option via tuneGrid, where you can provide a list of parameter values to test. The final model will have the optimized parameters after training is done.
The caret library in R has an argument 'selectionFunction' inside trainControl().
It's used to guard against over-fitting by selecting simpler models, e.g. via Breiman's one-standard-error rule or a tolerance rule.
Does mlr have an equivalent? If so, which function is it within?
mlr only allows you to choose optimal hyperparameters by optimizing certain measures/metrics.
However, essentially each "measure" in mlr is just a function that specifies how a certain performance value is computed and handled.
You can try to write your own custom measure, as outlined in this vignette; a rough sketch is shown below.
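For illustration only, here is a minimal custom measure built with mlr's makeMeasure(); it merely recomputes the mean misclassification error, so treat it as a template for your own selection logic rather than an implementation of the one-standard-error rule (the id and internals are my own assumptions, not from the original post):
library(mlr)
# a custom measure that simply recomputes the mean misclassification error
my.mmce <- makeMeasure(
  id = "my.mmce",
  minimize = TRUE,
  properties = c("classif", "req.pred", "req.truth"),
  fun = function(task, model, pred, feats, extra.args) {
    mean(pred$data$truth != pred$data$response)
  }
)
# use it like any built-in measure, e.g. during resampling or tuning
r <- resample(makeLearner("classif.rpart"), sonar.task,
              makeResampleDesc("CV", iters = 5), measures = my.mmce)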
Other than that, it might be worth opening this as a feature request in the new mlr3 framework, specifically in mlr3measures, since mlr itself is deprecated.
Posting an answer to my own question: I found this.
Estimate relative overfitting.
Source: R/relativeOverfitting.R
Estimates the relative overfitting of a model as the ratio of the difference in test and train performance to the difference of test performance in the no-information case and train performance. In the no-information case the features carry no information with respect to the prediction. This is simulated by permuting features and predictions.
estimateRelativeOverfitting(
  predish,
  measures,
  task,
  learner = NULL,
  pred.train = NULL,
  iter = 1
)
Arguments
predish - (ResampleDesc | ResamplePrediction | Prediction) Resampling strategy, resampling prediction, or test predictions.
measures - (Measure | list of Measure) Performance measure(s) to evaluate. Default is the default measure for the task, see getDefaultMeasure.
task - (Task) The task.
learner - (Learner | character(1)) The learner. If you pass a string, the learner will be created via makeLearner.
pred.train - (Prediction) Training predictions. Only needed if test predictions are passed.
iter - (integer) Iteration number. Default 1; usually you don't need to specify this. Only needed if test predictions are passed.
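A minimal usage sketch based on the signature above; the task, learner, and measure are placeholders I chose for illustration:
library(mlr)
# relative overfitting of an rpart classifier under 10-fold CV,
# measured with mean misclassification error (mmce)
ro <- estimateRelativeOverfitting(
  predish  = makeResampleDesc("CV", iters = 10),
  measures = mmce,
  task     = sonar.task,
  learner  = makeLearner("classif.rpart")
)
ro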
I want to fit a random forest model, so I split my data into 70% for training and 30% for testing. I applied a cross-validation procedure to my training data (70%) and obtained a precision for the cross validation. After that, I tested my model on the test data (30%) and obtained another precision value.
So, I want to know whether this is a good approach to test the robustness of my model, and what the interpretation of these two precision values is.
Thanks in advance.
You do not need to perform cross-validation when building an RF model, as RF calculates its own CV score, known as the OOB score. In fact, the results that you get from the model (the confusion matrix at model_name$confusion) are based on the OOB scores.
You can use the OOB scores (and the various metrics derived from them, such as precision, recall, etc.) to select a model from a list of models (for example, models with different parameters/arguments) and then use the test data to check whether the selected model generalises well.
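For instance, a brief sketch of reading the OOB-based results straight from a fitted randomForest object (xtrain and ytrain are placeholders for your 70% training data):
library(randomForest)
set.seed(42)
rf <- randomForest(x = xtrain, y = as.factor(ytrain), ntree = 500)
rf$confusion               # OOB confusion matrix, including per-class error rates
rf$err.rate[rf$ntree, ]    # OOB error rate after all trees have been grown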
I have a really large dataset and I'm trying to build a classification model using R.
However, I need to use a train, test and validation set, but I'm a bit confused about how to do this. For example, I built a tree using a training set and then computed predictions using a test set. But I believe I should be using the training and test sets to tune the tree, and after that use the validation set to validate it. How can I do this?
library(rpart)
part.installed <- rpart(TARGET ~ RS_DESC + SAP_STATUS + ACTIVATION_STATUS +
                          ROTUL_STATUS + SIM_STATUS + RATE_PLAN_SEGMENT_NORM,
                        data = trainSet, method = "class")
part.predictions <- predict(part.installed, testSet, type = "class")
(P.S. the tree is only an example; it could be another classification algorithm.)
Usually the terminology is as follows:
The training set is used to build the classifier
The validation set is used to tune the algorithm hyperparameters repeatedly. So there will be some overfitting here, but that is why there is another stage:
The test set must not be touched until the classifier is final, to prevent overfitting. It serves to estimate the true accuracy you could expect if you put the model into production.
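A minimal sketch of such a three-way split in base R (the proportions and object names are just an example):
set.seed(123)
n <- nrow(mydata)
idx <- sample(c("train", "valid", "test"), size = n,
              replace = TRUE, prob = c(0.6, 0.2, 0.2))
trainSet <- mydata[idx == "train", ]
validSet <- mydata[idx == "valid", ]   # used repeatedly while tuning
testSet  <- mydata[idx == "test", ]    # touched once, at the very end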
I was reading the glmnet documentation and I found this:
Note also that the results of cv.glmnet are random, since the folds are selected at random. Users can reduce this randomness by running cv.glmnet many times, and averaging the error curves.
The following code uses caret with repeated CV.
library(caret)
ctrl <- trainControl(verboseIter = TRUE, classProbs = TRUE,
                     summaryFunction = twoClassSummary,
                     method = "repeatedcv", repeats = 10)
fit <- train(x, y, method = "glmnet", metric = "ROC", trControl = ctrl)
Is that the best way to run glmnet with cross validation through caret, or is it better to run cv.glmnet directly?
You need to define "best way". Do you want to use:
A regularized regression alone on a dataset for feature selection? In that case, use glmnet directly. Max Kuhn has implied that you may be better off using models with built-in CV features, as they have been optimized for both predictor selection and minimizing error. See below.
"In many cases, using these models with built-in feature selection will be more efficient than algorithms where the search routine for
the right predictors is external to the model. Built-in feature
selection typically couples the predictor search algorithm with the
parameter estimation and are usually optimized with a single
objective function (e.g. error rates or likelihood)." (Kuhn, caret
package documentation: caret feature selection overview)
Or are you comparing different models, one of which is glmnet? In that case, caret may be a great choice.
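If you do go the direct route, here is a hedged sketch of what the glmnet documentation quoted above suggests, i.e. running cv.glmnet several times and averaging the error curves (it assumes x is a numeric matrix, y a binary outcome, and AUC as the metric of interest):
library(glmnet)
set.seed(1)
runs <- 10
# each call uses the same lambda path (computed from the full data),
# so the error curves line up and can be averaged
fits <- lapply(seq_len(runs), function(i)
  cv.glmnet(x, y, family = "binomial", type.measure = "auc", nfolds = 10))
lambda   <- fits[[1]]$lambda
mean.cvm <- rowMeans(sapply(fits, `[[`, "cvm"))
best.lambda <- lambda[which.max(mean.cvm)]   # AUC: higher is better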
Using the defaults of train in the caret package, I am trying to train a random forest model on the dataset xtr2 (dim(xtr2): 765 9408). The problem is that it takes unbelievably long (more than one day for a single training run) to fit. As far as I know, train by default uses bootstrap resampling (25 repetitions) and three candidate values of mtry, so why does it take so long?
Please note that I need to train the random forest three times in each run (because I need to average the results of different random forest models on the same data), which takes about three days, and I need to run the code for 10 different samples, so it would take me 30 days to get the results.
My question is: how can I make it faster?
Can changing the defaults of train reduce the running time, for example by using CV for resampling?
Can parallel processing with the caret package help? If yes, how can it be done?
Can tuneRF from the randomForest package make any difference to the run time?
This is the code:
rffit <- train(xtr2, ytr2, method = "rf", ntree = 500)

rf.mdl <- randomForest(x = xtr2, y = as.factor(ytr2), ntree = 500,
                       keep.forest = TRUE, importance = TRUE, oob.prox = FALSE,
                       mtry = rffit$bestTune$mtry)
Thank you,
My thoughts on your questions:
Yes! But don't forget you also have control over the search grid caret uses for the tuning parameters; in this case, mtry. I'm not sure what the default search grid is for mtry, but try the following:
ctrl <- trainControl("cv", number = 5, verboseIter = TRUE)
set.seed(101) # for reproducibility
rffit <- train(xtr2, ytr2, method = "rf", trControl = ctrl, tuneLength = 5)
Yes! See the caret website: http://topepo.github.io/caret/parallel-processing.html. A minimal sketch is shown below.
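For example, a sketch of the doParallel setup described on that page (the number of workers and the resampling settings are just an example):
library(doParallel)
cl <- makePSOCKcluster(4)   # 4 worker processes; adjust to your machine
registerDoParallel(cl)
# train() will now run its resampling iterations in parallel
rffit <- train(xtr2, ytr2, method = "rf",
               trControl = trainControl(method = "cv", number = 5),
               tuneLength = 5)
stopCluster(cl)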
Yes and No! tuneRF simply uses the OOB error to find an optimal value of mtry (the only tuning parameter in randomForest). Using cross-validation tends to work better and produce a more honest estimate of model performance. tuneRF can take a long time but should be quicker than k-fold cross-validation.
Overall, the online manual for caret is quite good: http://topepo.github.io/caret/index.html.
Good luck!
You use train only for determining mtry. I would skip the train step and stay with the default mtry:
rf.mdl <- randomForest(x = xtr2, y = as.factor(ytr2), ntree = 500,
                       keep.forest = TRUE, importance = TRUE, oob.prox = FALSE)
I strongly doubt that 3 different runs are a good idea.
If you do 10-fold cross-validation (I am not sure it should be done at all, as validation is already ingrained in the random forest via the OOB error), 10 folds is too many if you are short on time; 5 folds would be enough.
Finally, the running time of randomForest is proportional to ntree. Set ntree = 100 and your program will run about 5 times faster.
I would also just add that, if the main issue is speed, there are several other random forest implementations available through caret, and many of them are much faster than the original randomForest, which is notoriously slow. I've found ranger to be a nice alternative that suited my very simple needs.
Here is a nice summary of the random forest packages in R. Many of these are available in caret already.
Also for consideration, here's an interesting study of the performance of ranger vs rborist, where you can see how performance is affected by the tradeoff between sample size and features.
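To illustrate, switching to ranger inside caret is usually just a change of the method string; a quick sketch (the tuning settings here are only an example):
library(caret)
ctrl <- trainControl(method = "cv", number = 5, verboseIter = TRUE)
# ranger is a fast C++ reimplementation of random forests
rffit.ranger <- train(xtr2, ytr2, method = "ranger",
                      trControl = ctrl, tuneLength = 5,
                      num.trees = 100)   # passed through to ranger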