I've searched and searched but can't find the answer. I have a C5.0 model (c5_model) trained and ready, but I needed 100 trials to get it performing at the level I want. Now I'm stuck trying to get the decision tree out of the model in R. I have run summary(), but how do I get the tree itself out? Which trial do I want to use?
Update:
I'm building the model by doing the following
control <- trainControl(method = "repeatedcv",
number = 5,
repeats = 3,
classProbs = TRUE,
summaryFunction = twoClassSummary)
grid <- expand.grid(.winnow = c(FALSE),
                    .trials = 100,
                    .model = "tree")
c5_model <- train(HasFraud ~ ., data = train, method = "C5.0",
                  trControl = control, metric = "ROC",
                  tuneGrid = grid, verbose = FALSE)
Is this the wrong method to train the model?
An object of class C5.0 has a number of elements, as described in the help file you can pull up with ?C50::C5.0.default. One of those elements is tree. If you've assigned the output of a call to C5.0() to an object, say model, you can extract any of its elements using the $ operator. For example:
model <- C5.0(<the call you made that generated the model>)
model$tree
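If the model came from caret, as in the update above, the fitted C5.0 object is stored in the finalModel element of the train object, so the same idea applies. A minimal sketch (new_data is a hypothetical data frame of predictors):
summary(c5_model$finalModel)            # prints the tree for every boosting trial
c5_model$finalModel$tree                # the raw tree string C5.0 stores
predict(c5_model, newdata = new_data)   # predictions combine all trials
As for which trial to use: with trials = 100 the model is boosted, so there is no single tree to pick. predict() aggregates the votes of all the trials (predict.C5.0 also accepts a trials argument if you want to use fewer).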
Related
In the tree package, we can use the following code to choose the number of terminal nodes:
tree.model = tree(...)
tree.prune = prune.tree(tree.model, best = 20)
This code returns a new tree with 20 terminal nodes.
In the rpart package, the following code can be used for this:
rpart.model = rpart(...)
rpart.prune = prune.rpart(rpart.model, cp =?)
Here cp is the cost-complexity parameter, but I want something like the best argument of prune.tree.
The rpart package doesn't have an argument similar to best in the tree package. The tree package was developed to cover functionality that rpart was missing.
To choose an appropriate number of nodes, you can tune other parameters in rpart. For example:
prune.control <- rpart.control(minsplit = 20, minbucket = round(20/3), xval = 10)  # minbucket defaults to round(minsplit/3)
rpart(formula, data, method, control = prune.control)
Then evaluate the cross-validated error vs. cp to choose a cp value. You can also tune the cp value automatically using the caret package. For example:
ctrl <- trainControl(method = "repeatedcv", number = 10, repeats = 5)
model <- train(x = train_data,
y = labels,
method = "rpart",
trControl = ctrl)
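If you specifically want the effect of tree's best argument, you can approximate it from the cptable by pruning to the cp whose subtree size is closest to a target number of leaves. A minimal sketch, using iris for illustration:
library(rpart)
fit <- rpart(Species ~ ., data = iris, method = "class",
             control = rpart.control(cp = 0, xval = 10))
target_leaves <- 3
sizes <- fit$cptable[, "nsplit"] + 1               # leaves = splits + 1
best_row <- which.min(abs(sizes - target_leaves))  # closest subtree size
pruned <- prune(fit, cp = fit$cptable[best_row, "CP"])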
Like the title says, I'm trying to run a decision tree both with and without cross-validation using the rpart package in R. I'm doing this using the xval parameter, as described in the vignette (https://cran.r-project.org/web/packages/rpart/vignettes/longintro.pdf)
Unfortunately, I'm getting the same tree with and without CV. I've compared the calculation time for each, and the CV model takes about 10 times as long, so it's apparently doing something; I just can't figure out what.
I've also redone the model a number of times with different complexity parameters, but it hasn't made any difference.
Here's sample code that shows my problem: the printcp() outputs show the same results, and the predictions from both models are the same on both the training set and a hold-out set.
library(rpart)
library(caret)
library(dplyr)  # needed for slice() below
abalone <- read.csv(file = 'https://archive.ics.uci.edu/ml/machine-learning-databases/abalone/abalone.data',header = FALSE)
names(abalone) <- c("sex", "length", "diameter", "height", "whole_weight", "shucked_weight", "viscera_weight", "shell_weight", "rings")
train_set <- createDataPartition(abalone$sex, times = 1, p = 0.8, list = FALSE)
abalone_train <- slice(abalone, train_set)
abalone_test <- slice(abalone, -train_set)
abalone_fit_noCV <- rpart(sex ~ .,
data = abalone_train,
method = "class",
parms = list(split = 'information'),
control = rpart.control(xval = 0,
cp = 0.005))
abalone_fit_CV <- rpart(sex ~ .,
data = abalone_train,
method = "class",
parms = list(split = 'information'),
control = rpart.control(xval = 10,
cp = 0.005))
printcp(abalone_fit_noCV)
printcp(abalone_fit_CV)
CV_pred <- predict(abalone_fit_CV, type = "class")
noCV_pred <- predict(abalone_fit_noCV, type = "class")
confusionMatrix(CV_pred, noCV_pred)
CV_pred <- predict(abalone_fit_CV, abalone_test, type = "class")
noCV_pred <- predict(abalone_fit_noCV, abalone_test, type = "class")
confusionMatrix(CV_pred, noCV_pred)
In true beginner fashion, I figured this out shortly after posting.
For anybody else coming upon this issue, it is basically answered on Cross Validated:
The final tree that is returned is still the initial tree. You must use the prune function using the cross-validation plot to choose the best subtree.
This is clear if you read the full Pruning the tree section of the vignette, rather than just the cross-validation section.
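Using the objects from the question, the standard recipe is to take the cp with the lowest cross-validated error from the cptable and prune to it (a sketch):
best_cp <- abalone_fit_CV$cptable[which.min(abalone_fit_CV$cptable[, "xerror"]), "CP"]
abalone_fit_pruned <- prune(abalone_fit_CV, cp = best_cp)
printcp(abalone_fit_pruned)   # now differs from the unpruned fit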
I'm trying to use SMOTE in R within the trainControl function in caret. Following the author's example I do as follows:
#first, create an imbalanced data set
set.seed(2969)
imbal_train <- twoClassSim(10000, intercept = -20, linearVars = 20)
imbal_test <- twoClassSim(10000, intercept = -20, linearVars = 20)
table(imbal_train$Class)
Class1 Class2
9411 589
I want to use the SMOTE algorithm to oversample my minority class. However, this has to be done carefully. For instance, we shouldn't oversample before doing cross validation, since that would give an optimistic estimate of the generalization error.
#create my folds (5 in this case)
folds <- createFolds(factor(imbal_train$Class), k = 5, list = TRUE, returnTrain = TRUE)
#trainControl to set up my training phase.
ctrl <- trainControl(method = "cv", index = folds,
classProbs = TRUE,
summaryFunction = twoClassSummary,
savePredictions = "all",
## new option here:
sampling = "smote")
#train the model
set.seed(5627)
smote_inside <- train(Class ~ ., data = imbal_train,
method = "treebag",
nbagg = 50,
metric = "ROC",
trControl = ctrl)
It runs without error. I now want to see the training and testing sets used in each iteration. I need to make sure that one fold was held out before the training folds were oversampled, and that no synthetic records were created inside it.
Looking into the objects output by train, I could see that smote_inside$control may have some information. Concretely, it has the index and indexOut elements: these are the row indexes for the training and testing sets in each CV iteration. However, when I do:
lista <- smote_inside$control
dd <- imbal_train[lista$index$Fold1, ]  # training data, first CV iteration
table(dd$Class)
Class1 Class2
7529 471
You can see that it is still imbalanced. SMOTE is supposed to create some synthetic records from the minority class. Maybe this information is saved in another place?
Questions:
How can I see the new training records that were created using SMOTE to balance the data?
How can I be sure that the testing fold wasn't contaminated by the oversampling?
Where can I find what caret is doing with SMOTE? Pointers to the source code would be appreciated.
Some answers:
It does not retain that information.
It is designed not to contaminate the holdout data. If you want proof (beyond what is shown in the link that you reference), look at createModel to see how it does the sampling and predictionFunction for how the data are handled prior to prediction.
The package sources are available basically everywhere. The two functions above (along with probFunction) do the work.
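If you want to see what the synthetic rows look like, you can reproduce the inside-the-resampling sampling step manually on a single fold. A sketch, assuming DMwR::SMOTE (the function caret's sampling = "smote" historically wrapped; DMwR has since been archived on CRAN, and themis::smote is a maintained alternative):
library(DMwR)
train_fold <- imbal_train[folds$Fold1, ]    # analysis rows (returnTrain = TRUE)
held_out   <- imbal_train[-folds$Fold1, ]   # assessment rows, left untouched
smoted <- SMOTE(Class ~ ., data = train_fold)
table(smoted$Class)   # much more balanced; the extra minority rows are synthetic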
I am using the caret package to train an elastic net model on my dataset modDat. I take a grid search approach paired with repeated cross validation to select the optimal values of the lambda and fraction parameters required by the elastic net function. My code is shown below.
library(caret)
library(elasticnet)
grid <- expand.grid(
lambda = seq(0.5, 0.7, by=0.1),
fraction = seq(0, 1, by=0.1)
)
ctrl <- trainControl(
method = 'repeatedcv',
number = 5, #folds
repeats = 10, #repeats
classProbs = FALSE
)
set.seed(1)
enetTune <- train(
y ~ .,
data = modDat,
method = 'enet',
metric = 'RMSE',
tuneGrid = grid,
verbose = FALSE,
trControl = ctrl
)
I can get predictions using y_hat <- predict(enetTune, modDat), but I cannot view the coefficients underlying the predictions.
I have tried coef(enetTune$finalModel), but the only thing returned is NULL. I suspect that I have to give the coef() function more information, but I am not sure how to do this.
In addition, I would like to produce a box plot of the 50 sets of coefficients (10 repeats of 5 folds) associated with the optimal lambda and fraction parameters.
To see the coefficients, use predict:
predict(enetTune$finalModel, type = "coefficients")
See ?predict.enet for more information on how to get specific coefficients.
Following on from the answer by #Weihuang Wong, you can get the coefficients from the final model using the following code:
predict.enet(enetTune$finalModel, s=enetTune$bestTune[1, "fraction"], type="coef", mode="fraction")$coefficients
To me what works best is stats::predict, as in #Weihuang Wong's answer. However, as the OP pointed out in a comment, that provides a list of coefficients for every value of lambda tested.
The important thing to understand here is that when you are using predict, your intention is precisely to predict the value of the parameters, not really to retrieve them. You should be aware of that and explore the options available.
In this case, you could use the same function with the argument s to pick the point on the regularization path you tuned (here the fraction, with mode = "fraction"). Remember that you are still predicting, but this time you will get the coefficients you are looking for.
stats::predict(enetTune$finalModel, type = "coefficients", s = enetTune$bestTune$fraction, mode = "fraction")
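As a usage note, the call returns a list whose coefficients element holds the actual values; dropping the zeros shows which predictors are selected at the tuned fraction:
cf <- stats::predict(enetTune$finalModel, type = "coefficients",
                     s = enetTune$bestTune$fraction, mode = "fraction")$coefficients
cf[cf != 0]   # the predictors that survive at the tuned fraction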
Primary Question:
After reading the documentation and google searching, I am still stumped as to what the situations are where it is advisable to pre-define resampling indices such as:
resamples <- createResample(classVector_training, times = 500, list=TRUE)
or predefine seeds such as:
seeds <- vector(mode = "list", length = 501)  # length = (n_repeats * n_resampling) + 1
for (i in 1:501) seeds[[i]] <- sample.int(n = 1000, 1)
My plan is to train a bunch of different reproducible models using parallel processing via the doParallel package. Is predefining resamples unnecessary due to the seeds already being set? Do I need to predefine seeds in the way above instead of setting seeds=NULL in the trainControl object because I intend to use parallel processing? Is there any reason to pre-define both index and seeds as I've seen at least once via searching google? And what is a reason to ever use indexOut?
Side Question:
So far, I've managed to run train fine for RF:
rfControl <- trainControl(method = "oob", number = 500, p = 0.7,
                          returnData = TRUE, returnResamp = "all",
                          savePredictions = TRUE, classProbs = TRUE,
                          summaryFunction = twoClassSummary,
                          allowParallel = TRUE)
mtryGrid <- expand.grid(mtry = floor(sqrt(9480)))  # set mtry to the square root of the number of variables
rfTrain <- train(x = training, y = classVector_training, method = "rf", trControl = rfControl, tuneGrid = mtryGrid)
But when I try to run train() with method = "Boruta", like this:
borutaControl <- trainControl(method = "bootstrap", number = 500, p = 0.7,
                              returnData = TRUE, returnResamp = "all",
                              savePredictions = TRUE, classProbs = TRUE,
                              summaryFunction = twoClassSummary,
                              allowParallel = TRUE)
borutaTrain <- train(x = training, y = classVector_training, method = "Boruta", trControl = borutaControl, tuneGrid = mtryGrid)
I end up getting the following error:
Error in names(trControl$indexOut) <- prettySeq(trControl$indexOut) : 'names' attribute [1] must be the same length as the vector [0]
Anyone know why?
There are a few different times random numbers are used here, so I'll try to be specific about which seeds.
Is predefining resamples unnecessary due to the seeds already being set?
If you do not provide your own resampling indices, the first thing that train, rfe, sbf, gafs, and safs do is create them. So, setting the overall seed prior to calling these functions controls the randomness of creating the resamples, and you can call them repeatedly and get the same samples if you set the main seed beforehand:
set.seed(2346)
mod1 <- train(y ~ x, data = dat, method = "a", ...)
set.seed(2346)
mod2 <- train(y ~ x, data = dat, method = "b", ...)
set.seed(2346)
mod3 <- rfe(x, y, ...)
You can use createResample or createFolds if you like and give those to trainControl's index argument too.
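For example, a sketch of predefining the resamples (y stands in for your outcome vector):
set.seed(2346)
cv_folds <- createFolds(y, k = 10, list = TRUE, returnTrain = TRUE)
ctrl <- trainControl(method = "cv", index = cv_folds)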
One other note about this: if indexOut is missing, the holdouts are defined as whatever samples were not used to train the model. There are cases when this is bad (see the exception below) and that is why indexOut exists.
Do I need to predefine seeds in the way above instead of setting seeds=NULL in the trainControl object because I intend to use parallel processing?
That was the main intent. Before the seeds argument was added, there was no way to control the randomness inside the model fits once the worker processes started up. You don't have to use it, but it will lead to reproducible models.
Note that, like resamples, train will create seeds for you if you do not supply them. They are found in the control$seeds element in the train object.
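For reference, a sketch of the documented layout of that seeds list, assuming B resamples and M tuning-parameter combinations:
B <- 15   # e.g. 5-fold CV repeated 3 times
M <- 1    # number of rows in the tuning grid
seeds <- vector(mode = "list", length = B + 1)
for (i in 1:B) seeds[[i]] <- sample.int(10000, M)   # one seed per model, per resample
seeds[[B + 1]] <- sample.int(10000, 1)              # for the final model fit
ctrl <- trainControl(method = "repeatedcv", number = 5, repeats = 3,
                     seeds = seeds)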
Note that trainControl(seeds) has nothing to do with creating the resamples.
Is there any reason to pre-define both index and seeds as I've seen at least once via searching google?
If you want to pre-define the resamples and control any potential randomness in the worker processes that build the models, then yes.
And what is a reason to ever use indexOut?
There are always specialized situations. The reason it is there is for time series data where you might have data splits that do not involve all the samples passed to train (this is the exception mentioned above). See the white space in this graphic.
tl/dr
trainControl(seeds) only controls the randomness of the model fits
setting the seed prior to calling train is one way to control the randomness of data splitting
Max