I'm using the caret package to analyse Random Forest models built using ranger. I can't figure out how to call the train function using the tuneGrid argument to tune the model parameters.
I think I'm specifying the tuneGrid argument incorrectly, but I can't figure out why. Any help would be appreciated.
data(iris)
library(ranger)
model_ranger <- ranger(Species ~ ., data = iris, num.trees = 500, mtry = 4,
importance = 'impurity')
library(caret)
# my tuneGrid object:
tgrid <- expand.grid(
num.trees = c(200, 500, 1000),
mtry = 2:4
)
model_caret <- train(Species ~ ., data = iris,
method = "ranger",
trControl = trainControl(method="cv", number = 5, verboseIter = T, classProbs = T),
tuneGrid = tgrid,
importance = 'impurity'
)
Here is the syntax for ranger in caret:
library(caret)
add a . prefix to the tuning parameter names:
tgrid <- expand.grid(
.mtry = 2:4,
.splitrule = "gini",
.min.node.size = c(10, 20)
)
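If you are not sure which parameters caret exposes for a given method, you can check with modelLookup (for recent caret versions this lists mtry, splitrule and min.node.size for ranger):
library(caret)
# list the hyperparameters caret will tune for the "ranger" method
modelLookup("ranger")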
Only these three parameters can be tuned through caret's grid; the number of trees is not one of them. You can, however, pass num.trees and importance directly to train, which forwards them to ranger:
model_caret <- train(Species ~ ., data = iris,
method = "ranger",
trControl = trainControl(method="cv", number = 5, verboseIter = T, classProbs = T),
tuneGrid = tgrid,
num.trees = 100,
importance = "permutation")
To get variable importance:
varImp(model_caret)
#output
Overall
Petal.Length 100.0000
Petal.Width 84.4298
Sepal.Length 0.9855
Sepal.Width 0.0000
To check that num.trees is actually being passed through, set it to 1000+ and the fit will be noticeably slower. After changing to importance = "impurity", the output becomes:
#output:
Overall
Petal.Length 100.00
Petal.Width 81.67
Sepal.Length 16.19
Sepal.Width 0.00
If it does not work, I recommend installing the latest ranger from CRAN and caret from GitHub:
devtools::install_github('topepo/caret/pkg/caret')
To tune the number of trees you can loop over num.trees values with lapply, using fixed folds created by createMultiFolds or createFolds so the resamples stay comparable.
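A minimal sketch of that idea (my own illustration, not part of the original answer): fix the folds once, fit one train() call per num.trees value, and compare the resampled results.
library(caret)
# fixed folds so every num.trees value is evaluated on identical resamples
set.seed(42)
folds <- createFolds(iris$Species, k = 5, returnTrain = TRUE)
fits <- lapply(c(200, 500, 1000), function(nt) {
  train(Species ~ ., data = iris,
        method = "ranger",
        num.trees = nt,  # passed through to ranger()
        tuneGrid = expand.grid(mtry = 2:4,
                               splitrule = "gini",
                               min.node.size = c(10, 20)),
        trControl = trainControl(method = "cv", index = folds))
})
# compare accuracy across the three tree counts
summary(resamples(setNames(fits, c("nt_200", "nt_500", "nt_1000"))))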
EDIT: while the above example works with caret version 6.0-84, using the hyperparameter names without the leading dots works as well:
tgrid <- expand.grid(
mtry = 2:4,
splitrule = "gini",
min.node.size = c(10, 20)
)
I have some doubts about the hyperparameter tuning step; I think I may be confusing a few things.
I split my dataset into training (70%), validation (15%) and testing (15%). Below is the code used for regression with Random Forest.
1. Training
I perform the initial training with the dataset, as follows:
library(ranger)
library(caret)  # for RMSE()
rf_model <- ranger(y ~ .,
data = train,
num.trees = 500,
mtry = 5,
min.node.size = 100,
importance = "impurity")
I get the R squared and the RMSE using the actual and predicted data from the training set.
pred_rf <- predict(rf_model,train)
pred_rf <- data.frame(pred = pred_rf$predictions, obs = train$y)
RMSE_rf <- RMSE(pred_rf$pred, pred_rf$obs)
R2_rf <- (cor(pred_rf$pred, pred_rf$obs)) ^ 2
2. Parameter optimization
Using a parameter grid, the best model is chosen based on performance.
hyper_grid <- expand.grid(mtry = seq(3, 12, by = 4),
sample_size = c(0.5,1),
min.node.size = seq(20, 500, by = 100),
MSE = as.numeric(NA),
R2 = as.numeric(NA),
OOB_RMSE = as.numeric(NA)
)
And I perform the search for the best model according to the smallest OOB error, for example.
for (i in 1:nrow(hyper_grid)) {
model <- ranger(formula = y ~ .,
data = train,
num.trees = 500,
mtry = hyper_grid$mtry[i],
sample.fraction = hyper_grid$sample_size[i],
min.node.size = hyper_grid$min.node.size[i],
importance = "impurity",
replace = TRUE,
oob.error = TRUE,
verbose = TRUE
)
hyper_grid[i, "MSE"] <- model$prediction.error
hyper_grid[i, "R2"] <- model$r.squared
hyper_grid[i, "OOB_RMSE"] <- sqrt(model$prediction.error)
}
Choose the best-performing model:
x <- hyper_grid[which.min(hyper_grid$OOB_RMSE), ]
The final model:
rf_fit_model <- ranger(formula = y ~ .,
data = train,
num.trees = 100,
mtry = x$mtry,
sample.fraction = x$sample_size,
min.node.size = x$min.node.size,
oob.error = TRUE,
verbose = TRUE,
importance = "impurity"
)
Perform model prediction with validation data
rf_predict_val <- predict(rf_fit_model, validation)
rf_predict_val <- data.frame(pred = rf_predict_val$predictions, obs = validation$y)
RMSE_rf_fit <- RMSE(rf_predict_val$pred, rf_predict_val$obs)
R2_rf_fit <- (cor(rf_predict_val$pred, rf_predict_val$obs)) ^ 2
Well, now I wonder if I should replicate the model evaluation with the test data.
The fact is that the validation data is being used only as a "test" and is not effectively helping to validate the model.
I've used cross validation in other methods, but I'd like to do it more manually. One of the reasons is that the CV via caret is very slow.
Am I on the right track?
Code using caret, but it is very slow:
ctrl <- trainControl(method = "repeatedcv",
repeats = 10)
grid <- expand.grid(interaction.depth = seq(1, 7, by = 2),
n.trees = 1000,
shrinkage = c(0.01,0.1),
n.minobsinnode = 50)
gbmTune <- train(y ~ ., data = train,
method = "gbm",
tuneGrid = grid,
verbose = TRUE,
trControl = ctrl)
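For illustration only, here is a minimal sketch of the kind of "manual" cross-validation described above, reusing the tuned values stored in x (this is my reading of the intended workflow, not code from the question):
library(caret)  # createFolds() and RMSE()
set.seed(123)
folds <- createFolds(train$y, k = 5, returnTrain = TRUE)
cv_rmse <- sapply(folds, function(idx) {
  # fit on the fold's training part with the tuned hyperparameters
  fit <- ranger(y ~ ., data = train[idx, ],
                num.trees = 500,
                mtry = x$mtry,
                sample.fraction = x$sample_size,
                min.node.size = x$min.node.size)
  # evaluate on the held-out part of the fold
  held_out <- train[-idx, ]
  RMSE(predict(fit, held_out)$predictions, held_out$y)
})
mean(cv_rmse)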
I am getting the following error while applying k-NN on the iris data:
Something is wrong; all the Accuracy metric values are missing
library(caTools)  # for sample.split
library(caret)    # for trainControl / train
iris.knn <- iris
# Dividing data into test_train
set.seed(532)
sample.iris.knn <- sample.split(iris.knn$Species, SplitRatio = 0.8)
train.iris.knn <- subset(iris.knn, sample.iris.knn== TRUE)
test.iris.knn <- subset(iris.knn, sample.iris.knn == FALSE)
dim(train.iris.knn)
str(train.iris.knn)
head(train.iris.knn)
# fitting K-nn model
set.seed(8237)
trControl.iris.knn <- trainControl(method = "repeatedcv",
number = 10,
repeats = 3)
iris.knn.model <- train(Species ~., data = train.iris.knn,
method = 'knn',
trainControl = trControl.iris.knn,
preProcess = c("center", "scale"),
tuneLength = 13)
# Model check
iris.knn.model
There is no argument named trainControl in the train function; it is trControl. Changing it will solve your problem:
iris.knn.model <- train(Species ~., data = train.iris.knn,
method = 'knn',
trControl = trControl.iris.knn,
preProcess = c("center", "scale"),
tuneLength = 13)
I am trying to return the ROC curves for a test dataset using the MLeval package.
library(caret)
library(MLeval)
# Load data
train <- readRDS(paste0("Data/train.rds"))
test <- readRDS(paste0("Data/test.rds"))
# Create factor class
train$class <- ifelse(train$class == 1, 'yes', 'no')
# Set up control function for training
ctrl <- trainControl(method = "cv",
number = 5,
returnResamp = 'none',
summaryFunction = twoClassSummary,
classProbs = T,
savePredictions = T,
verboseIter = F)
gbmGrid <- expand.grid(interaction.depth = 10,
n.trees = 18000,
shrinkage = 0.01,
n.minobsinnode = 4)
# Build using a gradient boosted machine
set.seed(5627)
gbm <- train(class ~ .,
data = train,
method = "gbm",
metric = "ROC",
tuneGrid = gbmGrid,
verbose = FALSE,
trControl = ctrl)
# Predict results -
pred <- predict(gbm, newdata = test, type = "prob")[,"yes"]
roc <- evalm(data.frame(pred, test$class))
I have used the following post, ROC curve for the testing set using Caret package, to try to plot the ROC for the test data using MLeval, yet I get the following error message:
MLeval: Machine Learning Model Evaluation
Input: data frame of probabilities of observed labels
Error in names(x) <- value :
'names' attribute [3] must be the same length as the vector [2]
Can anyone please help? Thanks.
Please provide a reproducible example with sample data so we can replicate the error and test for solutions (i.e., we cannot access train.rds or test.rds).
Nevertheless, the below may fix your issue.
pred <- predict(gbm, newdata = test, type = "prob")
roc <- evalm(data.frame(pred, test$class))
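One more thing worth checking (an assumption on my part, since train.rds and test.rds are not available): only train$class was recoded to yes/no above, so test$class probably needs the same recoding before it is passed to evalm:
# assumption: test$class is still coded 0/1, so give it the same labels as train$class
test$class <- ifelse(test$class == 1, 'yes', 'no')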
I am trying to combine signals from different models using the example described here. I have different datasets that predict the same output. However, when I combine the model outputs and ensemble the signals with caretEnsemble, it gives an error:
Error in check_bestpreds_resamples(modelLibrary) :
Component models do not have the same re-sampling strategies
Here is the reproducible example
library(caret)
library(caretEnsemble)
df1 <-
data.frame(x1 = rnorm(200),
x2 = rnorm(200),
y = as.factor(sample(c("Jack", "Jill"), 200, replace = T)))
df2 <-
data.frame(z1 = rnorm(400),
z2 = rnorm(400),
y = as.factor(sample(c("Jack", "Jill"), 400, replace = T)))
library(caret)
check_1 <- train( x = df1[,1:2],y = df1[,3],
method = "nnet",
tuneLength = 10,
trControl = trainControl(method = "cv",
classProbs = TRUE,
savePredictions = T))
check_2 <- train( x = df2[,1:2],y = df2[,3] ,
method = "nnet",
preProcess = c("center", "scale"),
tuneLength = 10,
trControl = trainControl(method = "cv",
classProbs = TRUE,
savePredictions = T))
combine <- c(check_1, check_2)
ens <- caretEnsemble(combine)
First of all, you are trying to combine two models trained on different training datasets. That is not going to work: all models in the ensemble need to be built on the same training set, otherwise each trained model ends up with a different set of resamples, hence your current error.
Also, building your models without caretList is risky because there is a big chance of ending up with different resampling strategies. You can control that by setting the index argument in trainControl (see the vignette).
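For example, a sketch of pinning the resampling indexes yourself (a hypothetical 5-fold setup on df1):
set.seed(1324)
# every model trained with ctrl_fixed sees exactly the same folds
ctrl_fixed <- trainControl(method = "cv",
                           index = createFolds(df1$y, k = 5, returnTrain = TRUE),
                           classProbs = TRUE,
                           savePredictions = "final")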
If you use 1 dataset you can use the following code:
ctrl <- trainControl(method = "cv",
number = 5,
classProbs = TRUE,
savePredictions = "final")
set.seed(1324)
# will generate the following warning:
# indexes not defined in trControl. Attempting to set them ourselves, so
# each model in the ensemble will have the same resampling indexes.
models <- caretList(x = df1[,1:2],
y = df1[,3] ,
trControl = ctrl,
tuneList = list(
check_1 = caretModelSpec(method = "nnet", tuneLength = 10),
check_2 = caretModelSpec(method = "nnet", tuneLength = 10, preProcess = c("center", "scale"))
))
ens <- caretEnsemble(models)
A glm ensemble of 2 base models: nnet, nnet
Ensemble results:
Generalized Linear Model
200 samples
2 predictor
2 classes: 'Jack', 'Jill'
No pre-processing
Resampling: Bootstrapped (25 reps)
Summary of sample sizes: 200, 200, 200, 200, 200, 200, ...
Resampling results:
Accuracy Kappa
0.5249231 0.04164767
Also read this guide on different ensemble strategies.
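As one example of such a strategy, the same caretList output can be stacked with a non-linear meta-model via caretStack instead of the glm blend used by caretEnsemble (a sketch using the models object from above):
# stack the two nnet base models with a random-forest meta-learner
stack <- caretStack(models, method = "rf",
                    trControl = trainControl(method = "cv", number = 5))
print(stack)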
I am trying to reproduce the following lines of code:
library(glmnet)  # cv.glmnet
library(caret)   # train / trainControl
x.mat <- as.matrix(train.df[, predictors])
y.class <- train.df$Response
cv.lasso.fit <- cv.glmnet(x = x.mat, y = y.class,
family = "binomial", alpha = 1, nfolds = 10)
... with the caret package, but it doesn't work:
trainControl <- trainControl(method = "cv",
number = 10,
# Compute Recall, Precision, F-Measure
summaryFunction = prSummary,
# prSummary needs calculated class probs
classProbs = T)
modelFit <- train(Response ~ . -Id, data = train.df,
method = "glmnet",
trControl = trainControl,
metric = "F", # Optimize by F-measure
alpha=1,
family="binomial")
The parameter "alpha" is not recognized, and "the model fit fails in every fold".
What am I doing wrong? Help would be much appreciated. Thanks.
Try using tuneGrid instead of passing alpha directly. For example:
tuneGrid=expand.grid(
.alpha=1,
.lambda=seq(0, 100, by = 0.1))
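A sketch of how that grid plugs into the train() call above (glmnet's alpha and lambda must go through tuneGrid rather than being passed as direct arguments):
modelFit <- train(Response ~ . -Id, data = train.df,
                  method = "glmnet",
                  trControl = trainControl,
                  metric = "F",
                  tuneGrid = expand.grid(.alpha = 1,
                                         .lambda = seq(0, 100, by = 0.1)),
                  family = "binomial")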