XGBoost (R) CV test vs. training error

I'll preface my question by saying that I am, currently, unable to share my data due to extremely strict confidentiality agreements surrounding it. Hopefully I'll be able to get permission to share the blinded data shortly.
I am struggling to get XGBoost trained properly in R. I have been following the guide here and am so far stuck on step 1, tuning the nrounds parameter. The results I'm getting from my cross validation aren't doing what I'd expect them to do, leaving me at a loss as to how to proceed.
My data contains 105 observations, a continuous response variable (histogram in the top left pane of the image in the link below) and 16095 predictor variables. All of the predictors are on the same scale, and a histogram of them all is in the top right pane of the image in the link below. The predictor variables are quite zero-heavy, with 62.82% of all values being 0.
As a separate set of test data I have a further 48 observations. Both data sets have a very similar range in their response variables.
So far I've been able to fit a PLS model and a Random Forest (using the R library ranger). Applying these two models to my test data set I get an RMSE of 19.133 from PLS and 15.312 from ranger. In the case of ranger, successive model fits are proving very stable using 2000 trees and 760 candidate variables at each split.
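For reference, the RMSE values above were computed in the usual way; a minimal sketch (the object name rf_fit is hypothetical), assuming the response is the first column of the test data frame:
library(ranger)
# rf_fit is a hypothetical name for the fitted ranger model
rf_pred <- predict(rf_fit, data = data.test)$predictions
rf_rmse <- sqrt(mean((rf_pred - data.test[, 1])^2))  # root mean squared error on the test set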
Returning to XGBoost, using the code below, I have been fixing all parameters except nrounds and using the xgb.cv function in the R package xgboost to calculate the training and test errors.
library(xgboost)

data.train <- read.csv("../Data/Data_Train.csv")
data.test  <- read.csv("../Data/Data_Test.csv")

dtrain <- xgb.DMatrix(data = as.matrix(data.train[, -c(1)]),
                      label = data.train[, 1])
# dtest <- xgb.DMatrix(data = as.matrix(data.test[, -c(1)]), label = data.test[, 1]) # Not used here

## Step 1 - tune number of trees using CV function
eta = 0.1; gamma = 0; max_depth = 15
min_child_weight = 1; subsample = 0.8; colsample_bytree = 0.8
nround = 2000

cv <- xgb.cv(
  params = list(
    ## General Parameters
    booster = "gbtree",                 # Default
    silent = 0,                         # Default
    ## Tree Booster Parameters
    eta = eta,
    gamma = gamma,
    max_depth = max_depth,
    min_child_weight = min_child_weight,
    subsample = subsample,
    colsample_bytree = colsample_bytree,
    num_parallel_tree = 1,              # Default
    ## Linear Booster Parameters
    lambda = 1,                         # Default
    lambda_bias = 0,                    # Default
    alpha = 0,                          # Default
    ## Task Parameters
    objective = "reg:linear",           # Default
    base_score = 0.5,                   # Default
    # eval_metric = ,                   # Evaluation metric, set based on objective
    nthread = 60
  ),
  data = dtrain,
  nrounds = nround,
  nfold = 5,
  stratified = TRUE,
  prediction = TRUE,
  showsd = TRUE,
  # early_stopping_rounds = 20,
  # maximize = FALSE,
  verbose = 1
)
library(ggplot2)
library(reshape2)
plot.df <- data.frame(NRound = as.matrix(cv$evaluation_log)[, 1],
                      Train  = as.matrix(cv$evaluation_log)[, 2],
                      Test   = as.matrix(cv$evaluation_log)[, 4])
plot.df <- melt(plot.df, measure.vars = 2:3)
ggplot(data = plot.df, aes(x = NRound, y = value, colour = variable)) +
  geom_line() + ylab("Mean RMSE")
If this function does what I believe it does, I was hoping to see the training error decrease to a plateau and the test error decrease and then begin to increase again as the model overfits. However, the output I'm getting looks like the code below (and also the lower figure in the link above).
##### xgb.cv 5-folds
iter train_rmse_mean train_rmse_std test_rmse_mean test_rmse_std
1 94.4494006 1.158343e+00 94.55660 4.811360
2 85.5397674 1.066793e+00 85.87072 4.993996
3 77.6640230 1.123486e+00 78.21395 4.966525
4 70.3846390 1.118935e+00 71.18708 4.759893
5 63.7045868 9.555162e-01 64.75839 4.668103
---
1996 0.0002458 8.158431e-06 18.63128 2.014352
1997 0.0002458 8.158431e-06 18.63128 2.014352
1998 0.0002458 8.158431e-06 18.63128 2.014352
1999 0.0002458 8.158431e-06 18.63128 2.014352
2000 0.0002458 8.158431e-06 18.63128 2.014352
Considering how well ranger works, I'm inclined to believe that I'm doing something foolish and causing XGBoost to struggle!
Thanks

To tune your parameters you can use tuneParams from the mlr package. Here is an example:
library(mlr)

task = makeClassifTask(id = id,                  # id: any string naming the task
                       data = your_data,         # your_data: a data.frame with predictors and the response
                       target = "y_column_name") # name of the response column in your data

# Define the search space
tuning_options <- makeParamSet(
  makeNumericParam("eta", lower = 0.1, upper = 0.4),
  makeNumericParam("colsample_bytree", lower = 0.5, upper = 1),
  makeNumericParam("subsample", lower = 0.5, upper = 1),
  makeNumericParam("min_child_weight", lower = 3, upper = 10),
  makeNumericParam("gamma", lower = 0, upper = 10),
  makeNumericParam("lambda", lower = 0, upper = 5),
  makeNumericParam("alpha", lower = 0, upper = 5),
  makeIntegerParam("max_depth", lower = 1, upper = 10),
  makeIntegerParam("nrounds", lower = 50, upper = 300))

ctrl = makeTuneControlRandom(maxit = 50L)
rdesc = makeResampleDesc("CV", iters = 3L)
learner = makeLearner("classif.xgboost", predict.type = "response")
# par.vals = list(...) in makeLearner can fix values for parameters that are not tuned
res = tuneParams(learner = learner, task = task, resampling = rdesc,
                 par.set = tuning_options, control = ctrl, measures = acc)
Of course you can play around with the intervals for your parameters. In the end res will contain the optimal set of parameters for your xgboost, and you can then train your xgboost using these parameters. Keep in mind that you can choose other resampling methods apart from cross-validation; try ?makeResampleDesc.
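For example, a minimal sketch (assuming res is the tuneParams result from above) of setting the tuned values on the learner and training a final model:
# res$x holds the best hyperparameter values found by tuneParams
tuned_learner <- setHyperPars(learner, par.vals = res$x)
final_model   <- train(tuned_learner, task)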
I hope it helps

Related

Benchmarking multiple AutoTuning instances

I have been trying to use mlr3 to do some hyperparameter tuning for xgboost. I want to compare three different models:
xgboost tuned over just the alpha hyperparameter
xgboost tuned over alpha and lambda hyperparameters
xgboost tuned over alpha, lambda, and max_depth hyperparameters.
After reading the mlr3 book, I thought that using AutoTuner for the nested resampling and benchmarking would be the best way to go about doing this. Here is what I have tried:
library(mlr3verse)

task_mpcr <- TaskRegr$new(id = "mpcr", backend = data.numeric, target = "n_reads")
measure <- msr("poisson_loss")
xgb_learn <- lrn("regr.xgboost")

set.seed(103)
fivefold.cv = rsmp("cv", folds = 5)

param.list <- list(
  alpha = p_dbl(lower = 0.001, upper = 100, logscale = TRUE),
  lambda = p_dbl(lower = 0.001, upper = 100, logscale = TRUE),
  max_depth = p_int(lower = 2, upper = 10)
)

model.list <- list()
for(model.i in 1:length(param.list)){
  param.list.subset <- param.list[1:model.i]
  search_space <- do.call(ps, param.list.subset)
  model.list[[model.i]] <- AutoTuner$new(
    learner = xgb_learn,
    resampling = fivefold.cv,
    measure = measure,
    search_space = search_space,
    terminator = trm("none"),
    tuner = tnr("grid_search", resolution = 10),
    store_tuning_instance = TRUE
  )
}

grid <- benchmark_grid(
  task = task_mpcr,
  learner = model.list,
  resampling = rsmp("cv", folds = 3)
)
bmr <- benchmark(grid, store_models = TRUE)
Note that I added Poisson loss as a measure for the count data I am working with.
For some reason after running the benchmark function, the Poisson loss of all my models is nearly identical per fold, making me think that no tuning was done.
I also cannot find a way to access the hyperparameters used to get the lowest loss per train/test iteration.
Am I misusing the benchmark function entirely?
Also, this is my first question on SO, so any formatting advice would be appreciated!
To see whether tuning has an effect, you can just add an untuned learner to the benchmark. Otherwise, the conclusion could be that tuning alpha is sufficient for your example.
I adapted the code so that it runs with an example task.
library(mlr3verse)

task <- tsk("mtcars")
measure <- msr("regr.rmse")
xgb_learn <- lrn("regr.xgboost")

param.list <- list(
  alpha = p_dbl(lower = 0.001, upper = 100, logscale = TRUE),
  lambda = p_dbl(lower = 0.001, upper = 100, logscale = TRUE)
)

model.list <- list()
for(model.i in 1:length(param.list)){
  param.list.subset <- param.list[1:model.i]
  search_space <- do.call(ps, param.list.subset)
  at <- AutoTuner$new(
    learner = xgb_learn,
    resampling = rsmp("cv", folds = 5),
    measure = measure,
    search_space = search_space,
    terminator = trm("none"),
    tuner = tnr("grid_search", resolution = 5),
    store_tuning_instance = TRUE
  )
  at$id = paste0(at$id, model.i)
  model.list[[model.i]] <- at
}
model.list <- c(model.list, list(xgb_learn))  # add an untuned baseline learner

grid <- benchmark_grid(
  task = task,
  learner = model.list,
  resampling = rsmp("cv", folds = 3)
)
bmr <- benchmark(grid, store_models = TRUE)
autoplot(bmr)

bmr_data = bmr$data$as_data_table()  # convert the benchmark result to a handy data.table
bmr_data$learner[[1]]$learner$param_set$values  # the final learner used by AutoTuner is nested in $learner

# best found value during grid search
bmr_data$learner[[1]]$archive$best()
# transformed value (the one that is used for the learner)
bmr_data$learner[[1]]$archive$best()$x_domain
In the last lines you can see how to access the individual runs of the benchmark. In my example there are 9 runs, resulting from 3 learners and 3 outer resampling folds.
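As a quick check (a minimal sketch, not part of the original answer), the aggregated scores per learner make it easy to compare the tuned learners against the untuned baseline:
# mean regr.rmse per learner over the 3 outer resampling folds
bmr$aggregate(measure)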

Bayesian Optimization of Hyperparameters in R

I've been looking into Bayesian optimization for hyperparameter tuning and trying to compare the results I get to those from other methods (random grid search).
I came across this site, where the author uses the mlrMBO package to MAXIMIZE the log-likelihood (see Example #2): https://www.simoncoulombe.com/2019/01/bayesian/. I have a different scenario, where I want to MINIMIZE the log-loss, so I made some minor adjustments to the author's code when defining the objective function, but I am not sure if they are correct. His objective function returns the maximum value of the test log-likelihood obtained via cross-validation, and the minimize argument of the makeSingleObjectiveFunction function in the smoof library is set to FALSE. Since I want to minimize the log-loss, I return the minimum of the log-loss from cross-validation and set the minimize argument to TRUE. Because this is my first attempt at using the package and I am not too savvy with machine learning in general, I am not sure if my code is right. Any insights would be greatly appreciated!
library(smoof)    # makeSingleObjectiveFunction, makeParamSet
library(xgboost)  # xgb.cv; dtrain and cv_folds are defined elsewhere in my script

obj.fun <- makeSingleObjectiveFunction(
  name = "xgb_cv_bayes",
  fn = function(x){
    set.seed(12345)
    cv <- xgb.cv(params = list(
                   booster = "gbtree",
                   eta = x["eta"],
                   max_depth = x["max_depth"],
                   min_child_weight = x["min_child_weight"],
                   gamma = x["gamma"],
                   subsample = x["subsample"],
                   colsample_bytree = x["colsample_bytree"],
                   objective = "binary:logistic",
                   eval_metric = "logloss"),
                 data = dtrain,
                 nrounds = x["nrounds"],
                 folds = cv_folds,
                 prediction = FALSE,
                 showsd = TRUE,
                 early_stopping_rounds = 10,
                 verbose = 0)
    cv$evaluation_log[, min(test_logloss_mean)]
  },
  par.set = makeParamSet(
    makeNumericParam("eta", lower = 0.1, upper = 0.5),
    makeNumericParam("gamma", lower = 0, upper = 5),
    makeIntegerParam("max_depth", lower = 3, upper = 6),
    makeIntegerParam("min_child_weight", lower = 1, upper = 2),
    makeNumericParam("subsample", lower = 0.6, upper = 0.8),
    makeNumericParam("colsample_bytree", lower = 0.5, upper = 0.7),
    makeIntegerParam("nrounds", lower = 100, upper = 1000)
  ),
  minimize = TRUE
)
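For completeness, here is a minimal sketch (not from the original post) of how such an objective function is typically plugged into mlrMBO, assuming the default surrogate model:
library(mlrMBO)
library(ParamHelpers)

# initial design sampled from the parameter set of the objective function
des  <- generateDesign(n = 10, par.set = getParamSet(obj.fun), fun = lhs::randomLHS)

ctrl <- makeMBOControl()
ctrl <- setMBOControlTermination(ctrl, iters = 20)  # 20 optimization iterations

run <- mbo(obj.fun, design = des, control = ctrl, show.info = TRUE)
run$x  # best hyperparameters found
run$y  # corresponding (minimized) cross-validated log-loss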

xgboost always predicts one level with an imbalanced dataset

I'm using xgboost to build a model. The dataset only has 200 rows and 10000 cols.
I tried chi-2 to get 100 cols, but my confusion matrix looks like this:
      1    0
1   190    0
0    10    0
I tried using all 10000 attributes, randomly selecting 100 attributes, and selecting 100 attributes according to chi-2, but I never get the 0 class predicted. Is it because of the dataset, or because of the way I use xgboost?
My factor(pred.cv) is always showing only 1 level, while factor(y+1) has 1 or 2 as levels.
param <- list("objective" = "binary:logistic",
"eval_metric" = "error",
"nthread" = 2,
"max_depth" = 5,
"eta" = 0.3,
"gamma" = 0,
"subsample" = 0.8,
"colsample_bytree" = 0.8,
"min_child_weight" = 1,
"max_delta_step"= 5,
"learning_rate" =0.1,
"n_estimators" = 1000,
"seed"=27,
"scale_pos_weight" = 1
)
nfold=3
nrounds=200
pred.cv = matrix(bst.cv$pred, nrow=length(bst.cv$pred)/1, ncol=1)
pred.cv = max.col(pred.cv, "last")
factor(y+1) # this is the target in train, level 1 and 2
factor(pred.cv) # this is the issue, it is always only 1 level
I found caret to be slow, and it is not able to tune all the parameters of xgboost models without building a custom model, which is considerably more complicated than using one's own custom function for evaluation.
However, if you are doing some up/down sampling or SMOTE/ROSE, caret is the way to go, since it incorporates them correctly in the model-evaluation phase (during resampling). See: https://topepo.github.io/caret/subsampling-for-class-imbalances.html
However, I found these techniques to have a very small impact on the results, and usually for the worse, at least in the models I trained.
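If you do want caret to handle the sampling inside resampling, a minimal sketch looks roughly like this (df and the outcome column Class are hypothetical names):
library(caret)

# down-sampling is applied inside each resample; "up", "smote" and "rose" are alternatives
ctrl <- trainControl(method = "cv", number = 5,
                     classProbs = TRUE,
                     summaryFunction = twoClassSummary,
                     sampling = "down")

fit <- train(Class ~ ., data = df, method = "xgbTree",
             metric = "ROC", trControl = ctrl)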
scale_pos_weight gives a higher weight to a certain class; if the minority class is at 10% abundance, then values of scale_pos_weight around 5-10 should be beneficial.
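As a hedged illustration with made-up labels, the usual rule of thumb is negatives divided by positives:
y <- c(rep(0, 180), rep(1, 20))                  # hypothetical 0/1 labels, ~10% minority class
scale_pos_weight <- sum(y == 0) / sum(y == 1)    # 9 in this toy example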
Tuning the regularization parameters can be quite beneficial for xgboost: here one has several parameters: alpha, lambda and gamma - I found valid values to be 0-3. Other useful parameters that add direct regularization (by adding uncertainty) are subsample, colsample_bytree and colsample_bylevel. I found that playing with colsample_bylevel can also have a positive outcome on the model. You are already using subsample and colsample_bytree.
I would test a much smaller eta and more trees to see if the model benefits. early_stopping_rounds can speed up the process in that case.
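A minimal sketch of that idea (a small eta with many rounds and early stopping), assuming an xgb.DMatrix called dtrain; the parameter values here are only placeholders:
library(xgboost)

params_small_eta <- list(objective = "binary:logistic", eval_metric = "logloss",
                         eta = 0.01, max_depth = 5,
                         subsample = 0.8, colsample_bytree = 0.8)
cv_small_eta <- xgb.cv(params = params_small_eta, data = dtrain,
                       nrounds = 5000, nfold = 5,
                       early_stopping_rounds = 50, maximize = FALSE, verbose = 0)
cv_small_eta$best_iteration  # round where the evaluation metric stopped improving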
Other eval_metric choices are probably going to be more beneficial than accuracy. Try logloss or auc, and even map and ndcg.
Here is a function for a grid search over hyper-parameters. It uses auc as the evaluation metric, but one can change that easily:
xgb.par.opt = function(train, seed){
  require(xgboost)
  ntrees = 2000
  searchGridSubCol <- expand.grid(subsample = c(0.5, 0.75, 1),
                                  colsample_bytree = c(0.6, 0.8, 1),
                                  gamma = c(0, 1, 2),
                                  eta = c(0.01, 0.03),
                                  max_depth = c(4, 6, 8, 10))
  aucErrorsHyperparameters <- apply(searchGridSubCol, 1, function(parameterList){
    # Extract parameters to test
    currentSubsampleRate <- parameterList[["subsample"]]
    currentColsampleRate <- parameterList[["colsample_bytree"]]
    currentGamma <- parameterList[["gamma"]]
    currentEta <- parameterList[["eta"]]
    currentMaxDepth <- parameterList[["max_depth"]]
    set.seed(seed)
    xgboostModelCV <- xgb.cv(data = train,
                             nrounds = ntrees,
                             nfold = 5,
                             objective = "binary:logistic",
                             eval_metric = "auc",
                             metrics = "auc",
                             verbose = 1,
                             print_every_n = 50,
                             early_stopping_rounds = 200,
                             stratified = T,
                             scale_pos_weight = sum(all_data[train, 1] == 0) / sum(all_data[train, 1] == 1),
                             max_depth = currentMaxDepth,
                             eta = currentEta,
                             gamma = currentGamma,
                             colsample_bytree = currentColsampleRate,
                             min_child_weight = 1,
                             subsample = currentSubsampleRate,
                             seed = seed)
    xvalidationScores <- as.data.frame(xgboostModelCV$evaluation_log)
    auc = xvalidationScores[xvalidationScores$iter == xgboostModelCV$best_iteration, c(1, 4, 5)]
    auc = cbind(auc, currentSubsampleRate, currentColsampleRate, currentGamma, currentEta, currentMaxDepth)
    names(auc) = c("iter", "test.auc.mean", "test.auc.std", "subsample", "colsample", "gamma", "eta", "max.depth")
    print(auc)
    return(auc)
  })
  return(aucErrorsHyperparameters)
}
One can add other parameters to the expand.grid call.
I usually tune hyper-parameters on one CV repetition and evaluate them on additional repetitions with other seeds, or on the validation set (but doing it on the validation set should be done with caution to avoid over-fitting).
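A rough sketch of that re-evaluation step (dtrain and best_params are assumed to exist; best_params would be the winning grid point expressed as a params list with objective = "binary:logistic" and eval_metric = "auc"):
auc_reps <- sapply(c(11, 22, 33), function(s) {   # three extra CV repetitions with new seeds
  set.seed(s)
  cv <- xgb.cv(params = best_params, data = dtrain, nrounds = 2000, nfold = 5,
               early_stopping_rounds = 200, maximize = TRUE, verbose = 0)
  cv$evaluation_log$test_auc_mean[cv$best_iteration]
})
mean(auc_reps)
sd(auc_reps)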

Combining train + test data and running cross validation in R

I have the following R code that runs a simple xgboost model on a set of training and test data with the intention of predicting a binary outcome.
We start by
1) Reading in the relevant libraries.
library(xgboost)
library(readr)
library(caret)
2) Cleaning up the training and test data
train.raw = read.csv("train_data", header = TRUE, sep = ",")
drop = c('column')
train.df = train.raw[, !(names(train.raw) %in% drop)]
train.df[,'outcome'] = as.factor(train.df[,'outcome'])
test.raw = read.csv("test_data", header = TRUE, sep = ",")
drop = c('column')
test.df = test.raw[, !(names(test.raw) %in% drop)]
test.df[,'outcome'] = as.factor(test.df[,'outcome'])
train.c1 = subset(train.df , outcome == 1)
train.c0 = subset(train.df , outcome == 0)
3) Running XGBoost on the properly formatted data.
train_xgb = xgb.DMatrix(data.matrix(train.df[, 1:124]), label = train.raw[, "outcome"])
test_xgb  = xgb.DMatrix(data.matrix(test.df[, 1:124]))
4) Running the model
model_xgb = xgboost(data = train_xgb, nrounds = 8, max_depth = 5, eta = .1, eval_metric = "logloss", objective = "binary:logistic", verbose = 5)
5) Making predictions
pred_xgb <- predict(model_xgb, newdata = test_xgb)
My question is: How can I adjust this process so that I'm just pulling in / adjusting a single 'training' data set, and getting predictions on the hold-out sets of the cross-validated file?
To run k-fold CV one needs to call xgb.cv with an integer nfold argument; to save the predictions for each resample, use the prediction = TRUE argument. For instance:
xgboostModelCV <- xgb.cv(data = dtrain,
                         nrounds = 1688,
                         nfold = 5,
                         objective = "binary:logistic",
                         eval_metric = "auc",
                         metrics = "auc",
                         verbose = 1,
                         print_every_n = 50,
                         stratified = T,
                         scale_pos_weight = 2,
                         max_depth = 6,
                         eta = 0.01,
                         gamma = 0,
                         colsample_bytree = 1,
                         min_child_weight = 1,
                         subsample = 0.5,
                         prediction = T)
xgboostModelCV$pred   # contains predictions in the same order as in dtrain
xgboostModelCV$folds  # contains the k-fold samples
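Since prediction = TRUE stores the out-of-fold predictions in the same row order as dtrain, they can be scored directly against the training labels; a minimal sketch (the log-loss formula here is my own addition, not from the original answer):
oof_pred  <- xgboostModelCV$pred        # out-of-fold predicted probabilities
oof_label <- getinfo(dtrain, "label")   # labels stored in the DMatrix
oof_logloss <- -mean(oof_label * log(oof_pred) +
                     (1 - oof_label) * log(1 - oof_pred))
oof_logloss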
Here's a decent function to pick hyper-parameters:
xgb.par.opt <- function(train, seed){
  require(xgboost)
  ntrees = 2000
  searchGridSubCol <- expand.grid(subsample = c(0.5, 0.75, 1),
                                  colsample_bytree = c(0.6, 0.8, 1),
                                  gamma = c(0, 1, 2),
                                  eta = c(0.01, 0.03),
                                  max_depth = c(4, 6, 8, 10))
  aucErrorsHyperparameters <- apply(searchGridSubCol, 1, function(parameterList){
    # Extract parameters to test
    currentSubsampleRate <- parameterList[["subsample"]]
    currentColsampleRate <- parameterList[["colsample_bytree"]]
    currentGamma <- parameterList[["gamma"]]
    currentEta <- parameterList[["eta"]]
    currentMaxDepth <- parameterList[["max_depth"]]
    set.seed(seed)
    xgboostModelCV <- xgb.cv(data = train,
                             nrounds = ntrees,
                             nfold = 5,
                             objective = "binary:logistic",
                             eval_metric = "auc",
                             metrics = "auc",
                             verbose = 1,
                             print_every_n = 50,
                             early_stopping_rounds = 200,
                             stratified = T,
                             scale_pos_weight = sum(all_data_nobad[index_no_bad, 1] == 0) /
                                                sum(all_data_nobad[index_no_bad, 1] == 1),
                             max_depth = currentMaxDepth,
                             eta = currentEta,
                             gamma = currentGamma,
                             colsample_bytree = currentColsampleRate,
                             min_child_weight = 1,
                             subsample = currentSubsampleRate)
    xvalidationScores <- as.data.frame(xgboostModelCV$evaluation_log)
    # Save the AUC of the best iteration
    auc = xvalidationScores[xvalidationScores$iter == xgboostModelCV$best_iteration, c(1, 4, 5)]
    auc = cbind(auc, currentSubsampleRate, currentColsampleRate, currentGamma, currentEta, currentMaxDepth)
    names(auc) = c("iter", "test.auc.mean", "test.auc.std", "subsample", "colsample", "gamma", "eta", "max.depth")
    print(auc)
    return(auc)
  })
  return(aucErrorsHyperparameters)
}
You can change the grid values and the parameters in the grid, as well as the loss/evaluation metric. It is similar to the grid search provided by caret, but caret does not provide the possibility to define alpha, lambda, colsample_bylevel, num_parallel_tree... hyper-parameters in the grid search, apart from defining a custom model, which I found cumbersome. Caret has the advantage of automatic preprocessing, automatic up/down sampling within CV, etc.
Setting the seed outside the xgb.cv call will pick the same folds for CV but not the same trees at each round, so you will end up with a different model. Even if you set the seed inside the xgb.cv function call, there is no guarantee you will end up with the same model, but there's a much higher chance (it depends on threads, type of model... - I for one like the uncertainty and found it to have little impact on the result).
You can use xgb.cv and set prediction = TRUE.

xgboost in R: how does xgb.cv pass the optimal parameters into xgb.train

I've been exploring the xgboost package in R and went through several demos as well as tutorials, but this still confuses me: after using xgb.cv to do cross-validation, how do the optimal parameters get passed to xgb.train? Or should I calculate the ideal parameters (such as nround, max.depth) based on the output of xgb.cv?
param <- list("objective" = "multi:softprob",
"eval_metric" = "mlogloss",
"num_class" = 12)
cv.nround <- 11
cv.nfold <- 5
mdcv <-xgb.cv(data=dtrain,params = param,nthread=6,nfold = cv.nfold,nrounds = cv.nround,verbose = T)
md <-xgb.train(data=dtrain,params = param,nround = 80,watchlist = list(train=dtrain,test=dtest),nthread=6)
Looks like you misunderstood xgb.cv; it is not a parameter-searching function. It does k-fold cross-validation, nothing more.
In your code, it does not change the value of param.
To find the best parameters in R's XGBoost, there are several methods. Here are two of them:
(1) Use the mlr package, http://mlr-org.github.io/mlr-tutorial/release/html/
There is an XGBoost + mlr example code in Kaggle's Prudential challenge,
but that code is for regression, not classification. As far as I know, there is no mlogloss metric yet in the mlr package, so you would have to code the mlogloss measurement from scratch yourself. CMIIW.
(2) Second method: manually set the parameters, then repeat. For example,
param <- list(objective = "multi:softprob",
              eval_metric = "mlogloss",
              num_class = 12,
              max_depth = 8,
              eta = 0.05,
              gamma = 0.01,
              subsample = 0.9,
              colsample_bytree = 0.8,
              min_child_weight = 4,
              max_delta_step = 1
)
cv.nround = 1000
cv.nfold = 5
mdcv <- xgb.cv(data = dtrain, params = param, nthread = 6,
               nfold = cv.nfold, nrounds = cv.nround,
               verbose = T)
Then, you find the best (minimum) mlogloss,
min_logloss = min(mdcv[, test.mlogloss.mean])
min_logloss_index = which.min(mdcv[, test.mlogloss.mean])
min_logloss is the minimum value of mlogloss, while min_logloss_index is the index (round).
You must repeat the process above several times, changing the parameters manually each time (mlr does the repetition for you), until finally you get the best global minimum min_logloss.
Note: you can do it in a loop of 100 or 200 iterations, in which you set the parameter values randomly for each iteration. This way, you must save the best [parameters_list, min_logloss, min_logloss_index] in variables or in a file.
Note: it is better to set the random seed with set.seed() for reproducible results. Different random seeds yield different results, so you must save [parameters_list, min_logloss, min_logloss_index, seednumber] in variables or a file.
Say that finally you get 3 results in 3 iterations/repeats:
min_logloss = 2.1457, min_logloss_index = 840
min_logloss = 2.2293, min_logloss_index = 920
min_logloss = 1.9745, min_logloss_index = 780
Then you must use the third parameter set (it has the global minimum min_logloss of 1.9745). Your best index (nrounds) is 780.
Once you have the best parameters, use them in the training:
# best_param is global best param with minimum min_logloss
# best_min_logloss_index is the global minimum logloss index
nround = 780
md <- xgb.train(data=dtrain, params=best_param, nrounds=nround, nthread=6)
I don't think you need watchlist in the training, because you have done the cross validation. But if you still want to use watchlist, it is just okay.
Even better you can use early stopping in xgb.cv.
mdcv <- xgb.cv(data = dtrain, params = param, nthread = 6,
               nfold = cv.nfold, nrounds = cv.nround,
               verbose = T, early.stop.round = 8, maximize = FALSE)
With this code, when the mlogloss value has not decreased for 8 rounds, xgb.cv will stop, which saves time. You must set maximize to FALSE, because you expect a minimum mlogloss.
Here is example code with a 100-iteration loop and randomly chosen parameters.
best_param = list()
best_seednumber = 1234
best_logloss = Inf
best_logloss_index = 0

for (iter in 1:100) {
  param <- list(objective = "multi:softprob",
                eval_metric = "mlogloss",
                num_class = 12,
                max_depth = sample(6:10, 1),
                eta = runif(1, .01, .3),
                gamma = runif(1, 0.0, 0.2),
                subsample = runif(1, .6, .9),
                colsample_bytree = runif(1, .5, .8),
                min_child_weight = sample(1:40, 1),
                max_delta_step = sample(1:10, 1)
  )
  cv.nround = 1000
  cv.nfold = 5
  seed.number = sample.int(10000, 1)[[1]]
  set.seed(seed.number)
  mdcv <- xgb.cv(data = dtrain, params = param, nthread = 6,
                 nfold = cv.nfold, nrounds = cv.nround,
                 verbose = T, early.stop.round = 8, maximize = FALSE)

  min_logloss = min(mdcv[, test.mlogloss.mean])
  min_logloss_index = which.min(mdcv[, test.mlogloss.mean])

  if (min_logloss < best_logloss) {
    best_logloss = min_logloss
    best_logloss_index = min_logloss_index
    best_seednumber = seed.number
    best_param = param
  }
}

nround = best_logloss_index
set.seed(best_seednumber)
md <- xgb.train(data = dtrain, params = best_param, nrounds = nround, nthread = 6)
With this code, you run cross-validation 100 times, each time with random parameters. Then you get the best parameter set, i.e. the one from the iteration with the minimum min_logloss.
Increase the value of early.stop.round in case you find it is too small (stopping too early). You also need to change the limits of the random parameter values based on your data characteristics.
And, for 100 or 200 iterations, I think you will want to change verbose to FALSE.
Side note: that is an example of the random-search method; you can adjust it, e.g. with Bayesian optimization, for a better method. If you have the Python version of XGBoost, there is a good hyperparameter script for XGBoost, https://github.com/mpearmain/BayesBoost, to search for the best parameter set using Bayesian optimization.
Edit: I want to add a third, manual method, posted by "Davut Polat", a Kaggle master, in the Kaggle forum.
Edit: If you know Python and sklearn, you can also use GridSearchCV along with xgboost.XGBClassifier or xgboost.XGBRegressor.
This is a good question, and a great reply from silo with lots of details! I found it very helpful for someone new to xgboost like me. Thank you. The method of randomizing and comparing against the boundary is very inspiring. Good to use and good to know. Now in 2018 some slight revisions are needed; for example, early.stop.round should be early_stopping_rounds. The output mdcv is organized slightly differently:
min_rmse_index <- mdcv$best_iteration
min_rmse <- mdcv$evaluation_log[min_rmse_index]$test_rmse_mean
And depending on the application (linear, logistic, etc.), the objective, eval_metric and parameters should be adjusted accordingly.
For the convenience of anyone who is running a regression, here is a slightly adjusted version of the code (most of it is the same as above).
library(xgboost)

# Matrices for xgb: dtrain and dtest; "label" is the dependent variable
dtrain <- xgb.DMatrix(X_train, label = Y_train)
dtest  <- xgb.DMatrix(X_test, label = Y_test)

best_param <- list()
best_seednumber <- 1234
best_rmse <- Inf
best_rmse_index <- 0

set.seed(123)
for (iter in 1:100) {
  param <- list(objective = "reg:linear",
                eval_metric = "rmse",
                max_depth = sample(6:10, 1),
                eta = runif(1, .01, .3),          # learning rate, default: 0.3
                subsample = runif(1, .6, .9),
                colsample_bytree = runif(1, .5, .8),
                min_child_weight = sample(1:40, 1),
                max_delta_step = sample(1:10, 1)
  )
  cv.nround <- 1000
  cv.nfold <- 5                        # 5-fold cross-validation
  seed.number <- sample.int(10000, 1)  # set seed for the cv
  set.seed(seed.number)
  mdcv <- xgb.cv(data = dtrain, params = param,
                 nfold = cv.nfold, nrounds = cv.nround,
                 verbose = F, early_stopping_rounds = 8, maximize = FALSE)

  min_rmse_index <- mdcv$best_iteration
  min_rmse <- mdcv$evaluation_log[min_rmse_index]$test_rmse_mean

  if (min_rmse < best_rmse) {
    best_rmse <- min_rmse
    best_rmse_index <- min_rmse_index
    best_seednumber <- seed.number
    best_param <- param
  }
}

# The best index (min_rmse_index) is the best "nround" in the model
nround = best_rmse_index
set.seed(best_seednumber)
xg_mod <- xgboost(data = dtrain, params = best_param, nrounds = nround, verbose = F)

# Check error on the testing data
yhat_xg <- predict(xg_mod, dtest)
(MSE_xgb <- mean((yhat_xg - Y_test)^2))
I found silo's answer very helpful.
In addition to his approach of random search, you may want to use Bayesian optimization to facilitate the process of hyperparameter search, e.g. the rBayesianOptimization library.
The following is my code using the rBayesianOptimization library.
library(xgboost)
library(rBayesianOptimization)
# dtrain, dataFTR, seedNum and verbose are defined elsewhere in my script

cv_folds <- KFold(dataFTR$isPreIctalTrain, nfolds = 5, stratified = FALSE, seed = seedNum)

xgb_cv_bayes <- function(nround, max.depth, min_child_weight, subsample, eta, gamma,
                         colsample_bytree, max_delta_step) {
  param <- list(booster = "gbtree",
                max_depth = max.depth,
                min_child_weight = min_child_weight,
                eta = eta, gamma = gamma,
                subsample = subsample, colsample_bytree = colsample_bytree,
                max_delta_step = max_delta_step,
                lambda = 1, alpha = 0,
                objective = "binary:logistic",
                eval_metric = "auc")
  cv <- xgb.cv(params = param, data = dtrain, folds = cv_folds, nrounds = 1000,
               early_stopping_rounds = 10, maximize = TRUE, verbose = verbose)
  list(Score = cv$evaluation_log$test_auc_mean[cv$best_iteration],
       Pred = cv$best_iteration)
  # we don't need the cross-validation predictions, but we do need the number of rounds;
  # a workaround is to pass the number of rounds (best_iteration) as Pred,
  # which is a default element expected by the rBayesianOptimization library
}

OPT_Res <- BayesianOptimization(xgb_cv_bayes,
                                bounds = list(max.depth = c(3L, 10L),
                                              min_child_weight = c(1L, 40L),
                                              subsample = c(0.6, 0.9),
                                              eta = c(0.01, 0.3), gamma = c(0.0, 0.2),
                                              colsample_bytree = c(0.5, 0.8),
                                              max_delta_step = c(1L, 10L)),
                                init_grid_dt = NULL, init_points = 10, n_iter = 10,
                                acq = "ucb", kappa = 2.576, eps = 0.0,
                                verbose = verbose)

best_param <- list(
  booster = "gbtree",
  eval_metric = "auc",
  objective = "binary:logistic",
  max_depth = OPT_Res$Best_Par["max.depth"],
  eta = OPT_Res$Best_Par["eta"],
  gamma = OPT_Res$Best_Par["gamma"],
  subsample = OPT_Res$Best_Par["subsample"],
  colsample_bytree = OPT_Res$Best_Par["colsample_bytree"],
  min_child_weight = OPT_Res$Best_Par["min_child_weight"],
  max_delta_step = OPT_Res$Best_Par["max_delta_step"])

# The number of rounds should be tuned using CV:
# https://www.hackerearth.com/practice/machine-learning/machine-learning-algorithms/beginners-tutorial-on-xgboost-parameter-tuning-r/tutorial/
# However, nrounds cannot be derived directly from the BayesianOptimization function.
# Here, OPT_Res$Pred, which was supposed to hold the cross-validation predictions,
# is used to record the number of rounds.
nrounds = OPT_Res$Pred[[which.max(OPT_Res$History$Value)]]
xgb_model <- xgb.train(params = best_param, data = dtrain, nrounds = nrounds)
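As a small follow-up (not in the original answer), scoring new data with the tuned model would look like this, assuming a test xgb.DMatrix called dtest:
pred_prob  <- predict(xgb_model, dtest)   # predicted probabilities (binary:logistic)
pred_class <- as.integer(pred_prob > 0.5) # hard 0/1 labels at a 0.5 threshold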
