This has to do with the parallelism implementation of XGBoost.
I am trying to speed up XGBoost training by passing it the parameter nthread = 16 on a system with 24 cores. But when I train my model, CPU utilization never seems to exceed roughly 20% at any point during training.
The code snippet is as follows:
param_30 <- list("objective" = "reg:linear",              # linear
                 "subsample" = subsample_30,
                 "colsample_bytree" = colsample_bytree_30,
                 "max_depth" = max_depth_30,               # maximum depth of tree
                 "min_child_weight" = min_child_weight_30,
                 "max_delta_step" = max_delta_step_30,
                 "eta" = eta_30,                           # step size shrinkage
                 "gamma" = gamma_30,                       # minimum loss reduction
                 "nthread" = nthreads_30,                  # number of threads to be used
                 "scale_pos_weight" = 1.0)

model <- xgboost(data = training.matrix[, -5],
                 label = training.matrix[, 5],
                 verbose = 1,
                 nrounds = nrounds_30,
                 params = param_30,
                 maximize = FALSE,
                 early_stopping_rounds = searchGrid$early_stopping_rounds_30[x])
Please explain (if possible) how I can increase CPU utilization and speed up model training. R code would be helpful for further understanding.
Assumption: this is about the R package implementation of XGBoost.
This is a guess... but I have had this happen to me...
You are spending too much time communicating between threads and never becoming CPU-bound. https://en.wikipedia.org/wiki/CPU-bound
The bottom line is that your data isn't large enough (rows and columns), and/or your trees aren't deep enough (max_depth), to warrant that many cores; there is too much overhead. xgboost parallelizes split evaluations, so deep trees on big data can keep the CPU humming at max.
I have trained many models where single-threaded outperforms 8/16 cores: too much time switching and not enough work.
**MORE DATA, DEEPER TREES, OR FEWER CORES :)**
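If it helps, here is a minimal sketch (on synthetic data, not the asker's) of how you could time training at different nthread values to see where extra cores stop paying off:

library(xgboost)

set.seed(1)
X <- matrix(rnorm(1e5 * 50), ncol = 50)   # synthetic data, purely for illustration
y <- rnorm(nrow(X))
dtrain <- xgb.DMatrix(X, label = y)

for (nt in c(1, 4, 8, 16)) {
  t <- system.time(
    xgb.train(params = list(objective = "reg:linear", max_depth = 6, nthread = nt),
              data = dtrain, nrounds = 50, verbose = 0)
  )
  cat("nthread =", nt, "elapsed =", t["elapsed"], "s\n")
}

If the elapsed time stops dropping after a few threads, the data and tree size are the bottleneck, which is the point above.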
I tried to answer this question but my post was deleted by a moderator. Please see https://stackoverflow.com/a/67188355/5452057, which I believe could also help you; it relates to missing MPI support in the xgboost R package for Windows available from CRAN.
Related
I'm trying to overfit a GBM with h2o (I know it's weird, but I need this to make a point). So I increased the max_depth of my trees and the shrinkage, and disabled the stopping criterion:
overfit <- h2o.gbm(y = response,
                   training_frame = tapp.hex,
                   ntrees = 100,
                   max_depth = 30,
                   learn_rate = 0.1,
                   stopping_rounds = 0,
                   distribution = "gaussian")
The overfitting works great, but I've noticed that the training error does not improve after the 64th tree. Do you know why? If I understand the concept of boosting well enough, the training error should converge to 0 as the number of trees increases.
Information on my data:
Around 1 million observations
10 variables
Response variable is quantitative.
Have a good day !
Did you try lowering the min_split_improvement parameter? The default of 1e-5 is already microscopic, but it becomes relevant when you have a million rows.
My guess is that all trees after the 64th (in your example) are trivial.
If the 0.1 learning rate isn't working for you, I'd recommend decreasing the learning rate to something like 0.01 or 0.001. Although you state that the training error stops decreasing after tree 64, I'd still recommend adding more trees, at least 1000-5000, especially if you try a slower learning rate.
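Putting both suggestions together, a hedged sketch (the parameter values are illustrative, not tuned; response and tapp.hex are assumed from the question):

library(h2o)
h2o.init()

overfit <- h2o.gbm(y = response,
                   training_frame = tapp.hex,
                   ntrees = 2000,                  # many more trees
                   max_depth = 30,
                   learn_rate = 0.01,              # slower learning rate
                   min_split_improvement = 1e-8,   # let late trees still find splits
                   stopping_rounds = 0,
                   distribution = "gaussian")

# Check where the training error actually stops improving:
h2o.scoreHistory(overfit)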
I've got a rather small dataset (162,000 observations with 13 attributes) that I'm trying to use for modelling with h2o.gbm. The response variable is categorical with a large number of levels (~20,000).
The model doesn't run out of memory or give any errors, but it had been running for nearly 24 hours without any progress (the h2o.gbm progress report stayed at 0%), so I finally gave in and stopped it.
I'm wondering if there's anything wrong with my hyperparameters, as the data is not particularly large.
Here's my code:
library(h2o)
localH2O <- h2o.init(nthreads = -1, max_mem_size = "12g")
train.h20 <- as.h2o(analdata_train)

gbm1 <- h2o.gbm(y = response_var,
                x = independ_vars,
                training_frame = train.h20,
                ntrees = 3,
                max_depth = 5,
                min_rows = 10,
                stopping_tolerance = 0.001,
                learn_rate = 0.1,
                distribution = "multinomial")
The way H2O GBM multinomial classification works is that when you ask for 1 tree as a parameter, it actually builds one tree for each level of the response column under the hood.
So 1 tree really means 20,000 trees in your case, 2 trees really mean 40,000, and so on...
(Note that the binomial classification case takes a shortcut and builds only one tree for both classes.)
So... it will probably finish, but it could take quite a long time!
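A quick sanity check before training can make this concrete (assuming train.h20 and response_var from your code; h2o.nlevels counts the levels of the response column):

n_classes <- h2o.nlevels(train.h20[, response_var])
ntrees    <- 3
cat("Trees actually built: about", ntrees * n_classes, "\n")   # ~60,000 with 20,000 levels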
It's probably not a good idea to train a classifier with 20,000 classes -- most GBM implementations won't even let you do that. Can you group/cluster the classes into a smaller number of groups so that you can train a model with a smaller number of classes? If so, then you could perform your training in a two-stage process -- the first model would have K classes (assuming you clustered your classes into K groups). Then you can train secondary models that further classify the observations into your original classes.
This type of two-stage process may make sense if your classes represent groups that naturally cluster into a hierarchy of groups -- such as zip codes or ICD-10 medical diagnostic codes, for example.
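A rough sketch of the two-stage idea (this assumes analdata_train, response_var and independ_vars from the question, plus a hypothetical named vector class_to_group that maps each original class to one of K coarse groups; building that mapping is domain-specific):

# Stage 1: predict the coarse group (K classes instead of ~20,000)
analdata_train$group <- class_to_group[as.character(analdata_train[[response_var]])]
train.h20 <- as.h2o(analdata_train)
stage1 <- h2o.gbm(y = "group", x = independ_vars,
                  training_frame = train.h20,
                  ntrees = 50, max_depth = 5,
                  distribution = "multinomial")

# Stage 2: one model per coarse group, trained only on that group's rows
stage2 <- lapply(unique(analdata_train$group), function(g) {
  sub <- as.h2o(analdata_train[analdata_train$group == g, ])
  h2o.gbm(y = response_var, x = independ_vars,
          training_frame = sub,
          ntrees = 50, max_depth = 5,
          distribution = "multinomial")
})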
If your use-case really demands that you train a 20,000 class GBM (and there's no way around it), then you should get a bigger cluster of machines to use in your H2O cluster (it's unclear how many CPUs you are using currently). H2O GBM should be able to finish training, assuming it has enough memory and CPUs, but it may take a while.
I'm trying to use the R package mlr to train a glmnet model on a binary classification problem with a large dataset (about 850,000 rows and about 100 features) on very modest hardware (my laptop with 4GB RAM -- I don't have access to more CPU muscle). I decided to use mlr because I need nested cross-validation to tune the hyperparameters of my classifier and evaluate the expected performance of the final model. To the best of my knowledge, neither caret nor h2o offer nested cross-validation at present, but mlr provides the infrastructure to do this.

However, I find the huge number of functions provided by mlr extremely overwhelming, and it's difficult to know how to slot everything together to achieve my goal. What goes where? How do they fit together? I've read through the entire documentation here: https://mlr-org.github.io/mlr-tutorial/release/html/ and I'm still confused. There are code snippets that show how to do specific things, but it's unclear (to me) how to stitch them together. What's the big picture? I looked for a complete worked example to use as a template and only found this: https://www.bioconductor.org/help/course-materials/2015/CSAMA2015/lab/classification.html which I have been using as my starting point. Can anyone help fill in the gaps?
Here's what I want to do:
Tune the hyperparameters (the l1 and l2 regularisation parameters) of a glmnet model using grid search or random grid search (or anything faster if it exists -- iterated F-racing? adaptive resampling?) with a stratified k-fold cross-validation inner loop, and an outer cross-validation loop to assess the expected final performance. I want to include a feature preprocessing step in the inner loop, with centering, scaling, and the Yeo-Johnson transformation, plus fast filter-based feature selection (the latter is a necessity because I have very modest hardware and I need to slim the feature space to reduce training time).

I have imbalanced classes (the positive class is about 20%), so I have opted to use AUC as my optimisation objective, but this is only a surrogate for the real metric of interest, which is the false positive rate at a few fixed true positive rates (i.e., I want to know the FPR at TPR = 0.6, 0.7, 0.8). I'd like to tune the probability thresholds to achieve those TPRs; this is possible in nested CV, but it's not clear exactly what is being optimised here:
https://github.com/mlr-org/mlr/issues/856
I'd like to know where the cut should be without incurring information leakage, so I want to pick this using CV.
I'm using glmnet because I'd rather spend my CPU cycles on building a robust model than a fancy model that produces over-optimistic results. GBM or Random Forest can be done later if I find it can be done fast enough, but I don't expect the features in my data to be informative enough to bother investing much time in training anything particularly complex.
Finally, after I've obtained an estimate of what performance I can expect from the final model, I want to actually build the final model and obtain the coefficients of the glmnet model --- including which ones are zero, so I know which features have been selected by the LASSO penalty.
Hope all this makes sense!
Here's what I've got so far:
library(mlr)

df <- as.data.frame(DT)

task <- makeClassifTask(id = "glmnet",
                        data = df,
                        target = "Flavour",
                        positive = "quark")
task

lrn <- makeLearner("classif.glmnet", predict.type = "prob")
lrn

# Feature preprocessing -- want to do this as part of CV:
lrn <- makePreprocWrapperCaret(lrn,
                               ppc.center = TRUE,
                               ppc.scale = TRUE,
                               ppc.YeoJohnson = TRUE)
lrn

# I want to use the implementation of info gain in CORElearn, not Weka:
infGain <- makeFilter(
  name = "InfGain",
  desc = "Information gain",
  pkg = "CORElearn",
  supported.tasks = c("classif", "regr"),
  supported.features = c("numerics", "factors"),
  fun = function(task, nselect, ...) {
    CORElearn::attrEval(
      getTaskFormula(task),
      data = getTaskData(task), estimator = "InfGain", ...)
  }
)
infGain

# Take the top 20 features:
lrn <- makeFilterWrapper(lrn, fw.method = "InfGain", fw.abs = 20)
lrn

# Now things start to get foggy...
# Inner CV loop: tune s (lambda) and alpha by grid search
tuningLrn <- makeTuneWrapper(
  lrn,
  resampling = makeResampleDesc("CV", iters = 2, stratify = TRUE),
  par.set = makeParamSet(
    makeNumericParam("s", lower = 0.001, upper = 0.1),
    makeNumericParam("alpha", lower = 0.0, upper = 1.0)
  ),
  control = makeTuneControlGrid(resolution = 2)
)

# Outer CV loop (rdesc; 3-fold stratified CV here is just one possible choice):
rdesc <- makeResampleDesc("CV", iters = 3, stratify = TRUE)
r2 <- resample(learner = tuningLrn,
               task = task,
               resampling = rdesc,
               measures = auc)
# Now what...?
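Not a full answer, but one possible way the last step could look: after resample() gives the nested-CV performance estimate, train the tuned learner once on the full task and pull out the glmnet coefficients (the zero coefficients are the ones dropped by the LASSO penalty). This is a sketch that assumes tuningLrn and task as defined above, and that getLearnerModel() can unwrap down to the underlying glmnet fit:

finalModel <- train(tuningLrn, task)

# Strip the tuning/filter/preprocessing wrappers to reach the glmnet object:
glmnetFit <- getLearnerModel(finalModel, more.unwrap = TRUE)

# Coefficients at the tuned value of s (lambda); zeros were removed by the penalty:
tuned <- getTuneResult(finalModel)$x
coef(glmnetFit, s = tuned$s)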
I have a training set that looks like:
Name     Day       Area     X        Y      Month  Night
ATTACK   Monday    LA       -122.41  37.78  8      0
VEHICLE  Saturday  CHICAGO  -1.67    3.15   2      0
MOUSE    Monday    TAIPEI   -12.5    3.1    9      1
Name is the outcome/dependent variable.
Here is what my code looks like so far in case it helps
ynn  <- model.matrix(~ Name, data = trainDF)
mnn  <- model.matrix(~ Day + Area + X + Y + Month + Night, data = trainDF)
yCat <- make.names(trainDF$Name, unique = FALSE, allow_ = TRUE)
I then set up the parameter tuning:
nnTrControl <- trainControl(method = "repeatedcv", number = 3, repeats = 5,
                            verboseIter = TRUE, returnData = FALSE,
                            returnResamp = "all", classProbs = TRUE,
                            summaryFunction = multiClassSummary,
                            allowParallel = TRUE)
nnGrid <- expand.grid(.size = c(1, 4, 7), .decay = c(0, 0.001, 0.1))
model <- train(y = yCat, x = mnn, method = "nnet", linout = TRUE, trace = FALSE,
               trControl = nnTrControl, metric = "logLoss", tuneGrid = nnGrid)
When I ran this, it was still running more than 20 hours later, so I had to stop it.
I read in the link below that it's possible to parallelize the resampling in caret using registerDoMC: R caret nnet package in Multicore
However, that only seems to work for cores. My machine has 2 cores with 2 threads per core. Is there a way to get a speedup from the threads in addition to the 2 cores used by registerDoMC(2)?
I also see in the link below that the user had to set up seeds for each resample: Fully reproducible parallel models using caret
Do I also have to do that for my code? Why was this not done in the former link? What if I used xgboost instead of nnet?
If you want to reproduce your results, you will have to set the seed on every thread that you spawn. This is required because each thread gets a different random number stream each time it is spawned. Depending on the OS you are working on, each thread will most likely be scheduled on a separate core of your CPU; that is up to the OS job scheduler. Regarding xgboost versus nnet, I think the most important aspect is whether you are interested in the model's properties. If you are starting out with machine learning, xgboost may be a bit easier than nnet. If computational performance is your biggest concern, you could try running your problem on a smaller subset first.
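For caret specifically, the usual way to make parallel runs reproducible is the seeds argument of trainControl. A sketch, assuming the repeated CV and the 3 x 3 tuning grid from your code (the seed values themselves are arbitrary):

library(caret)
library(doMC)
registerDoMC(cores = 2)

n_resamples <- 3 * 5     # number = 3, repeats = 5
n_tune      <- 9         # nrow(nnGrid): 3 sizes x 3 decays

set.seed(123)
seeds <- vector("list", n_resamples + 1)
for (i in 1:n_resamples) seeds[[i]] <- sample.int(1e6, n_tune)
seeds[[n_resamples + 1]] <- sample.int(1e6, 1)   # seed for the final model fit

nnTrControl <- trainControl(method = "repeatedcv", number = 3, repeats = 5,
                            classProbs = TRUE, summaryFunction = multiClassSummary,
                            allowParallel = TRUE, seeds = seeds)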
One thing I would do first is run an MCA analysis, which can be found in FactoMineR. This will let you see how much variance each of your variables carries; you could drop variables with too little variance and thereby speed up your learning task.
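A small sketch of that suggestion (illustrative only; MCA works on categorical variables, so here I treat Day, Area, Month and Night from your example as factors):

library(FactoMineR)

catVars   <- trainDF[, c("Day", "Area", "Month", "Night")]
catVars[] <- lapply(catVars, factor)   # MCA expects factors
mca <- MCA(catVars, graph = FALSE)
mca$eig                                # variance explained per dimension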
I am building a GBM model with rather large datasets. data.table is great for processing the data, but when I run the GBM model it takes forever to finish. Looking at Activity Monitor (on a Mac), I can see the process doesn't use all the memory and doesn't max out the processor.
Since gbm is single-core and I can't modify it to run on multiple cores, what are my options for improving run time? Right now I am using a MacBook Air with 4GB RAM and a 1.7GHz i5.
I am not sure which of the following options would help performance the most: (i) buying a computer with more memory; (ii) getting a more powerful chip (i7); or (iii) using Amazon AWS and installing R there. How would each of these help?
Sample code added per Brandson's request:
library(gbm)
GBM_NTREES    = 100
GBM_SHRINKAGE = 0.05
GBM_DEPTH     = 4
GBM_MINOBS    = 50

GBM_model <- gbm.fit(x = data[, -target],
                     y = data[, target],
                     #var.monotone = TRUE, #NN added
                     distribution = "gaussian",
                     n.trees = GBM_NTREES,
                     shrinkage = GBM_SHRINKAGE,
                     interaction.depth = GBM_DEPTH,
                     n.minobsinnode = GBM_MINOBS,
                     verbose = TRUE)
Maybe something worth considering is using the XGBoost library. According to the Github repo:
"XGBoost provides a parallel tree boosting (also known as GBDT, GBM) that solve many data science problems in a fast and accurate way."
I also realize the original question is quite old, but maybe this will help someone out down the road.
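For what it's worth, a rough sketch of what the gbm.fit call above might look like in xgboost, using all available cores (the parameter mapping is approximate, not a drop-in replacement, and it assumes data and target from the snippet above with numeric predictors):

library(xgboost)

dtrain <- xgb.DMatrix(data = as.matrix(data[, -target]),
                      label = data[, target])

xgb_model <- xgb.train(params = list(objective = "reg:squarederror",       # gaussian loss
                                     eta = 0.05,                           # ~ shrinkage
                                     max_depth = 4,                        # ~ interaction.depth
                                     min_child_weight = 50,                # roughly n.minobsinnode
                                     nthread = parallel::detectCores()),   # use all cores
                       data = dtrain,
                       nrounds = 100)                                      # ~ n.trees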
This seems to be more about parallel computing in R in general, rather than a specific question about gbm. I would start here: http://cran.r-project.org/web/views/HighPerformanceComputing.html.