I am trying to optimize an xgboost tree by using feature selection with caret's genetic algorithm:
results <- gafs(iris[,1:4], iris[,5],
iters = 2,
method = "xgbTree",
metric = "Accuracy",
gafsControl = gafsControl(functions=caretGA, method="cv", repeats=2, verbose = TRUE),
trControl = trainControl(method = "cv", classProbs = TRUE, verboseIter = TRUE)
)
This is, however, very slow, so I would like to see a progress indicator. But no progress is printed, even though I used verbose = TRUE and verboseIter = TRUE. What am I doing wrong?
I am using the caret package in R, trying to implement a multi-layer perceptron for classifying satellite images. I am using method = 'mlpML', and I would like to know which activation function is being used.
Here is my code:
controlparameters <- trainControl(method = "repeatedcv",
                                  number = 5,
                                  repeats = 5,
                                  savePredictions = TRUE,
                                  classProbs = TRUE)
mlp_grid <- expand.grid(layer1 = 13,
                        layer2 = 0,
                        layer3 = 0)
model <- train(as.factor(Species) ~ .,
               data = smotedata,
               method = 'mlpML',
               preProc = c('center', 'scale'),
               trControl = controlparameters,
               tuneGrid = mlp_grid,
               importance = TRUE)
I used a single hidden layer since it performed better than multiple layers.
Looking at the caret source code for mlpML, it turns out that it uses the mlp function of the RSNNS package.
According to the RSNNS mlp documentation, its default arguments are:
mlp(x, ...)
## Default S3 method:
mlp(x, y, size = c(5), maxit = 100,
initFunc = "Randomize_Weights", initFuncParams = c(-0.3, 0.3),
learnFunc = "Std_Backpropagation", learnFuncParams = c(0.2, 0),
updateFunc = "Topological_Order", updateFuncParams = c(0),
hiddenActFunc = "Act_Logistic", shufflePatterns = TRUE,
linOut = FALSE, outputActFunc = if (linOut) "Act_Identity" else
"Act_Logistic", inputsTest = NULL, targetsTest = NULL,
pruneFunc = NULL, pruneFuncParams = NULL, ...)
from which it is apparent that hiddenActFunc = "Act_Logistic", i.e. the activation function for the hidden layers defaults to the logistic function.
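One way to confirm this from an R session, rather than reading the source on GitHub, is to inspect caret's model registry; this is a minimal sketch using caret::getModelInfo() (it assumes caret and RSNNS are installed):
library(caret)
# Pull the model definition that caret uses for method = "mlpML"
info <- getModelInfo("mlpML", regex = FALSE)[["mlpML"]]
# The fit function shows the call to RSNNS::mlp; since hiddenActFunc is not
# overridden there, the RSNNS default "Act_Logistic" applies
print(info$fit)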
This section of the mlr tutorial: https://mlr.mlr-org.com/articles/tutorial/nested_resampling.html#filter-methods-with-tuning explains how to use a TuneWrapper with a FilterWrapper to tune the threshold for the filter. But what if my filter has hyperparameters that need tuning as well, such as a random forest variable importance filter? I don't seem to be able to tune any parameters except the threshold.
For example:
library(survival)
library(mlr)
data(veteran)
set.seed(24601)
task_id = "MAS"
mas.task <- makeSurvTask(id = task_id, data = veteran, target = c("time", "status"))
mas.task <- createDummyFeatures(mas.task)
tuning = makeResampleDesc("CV", iters=5, stratify=TRUE) # Tuning: 5-fold CV, no repeats
cox.filt.rsfrc.lrn = makeTuneWrapper(
makeFilterWrapper(
makeLearner(cl="surv.coxph", id = "cox.filt.rfsrc", predict.type="response"),
fw.method="randomForestSRC_importance",
cache=TRUE,
ntree=2000
),
resampling = tuning,
par.set = makeParamSet(
makeIntegerParam("fw.abs", lower=2, upper=10),
makeIntegerParam("mtry", lower = 5, upper = 15),
makeIntegerParam("nodesize", lower=3, upper=25)
),
control = makeTuneControlRandom(maxit=20),
show.info = TRUE)
produces the error message:
Error in checkTunerParset(learner, par.set, measures, control) :
Can only tune parameters for which learner parameters exist: mtry,nodesize
Is there any way to tune the hyperparameters of the random forest?
EDIT: Other attempts following suggestions in the comments:
Wrapping the tuner around the base learner before feeding it to the filter (filter not shown) fails:
cox.lrn = makeLearner(cl="surv.coxph", id = "cox.filt.rfsrc", predict.type="response")
cox.tune = makeTuneWrapper(cox.lrn,
resampling = tuning,
measures=list(cindex),
par.set = makeParamSet(
makeIntegerParam("mtry", lower = 5, upper = 15),
makeIntegerParam("nodesize", lower=3, upper=25),
makeIntegerParam("fw.abs", lower=2, upper=10)
),
control = makeTuneControlRandom(maxit=20),
show.info = TRUE)
Error in checkTunerParset(learner, par.set, measures, control) :
Can only tune parameters for which learner parameters exist: mtry,nodesize,fw.abs
Two levels of tuning also fails:
cox.lrn = makeLearner(cl="surv.coxph", id = "cox.filt.rfsrc", predict.type="response")
cox.filt = makeFilterWrapper(cox.lrn,
fw.method="randomForestSRC_importance",
cache=TRUE,
ntree=2000)
cox.tune = makeTuneWrapper(cox.filt,
resampling = tuning,
measures=list(cindex),
par.set = makeParamSet(
makeIntegerParam("fw.abs", lower=2, upper=10)
),
control = makeTuneControlRandom(maxit=20),
show.info = TRUE)
cox.tune2 = makeTuneWrapper(cox.tune,
resampling = tuning,
measures=list(cindex),
par.set = makeParamSet(
makeIntegerParam("mtry", lower = 5, upper = 15),
makeIntegerParam("nodesize", lower=3, upper=25)
),
control = makeTuneControlRandom(maxit=20),
show.info = TRUE)
Error in makeBaseWrapper(id, learner$type, learner, learner.subclass = c(learner.subclass, :
Cannot wrap a tuning wrapper around another optimization wrapper!
It looks like you currently cannot tune the hyperparameters of filters. You can set certain parameters manually by passing them to makeFilterWrapper(), but not tune them.
When it comes to filtering, you can only tune one of fw.abs, fw.perc or fw.threshold.
I do not know how big the effect of different hyperparameters for the random forest filter will be on the ranking. One way to check the robustness would be to compare the rankings from single RF model fits using different settings for mtry and friends, with the help of getFeatureImportance(). If there is a very high rank correlation between these, you can safely skip tuning the RF filter. (Maybe you want to use a different filter which does not come with this issue at all?) A sketch of such a check is shown below.
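A minimal sketch of that robustness check, calling randomForestSRC::rfsrc() directly instead of going through mlr (the specific mtry and nodesize values are illustrative only):
library(randomForestSRC)
library(survival)
data(veteran)
# Fit the forest twice with different candidate hyperparameter settings
fit_a <- rfsrc(Surv(time, status) ~ ., data = veteran,
               ntree = 2000, mtry = 2, nodesize = 3, importance = TRUE)
fit_b <- rfsrc(Surv(time, status) ~ ., data = veteran,
               ntree = 2000, mtry = 5, nodesize = 25, importance = TRUE)
# Spearman correlation of the two variable-importance rankings; a value
# near 1 suggests the filter ranking is insensitive to these settings
cor(fit_a$importance, fit_b$importance, method = "spearman")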
If you insist on having this feature, you might need to raise a PR for the package :) In the meantime, a version that tunes only fw.abs, with the random forest hyperparameters fixed inside the filter wrapper, works:
lrn = makeLearner(cl = "surv.coxph", id = "cox.filt.rfsrc", predict.type = "response")
filter_wrapper = makeFilterWrapper(
lrn,
fw.method = "randomForestSRC_importance",
cache = TRUE,
ntree = 2000
)
cox.filt.rsfrc.lrn = makeTuneWrapper(
filter_wrapper,
resampling = tuning,
par.set = makeParamSet(
makeIntegerParam("fw.abs", lower = 2, upper = 10)
),
control = makeTuneControlRandom(maxit = 20),
show.info = TRUE)
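Resampling this wrapped learner then performs nested resampling, with fw.abs tuned on the inner folds; a minimal sketch (the outer 3-fold CV is an arbitrary choice):
outer <- makeResampleDesc("CV", iters = 3)
# Each outer fold tunes fw.abs via the inner 'tuning' CV, then evaluates the
# tuned filter + Cox model on the held-out outer fold
res <- resample(cox.filt.rsfrc.lrn, mas.task, resampling = outer,
                measures = list(cindex), show.info = TRUE)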
I am tuning more than 2 hyperparameters. When generating hyperparameter effect data with generateHyperParsEffectData() I set partial.dep = TRUE, but when plotting with plotHyperParsEffect() I get an error for my classification learner: it requires a regression learner.
This is my task and learner for classification:
classif.task <- makeClassifTask(id = "rfh2o.task", data = Train_clean, target = "Action")
rfh20.lrn.base = makeLearner("classif.h2o.randomForest", predict.type = "prob", fix.factors.prediction = TRUE)
rfh20.lrn <- makeFilterWrapper(rfh20.lrn.base, fw.method = "chi.squared", fw.perc = 0.5)
This is my tuning setup:
rdesc <- makeResampleDesc("CV", iters = 3L, stratify = TRUE)
ps <- makeParamSet(makeDiscreteParam("fw.perc", values = seq(0.2, 0.8, 0.1)),
                   makeIntegerParam("mtries", lower = 2, upper = 10),
                   makeIntegerParam("ntrees", lower = 20, upper = 50)
)
Tuned_rf <- tuneParams(rfh20.lrn, task = classif.task, resampling = rdesc, par.set = ps, control = makeTuneControlGrid())
While plotting the tuning results:
h2orf_data = generateHyperParsEffectData(Tuned_rf, partial.dep = TRUE)
plotHyperParsEffect(h2orf_data, x = "iteration", y = "mmce.test.mean", plot.type = "line", partial.dep.learn = rfh20.lrn)
I get the error:
Error in checkLearner(partial.dep.learn, "regr") :
Learner 'classif.h2o.randomForest.filtered' must be of type 'regr', not: 'classif'
I would expect to see the plot so that I can decide whether more hyperparameter tuning is needed. Am I missing something?
The partial.dep.learn parameter needs a regression learner, not your classification learner; see the documentation.
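A minimal sketch of the fix, assuming the regr.randomForest learner is available (any regression learner should work; it is only used internally to estimate the partial dependence):
# Use a regression learner to estimate partial dependence over the tuned
# hyperparameters, then plot one of them against the resampled performance
regr.lrn <- makeLearner("regr.randomForest")
plotHyperParsEffect(h2orf_data, x = "fw.perc", y = "mmce.test.mean",
                    plot.type = "line", partial.dep.learn = regr.lrn)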
I would like to compare simple logistic regression models, where each model considers only a specified set of features. I would like to perform comparisons of these regression models on resamples of the data.
The R package mlr allows me to select columns at the task level using dropFeatures. The code would be something like:
full_task = makeClassifTask(id = "full task", data = my_data, target = "target")
reduced_task = dropFeatures(full_task, setdiff( getTaskFeatureNames(full_task), list_feat_keep))
Then I can do benchmark experiments where I have a list of tasks.
lrn = makeLearner("classif.logreg", predict.type = "prob")
rdesc = makeResampleDesc(method = "Bootstrap", iters = 50, stratify = TRUE)
bmr = benchmark(lrn, list(full_task, reduced_task), rdesc, measures = auc, show.info = FALSE)
How can I generate a learner that only considers a specified set of features? As far as I know, the filter and selection methods always apply some statistical procedure and do not allow selecting the features directly. Thank you!
The first solution is lazy and also not optimal because the filter calculation is still carried out:
library(mlr)
task = sonar.task
sel.feats = c("V1", "V10")
lrn = makeLearner("classif.logreg", predict.type = "prob")
lrn.reduced = makeFilterWrapper(learner = lrn, fw.method = "variance", fw.abs = 2, fw.mandatory.feat = sel.feats)
bmr = benchmark(list(lrn, lrn.reduced), task, cv3, measures = auc, show.info = FALSE)
The second one uses a preprocessing wrapper to subset the data; it should be the fastest solution and is also more flexible:
lrn.reduced.2 = makePreprocWrapper(
  learner = lrn,
  # At training time, keep only the selected features plus the target column;
  # sel.feats is captured from the enclosing environment
  train = function(data, target, args) list(data = data[, c(sel.feats, target)], control = list()),
  # At prediction time the target column is absent, so keep only the features
  predict = function(data, target, args, control) data[, sel.feats]
)
bmr = benchmark(list(lrn, lrn.reduced.2), task, cv3, measures = auc, show.info = FALSE)
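To compare the two learners afterwards, the aggregated resample performances can be pulled from the benchmark result with the standard mlr accessor:
# Mean AUC per learner across the cross-validation folds
getBMRAggrPerformances(bmr, as.df = TRUE)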
I've been working in R and a question arose.
I've written R code using the train() function from the caret library, and I also set up parallel processing with registerDoParallel() from the doParallel package.
If I execute it without parallel processing, verbose messages appear in my console. But when I enable parallel processing, no verbose messages appear...
Here is my code.
xgbt_ml_data1 <-
train(x = TRAIN_SPRS_PRT$data
, y = TRAIN_SPRS_PRT$label
, method = 'xgbTree'
, tuneGrid = expand.grid(nrounds = c(10, 100, 1000),
                         max_depth = c(3, 6, 12),
                         eta = .01,
                         colsample_bytree = seq(.5, .8, .1),
                         gamma = 0,
                         min_child_weight = c(2:10),
                         subsample = .85)
, metric = "RMSE"
, trControl = trainControl(
method = "cv",
number = 5,
verboseIter = TRUE,
returnData = FALSE,
returnResamp = "all", # save losses across all models
## classProbs = TRUE, # set to TRUE for AUC to be computed
## summaryFunction = twoClassSummary,
allowParallel = TRUE)
)
Although I set allowParallel = TRUE, only a single core is used (I've checked in the system monitor), and verbose messages appear in the console. After I execute registerDoParallel(cores = 4), all 4 cores work, but there are no verbose messages.
Is this normal operation?
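For context: with doParallel, the verbose output is produced inside the worker processes, whose consoles are not attached to the master R session, so the messages never reach your console. A commonly used workaround (a sketch, assuming a socket cluster) is to create the cluster with outfile = "" so worker output is redirected back to the master console:
library(doParallel)
# outfile = "" redirects each worker's stdout/stderr to the master console,
# so the workers' progress messages become visible again
cl <- makeCluster(4, outfile = "")
registerDoParallel(cl)
# ... run train() as above ...
stopCluster(cl)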