XGBoost - predict not exported in namespace - r

I am trying to tune an xgboost model with a multiclass dependent variable in R, using the mlr package. However, I run into an error because xgboost does not export `predict` from its namespace, which I assume mlr wants to call. I have looked online and see that other people have encountered similar issues (e.g. https://github.com/mlr-org/mlr/issues/935), but I can't entirely understand the answers that have been provided, and when I try to implement them the issue persists. My code is as follows:
# Tune parameters
#create tasks
train$result <- as.factor(train$result) # Must be a factor for makeClassifTask to work
test$result <- as.factor(test$result)
traintask <- makeClassifTask(data = train,target = "result")
testtask <- makeClassifTask(data = test,target = "result")
lrn <- makeLearner("classif.xgboost",predict.type = "response")
# Set learner value and number of rounds etc.
lrn$par.vals <- list(
objective = "multi:softprob", # return class with maximum probability,
num_class = 3, # There are three outcome categories
eval_metric="merror",
nrounds=100L,
eta=0.1
)
# Set parameters to be tuned
params <- makeParamSet(
makeDiscreteParam("booster",values = c("gbtree","gblinear")),
makeIntegerParam("max_depth",lower = 3L,upper = 10L),
makeNumericParam("min_child_weight",lower = 1L,upper = 10L),
makeNumericParam("subsample",lower = 0.5,upper = 1),
makeNumericParam("colsample_bytree",lower = 0.5,upper = 1)
)
# Set resampling strategy
rdesc <- makeResampleDesc("CV",stratify = T,iters=5L)
# search strategy
ctrl <- makeTuneControlRandom(maxit = 10L)
#parallelStartSocket(cpus = detectCores()) # Enable parallel processing
mytune <- tuneParams(learner = lrn
,task = traintask
,resampling = rdesc
,measures = acc
,par.set = params
,control = ctrl
,show.info = T)
The specific error I get is:
Error: 'predict' is not an exported object from 'namespace:xgboost'
My package versions are:
packageVersion("xgboost")
[1] ‘0.6.4’
packageVersion("mlr")
[1] ‘2.8’
Would anyone know what I should do here?
Thanks in advance.
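For what it's worth, the discussion in the linked mlr issue points at older mlr versions calling the unexported xgboost::predict. Assuming that is the cause here (an assumption based on that thread, not a confirmed fix), the first thing to try is updating both packages and restarting R:
install.packages(c("xgboost", "mlr"))
# restart R, then re-run the tuning code above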

Related

Create a multivariate matrix in tidymodels recipes::recipe()

I am trying to do k-fold cross-validation on a model that predicts the joint distribution of the proportion of tree species basal area from satellite imagery. This requires the use of the DirichletReg::DirichReg() function, which in turn requires that the response variables be prepared as a matrix using the DirichletReg::DR_data() function. I originally tried to accomplish this in the caret package, but I found out that caret does not support multivariate responses. I have since tried to implement this in the tidymodels suite of packages. Following the documentation on how to register a new model in the parsnip package (I appreciate Max Kuhn's vegetable humor), I created a "DREG" model and a "DR" engine. My registered model works when I simply call it on a single training dataset, but my goal is to do k-fold cross-validation, implementing vfold_cv(), a workflow(), and the fit_resamples() function. With the code I currently have, I get a warning message stating:
Warning message:
All models failed. See the `.notes` column.
Those notes state: Error in get(resp_char, environment(oformula)): object 'cbind(PSME, TSHE, ALRU2)' not found. This, I believe, is due to the use of DR_data() to preprocess the response variables into the format necessary for DirichletReg::DirichReg() to run properly. I think the solution I need to implement involves getting this pre-processing to happen in either the recipe() call or in the set_fit() call when I register this model with parsnip. I have tried to use the step_mutate() function when specifying the recipe, but that applies the function to each column individually, as opposed to applying the function with the columns as inputs. This leads to the following error in the "notes" from the output of fit_resamples():
Must subset columns with a valid subscript vector.
Subscript has the wrong type `quosures`.
It must be numeric or character.
Is there a way to get the recipe to either transform several columns to a DirichletRegData class using the DR_data() function with a step_*() function or using the pre= argument in set_fit() and set_pred()?
Below is my reproducible example:
##Loading Necessary Packages##
library(tidymodels)
library(DirichletReg)
##Creating Fake Data##
set.seed(88)#For reproducibility
#Response variables#
PSME_BA<-rnorm(100,50, 15)
TSHE_BA<-rnorm(100,40,12)
ALRU2_BA<-rnorm(100,20,0.5)
Total_BA<-PSME_BA+TSHE_BA+ALRU2_BA
#Predictor variables#
B1<-runif(100, 0, 2000)
B2<-runif(100, 0, 1800)
B3<-runif(100, 0, 3000)
#Dataset for modeling#
DF<-data.frame(PSME=PSME_BA/Total_BA, TSHE=TSHE_BA/Total_BA, ALRU2=ALRU2_BA/Total_BA,
B1=B1, B2=B2, B3=B3)
##Modeling the data using Dirichlet regression with repeated k-folds cross validation##
#Registering the model to parsnip::#
set_new_model("DREG")
set_model_mode(model="DREG", mode="regression")
set_model_engine("DREG", mode="regression", eng="DR")
set_dependency("DREG", eng="DR", pkg="DirichletReg")
set_model_arg(
model = "DREG",
eng = "DR",
parsnip = "param",
original = "model",
func = list(pkg = "DirichletReg", fun = "DirichReg"),
has_submodel = FALSE
)
DREG <-
function(mode = "regression", param = NULL) {
# Check for correct mode
if (mode != "regression") {
rlang::abort("`mode` should be 'regression'")
}
# Capture the arguments in quosures
args <- list(sub_classes = rlang::enquo(param))
# Save some empty slots for future parts of the specification
new_model_spec(
"DREG",
args=args,
eng_args = NULL,
mode = mode,
method = NULL,
engine = NULL
)
}
set_fit(
model = "DREG",
eng = "DR",
mode = "regression",
value = list(
interface = "formula",
protect = NULL,
func = c(pkg = "DirichletReg", fun = "DirichReg"),
defaults = list()
)
)
set_encoding(
model = "DREG",
eng = "DR",
mode = "regression",
options = list(
predictor_indicators = "none",
compute_intercept = TRUE,
remove_intercept = TRUE,
allow_sparse_x = FALSE
)
)
set_pred(
model = "DREG",
eng = "DR",
mode = "regression",
type = "numeric",
value = list(
pre = NULL,
post = NULL,
func = c(fun = "predict.DirichletRegModel"),
args =
list(
object = expr(object$fit),
newdata = expr(new_data),
type = "response"
)
)
)
##Running the Model##
DF$Y<-DR_data(DF[,c(1:3)]) #Preparing the response variables
dreg_spec<-DREG(param="alternative") %>%
set_engine("DR")
dreg_mod<-dreg_spec %>%
fit(Y~B1+B2+B3, data = DF)#Model works when simply run on single dataset
##Attempting Crossvalidation##
#First attempt - simply call Y as the response variable in the recipe#
kfolds<-vfold_cv(DF, v=10, repeats = 2)
rcp<-recipe(Y~B1+B2+B3, data=DF)
dreg_fit<- workflow() %>%
add_model(dreg_spec) %>%
add_recipe(rcp)
dreg_rsmpl<-dreg_fit %>%
fit_resamples(kfolds)#Throws warning about all models failing
#second attempt - use step_mutate_at()#
rcp<-recipe(~B1+B2+B3, data=DF) %>%
step_mutate_at(fn=DR_data, var=vars(PSME, TSHE, ALRU2))
dreg_fit<- workflow() %>%
add_model(dreg_spec) %>%
add_recipe(rcp)
dreg_rsmpl<-dreg_fit %>%
fit_resamples(kfolds)#Throws warning about all models failing
This works, but I'm not sure if it's what you were expecting.
First--getting the data setup for CV and DR_data()
I don't know of any package that provides what would essentially be a translation layer between CV and DirichletReg, so that part is done manually. You might be surprised to find it's not all that complicated.
Using the data you created and the modeling objects you created for tidymodels (those prefixed with set_), I created the CV structure that you were trying to use.
df1 <- data.frame(PSME = PSME_BA/Total_BA, TSHE = TSHE_BA/Total_BA,
ALRU2=ALRU2_BA/Total_BA, B1, B2, B3)
set.seed(88)
kDf2 <- kDf1 <- vfold_cv(df1, v=10, repeats = 2)
For each of the 20 subset data frames identified in kDf1 (kDf2 is kept as an untouched copy), I used DR_data to set the data up for the models.
# convert responses to DR_data (each fold and repeat); kDf1 is modified in place via <<-
df2 <- map(1:20,
.f = function(x){
in_ids = kDf1$splits[[x]]$in_id
dd <- kDf1$splits[[x]]$data[in_ids, ] # filter rows BEFORE DR_data
dd$Y <- DR_data(dd[, 1:3])
kDf1$splits[[x]]$data <<- dd
})
Because I'm not all that familiar with tidymodels, I next conducted the modeling using DirichReg(). I then did it again with tidymodels and compared them. (The output is identical.)
DirichReg Models and summaries of the fits
set.seed(88)
# perform crossfold validation on Dirichlet Model
df2.fit <- map(1:20,
.f = function(x){
Rpt = kDf1$splits[[x]]$id$id
Fld = kDf1$splits[[x]]$id$id2
daf = kDf1$splits[[x]]$data
fit = DirichReg(Y ~ B1 + B2, daf)
list(Rept = Rpt, Fold = Fld, fit = fit)
})
# summary of each fitted model
fit.a <- map(1:20,
.f = function(x){
summary(df2.fit[[x]]$fit)
})
tidymodels and summaries of the fits (the code looks the same, but there are a few differences; the output is the same, though)
# I'm not sure what 'alternative' is supposed to do here?
dreg_spec <- DREG(param="alternative") %>% # this is not model = alternative
set_engine("DR")
set.seed(88)
dfa.fit <- map(1:20,
.f = function(x){
Rpt = kDf1$splits[[x]]$id$id
Fld = kDf1$splits[[x]]$id$id2
daf = kDf1$splits[[x]]$data
fit = dreg_spec %>%
fit(Y ~ B1 + B2, data = daf)
list(Rept = Rpt, Fold = Fld, fit = fit)
})
afit.a <- map(1:20,
.f = function(x){
summary(dfa.fit[[x]]$fit$fit) # extra nest for parsnip
})
If you wanted to see the first model?
fit.a[[1]]
afit.a[[1]]
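Since the claim is that the two routes produce identical output, here is a quick sanity check (a sketch, assuming DirichletReg provides a coef() method for its model objects; remember the extra $fit nesting for parsnip):
# compare the coefficient estimates of the first fold's two fits
all.equal(coef(df2.fit[[1]]$fit), coef(dfa.fit[[1]]$fit$fit))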
If you wanted the model with the lowest AIC?
# compare AIC, BIC, and likelihood?
# what do you perceive as the best fit?
fmin = min(unlist(map(1:20, ~fit.a[[.x]]$aic))) # dir
# find min AIC model number
paste0((map(1:20, ~ifelse(fit.a[[.x]]$aic == fmin, .x, ""))), collapse = "")
fit.a[[19]]
afit.a[[19]]
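A terser way to locate that model (same result as the paste0() trick above, relying on the same $aic element of the summaries):
best <- which.min(map_dbl(1:20, ~ fit.a[[.x]]$aic))
fit.a[[best]]
afit.a[[best]]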

Error (Setting objectives in 'params' and 'obj' at the same time is not allowed) in xgboost() function in R

Below is the code which I am executing with xgboost:
data(Glass, package = "mlbench")
levels(Glass$Type) <- c(0:5) #Proper Sequence. Should start with 0
Glass$Type <- as.integer(as.character(Glass$Type))
set.seed(100)
options(scipen = 999)
library(caret)
R_index <- createDataPartition(Glass$Type, p=.7, list = FALSE)
gl_train <- Glass[R_index,]
gl_test <- Glass[-R_index,]
'%ni%' <- Negate('%in%')
library(xgboost)
library(Matrix)
#Creating the matrix for training the model
train_gl <- xgb.DMatrix(data.matrix(gl_train[ ,colnames(gl_train) %ni% 'Type']),
label = as.numeric(gl_train$Type))
test_gl <- xgb.DMatrix(data.matrix(gl_test[ ,colnames(gl_test) %ni% 'Type']))
watchlist <- list(train = gl_train, test = gl_test)
#Define the parameters and cross validate
param <- list("objective" = "multi:softmax",
"eval_metric" = "mlogloss",
"num_class" = length(unique(gl_train$Type)))
cv.nround <- 5
cv.nfold <- 3
cvMod <- xgb.cv(param = param, data = train_gl,
nfold = cv.nfold,
nrounds = cv.nround,
watchlist=watchlist)
#Build the Model
nrounds = 50
xgMod = xgboost(param = param, data = train_gl, nrounds = nrounds, watchlist = watchlist)
After executing xgMod, I get the error mentioned below:
Error in check.custom.obj() :
Setting objectives in 'params' and 'obj' at the same time is not allowed
Let me know what's wrong in my code.
Any help is appreciated.
Regards,
Mohan
The problem is due to the watchlist parameter passed to xgboost.
watchlist is a parameter of xgb.train() but not of xgboost(), hence xgboost() treats it as one of the "other parameters" (...), which triggers the error.
The following code
xgMod <- xgboost(param = param, data = train_gl, nrounds = nrounds)
works correctly:
[1] train-mlogloss:1.259886
[2] train-mlogloss:0.963367
[3] train-mlogloss:0.755535
[4] train-mlogloss:0.601647
[5] train-mlogloss:0.478923
...
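If you do want per-round evaluation on a held-out set, xgb.train() accepts a watchlist directly. A sketch (note that watchlist entries must be xgb.DMatrix objects, not the raw data frames used in the question, and the test matrix needs a label for the metric to be computed):
test_gl <- xgb.DMatrix(data.matrix(gl_test[ ,colnames(gl_test) %ni% 'Type']),
                       label = as.numeric(gl_test$Type))
watchlist <- list(train = train_gl, test = test_gl)
xgMod <- xgb.train(params = param, data = train_gl, nrounds = nrounds,
                   watchlist = watchlist)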

how to use multiclass.au1p measure in mlr

I am trying to use multiclass.au1p measure in mlr package. It gave me an error saying
Error in FUN(X[[i]], ...) : Measure multiclass.au1p requires
predict type to be: 'prob'!
When I tried to set the predict type to prob, it gave me an error similar to the following for any classifier I used:
Error in setPredictType.Learner(learner, predict.type) : Trying to
predict probs, but classif.xgboost.multiclass does not support that!
How can I resolve this?
Following is my code
trainTask <- makeClassifTask(data = no_out_pso,target = "response_grade")
Clslearn = makeLearner("classif.xgboost", predict.type = "prob")
Clslearn = makeMulticlassWrapper(Clslearn, mcw.method = "onevsrest")
Clslearn = setPredictType(Clslearn, "prob")
rdesc = makeResampleDesc("CV", iters = 3)
r = resample(Clslearn, trainTask, rdesc, measures = list(mlr::acc, mlr::multiclass.au1p, mlr::multiclass.au1u))
print(r)
It does not work with makeMulticlassWrapper, because that wrapper does not support probability prediction (at the moment). I also get an error when I try to set it to prob in your code.
Code that works:
Clslearn = makeLearner("classif.xgboost", predict.type = "prob")
rdesc = makeResampleDesc("CV", iters = 3)
r = resample(Clslearn, iris.task, rdesc, measures = list(mlr::acc, mlr::multiclass.au1p, mlr::multiclass.au1u))
You need to use a classifier that supports predicting probabilities. You can get a list with the listLearners() function:
listLearners(properties = "prob")
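You can also inspect a particular (possibly wrapped) learner directly; "prob" must appear among its properties for predict.type = "prob" to be allowed:
getLearnerProperties(Clslearn)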

Using parallelMap Package with Custom Filter in mlr

I am working with mlr on a text classification task. I have written a custom filter as described here:
Create Custom Filters
The filter works as intended; however, when I try to utilise parallelisation I receive the following error:
Exporting objects to slaves for mode socket: .mlr.slave.options
Mapping in parallel: mode = socket; cpus = 4; elements = 2.
Error in stopWithJobErrorMessages(inds, vcapply(result.list[inds], as.character)) :
Errors occurred in 2 slave jobs, displaying at most 10 of them:
00001: Error in parallel:::.slaveRSOCK() :
Assertion on 'method' failed: Must be element of set {'anova.test','carscore','cforest.importance','chi.squared','gain.ratio','information.gain','kruskal.test','linear.correlation','mrmr','oneR','permutation.importance','randomForest.importance','randomForestSRC.rfsrc','randomForestSRC.var.select','rank.correlation','relief','rf.importance','rf.min.depth','symmetrical.uncertainty','univariate','univariate.model.score','variance'}.
I'm assuming from the error that my custom filter needs to be an element of that set to stand a chance of working in parallel, but I haven't managed to work out (a) whether this is possible, and (b) if it is, how to go about it.
Thanks in advance for any help,
Azam
Added: Test Script
I can't share the actual script/data I'm working with due to sensitivity, but this example reproduces the error I see. Apart from the custom filter and the dataset, the steps to set up the learner and evaluate it are as in my 'real' script. As in my real case, if you remove the parallelStartSocket() command then the script runs as expected.
I should also add that I have successfully used parallel processing (or at least received no errors) when tuning the hyper-parameters of an SVM with an RBF kernel; the script was identical apart from the makeParamSet() definition.
library(parallelMap)
library(mlr)
library(kernlab)
makeFilter(
name = "nonsense.filter",
desc = "Calculates scores according to alphabetical order of features",
pkg = "mlr",
supported.tasks = c("classif", "regr", "surv"),
supported.features = c("numerics", "factors", "ordered"),
fun = function(task, nselect, decreasing = TRUE, ...) {
feats = getTaskFeatureNames(task)
imp = order(feats, decreasing = decreasing)
names(imp) = feats
imp
}
)
# set up svm with rbf kernel
svm.lrn <- makeLearner("classif.ksvm",predict.type = "response")
# wrap learner with filter
svm.lrn <- makeFilterWrapper(svm.lrn, fw.method = "nonsense.filter")
# define feature selection parameters
ps.svm = makeParamSet(
makeDiscreteParam("fw.abs", values = seq(2, 3, 1))
)
# define inner search and evaluation strategy
ctrl.svm = makeTuneControlGrid()
inner.svm = makeResampleDesc("CV", iters = 5, stratify = TRUE)
svm.lrn <- makeTuneWrapper(svm.lrn, resampling = inner.svm, par.set = ps.svm,
control = ctrl.svm)
# set up outer resampling
outer.svm <- makeResampleDesc("CV", iters = 10, stratify = TRUE)
# run it...
parallelStartSocket(2)
run.svm <- resample(svm.lrn, iris.task,
resampling = outer.svm, extract = getTuneResult)
parallelStop()
The problem is that makeFilter() registers S3 methods, which are not available in separate R processes. You have two options to make this work: either simply use parallelStartMulticore(2) so that everything runs in the same R process, or tell parallelMap about the pieces that need to be present in the other R processes.
There are two parts to the latter: use parallelLibrary("mlr") to load mlr everywhere, and pull the definition of the filter out into a separate file that can be loaded using parallelSource(). For example:
filter.R:
makeFilter(
name = "nonsense.filter",
desc = "Calculates scores according to alphabetical order of features",
pkg = "mlr",
supported.tasks = c("classif", "regr", "surv"),
supported.features = c("numerics", "factors", "ordered"),
fun = function(task, nselect, decreasing = TRUE, ...) {
feats = getTaskFeatureNames(task)
imp = order(feats, decreasing = decreasing)
names(imp) = feats
imp
}
)
main.R:
library(parallelMap)
library(mlr)
library(kernlab)
parallelStartSocket(2)
parallelLibrary("mlr")
parallelSource("filter.R")
# set up svm with rbf kernel
svm.lrn = makeLearner("classif.ksvm",predict.type = "response")
# wrap learner with filter
svm.lrn = makeFilterWrapper(svm.lrn, fw.method = "nonsense.filter")
# define feature selection parameters
ps.svm = makeParamSet(
makeDiscreteParam("fw.abs", values = seq(2, 3, 1))
)
# define inner search and evaluation strategy
ctrl.svm = makeTuneControlGrid()
inner.svm = makeResampleDesc("CV", iters = 5, stratify = TRUE)
svm.lrn = makeTuneWrapper(svm.lrn, resampling = inner.svm, par.set = ps.svm,
control = ctrl.svm)
# set up outer resampling
outer.svm = makeResampleDesc("CV", iters = 10, stratify = TRUE)
# run it...
run.svm = resample(svm.lrn, iris.task, resampling = outer.svm, extract = getTuneResult)
parallelStop()
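The first option is just a one-line change, since forked workers share the parent process's registered S3 methods (multicore mode is not available on Windows):
parallelStartMulticore(2)
run.svm = resample(svm.lrn, iris.task, resampling = outer.svm, extract = getTuneResult)
parallelStop()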

How to incorporate logLoss in caret

I'm attempting to incorporate logLoss as the performance measure used when tuning randomForest (and other classifiers) by way of caret, instead of the default options of Accuracy or Kappa.
The first R script executes without error using defaults. However, I get:
Error in { : task 1 failed - "unused argument (model = method)"
when using the second script.
The function logLoss(predict(rfModel,test[,-c(1,95)],type="prob"),test[,95]) works by way of leveraging a separately trained randomForest model.
The dataframe has 100+ columns and 10,000+ rows. All elements are numeric outside of the 9-level categorical "target" at col=95. A row id is located in col=1.
Unfortunately, I'm not correctly grasping the guidance provided by http://topepo.github.io/caret/training.html, nor having much luck via google searches.
Your help is greatly appreciated.
Working R script:
fitControl = trainControl(method = "repeatedcv",number = 10,repeats = 10)
rfGrid = expand.grid(mtry=c(1,9))
rfFit = train(target ~ ., data = train[,-1],method = "rf",trControl = fitControl,verbose = FALSE,tuneGrid = rfGrid)
Not working R script:
logLoss = function(data,lev=NULL,method=NULL) {
lLoss = 0
epp = 10^-15
for (i in 1:nrow(data)) {
index = as.numeric(lev[i])
p = max(min(data[i,index],1-epp),epp)
lLoss = lLoss - log(p)
}
lLoss = lLoss/nrow(data)
names(lLoss) = c("logLoss")
lLoss
}
fitControl = trainControl(method = "repeatedcv",number = 10,repeats = 10,summaryFunction = logLoss)
rfGrid = expand.grid(mtry=c(1,9))
rfFit = train(target ~ ., data = trainBal[,-1],method = "rf",trControl = fitControl,verbose = FALSE,tuneGrid = rfGrid)
I think you should set summaryFunction=mnLogLoss in trainControl and metric="logLoss" in train (I found it here). Like this:
# load libraries
library(caret)
# load the dataset
data(iris)
# prepare resampling method
control <- trainControl(method="cv", number=5, classProbs=TRUE, summaryFunction=mnLogLoss)
set.seed(7)
fit <- train(Species~., data=iris, method="rf", metric="logLoss", trControl=control)
# display results
print(fit)
Your argument name is not correct (hence "unused argument (model = method)"). The webpage says that the last function argument should be called model, not method.
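For completeness, a corrected version of the custom summary function (a sketch, untested: caret passes (data, lev, model), the held-out class probabilities sit in columns of data named after the class levels, and the observed class is data$obs, so lev should not be indexed by row as in the original; also note caret requires syntactically valid class level names when classProbs = TRUE):
logLoss = function(data, lev = NULL, model = NULL) {
  epp = 10^-15
  # probability the model assigned to the observed class, row by row
  p = vapply(seq_len(nrow(data)),
             function(i) data[i, as.character(data$obs[i])],
             numeric(1))
  lLoss = -mean(log(pmax(pmin(p, 1 - epp), epp)))
  names(lLoss) = "logLoss"
  lLoss
}
fitControl = trainControl(method = "repeatedcv", number = 10, repeats = 10,
                          classProbs = TRUE, summaryFunction = logLoss)
# in train(), also set metric = "logLoss" and maximize = FALSE, since lower is better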
