Activation function used for mlpML in caret (R)

I am using the caret package in R to implement a multi-layer perceptron for classifying satellite images. I am using method = 'mlpML', and I would like to know which activation function is being used.
Here is my code:
controlparameters <- trainControl(method = "repeatedcv",
                                  number = 5,
                                  repeats = 5,
                                  savePredictions = TRUE,
                                  classProbs = TRUE)
mlp_grid <- expand.grid(layer1 = 13,
                        layer2 = 0,
                        layer3 = 0)
model <- train(as.factor(Species) ~ .,
               data = smotedata,
               method = 'mlpML',
               preProc = c('center', 'scale'),
               trControl = controlparameters,
               tuneGrid = mlp_grid,
               importance = TRUE)
I used a single layer since it performed better than multiple layers.

Looking at the caret source code for mlpML, it turns out that it uses the mlp function of the RSNNS package.
According to the RSNNS mlp documentation, its default arguments are:
mlp(x, ...)
## Default S3 method:
mlp(x, y, size = c(5), maxit = 100,
initFunc = "Randomize_Weights", initFuncParams = c(-0.3, 0.3),
learnFunc = "Std_Backpropagation", learnFuncParams = c(0.2, 0),
updateFunc = "Topological_Order", updateFuncParams = c(0),
hiddenActFunc = "Act_Logistic", shufflePatterns = TRUE,
linOut = FALSE, outputActFunc = if (linOut) "Act_Identity" else
"Act_Logistic", inputsTest = NULL, targetsTest = NULL,
pruneFunc = NULL, pruneFuncParams = NULL, ...)
from which it is apparent that the default is hiddenActFunc = "Act_Logistic", i.e., the activation function for the hidden layers is the logistic (sigmoid) function.
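If you want a different hidden-layer activation, caret forwards extra arguments from train() to the underlying RSNNS::mlp() call, so (as a hedged sketch, not verified against your data) you could pass hiddenActFunc directly; "Act_TanH" is another activation function defined by SNNS:
model_tanh <- train(as.factor(Species) ~ .,
                    data = smotedata,
                    method = 'mlpML',
                    preProc = c('center', 'scale'),
                    trControl = controlparameters,
                    tuneGrid = mlp_grid,
                    hiddenActFunc = "Act_TanH")  # assumption: forwarded via train()'s "..."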

Related

How to add custom bias/offset in modeling neural network through neuralnet in R?

I have code for a neural network model which uses keras.
features <- layer_input(shape = c(ncol(feature_matrix)))
net <- features %>%
  layer_dense(units = q, activation = 'tanh') %>%
  layer_dense(units = 1, activation = k_exp)
volumes <- layer_input(shape = c(1))
offset <- volumes %>%
  layer_dense(units = 1, activation = 'linear', use_bias = FALSE, trainable = FALSE,
              weights = list(array(1, dim = c(1, 1))))
merged <- list(net, offset) %>%
  layer_multiply()
model <- keras_model(inputs = list(features, volumes), outputs = merged)
model %>% compile(loss = 'mse', optimizer = 'rmsprop')
fit <- model %>% fit(list(feature_matrix, offset_matrix), response_matrix,
                     epochs = 100, batch_size = 10000, validation_split = 0.1)
However, I cannot find a way to display the network architecture with keras. I want to redefine my neural network using the neuralnet package instead.
I just encountered neuralnet and am clueless on where I should insert the custom bias/offset that I have.
Its usage is given by
neuralnet(formula, data, hidden = 1, threshold = 0.01,
stepmax = 1e+05, rep = 1, startweights = NULL,
learningrate.limit = NULL, learningrate.factor = list(minus = 0.5,
plus = 1.2), learningrate = NULL, lifesign = "none",
lifesign.step = 1000, algorithm = "rprop+", err.fct = "sse",
act.fct = "logistic", linear.output = TRUE, exclude = NULL,
constant.weights = NULL, likelihood = FALSE)
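For reference, a minimal call using this interface looks like the sketch below; train_df, response, x1 and x2 are hypothetical placeholders, not objects from the keras model above:
library(neuralnet)
nn <- neuralnet(response ~ x1 + x2,
                data = train_df,
                hidden = c(5),
                act.fct = "logistic",
                linear.output = TRUE)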
How do I do that?

Error: argument "x" is missing, with no default?

I'm very new to XGBoost. I am trying to tune the parameters using the mlr library, but after setting the hyperparameters with setHyperPars(), training with train() throws an error (specifically when I run the xgmodel line): Error in colnames(x) : argument "x" is missing, with no default. I can't work out what this error means; below is the code:
library(mlr)
library(dplyr)
library(caret)
library(xgboost)

set.seed(12345)
n <- dim(mydata)[1]
id <- sample(1:n, floor(n * 0.6))
train <- mydata[id, ]
test <- mydata[-id, ]

traintask <- makeClassifTask(data = train, target = "label")
testtask <- makeClassifTask(data = test, target = "label")

# create learner
lrn <- makeLearner("classif.xgboost", predict.type = "response")
lrn$par.vals <- list(objective = "multi:softprob",
                     eval_metric = "merror")

# set parameter space
params <- makeParamSet(makeIntegerParam("max_depth", lower = 3L, upper = 10L),
                       makeIntegerParam("nrounds", lower = 20L, upper = 100L),
                       makeNumericParam("eta", lower = 0.1, upper = 0.3),
                       makeNumericParam("min_child_weight", lower = 1L, upper = 10L),
                       makeNumericParam("subsample", lower = 0.5, upper = 1),
                       makeNumericParam("colsample_bytree", lower = 0.5, upper = 1))

# set resampling strategy
configureMlr(show.learner.output = FALSE, show.info = FALSE)
rdesc <- makeResampleDesc("CV", stratify = TRUE, iters = 5L)

# set the search optimization strategy
ctrl <- makeTuneControlRandom(maxit = 10L)

# parameter tuning
set.seed(12345)
mytune <- tuneParams(learner = lrn, task = traintask,
                     resampling = rdesc, measures = acc,
                     par.set = params, control = ctrl,
                     show.info = FALSE)

# build model using the tuned parameters
# set hyperparameters
lrn_tune <- setHyperPars(lrn, par.vals = mytune$x)
# train model
xgmodel <- train(learner = lrn_tune, task = traintask)
Could anyone tell me what's wrong!?
You have to be very careful when loading multiple packages that may involve methods with the same name - here caret and mlr, which both include a train method. Moreover, the order of the library statements is significant: here, as caret is loaded after mlr, it masks functions with the same name from it (and possibly every other package loaded previously), like train.
In your case, where you obviously want to use the train method from mlr (and not from caret), you should declare this explicitly in your code:
xgmodel = mlr::train(learner = lrn_tune,task = traintask)
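As a quick check (an aside, not part of the original answer), you can confirm which package a bare name currently resolves to and list all masking conflicts:
environment(train)         # e.g. <environment: namespace:caret> when caret masks mlr's train
conflicts(detail = TRUE)   # list masked objects per attached package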

Partial dependence must be requested with partial.dep when tuning more than 2 hyperparameters?

I am tuning more than 2 hyperparameters. When generating hyperparameter effect data with generateHyperParsEffectData() I set partial.dep = TRUE, but when plotting with plotHyperParsEffect() I get an error for my classification learner: it requires a regression learner.
This is my task and learner for classification
classif.task <- makeClassifTask(id = "rfh2o.task", data = Train_clean, target = "Action")
rfh20.lrn.base <- makeLearner("classif.h2o.randomForest", predict.type = "prob",
                              fix.factors.prediction = TRUE)
rfh20.lrn <- makeFilterWrapper(rfh20.lrn.base, fw.method = "chi.squared", fw.perc = 0.5)
This is my tuning
rdesc <- makeResampleDesc("CV", iters = 3L, stratify = TRUE)
ps <- makeParamSet(makeDiscreteParam("fw.perc", values = seq(0.2, 0.8, 0.1)),
                   makeIntegerParam("mtries", lower = 2, upper = 10),
                   makeIntegerParam("ntrees", lower = 20, upper = 50))
Tuned_rf <- tuneParams(rfh20.lrn, task = classif.task, resampling = rdesc,
                       par.set = ps, control = makeTuneControlGrid())
While plotting the tuning results:
h2orf_data <- generateHyperParsEffectData(Tuned_rf, partial.dep = TRUE)
plotHyperParsEffect(h2orf_data, x = "iteration", y = "mmce.test.mean",
                    plot.type = "line", partial.dep.learn = rfh20.lrn)
I am getting the Error
Error in checkLearner(partial.dep.learn, "regr") :
Learner 'classif.h2o.randomForest.filtered' must be of type 'regr', not: 'classif'
I would expect to see the plot so that I can decide whether more hyperparameter tuning is needed. Am I missing something?
The partial.dep.learn parameter needs a regression learner; see the documentation.
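A hedged sketch of the fix; the choice of regr.randomForest here is just an example surrogate learner, not taken from the original answer:
regr.lrn <- makeLearner("regr.randomForest")
plotHyperParsEffect(h2orf_data, x = "iteration", y = "mmce.test.mean",
                    plot.type = "line", partial.dep.learn = regr.lrn)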

R: Can I pass the weight parameter into the params = list() in LightGBM

Recently, I am learning the LightGBM package and want to tune the parameters of it.
I want to try all the parameters which can be tuned in the LightGBM.
One question is: when I build the model using the function lightgbm(data, label = NULL, weight = NULL, params = list(), nrounds = 10, verbose = 1), can I put weight, nrounds and many other parameters into a list object and feed it to the params argument?
The following code is what I used:
# input data for lgb.Dataset()
data_lgb <- lgb.Dataset(data = X_tr,
                        label = y_tr)
# can I put all parameters to be tuned into this list?
params_list <- list(weight = NULL, nrounds = 20, verbose = 1, learning_rate = 0.1)
# build the lightgbm model using only data_lgb and params_list
lgb_model <- lightgbm(data_lgb, params = params_list)
Can I do this using the above code?
I ask because I have a large training data set (2 million rows and 700 features). If I construct the dataset inside the call, e.g. lightgbm(data = lgb.Dataset(data = X_tr, label = y_tr), params = params_list), it gets rebuilt for every model, which takes time. So I build the dataset once, keep it constant across models, and only vary the parameters.
However, I am not sure which parameters can go into params_list. For example, can the weight parameter be in params_list? When I look at the help page ?lightgbm, I notice that weight and many other parameters sit outside the params argument.
Can you help me figure out which parameters can be put into params_list? In other words, is it feasible to build the final model using only the data argument and the params argument, with everything else placed in the params list as shown above?
Thank you.
LightGBM has many parameters you can tune; please read the documentation.
Below is an excerpt from one of my model scripts that shows the process; it should be a good hint for you.
nthread <- as.integer(future::availableCores())
seed <- 1000
EARLY_STOPPING <- 50
nrounds <- 1000

param <- list(objective = "regression",
              metric = "rmse",
              max_depth = 3,
              num_leaves = 5,
              learning_rate = 0.1,
              nthread = nthread,
              bagging_fraction = 0.7,
              feature_fraction = 0.7,
              bagging_freq = 5,
              bagging_seed = seed,
              verbosity = -1,
              min_data_in_leaf = 5)

dtrain <- lgb.Dataset(data = as.matrix(train_X),
                      label = train_y)
dval <- lgb.Dataset(data = as.matrix(val_X),
                    label = val_y)
valids <- list(val = dval)

bst <- lgb.train(param,
                 data = dtrain,
                 nrounds = nrounds,
                 data_random_seed = seed,
                 early_stopping_rounds = EARLY_STOPPING,
                 valids = valids)
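On the weight question specifically (a hedged aside, not part of the original answer): per-row weights are normally attached to the dataset object rather than passed in the params list. Depending on your lightgbm version, something like the sketch below should work; w_tr is a hypothetical numeric vector of per-row weights:
dtrain_w <- lgb.Dataset(data = as.matrix(train_X),
                        label = train_y,
                        weight = w_tr)   # assumption: your lgb.Dataset version accepts a weight argument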

Get caret to print progress

I am trying to optimize an xgboost tree using feature selection with caret's genetic algorithm:
results <- gafs(iris[, 1:4], iris[, 5],
                iters = 2,
                method = "xgbTree",
                metric = "Accuracy",
                gafsControl = gafsControl(functions = caretGA, method = "cv",
                                          repeats = 2, verbose = TRUE),
                trControl = trainControl(method = "cv", classProbs = TRUE,
                                         verboseIter = TRUE))
This is, however, very slow, so I would like to see some progress output. But no progress is printed, even though I set verbose = TRUE and verboseIter = TRUE. What am I doing wrong?
