R Error with neural network caret parameter tuning

I'm working on tuning parameters for a neural network exercise on the Boston dataset. I have been getting a persistent error:
Error: The tuning parameter grid should have columns size, decay
The following is the setup of my caret tuning:
caret_control <- trainControl(method = "repeatedcv",
                              number = 10,
                              repeats = 3)

caret_grid <- expand.grid(batch_size = seq(60, 120, 20),
                          dropout = 0.5,
                          size = 100,
                          decay = 0,
                          lr = 2e-6,
                          activation = "relu")

caret_t <- train(medv ~ ., data = chasRad,
                 method = "nnet",
                 metric = "RMSE",
                 trControl = caret_control,
                 tuneGrid = caret_grid,
                 verbose = FALSE)
Here chasRad holds 506 observations of 12 variables. Could anyone help fix the error, which seems to be triggered by the expanded grid?

The error you're getting should be interpreted as:
"The tuning parameter grid should ONLY have columns size, decay".
You're passing in four additional parameters that nnet can't tune in caret. For a full list of parameters that are tunable, run modelLookup(model = 'nnet').
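Running that lookup confirms it; the output looks like this:

library(caret)
modelLookup(model = 'nnet')
#   model parameter         label forReg forClass probModel
# 1  nnet      size #Hidden Units   TRUE     TRUE      TRUE
# 2  nnet     decay  Weight Decay   TRUE     TRUE      TRUE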
To tune only size and decay, replace your caret_grid with:
caret_grid <- expand.grid(size = seq(from = 1, to = 10, by = 1),
                          decay = seq(from = 0.1, to = 0.5, by = 0.1))
and your code will run.
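If you also want to set nnet arguments that caret does not tune, such as maxit, pass them through train's ... argument rather than the grid; they are forwarded to nnet as fixed values. A minimal sketch based on the call above (maxit = 500 is just an example value):

caret_t <- train(medv ~ ., data = chasRad,
                 method = "nnet",
                 metric = "RMSE",
                 trControl = caret_control,
                 tuneGrid = caret_grid,
                 maxit = 500,    # forwarded to nnet, held fixed across the grid
                 trace = FALSE)  # silence nnet's per-iteration output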

Related

How to get test data ROC plot from MLeval

I am trying to return the ROC curves for a test dataset using the MLeval package.
# Load data
train <- readRDS(paste0("Data/train.rds"))
test <- readRDS(paste0("Data/test.rds"))

# Create factor class
train$class <- ifelse(train$class == 1, 'yes', 'no')

# Set up control function for training
ctrl <- trainControl(method = "cv",
                     number = 5,
                     returnResamp = 'none',
                     summaryFunction = twoClassSummary,  # the function itself, not twoClassSummary()
                     classProbs = T,
                     savePredictions = T,
                     verboseIter = F)
gbmGrid <- expand.grid(interaction.depth = 10,
                       n.trees = 18000,
                       shrinkage = 0.01,
                       n.minobsinnode = 4)

# Build using a gradient boosted machine
set.seed(5627)
gbm <- train(class ~ .,
             data = train,
             method = "gbm",
             metric = "ROC",
             tuneGrid = gbmGrid,
             verbose = FALSE,
             trControl = ctrl)

# Predict results
pred <- predict(gbm, newdata = test, type = "prob")[, "yes"]
roc <- evalm(data.frame(pred, test$class))
I have used the following post, ROC curve for the testing set using Caret package, to try to plot the ROC from the test data using MLeval, and yet I get the following error message:
MLeval: Machine Learning Model Evaluation
Input: data frame of probabilities of observed labels
Error in names(x) <- value :
'names' attribute [3] must be the same length as the vector [2]
Can anyone please help? Thanks.
Please provide a reproducible example with sample data so we can replicate the error and test solutions (i.e., we cannot access train.rds or test.rds).
Nevertheless, the below may fix your issue.
pred <- predict(gbm, newdata = test, type = "prob")
roc <- evalm(data.frame(pred, test$class))
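If the error persists, note that only train$class was recoded to 'yes'/'no' above; recoding test$class the same way (a guess, since we cannot see test.rds) keeps the labels consistent with the model's class levels:

test$class <- ifelse(test$class == 1, 'yes', 'no')
pred <- predict(gbm, newdata = test, type = "prob")  # keep both class-probability columns
roc <- evalm(data.frame(pred, test$class))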

How to prevent "algorithm did not converge" errors in neuralnet / Caret / R?

I am trying to train a neural network to predict the times table, using the train function with neuralnet as my method parameter.
I am scaling my training data set as well.
Even though I've tried different learning rates, stepmax values, and thresholds for neuralnet, every time I train the network with train, one of the k folds fails with:
1: Algorithm did not converge in 1 of 1 repetition(s) within the stepmax.
2: predictions failed for Fold05.Rep1: layer1=8, layer2=0, layer3=0
   Error in cbind(1, pred) %*% weights[[num_hidden_layers + 1]] :
   requires numeric/complex matrix/vector arguments
I am guessing this is because the starting weights are random, so each time I happen to get some weights that will not converge.
Is there any way of preventing this? Maybe re-training the particular fold that failed with different weights?
Here is my code:
library(caret)
library(neuralnet)

# Create the dataset
tt = data.frame(multiplier = rep(1:10, times = 10),
                multiplicand = rep(1:10, each = 10))
tt = cbind(tt, data.frame(product = tt$multiplier * tt$multiplicand))

# Splitting
indexes = createDataPartition(tt$product,
                              times = 1,
                              p = 0.7,
                              list = FALSE)
tt.train = tt[indexes, ]
tt.test = tt[-indexes, ]

# Pre-process
preProc <- preProcess(tt, method = c('center', 'scale'))
tt.preProcessed <- predict(preProc, tt)
tt.preProcessed.train <- tt.preProcessed[indexes, ]
tt.preProcessed.test <- tt.preProcessed[-indexes, ]

# Train
train.control <- trainControl(method = "repeatedcv",
                              number = 10,
                              repeats = 3)
tune.grid <- expand.grid(layer1 = 8,
                         layer2 = 0,
                         layer3 = 0)
tt.cv <- train(product ~ .,
               data = tt.preProcessed.train,
               method = 'neuralnet',
               tuneGrid = tune.grid,
               trControl = train.control,
               linear.output = TRUE,
               algorithm = 'backprop',
               learningrate = 0.01,
               stepmax = 500000,
               lifesign = 'minimal',
               threshold = 0.01)
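One possible workaround, sketched under the assumption that a failed fold surfaces as a warning from train(): retry the whole resampling run under a different seed until every fold converges.

# A rough sketch: discard any run where a fold failed (caught as a warning)
# and retry with a new seed, so the random starting weights differ.
tt.cv <- NULL
for (seed in 1:10) {
  set.seed(seed)
  tt.cv <- tryCatch(
    train(product ~ .,
          data = tt.preProcessed.train,
          method = 'neuralnet',
          tuneGrid = tune.grid,
          trControl = train.control,
          linear.output = TRUE,
          algorithm = 'backprop',
          learningrate = 0.01,
          stepmax = 500000,
          threshold = 0.01),
    warning = function(w) NULL,  # a non-converged fold warns; retry
    error = function(e) NULL)
  if (!is.null(tt.cv)) break
}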

Tuning XGboost parameters Using Caret - Error: The tuning parameter grid should have columns

I am using caret for modeling with "xgboost".
1- However, I get the following error:
"Error: The tuning parameter grid should have columns nrounds, max_depth, eta, gamma, colsample_bytree, min_child_weight, subsample"
The code:
library(caret)
library(doParallel)
library(dplyr)
library(pROC)
library(xgboost)

## Create train/test indexes
## preserve class indices
set.seed(42)
my_folds <- createFolds(train_churn$churn, k = 10)

# Compare class distribution
i <- my_folds$Fold1
table(train_churn$churn[i]) / length(i)

my_control <- trainControl(
  summaryFunction = twoClassSummary,
  classProbs = TRUE,
  verboseIter = TRUE,
  savePredictions = TRUE,
  index = my_folds
)

my_grid <- expand.grid(nrounds = 500,
                       max_depth = 7,
                       eta = 0.1,
                       gammma = 1,
                       colsample_bytree = 1,
                       min_child_weight = 100,
                       subsample = 1)

set.seed(42)
model_xgb <- train(
  class ~ ., data = train_churn,
  metric = "ROC",
  method = "xgbTree",
  trControl = my_control,
  tuneGrid = my_grid)
2- I also want to get predictions by averaging the predictions of the models fitted on each fold.
I know it's a tad late, but check your spelling of gamma in the grid of tuning parameters. You misspelled it as gammma (with triple m's).
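With the spelling fixed, the grid has exactly the columns xgbTree expects:

my_grid <- expand.grid(nrounds = 500,
                       max_depth = 7,
                       eta = 0.1,
                       gamma = 1,  # was misspelled 'gammma'
                       colsample_bytree = 1,
                       min_child_weight = 100,
                       subsample = 1)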

Can't pass xgb.DMatrix to caret

I am trying to tune hyperparameters of xgboost for a classification problem using the caret library. As there are a lot of factors in my data set and xgboost needs numeric data, I created dummy variables using feature hashing, but when I run caret's train I get an error.
# Using feature hashing to convert all the factor variables to dummies
objTrain_hashed = hashed.model.matrix(~., data = train1[, -27],
                                      hash.size = 2^15, transpose = FALSE)
# Created a dense matrix which is normally accepted by the xgboost method in R;
# hoping I could pass it to caret as well
dmodel <- xgb.DMatrix(objTrain_hashed[, ], label = train1$Walc)
xgb_grid_1 = expand.grid(
  nrounds = 500,
  max_depth = c(5, 10, 15),
  eta = c(0.01, 0.001, 0.0001),
  gamma = c(1, 2, 3),
  colsample_bytree = c(0.4, 0.7, 1.0),
  min_child_weight = c(0.5, 1, 1.5)
)

xgb_trcontrol_1 = trainControl(
  method = "cv",
  number = 3,
  verboseIter = TRUE,
  returnData = FALSE,
  returnResamp = "all",  # save losses across all models
  classProbs = TRUE,     # set to TRUE for AUC to be computed
  summaryFunction = twoClassSummary,
  allowParallel = TRUE
)

xgb_train1 <- train(Walc ~ ., dmodel, method = 'xgbTree',
                    trControl = xgb_trcontrol_1,
                    metric = 'accuracy', tunegrid = xgb_grid_1)
I am getting the following error
Error in as.data.frame.default(data) :
cannot coerce class ""xgb.DMatrix"" to a data.frame
Any suggestions, on how I can proceed ?
This is because you are passing dmodel into the last part of your code. Try passing objTrain_hashed, which is a matrix, not an xgb.DMatrix.
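Building on that, a sketch of the corrected call using caret's x/y interface (assuming train1$Walc is a two-level factor whose levels are valid R names, as classProbs = TRUE requires, and that caret accepts the hashed matrix; note also that the argument is tuneGrid, not tunegrid, and that twoClassSummary reports ROC, not accuracy):

xgb_train1 <- train(x = objTrain_hashed,      # the plain matrix, not the xgb.DMatrix
                    y = factor(train1$Walc),  # outcome passed separately
                    method = 'xgbTree',
                    trControl = xgb_trcontrol_1,
                    metric = 'ROC',           # what twoClassSummary actually computes
                    tuneGrid = xgb_grid_1)    # correct capitalization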
How about sparse.model.matrix() instead of hashed.model.matrix()?
It works on my PC.
Also, don't transform to xgb.DMatrix(); put the plain sparse.model.matrix() result straight into the train() function, like:

model_data <- sparse.model.matrix(Y ~ ., raw_data)

and

xgb_train1 <- train(Y ~ ., model_data, <bla bla> ...)

Hope it works... thank you.

How to instruct xgboost in caret to use mlogloss for optimization

I have a multiclass problem: for example, we can take the mtcars dataset and try to predict the number of cylinders cyl.
data(mtcars)
I want to use xgboost and fit it using the caret package. For that I create a grid of hyperparameters using
xgb_grid_param = expand.grid(
  nrounds = 1000,
  eta = c(0.01, 0.001, 0.0001),
  max_depth = c(2, 4),
  gamma = 0,
  colsample_bytree = 1,
  min_child_weight = 1
)
I can create training control parameters as
xgb_tr_ctrl = trainControl(
  method = "cv",
  number = 5,
  repeats = 2,
  verboseIter = TRUE,
  returnData = FALSE,
  returnResamp = "all",
  allowParallel = TRUE
)
When I then try to run the train function in caret using:
model <- train(factor(cyl) ~ ., data = mtcars, method = "xgbTree",
               trControl = xgb_grid_param, tuneGrid = xgb_grid_param)
I get the error:
Error in trControl$classProbs && any(classLevels != make.names(classLevels)) :
  invalid 'x' type in 'x && y'
How do I fix this error, and how do I instruct xgbTree to use mlogloss to optimize the learning?
For another method I could solve the "invalid 'x' type in 'x && y'" error by setting the label attribute as the last column of the data frame / matrix.
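Also worth checking: the call above passes xgb_grid_param (a data frame) as trControl, so trControl$classProbs is NULL, which is exactly what 'x && y' complains about. A sketch of a corrected call that also optimizes multiclass log loss via caret's mnLogLoss summary; relabeling cyl is needed because classProbs = TRUE requires class levels that are valid R names:

dat <- mtcars
dat$cyl <- factor(paste0("cyl", dat$cyl))  # "cyl4", "cyl6", "cyl8": valid R names

ctrl <- trainControl(method = "cv",
                     number = 5,
                     classProbs = TRUE,            # needed to compute log loss
                     summaryFunction = mnLogLoss,  # multiclass log loss
                     verboseIter = TRUE,
                     allowParallel = TRUE)

model <- train(cyl ~ ., data = dat,
               method = "xgbTree",
               trControl = ctrl,            # the control object, not the grid
               tuneGrid = xgb_grid_param,
               metric = "logLoss",          # minimize mlogloss
               maximize = FALSE)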
