H2O deep learning model results with dropout scaled down - r

I am getting the following figure when training an H2O Deep Learning model with dropout:
[Figure: misaligned predictions]
The code used to train the net is:
m.nn <- h2o.deeplearning(x = 1:(nc-1),
                         y = nc,
                         training_frame = datTra,
                         #validation_frame = datTst,
                         nfolds = 5,
                         activation = 'RectifierWithDropout',
                         #input_dropout_ratio = 0.2,
                         hidden_dropout_ratios = c(dro, dro, dro),
                         hidden = c(120,30,8),
                         #hidden = 20,
                         epochs = 999,
                         #mini_batch_size = 100,
                         #variable_importances = TRUE,
                         standardize = TRUE,
                         regression_stop = 1e-3,
                         stopping_metric = "MSE",
                         stopping_tolerance = 1e-6,
                         stopping_rounds = 10)
The figure corresponds to dro = 0.1.
Why am I getting that misalignment? Is there any option I am missing?
You can find a piece of code to try below (download 'SampleData.csv' from here).
library(h2o)
library(readr)
library(ggplot2)
df <- as.data.frame(read_delim(file = 'SampleData.csv', delim = ";"))
localH2O <- h2o.init(ip = "localhost", startH2O = TRUE, nthreads = 2, max_mem_size = '4g')
dat_h2o <- as.h2o(x = df)
model.ref <- h2o.deeplearning(x = 1:(ncol(df)-1), y = ncol(df),
                              training_frame = dat_h2o,
                              hidden = c(120,30,8),
                              activation = 'Rectifier',
                              epochs = 199,
                              mini_batch_size = 10,
                              regression_stop = 0.1,
                              stopping_metric = "MSE",
                              stopping_tolerance = 1e-6,
                              stopping_rounds = 10)
model.dro <- h2o.deeplearning(x = 1:(ncol(df)-1), y = ncol(df),
                              training_frame = dat_h2o,
                              hidden = c(120,30,8),
                              activation = 'RectifierWithDropout',
                              hidden_dropout_ratios = c(0.2, 0.2, 0.2),
                              epochs = 199,
                              mini_batch_size = 10,
                              regression_stop = 0.1,
                              stopping_metric = "MSE",
                              stopping_tolerance = 1e-6,
                              stopping_rounds = 10)
pred.ref <- as.data.frame(h2o.predict(object = model.ref, newdata = dat_h2o))
pred.dro <- as.data.frame(h2o.predict(object = model.dro, newdata = dat_h2o))
dfRes <- data.frame(cbind(df$SeqF, pred.ref$predict, pred.dro$predict))
colnames(dfRes) <- c('act', 'pred', 'pred2')
ggplot(data = dfRes) + geom_point(aes(x=act, y=pred), color='blue') +
geom_point(aes(x=act, y=pred2), color='red') + geom_abline()
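One quick way to quantify the effect seen in the figure is to estimate how much each model's predictions are shrunk relative to the actual values. This is only a diagnostic sketch, reusing the dfRes data frame built above:
# Slope of a simple linear fit of predictions on actuals: roughly 1 for the
# reference model (pred) and noticeably below 1 for the dropout model (pred2)
# if its predictions really are scaled down.
coef(lm(pred ~ act, data = dfRes))
coef(lm(pred2 ~ act, data = dfRes))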

Related

Error while running h2o.deeplearning algorithm in R

I am facing an error while running this command in H2O Deep Learning in R:
model <- h2o.deeplearning(x = x, y = y, seed = 1234,
                          training_frame = as.h2o(trainDF),
                          nfolds = 3,
                          stopping_rounds = 7,
                          epochs = 400,
                          overwrite_with_best_model = TRUE,
                          activation = "Tanh",
                          input_dropout_ratio = .1,
                          hidden = c(10,10),
                          l1 = 6e-4,
                          loss = "automatic",
                          distribution = 'AUTO',
                          stopping_metric = "MSE")
The error is as follows:
Error in h2o.deeplearning(x = x, y = y, seed = 1234, training_frame = as.h2o(trainDF), :
unused arguments (training_frame = as.h2o(trainDF), stopping_rounds = 7, overwrite_with_best_model = TRUE, distribution = "AUTO", stopping_metric = "MSE")
I was not able to reproduce your specific error, but I was able to get the code to work on my end by updating loss="automatic" to loss="Automatic" (note that loss is case-sensitive).
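For reference, a sketch of the corrected call, which is just the question's call with the loss value capitalised (x, y, and trainDF assumed to be defined as in the question):
model <- h2o.deeplearning(x = x, y = y, seed = 1234,
                          training_frame = as.h2o(trainDF),
                          nfolds = 3,
                          stopping_rounds = 7,
                          epochs = 400,
                          overwrite_with_best_model = TRUE,
                          activation = "Tanh",
                          input_dropout_ratio = .1,
                          hidden = c(10,10),
                          l1 = 6e-4,
                          loss = "Automatic",   # capital "A": the value is case-sensitive
                          distribution = 'AUTO',
                          stopping_metric = "MSE")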

Retrain mxnet model in R

I have created a neural network with mxnet. Now I want to train this model iteratively on new data points: after I simulate a new data point, I want to make a new gradient-descent update on the model. I do not want to save the model to an external file and load it again.
I have written the following code, but the weights do not change after a new training step. I also get NaN as a training error.
library(mxnet)
data <- mx.symbol.Variable("data")
fc1 <- mx.symbol.FullyConnected(data, num_hidden = 2, no.bias = TRUE)
lro <- mx.symbol.LinearRegressionOutput(fc1)
# first data observation
train.x = matrix(0, ncol = 3)
train.y = matrix(0, nrow = 2)
# first training step
model = mx.model.FeedForward.create(lro,
                                    X = train.x, y = train.y, initializer = mx.init.uniform(0.001),
                                    num.round = 1, array.batch.size = 1, array.layout = "rowmajor",
                                    learning.rate = 0.1, eval.metric = mx.metric.mae)
print(model$arg.params)
# second data observation
train.x = matrix(0, ncol = 3)
train.x[1] = 1
train.y = matrix(0, nrow = 2)
train.y[1] = -33
# retrain model on new data
# pass on params of old model
model = mx.model.FeedForward.create(symbol = model$symbol,
                                    arg.params = model$arg.params, aux.params = model$aux.params,
                                    X = train.x, y = train.y, num.round = 1,
                                    array.batch.size = 1, array.layout = "rowmajor",
                                    learning.rate = 0.1, eval.metric = mx.metric.mae)
# weights do not change
print(model$arg.params)
I found a solution. begin.round in the second training step must be greater than num.round in the first training step, so that the model continues to train.
library(mxnet)
data <- mx.symbol.Variable("data")
fc1 <- mx.symbol.FullyConnected(data, num_hidden = 2, no.bias = TRUE)
lro <- mx.symbol.LinearRegressionOutput(fc1)
# first data observation
train.x = matrix(0, ncol = 3)
train.y = matrix(0, nrow = 2)
# first training step
model = mx.model.FeedForward.create(lro,
                                    X = train.x, y = train.y, initializer = mx.init.uniform(0.001),
                                    num.round = 1, array.batch.size = 1, array.layout = "rowmajor",
                                    learning.rate = 0.1, eval.metric = mx.metric.mae)
print(model$arg.params)
# second data observation
train.x = matrix(0, ncol = 3)
train.x[1] = 1
train.y = matrix(0, nrow = 2)
train.y[1] = -33
# retrain model on new data
# pass on params of old model
model = mx.model.FeedForward.create(symbol = model$symbol,
                                    arg.params = model$arg.params, aux.params = model$aux.params,
                                    X = train.x, y = train.y, begin.round = 2, num.round = 3,
                                    array.batch.size = 1, array.layout = "rowmajor",
                                    learning.rate = 0.1, eval.metric = mx.metric.mae)
print(model$arg.params)
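The same pattern extends to further updates: each new call just needs begin.round to pick up above where the previous num.round left off. A sketch of a hypothetical third update, assuming new train.x and train.y values have been prepared:
# Third incremental update: begin.round (4) is greater than the previous
# num.round (3), so optimisation continues from the existing weights
# instead of restarting.
model = mx.model.FeedForward.create(symbol = model$symbol,
                                    arg.params = model$arg.params, aux.params = model$aux.params,
                                    X = train.x, y = train.y, begin.round = 4, num.round = 5,
                                    array.batch.size = 1, array.layout = "rowmajor",
                                    learning.rate = 0.1, eval.metric = mx.metric.mae)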
Did you try calling mx.model.FeedForward.create only once and then using the fit function for incremental training?

Custom Xgboost Hyperparameter tuning

I use the following code to tune parameters for my Xgboost implementation adapted from here:
searchGridSubCol <- expand.grid(subsample = c(0.5, 0.75, 1),
                                colsample_bytree = c(0.6, 0.8, 1))
ntrees <- 100
#Build a xgb.DMatrix object
#DMMatrixTrain <- xgb.DMatrix(data = yourMatrix, label = yourTarget)
rmseErrorsHyperparameters <- apply(searchGridSubCol, 1, function(parameterList){
  # Extract parameters to test
  currentSubsampleRate <- parameterList[["subsample"]]
  currentColsampleRate <- parameterList[["colsample_bytree"]]
  xgboostModelCV <- xgb.cv(data = as.matrix(train), nrounds = ntrees, nfold = 5, showsd = TRUE, label = traintarget,
                           metrics = "rmse", verbose = TRUE, "eval_metric" = "rmse",
                           "objective" = "reg:linear", "max.depth" = 15, "eta" = 2/ntrees,
                           "subsample" = currentSubsampleRate, "colsample_bytree" = currentColsampleRate)
  xvalidationScores <- as.data.frame(xgboostModelCV)
  # Save rmse of the last iteration
  rmse <- tail(xvalidationScores$test.rmse.mean, 1)
  return(c(rmse, currentSubsampleRate, currentColsampleRate))
})
However, I receive the following error when storing xgboostModelCV:
Error in as.data.frame.default(xgboostModelCV) :
cannot coerce class ""xgb.cv.synchronous"" to a data.frame
Can someone explain what is causing this error and how I can fix it?
The above should be fixed by:
xvalidationScores <- xgboostModelCV
#Save rmse of the last iteration
rmse <- tail(xvalidationScores$evaluation_log$test_rmse_mean, 1)
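Once that coercion is removed, apply() returns a matrix with one column per parameter combination, so a small post-processing step makes it easy to pick the best setting. A sketch, assuming the rmseErrorsHyperparameters object from the question:
# Each column is c(rmse, subsample, colsample_bytree); transpose into a
# data frame and keep the row with the lowest cross-validated RMSE.
output <- as.data.frame(t(rmseErrorsHyperparameters))
names(output) <- c("TestRMSE", "SubsampleRate", "ColsampleRate")
output[which.min(output$TestRMSE), ]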

Error in running h2o.ensemble

I am getting an error while running h2o.ensemble in R. This is the error output:
[1] "Cross-validating and training base learner 1: h2o.glm.wrapper"
|======================================================================| 100%
[1] "Cross-validating and training base learner 2: h2o.randomForest.1"
|============== | 19%
Got exception 'class java.lang.AssertionError', with msg 'null'
java.lang.AssertionError
at hex.tree.DHistogram.scoreMSE(DHistogram.java:323)
at hex.tree.DTree$DecidedNode$FindSplits.compute2(DTree.java:441)
at hex.tree.DTree$DecidedNode.bestCol(DTree.java:421)
at hex.tree.DTree$DecidedNode.<init>(DTree.java:449)
at hex.tree.SharedTree.makeDecided(SharedTree.java:489)
at hex.tree.SharedTree$ScoreBuildOneTree.onCompletion(SharedTree.java:436)
at jsr166y.CountedCompleter.__tryComplete(CountedCompleter.java:425)
at jsr166y.CountedCompleter.tryComplete(CountedCompleter.java:383)
at water.MRTask.compute2(MRTask.java:683)
at water.H2O$H2OCountedCompleter.compute(H2O.java:1069)
at jsr166y.CountedCompleter.exec(CountedCompleter.java:468)
at jsr166y.ForkJoinTask.doExec(ForkJoinTask.java:263)
at jsr166y.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:974)
at jsr166y.ForkJoinPool.runWorker(ForkJoinPool.java:1477)
at jsr166y.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:104)
Error: 'null'
This is the code I am using. The script is for a regression problem: the "Sales" column is the prediction target, and the rest of the columns are used for training.
response <- "Sales"
predictors <- setdiff(names(train), response)
h2o.glm.1 <- function(..., alpha = 0.0) h2o.glm.wrapper(..., alpha = alpha)
h2o.glm.2 <- function(..., alpha = 0.5) h2o.glm.wrapper(..., alpha = alpha)
h2o.glm.3 <- function(..., alpha = 1.0) h2o.glm.wrapper(..., alpha = alpha)
h2o.randomForest.1 <- function(..., ntrees = 200, nbins = 50, seed = 1) h2o.randomForest.wrapper(..., ntrees = ntrees, nbins = nbins, seed = seed)
h2o.randomForest.2 <- function(..., ntrees = 200, sample_rate = 0.75, seed = 1) h2o.randomForest.wrapper(..., ntrees = ntrees, sample_rate = sample_rate, seed = seed)
h2o.gbm.1 <- function(..., ntrees = 100, seed = 1) h2o.gbm.wrapper(..., ntrees = ntrees, seed = seed)
h2o.gbm.6 <- function(..., ntrees = 100, col_sample_rate = 0.6, seed = 1) h2o.gbm.wrapper(..., ntrees = ntrees, col_sample_rate = col_sample_rate, seed = seed)
h2o.gbm.8 <- function(..., ntrees = 100, max_depth = 3, seed = 1) h2o.gbm.wrapper(..., ntrees = ntrees, max_depth = max_depth, seed = seed)
h2o.deeplearning.1 <- function(..., hidden = c(500,500), activation = "Rectifier", epochs = 50, seed = 1) h2o.deeplearning.wrapper(..., hidden = hidden, activation = activation, seed = seed)
h2o.deeplearning.6 <- function(..., hidden = c(50,50), activation = "Rectifier", epochs = 50, seed = 1) h2o.deeplearning.wrapper(..., hidden = hidden, activation = activation, seed = seed)
h2o.deeplearning.7 <- function(..., hidden = c(100,100), activation = "Rectifier", epochs = 50, seed = 1) h2o.deeplearning.wrapper(..., hidden = hidden, activation = activation, seed = seed)
print("learning starts ")
#### Customized base learner library
learner <- c("h2o.glm.wrapper",
             "h2o.randomForest.1", "h2o.randomForest.2",
             "h2o.gbm.1", "h2o.gbm.6", "h2o.gbm.8",
             "h2o.deeplearning.1", "h2o.deeplearning.6", "h2o.deeplearning.7")
metalearner <- "h2o.glm.wrapper"
#
#Train with new library:
fit <- h2o.ensemble(x = predictors,
                    y = response,
                    training_frame = train,
                    family = "gaussian",
                    learner = learner,
                    metalearner = metalearner,
                    cvControl = list(V = 5))
All columns of the training data are numeric. I am using R version 3.2.2.
The updated way to do this is:
h2o.init(nthreads = -1, enable_assertions = FALSE)
As suggested by Spencer Aiello, setting the assertions to FALSE in the h2o initialisation might do the trick:
h2o.init(nthreads = -1, assertion = FALSE)
Make sure that you properly shut down/restart h2o before applying the change:
h2o.shutdown()
h2o.init(nthreads = -1, assertion = FALSE)
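Putting the two pieces together, a full restart sequence using the newer argument name would be (a sketch; check which argument name your installed h2o version accepts):
# Shut the cluster down without prompting, then bring it back up with
# Java assertions disabled before re-running h2o.ensemble().
h2o.shutdown(prompt = FALSE)
h2o.init(nthreads = -1, enable_assertions = FALSE)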

Caret - Scaling SVM tuning parameter (Sigma) when using plot.train

I am using the caret package to tune an SVM model.
Is there a way to scale the sigma values, similarly to the cost values, when plotting the results (as shown in the attached figure)?
Here are my tuning values:
svmGrid <- expand.grid(sigma= 2^c(-25, -20, -15,-10, -5, 0), C= 2^c(0:5))
Code to produce the plot:
pdf("./Figures/svm/svmFit_all.pdf", width=7, height = 5)
trellis.par.set(caretTheme())
plot(svmFit.all, scales = list(x = list(log = 2)))
dev.off()
Thanks
You would have to do it yourself via lattice:
library(caret)
set.seed(1345)
dat <- twoClassSim(2000)
svmGrid <- expand.grid(sigma= 2^c(-25, -20, -15,-10, -5, 0), C= 2^c(0:5))
set.seed(45)
mod <- train(Class ~ ., data = dat,
             method = "svmRadial",
             preProc = c("center", "scale"),
             tuneGrid = svmGrid,
             metric = "ROC",
             trControl = trainControl(method = "cv",
                                      classProbs = TRUE,
                                      summaryFunction = twoClassSummary))
tmp <- mod$results
tmp$sigma2 <- paste0("2^", format(log2(tmp$sigma)))
xyplot(ROC ~ C, data = tmp,
       groups = sigma2,
       type = c("p", "l"),
       auto.key = list(columns = 4, lines = TRUE),
       scales = list(x = list(log = 2)),
       xlab = "Cost",
       ylab = "ROC (Cross-Validation)")
Max
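If you would rather stay in ggplot2 (already loaded in the main question above), an equivalent plot of the same tmp data frame could look like the sketch below; sigma2 is the label column built in the answer.
library(ggplot2)
# Same information as the xyplot: ROC vs. cost on a log2 x-axis,
# one line per sigma value, labelled via the sigma2 column.
ggplot(tmp, aes(x = C, y = ROC, colour = sigma2, group = sigma2)) +
  geom_point() +
  geom_line() +
  scale_x_continuous(trans = "log2") +
  labs(x = "Cost", y = "ROC (Cross-Validation)", colour = "sigma")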
