Saving and Loading an mlr Model (R)

I've been able to successfully train a model for my multilabel assignment using the mlr library with the following setup:
scene.task = makeMultilabelTask(data = training.data, target = labels)
lrn.br = makeLearner("classif.rpart", predict.type = "prob")
lrn.br = makeMultilabelNestedStackingWrapper(lrn.br)
mdl.lrn <- train(lrn.br, scene.task)
I saved my model using the following line:
save(mdl.lrn, file = "mdl.rda")
The model that is saved is just over 1GB in size.
I have attempted to load that saved model using the following line,
load("mdl.rda")
However, loading repeatedly fails with an out-of-memory message that reads Error: cannot allocate vector of size ### Kb. I believe my current amount of memory should be enough to load the model, since it was enough to train it in the first place.
Has anyone been able to save and load an mlr generated model for reuse? If so, would you be able to recommend or confirm the approach of saving and loading a model?
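(Not from the original question, just a minimal sketch of an alternative worth trying.) saveRDS() writes a single object and supports stronger compression, and readRDS() returns the object directly; note that loading still needs enough free memory to hold the whole model.
saveRDS(mdl.lrn, file = "mdl.rds", compress = "xz")  # smaller file on disk, slower to write
mdl.lrn <- readRDS("mdl.rds")                        # in a fresh session, bind to any name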

Related

How to save a ranger model in mlr3 without data?

I have created a ranger model using the mlr3 library. I saved this model to my machine using the following command. The created file is huge in size, and it contains the data along with the model. Is there a way to save only the model without the data?
learner_ranger = lrn("classif.ranger", predict_type = "prob", predict_sets = c("train", "test"), importance = "impurity", num.threads = 8)
learner_ranger$train(train_task)
save(learner_ranger, file = "model.rda")
When I try to load this saved model it does not load the model correctly.
learner_ranger = load("model.rda")
str(learner_ranger)
Error in str(learner_ranger) : object 'learner_ranger' not found
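(A sketch for context, not from the original post, assuming the .rda file really contains an object named learner_ranger.) load() restores objects under the names they were saved with and returns only those names as a character vector, so the restored learner is referenced by its original name rather than by the return value of load():
load("model.rda")     # recreates learner_ranger in the workspace
str(learner_ranger)   # the object is available under its saved name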
To reduce the file size and save only the model, I have tried the following, but I am getting an error:
save(learner_ranger$model, file = "model.rda")
The error I am getting is -
Error in save(learner_ranger$model, file = "model.rda") :
object ‘learner_ranger$model’ not found
After some research, I found that there are two ways to save and load a model in R:
using save() and load(): when we use save(), we have to load the object back under the same name.
using saveRDS() and readRDS(): saveRDS() does not store the object's name, so we have the flexibility to load the model under any other name. But saveRDS() can only save one object at a time, as it is a lower-level function.
Many people prefer saveRDS() over save() because readRDS() returns the object directly instead of restoring it under its original name.
I am still looking for ways to save the model without data.
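(A minimal sketch, not from the original post; the file name is illustrative.) saveRDS() accepts an arbitrary expression rather than an object name, which also sidesteps the save(learner_ranger$model, ...) error above, and it writes only the fitted ranger object; whether that alone shrinks the file depends on what the ranger fit itself stores:
saveRDS(learner_ranger$model, file = "ranger_model.rds")  # write just the fitted model
ranger_fit <- readRDS("ranger_model.rds")                 # read it back under any name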

Error loading quantized BERT model from local repository

After quantizing the BERT model, it works without any issue. But if I save the quantized model and load it, it does not work. It shows an error message: 'LinearPackedParams' object has no attribute '_modules'. I have used the same device to save and load the quantized model.
from sentence_transformers import SentenceTransformer
import torch
model = SentenceTransformer('bert-base-nli-mean-tokens')
model.encode(sentences)
# dynamically quantize the Linear layers to int8
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8)
quantized_model.encode(sentences)  # works before saving
torch.save(quantized_model, "/PATH/TO/DESTINATION/Base_bert_quant.pt")
model = torch.load("/SAME/PATH/Base_bert_quant.pt")
model.encode(sentences)  # shows the error

MXNet Time-series Example - Dropout Error when running locally

I am looking into using MXNet LSTM modelling for time-series analysis for a problem I am currently working on.
As a way of understanding how to implement this, I am following the example code given by MXNet at this link: https://mxnet.incubator.apache.org/tutorials/r/MultidimLstm.html
After downloading the necessary data to my local source, I am able to execute the code fine until I get to the following section to train the model:
## train the network
system.time(model <- mx.model.buckets(symbol = symbol,
                                      train.data = train.data,
                                      eval.data = eval.data,
                                      num.round = 100,
                                      ctx = ctx,
                                      verbose = TRUE,
                                      metric = mx.metric.mse.seq,
                                      initializer = initializer,
                                      optimizer = optimizer,
                                      batch.end.callback = NULL,
                                      epoch.end.callback = epoch.end.callback))
When running this section, the following error occurs once a connection to the API is established.
Error in mx.nd.internal.as.array(nd) :
[14:22:53] c:\jenkins\workspace\mxnet\mxnet\src\operator\./rnn-inl.h:359:
Check failed: param_.p == 0 (0.2 vs. 0) Dropout is not supported at the moment.
Is there currently an internal problem within the MXNet R package that prevents this code from running? I can't imagine they would provide a tutorial example for the package that is not executable.
My other thought is that it is something to do with my local device execution and connection to the API. I haven't been able to find any information about this being a problem for other users though.
Any inputs or suggestions would be greatly appreciated thanks.
Looks like you're running an old version of the R package. I think following the instructions on this page to build a recent R package should resolve this issue.
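(A small aside, not part of the original answer.) You can confirm which version of the R package you are running before rebuilding:
packageVersion("mxnet")  # prints the installed mxnet R package version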

Save the Deep learning model by R language

I have built a deep learning model with the h2o package in R. The model performs well and I want to save it. The save() and save.image() functions are provided in the base package of R, and I used save() to store my model. But when I want to use the saved model on new data, I am told that the "model" object is not found in the function. I have been confused by this problem for a few days. If you have any good ideas, please tell me. Thanks for reading!
load("F:/R/Rstudy/myfile") ##download the saved file
library(h2o)
h2o.init()
Te <- read.csv("F:/Rdata/Test.csv") ## import testing data
Te <- as.h2o(Te)
Te[,2] <- as.factor(Te[,2])
perf <- h2o.performance(model, Te) ## test model
ERROR: Unexpected HTTP Status code: 404 Not Found (url = http://localhost:54321/3/ModelMetrics/models/DeepLearning_model_R_1533035975237_1/frames/RTMP_sid_8185_2)
ERROR MESSAGE:
Object 'DeepLearning_model_R_1533035975237_1' not found in function: predict for argument: model
You can use the below to save and retrieve the model.
# build the model
model <- h2o.deeplearning(params)
# save the model
model_path <- h2o.saveModel(object=model, path=getwd(), force=TRUE)
print(model_path)
# /tmp/mymodel/DeepLearning_model_R_1441838096933
# load the model
saved_model <- h2o.loadModel(model_path)
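(A short usage sketch based on the code in the question.) Once the model has been reloaded with h2o.loadModel(), the performance call from the question should work against the restored object:
perf <- h2o.performance(saved_model, Te)  # evaluate the restored model on the test frame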
Reference - http://docs.h2o.ai/h2o/latest-stable/h2o-docs/save-and-load-model.html
Hope this helps,
ND

How to save a model when using MXnet

I am using MXnet for training a CNN (in R) and I can train the model without any error with the following code:
model <- mx.model.FeedForward.create(symbol=network,
                                     X=train.iter,
                                     ctx=mx.gpu(0),
                                     num.round=20,
                                     array.batch.size=batch.size,
                                     learning.rate=0.1,
                                     momentum=0.1,
                                     eval.metric=mx.metric.accuracy,
                                     wd=0.001,
                                     batch.end.callback=mx.callback.log.speedometer(batch.size, frequency = 100)
)
But as this process is time-consuming, I run it on a server during the night and I want to save the model for the purpose of using it after finishing the training.
I used:
save(list = ls(), file="mymodel.RData")
and
mx.model.save("mymodel", 10)
But neither of them saves the model correctly! For example, when I load "mymodel.RData", I cannot predict the labels for the test set!
Another example is when I load the "mymodel.RData" and try to plot it with the following code:
graph.viz(model$symbol$as.json())
I get the following error:
Error in model$symbol$as.json() : external pointer is not valid
Can anybody give me a solution for saving and then loading this model for future use?
Thanks
You can save the model by adding a checkpoint callback (epoch.end.callback) to the training call:
model <- mx.model.FeedForward.create(symbol=network,
                                     X=train.iter,
                                     ctx=mx.gpu(0),
                                     num.round=20,
                                     array.batch.size=batch.size,
                                     learning.rate=0.1,
                                     momentum=0.1,
                                     eval.metric=mx.metric.accuracy,
                                     wd=0.001,
                                     epoch.end.callback=mx.callback.save.checkpoint("model_prefix"),
                                     batch.end.callback=mx.callback.log.speedometer(batch.size, frequency = 100)
)
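(A follow-up sketch, not part of the original answer; the epoch number is illustrative.) Checkpoints written by mx.callback.save.checkpoint() can be read back with mx.model.load(), which takes the file prefix and the epoch number to restore:
model <- mx.model.load("model_prefix", iteration = 20)  # reload the checkpoint saved after epoch 20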
A mxnet model is an R list, but its first component is not an R object but a C++ pointer and can't be saved and reloaded as an R object. Therefore, the model needs to be serialized to behave as an actual R object. The serialized object is also a list, but its first object is a text string containing model information.
To save a model:
modelR <- mx.serialize(model)
save(modelR, file="~/model1.RData")
To retrieve it and use it again:
load("~/model1.RData", verbose=TRUE)
model <- mx.unserialize(modelR)
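(A quick check, reusing the call from the question.) After mx.unserialize(), the external pointer is valid again, so for example the symbol can be visualized:
graph.viz(model$symbol$as.json())  # works again on the restored model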
The best practice for saving a snapshot of your training progress is to use save_checkpoint (http://mxnet.io/api/python/module.html#mxnet.module.Module.save_checkpoint) as part of the callback after every training epoch. In R the equivalent command is probably mx.callback.save.checkpoint, but I'm not using R and am not sure about its usage.
Using these snapshots can also allow you to take advantage of the low-cost option of using the AWS Spot market (https://aws.amazon.com/ec2/spot/pricing/), which, for example, now offers an instance with 16 K80 GPUs for $3.8/hour compared to the on-demand price of $14.4. Such 80%-90% discounts are common in the spot market and can optimize the speed and cost of your training, as long as you use these snapshots correctly.
