Access parameter names in torch

I need to convert a torch model to pytorch. Since the torch model has layers that pytorch doesn't support (such as inception and LRN), it is not possible to use the built-in APIs. To convert such models from torch to pytorch, I need to implement those layers in pytorch, save all the parameters of the torch model to an hdf5 file, and reload them into python as a dictionary. I'm new to lua, and I would like to ask how to access the 'nickname' of all the parameters in torch.
btw, this can be easily done in pytorch, for example:
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(in_channels=3, out_channels=32, kernel_size=7, stride=1, bias=False),
    nn.ReLU(inplace=True),
    nn.BatchNorm2d(num_features=32, affine=True),
    nn.MaxPool2d(kernel_size=2, stride=2)
)

for key in model.state_dict():
    value = model.state_dict().get(key)
    print(key, value.size())
If all the parameters are accessible in a dictionary format, reconstructing the model in pytorch can be done with the following code:
model = MyNewInceptionModel()
model.load_state_dict(param_dict)

Related

Defining a Torch Class in R package "torch"

This post is related to my earlier question How to define a Python Class which uses R code, but called from rTorch?.
I came across the torch package in R (https://torch.mlverse.org/docs/index.html), which allows you to define a DataSet class. Yet I also need to be able to define a model class, like class MyModelClass(torch.nn.Module) in Python. Is this possible with the torch package in R?
When I tried to do it with reticulate it did not work - there were conflicts like
ImportError: /User/homes/mreichstein/miniconda3/envs/r-torch/lib/python3.6/site-packages/torch/lib/libtorch_python.so: undefined symbol: _ZTINSt6thread6_StateE
It also would not make much sense, since torch isn't wrapping Python.
But it also loses a lot of the flexibility that rTorch has (but see my problem in the post linked above).
Thanks for any help!
Markus
You can do that directly using R's torch package which seems quite comprehensive at least for the basic tasks.
Neural networks
Here is an example of how to create something like nn.Sequential:
library(torch)

# example layer sizes (input, hidden, output); pick whatever fits your data
D_in <- 784
H <- 100
D_out <- 10

model <- nn_sequential(
  nn_linear(D_in, H),
  nn_relu(),
  nn_linear(H, D_out)
)
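Since the package mirrors pytorch quite closely, you can also list the model's parameters, much like state_dict() in pytorch. A minimal sketch, assuming the sequential model just defined (the exact parameter names can differ between torch versions):
params <- model$parameters                  # a named list of torch tensors
for (name in names(params)) {
  cat(name, ":", paste(dim(params[[name]]), collapse = " x "), "\n")
}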
Below is a custom nn_module (a.k.a. torch.nn.Module) which is a simple dense (torch.nn.Linear) layer (source):
library(torch)

dense <- nn_module(
  classname = "dense",
  # the initialize function runs whenever we instantiate the model
  initialize = function(in_features, out_features) {
    # just for you to see when this function is called
    cat("Calling initialize!")
    # we use nn_parameter to indicate that those tensors are special
    # and should be treated as parameters by `nn_module`.
    self$w <- nn_parameter(torch_randn(in_features, out_features))
    self$b <- nn_parameter(torch_zeros(out_features))
  },
  # this function is called whenever we call our model on input.
  forward = function(x) {
    cat("Calling forward!")
    torch_mm(x, self$w) + self$b
  }
)
model <- dense(3, 1)
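Instantiating the module prints "Calling initialize!", and calling it on a tensor triggers forward(). A minimal usage sketch, assuming the dense(3, 1) model above:
x <- torch_randn(10, 3)   # 10 example rows with 3 features each
y <- model(x)             # prints "Calling forward!" and returns a 10 x 1 tensor
print(dim(y))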
Another example, this time using torch.nn.Linear (nn_linear) layers to create a neural network (source):
two_layer_net <- nn_module(
  "two_layer_net",
  initialize = function(D_in, H, D_out) {
    self$linear1 <- nn_linear(D_in, H)
    self$linear2 <- nn_linear(H, D_out)
  },
  forward = function(x) {
    x %>%
      self$linear1() %>%
      nnf_relu() %>%
      self$linear2()
  }
)
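And here is a minimal sketch of one training step with this module, loosely following the torch vignettes; the sizes, learning rate, and loss function are my own assumptions, not part of the original answer:
library(torch)
library(magrittr)                        # provides the %>% used in forward()

D_in <- 100; H <- 50; D_out <- 10        # hypothetical sizes
net <- two_layer_net(D_in, H, D_out)

x <- torch_randn(32, D_in)               # a batch of 32 random inputs
y <- torch_randn(32, D_out)              # random targets, just for illustration

optimizer <- optim_sgd(net$parameters, lr = 1e-4)

y_pred <- net(x)
loss <- nnf_mse_loss(y_pred, y)

optimizer$zero_grad()
loss$backward()
optimizer$step()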
Also there are other resources like here (using flow control and weight sharing).
Other
Looking at the reference, it seems most of the layers are already provided (I didn't notice transformer layers at a quick glance, but this is minor).
As far as I can tell, the basic blocks for neural networks and their training etc. are in place (even JIT, so sharing between languages should be possible).

Save the Deep learning model by R language

I have built a deep learning model with the h2o package in R. The model performs well and I want to save it. However, I have tried all kinds of methods and failed. The functions "save()" and "save.image()" are provided in R's base package, and I used "save()" to store my model. But when I want to use the saved model on new data, it says the "model" object is not found in the function. I have been confused by this problem for a few days. If you have any good ideas, please tell me. Thanks for reading~
load("F:/R/Rstudy/myfile") ##download the saved file
library(h2o)
h2o.init()
Te <- read.csv("F:/Rdata/Test.csv") ## import testing data
Te <- as.h2o(Te)
Te[,2] <- as.factor(Te[,2])
perf <- h2o.performance(model, Te) ## test model
ERROR: Unexpected HTTP Status code: 404 Not Found (url = http://localhost:54321/3/ModelMetrics/models/DeepLearning_model_R_1533035975237_1/frames/RTMP_sid_8185_2)
ERROR MESSAGE:
Object 'DeepLearning_model_R_1533035975237_1' not found in function: predict for argument: model
You can use the below to save and retrieve the model.
Build the model:
model <- h2o.deeplearning(params)
Save the model:
model_path <- h2o.saveModel(object=model, path=getwd(), force=TRUE)
print(model_path)
# prints something like /tmp/mymodel/DeepLearning_model_R_1441838096933
Load the model:
saved_model <- h2o.loadModel(model_path)
Reference - http://docs.h2o.ai/h2o/latest-stable/h2o-docs/save-and-load-model.html
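To connect this back to the code in the question, here is a minimal sketch of reloading the saved model and evaluating it on new data (the file path and factor column come from the question; model_path comes from h2o.saveModel above):
library(h2o)
h2o.init()

saved_model <- h2o.loadModel(model_path)        # path returned by h2o.saveModel()

Te <- as.h2o(read.csv("F:/Rdata/Test.csv"))     # import the testing data
Te[, 2] <- as.factor(Te[, 2])

perf <- h2o.performance(saved_model, Te)        # evaluate the reloaded model
print(perf)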
Hope this helps,
ND

Use Azure custom-vision trained model with tensorflow.js

I've trained a model with Azure Custom Vision and downloaded the TensorFlow files for Android
(see: https://learn.microsoft.com/en-au/azure/cognitive-services/custom-vision-service/export-your-model). How can I use this with tensorflow.js?
I need a model (a .pb file) and weights (a .json file). However, Azure gives me a .pb file and a text file with tags.
From my research I understand that there are different kinds of .pb files, but I can't find which type Azure Custom Vision exports.
I found the tfjs converter, which converts a TensorFlow SavedModel (is the *.pb file from Azure a SavedModel?) or Keras model to a web-friendly format. However, I need to fill in "output_node_names" (how do I get these?). I'm also not 100% sure that my .pb file for Android is the same thing as a "tf_saved_model".
I hope someone has a tip or a starting point.
Just parroting what I said here to save you a click. I do hope that the option to export directly to tfjs is available soon.
These are the steps I did to get an exported TensorFlow model working for me:
Replace PadV2 operations with Pad. This python function should do it. input_filepath is the path to the .pb model file and output_filepath is the full path of the updated .pb file that will be created.
import tensorflow as tf

def ReplacePadV2(input_filepath, output_filepath):
    graph_def = tf.GraphDef()
    with open(input_filepath, 'rb') as f:
        graph_def.ParseFromString(f.read())

    for node in graph_def.node:
        if node.op == 'PadV2':
            node.op = 'Pad'
            del node.input[-1]
            print("Replaced PadV2 node: {}".format(node.name))

    with open(output_filepath, 'wb') as f:
        f.write(graph_def.SerializeToString())
Install tensorflowjs 0.8.6 or earlier. Converting frozen models is deprecated in later versions.
When calling the converter, set --input_format as tf_frozen_model and set output_node_names as model_outputs. This is the command I used.
tensorflowjs_converter --input_format=tf_frozen_model --output_json=true --output_node_names='model_outputs' --saved_model_tags=serve path\to\modified\model.pb folder\to\save\converted\output
Ideally, tf.loadGraphModel('path/to/converted/model.json') should now work (tested for tfjs 1.0.0 and above).
Partial answer:
Trying to achieve the same thing; here is the start of an answer on how to make use of the output_node_names:
tensorflowjs_converter --input_format=tf_frozen_model --output_node_names='model_outputs' model.pb web_model
I am not yet sure how to incorporate this into the same code. Do you have anything, @Kasper Kamperman?

How to save a model when using MXnet

I am using MXnet for training a CNN (in R) and I can train the model without any error with the following code:
model <- mx.model.FeedForward.create(
  symbol = network,
  X = train.iter,
  ctx = mx.gpu(0),
  num.round = 20,
  array.batch.size = batch.size,
  learning.rate = 0.1,
  momentum = 0.1,
  eval.metric = mx.metric.accuracy,
  wd = 0.001,
  batch.end.callback = mx.callback.log.speedometer(batch.size, frequency = 100)
)
But as this process is time-consuming, I run it on a server overnight, and I want to save the model so that I can use it after the training finishes.
I used:
save(list = ls(), file="mymodel.RData")
and
mx.model.save("mymodel", 10)
But neither of them saves the model properly! For example, when I load "mymodel.RData", I cannot predict the labels for the test set!
Another example is when I load the "mymodel.RData" and try to plot it with the following code:
graph.viz(model$symbol$as.json())
I get the following error:
Error in model$symbol$as.json() : external pointer is not valid
Can anybody give me a solution for saving and then loading this model for future use?
Thanks
You can save the model by
model <- mx.model.FeedForward.create(
  symbol = network,
  X = train.iter,
  ctx = mx.gpu(0),
  num.round = 20,
  array.batch.size = batch.size,
  learning.rate = 0.1,
  momentum = 0.1,
  eval.metric = mx.metric.accuracy,
  wd = 0.001,
  epoch.end.callback = mx.callback.save.checkpoint("model_prefix"),
  batch.end.callback = mx.callback.log.speedometer(batch.size, frequency = 100)
)
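The checkpoint callback writes a symbol/parameter file pair after each epoch, which can later be restored with mx.model.load. A minimal sketch, assuming the "model_prefix" used above and that training ran for all 20 rounds:
# reload the checkpoint saved at the end of round 20
model <- mx.model.load("model_prefix", 20)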
An mxnet model is an R list, but its first component is not an R object but a C++ pointer, and it can't be saved and reloaded as an R object. Therefore, the model needs to be serialized to behave as an actual R object. The serialized object is also a list, but its first component is a text string containing model information.
To save a model:
modelR <- mx.serialize(model)
save(modelR, file="~/model1.RData")
To retrieve it and use it again:
load("~/model1.RData", verbose=TRUE)
model <- mx.unserialize(modelR)
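After unserializing, the restored model should behave like the object originally returned by mx.model.FeedForward.create. A minimal sketch of using it again (test.iter is a placeholder data iterator, analogous to train.iter above):
preds <- predict(model, test.iter)       # predictions on new data
graph.viz(model$symbol$as.json())        # the plot from the question should work again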
The best practice for saving a snapshot of your training progress is to use save_checkpoint (http://mxnet.io/api/python/module.html#mxnet.module.Module.save_checkpoint) as part of the callback after every training epoch. In R the equivalent command is probably mx.callback.save.checkpoint, but I'm not using R and am not sure about its usage.
Using these snapshots also allows you to take advantage of the low-cost option of the AWS Spot market (https://aws.amazon.com/ec2/spot/pricing/), which for example now offers an instance with 16 K80 GPUs for $3.8/hour, compared to the on-demand price of $14.4. Such 80%-90% discounts are common in the spot market and can optimize the speed and cost of your training, as long as you use these snapshots correctly.

import R forecast library JAR files into java

I am trying to import the R package 'forecast' in NetBeans to use its functions. I have managed to make the JRI connection and also to import the javaGD library, and I have experimented with it with some success. The problem with the forecast package is that I cannot find the corresponding JAR files to include as a library in my project. I am loading it normally with re.eval("library(forecast)"), but when I call one of the library's functions, a null value is returned. Although I am quite sure that the code is correct, I am posting it just in case.
tnx in advance
Rengine re = new Rengine(Rargs, false, null);
System.out.println("rengine created, waiting for R!");
if (!re.waitForR())
{
    System.out.println("cannot load R");
    return;
}
re.eval("library(forecast)");
re.eval("library(tseries)");
re.eval("myData <- read.csv('C:/.../I-35E-NB_1.csv', header=F, dec='.', sep=',')");
System.out.println(re.eval("myData"));
re.eval("timeSeries <- ts(myData,start=1,frequency=24)");
System.out.println("this is time series object : " + re.eval("timeSeries"));
re.eval("fitModel <- auto.arima(timeSeries)");
REXP fc = re.eval("forecast(fitModel, n=20)");
System.out.println("this is the forecast output values: " + fc);
You did not convert the values from R into Java. You should first create a numeric vector of the auto.arima output in R, and then use the method .asDoubleArray() to read it into Java.
I gave a complete example here: How can I load add-on R libraries into JRI and execute from Java?, which shows exactly how to use the auto.arima function in Java using JRI.
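To make that concrete, here is a minimal sketch of the R side (note that forecast() takes the horizon as h, not n; the choice of which components to extract is my own assumption). These plain numeric vectors are what you would then read into Java with .asDoubleArray():
fitModel <- auto.arima(timeSeries)
fc <- forecast(fitModel, h = 20)

pointForecasts <- as.numeric(fc$mean)        # the 20 point forecasts
lower95 <- as.numeric(fc$lower[, 2])         # lower 95% prediction bound
upper95 <- as.numeric(fc$upper[, 2])         # upper 95% prediction bound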

Resources