How to do a single image inference using a trained CNN model (torch7 format)?

I have obtained a trained CNN model in torch7 format.
How can I use that already trained model to run inference on a single image and return predictions?

Use the torch.load function with the file path to load your model, for example: model = torch.load("/tmp/yourmodel.t7")
Then load your image, for example: img = image.load("/tmp/yourimage.png")
Check that the image size matches your model's input size; if it does not, you can use the image package's scaling functions to resize the image (see https://github.com/torch/image).
Finally, call the forward function to get the prediction, for example: res = model:forward(img)
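The steps above can be combined into one minimal sketch. The paths and the 224x224 input size are placeholders, not taken from the question; substitute your own model's expected dimensions.

```lua
-- Minimal single-image inference sketch for a torch7 model.
require 'torch'
require 'nn'
require 'image'

local model = torch.load('/tmp/yourmodel.t7')
model:evaluate()  -- switch off training-only behaviour (dropout, batch norm)

local img = image.load('/tmp/yourimage.png')
-- Resize if the image does not match the model's input; the 224x224
-- size here is only an example:
img = image.scale(img, 224, 224)

local res = model:forward(img)
print(res)
```

Calling model:evaluate() before forward is worth remembering: without it, layers like dropout keep their training behaviour and the predictions can be noisy.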

Related

Best way to save a trained model in Flux.jl?

I have a model which I have trained and I would like to save it for future use and distributing to others. What is the best way to save a trained model with Flux.jl?
If your model does not have things like dynamically created/sized layers, you should be able to save just the weights instead of serializing the whole model. This can be much more robust than using BSON.jl or the Serialization stdlib to serialize the whole model (both of which can be very fragile).
The weights can be obtained from a model by weights=collect(params(cpu(model))) and loaded back into the model by Flux.loadparams!(model, weights). Thus, one just needs to save a Vector of numeric arrays to disk, instead of more complicated Julia-side objects in the model. So I would suggest a pattern like:
function make_model(config)
...define layers, put them in a chain, etc...
return model
end
# train model
...
# collect weights
weights=collect(params(cpu(model)))
# save them to disk somehow...
Then when it's time to reload the model,
weights = # load them from disk
fresh_model = make_model(config)
Flux.loadparams!(fresh_model, weights)
Note that this approach means you can't e.g. add a layer to make_model and reload old weights; they will no longer be the right size. So you need to version your code and your weights and ensure they match up.
Last week I helped make a new package LegolasFlux.jl to make this pattern easier (in particular, providing a way to use Arrow to save the weights to disk along with any other configuration parameters, losses, etc, you would like to save). It should be registered in two days.
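The save-the-weights pattern described above can be sketched end to end. The Serialization stdlib is used here purely as an illustration of "save them to disk somehow" (it is safe in this case because only a plain vector of numeric arrays is serialized, not the model object itself), and make_model(config) is the constructor from the pattern above:

```julia
# Sketch of the weights-only save/load pattern, assuming a
# `make_model(config)` function as described in the answer.
using Flux, Serialization

model = make_model(config)
# ... train `model` ...

# Collect plain numeric arrays (moved to CPU first).
weights = collect(params(cpu(model)))
serialize("weights.jls", weights)

# Later: rebuild the architecture from code, then restore the weights.
fresh_model = make_model(config)
Flux.loadparams!(fresh_model, deserialize("weights.jls"))
```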
According to the Flux.jl docs (https://fluxml.ai/Flux.jl/stable/saving/) the best way to save a trained model is using BSON.jl by doing the following:
julia> using Flux
julia> model = Chain(Dense(10,5,relu),Dense(5,2),softmax)
Chain(Dense(10, 5, NNlib.relu), Dense(5, 2), NNlib.softmax)
julia> using BSON: @save
julia> @save "mymodel.bson" model
and then you can load the saved model by doing:
julia> using Flux
julia> using BSON: @load
julia> @load "mymodel.bson" model
julia> model
Chain(Dense(10, 5, NNlib.relu), Dense(5, 2), NNlib.softmax)

R scale function for new inputs

I am training a neural network model in R. I have scaled my inputs in the training data set using the scale() function. However, I do not know how to use this function to transform new inputs to the model. Is there some parameter/formula that can be extracted from the use of the scale function?
Thanks.

Does Tree package in R provide class label as the result of prediction?

I'm a newbie in R, and have a question regarding "Tree" package.
I have created a classification model with the package, and now I want to try prediction. But I have no idea how to do the prediction or the class labeling.
All I've done so far is create the model with my training set and test set, and figure out its accuracy. But is there a way to do the actual prediction?
Like many models in R, you can use the predict function on new data points in order to get predictions for them, as well as class labels. More specifically, for a tree object, there is a specific doc page for its optional arguments.
In general, to get predictions on new data you can use this command
predict(your_tree_model, new_data)
and to get class labels
predict(your_tree_model, new_data, type = "class")
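Putting the two calls together, a small end-to-end sketch might look like this. The built-in iris data is used purely as an illustration; substitute your own training and test sets.

```r
# Fit a classification tree and predict on held-out rows (iris is
# only an example dataset, not from the question).
library(tree)

train_idx <- sample(nrow(iris), 100)
fit <- tree(Species ~ ., data = iris[train_idx, ])

new_data <- iris[-train_idx, ]
probs  <- predict(fit, new_data)                  # matrix of class probabilities
labels <- predict(fit, new_data, type = "class")  # factor of class labels
head(labels)
```

Without type = "class" the predict method returns per-class probabilities, which is often what you want for thresholding or evaluating the model with ROC curves.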

Used saveRDS to save a model but not enough memory to readRDS?

I created a model based on a very large dataset and had the program save the results using
saveRDS(featVarLogReg.mod, file="featVarLogReg.mod.RDS")
Now I'm trying to load the model to evaluate, but readRDS runs out of memory.
featVarLR.mod <- readRDS(file = "featVarLogReg.mod.RDS")
Is there a way to load the file that takes less memory? Or at least the same amount of memory that was used to save it?
The RDS file ended up being 1.5GB in size for this logistic regression using caret. My other models using the same dataset and very similar caret models were 50MB in size so I can load them.
The caret linear model saves the training data in the model object. You could try setting returnData = FALSE in the trainControl object passed to train. I don't recall if this fixed my issue in the past.
https://www.rdocumentation.org/packages/caret/versions/6.0-77/topics/trainControl
You could also try to just export the coefficients into a dataframe and use a manual formula to score new data.
Use coef(model_object)
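Both suggestions can be sketched together. The data frame name and formula below are placeholders (the question does not show the original training call):

```r
# 1) Keep the training data out of the fitted model object.
library(caret)

ctrl <- trainControl(method = "cv", number = 5, returnData = FALSE)
featVarLogReg.mod <- train(target ~ ., data = train_df,      # placeholders
                           method = "glm", family = "binomial",
                           trControl = ctrl)

# 2) Or export only the coefficients and score new data manually.
coefs <- coef(featVarLogReg.mod$finalModel)
write.csv(data.frame(term = names(coefs), estimate = coefs),
          "logreg_coefficients.csv", row.names = FALSE)
```

A coefficient table of a few kilobytes replaces the 1.5GB object entirely, at the cost of reimplementing the scoring formula (for logistic regression, the linear predictor passed through the logistic function) yourself.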

Large neural network model size in neuralnet R

I see very large file sizes when I try to save my trained NN model built with neuralnet in R. I know one main reason is that the model object contains extra information I may not need in order to reuse the model, such as the fitted results (net$net.result) and some other elements saved with the network.
Is there any way I can remove them from the model before saving?