Register model from Azure Machine Learning run without downloading to local file - azure-machine-learning-studio

A model was trained on a remote compute using azureml.core Experiment as follows:
experiment = Experiment(ws, name=experiment_name)
src = ScriptRunConfig(<...>)
run = experiment.submit(src)
run.wait_for_completion(show_output=True)
How can a model trained in this run be registered with Azure Machine Learning workspace without being downloaded to a local file first?

The model can be registered using the register_model method available on the run object (see the Run class documentation in the SDK reference).
Example:
model = run.register_model(model_name='sklearn-iris', model_path='outputs/model.joblib')
A sample notebook that sets up training experiments and registers the resulting models can also be used as a reference.
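As a fuller sketch of the pattern: the training script writes the fitted model into the ./outputs directory (which AML uploads to run history automatically), and the driver then registers that uploaded file after the run completes. The file name model.pkl, the placeholder model object, and the use of the stdlib pickle module here are illustrative assumptions; the azureml-core import is guarded so the sketch also runs outside an AML environment.

```python
import os
import pickle

# --- inside the training script ---
# Anything written under ./outputs is uploaded to the run history
# automatically, so the model never needs a manual download step.
os.makedirs("outputs", exist_ok=True)

model = {"weights": [0.1, 0.2]}  # placeholder for a fitted model (assumption)
with open(os.path.join("outputs", "model.pkl"), "wb") as f:
    pickle.dump(model, f)

# --- back in the driver, after run.wait_for_completion() ---
# Guarded import so the sketch also runs where azureml-core is absent.
try:
    from azureml.core import Run  # noqa: F401
    # registered = run.register_model(model_name="sklearn-iris",
    #                                 model_path="outputs/model.pkl")
except ImportError:
    pass
```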

Related

Register Trained Model in Azure Machine Learning

I'm training an Azure Machine Learning model using a script via the Python SDK. I can see the environment being created and the model being trained in std_log in the outputs + logs folder. After training, I try to dump the model, but I don't see it in any folder.
If possible, I want to register the model directly into the Models section in Azure ML rather than dumping it to a pickle file.
I'm using the following link for reference: https://learn.microsoft.com/en-us/azure/machine-learning/tutorial-1st-experiment-sdk-train

How do you reference the models repository in Azure Machine Learning from within a Python script step?

I know there's a $MODEL_VERSION variable when you create a scoring script using AKS, but for a script task (for example, a Python script step) I can't find documentation on how to deserialize a model into an object from within a script step running on a Linux AML compute cluster.
Is there a way to use models I've published to the Models tab in the workspace (say the name is mymodel) from within a Python script step?
For example in this code snippet:
import joblib
model = joblib.load(file_path + "mymodel")
I'm looking for what relative or absolute *nix path to use for file_path during a run where mymodel has already been registered in the workspace.
You can interact with the registered models via the AML SDK.
Assuming you have the SDK installed and authenticated, or that you are running this on an AML compute, you can use the following code to get the workspace:
from azureml.core import Workspace, Model
ws = Workspace.from_config()
Once you have the workspace, you can list all the models:
for model in Model.list(ws):
    print(model.name, model.version)
And for a specific model, you can get the path and load it back to memory with joblib:
import joblib
model_path = Model.get_model_path(model_name='RegressionDecisionTree', version=1, _workspace=ws)
model_obj = joblib.load(model_path)
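For orientation, on an AML compute Model.get_model_path typically resolves to the conventional relative location azureml-models/&lt;model_name&gt;/&lt;version&gt;/&lt;file&gt;. The helper below only illustrates that path convention; it is not part of the SDK, and the model and file names are assumptions carried over from the snippets above.

```python
import os

def conventional_model_path(model_name: str, version: int, file_name: str) -> str:
    """Illustrates the relative layout AML uses for registered models on a
    compute target. Model.get_model_path is the supported way to resolve
    this and should be preferred in real code."""
    return os.path.join("azureml-models", model_name, str(version), file_name)

# On Linux this yields azureml-models/RegressionDecisionTree/1/mymodel
print(conventional_model_path("RegressionDecisionTree", 1, "mymodel"))
```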

Using Microsoft Custom Translator (free tier), we could build a customized model, but can we test the model?

We are trying Microsoft Custom Translator.
We followed the quick start document and succeeded in building a model.
However, it seems we can train a model but not deploy it on the free plan.
In that case, how can we use the trained model? Is it possible to download it and try it locally?
Edit:
I am using a dictionary with only one word, and I don't see a system test result for the model. Is this expected?
The free tier is for testing and viewing results: it lets you check what the output of a deployed model would look like, so you can decide whether it is effective enough to be worth deploying.
You can view the system test results by selecting a project, opening the Models tab of that project, locating the model you want to use, and selecting the Test tab, as described in the documentation.
This allows you to try the service and evaluate its quality without deploying it, but if you want to deploy the model you will have to switch to a paid tier.

Save and deploy R model in Watson Studio

I've developed a little model in RStudio in a Watson Studio environment in IBM Cloud (https://dataplatform.cloud.ibm.com).
I'm trying to save the model in RStudio and deploy it in Watson to publish it as an API, but I can't find a way to save it from RStudio.
Is it possible to do what I'm trying to do in the current version?
I've found the following documentation, but I guess it refers to a different version of Watson Studio:
https://content-dsxlocal.mybluemix.net/docs/content/SSAS34_current/local-dev/ml-r-models.htm
I couldn't find a way to save the model through Watson Studio's own functionality.
However, I was able to export it in PMML format using the R pmml library and then deploy the PMML as a service.
install.packages("pmml")
library(pmml)
pmml(LogModel, model.name = "Churn_Logistic_Regression_Model", app.name = "Churn_LogReg", description = "Regression model for demo", copyright = NULL, transforms = NULL, unknownValue = NULL, weights = NULL)
Some further documentation:
https://www.rdocumentation.org/packages/pmml/versions/1.5.7/topics/pmml.glm
Adding to Gabo's answer from the Watson Studio perspective, and covering the deployment part with IBM Watson Machine Learning:
What you need to do first is convert the model using pmml.
For example, run the following code in RStudio on Watson Studio, or in an R notebook in Watson Studio:
install.packages("nnet")
library(nnet)
ird <- data.frame(rbind(iris3[,,1], iris3[,,2], iris3[,,3]),
                  species = factor(c(rep("s", 50), rep("c", 50), rep("v", 50))))
samp <- c(sample(1:50,25), sample(51:100,25), sample(101:150,25))
ir.nn2 <- nnet(species ~ ., data = ird, subset = samp, size = 2, rang = 0.1,
               decay = 5e-4, maxit = 200)
install.packages("pmml")
library(pmml)
pmmlmodel <- pmml(ir.nn2)
saveXML(pmmlmodel,file = "IrisNet.xml")
saveXML() writes the IrisNet.xml file to the RStudio workspace (or the local space of the R notebook); you then need to download this file to your local machine.
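Before uploading, it can be worth sanity-checking the exported XML locally. A minimal sketch in Python using only the standard library; the tiny inline document stands in for IrisNet.xml, and the PMML 4.2 namespace shown is an assumption about what the R pmml package emits.

```python
import xml.etree.ElementTree as ET

# Stand-in for the exported IrisNet.xml (assumption: PMML 4.2 namespace).
pmml_text = """<?xml version="1.0"?>
<PMML xmlns="http://www.dmg.org/PMML-4_2" version="4.2">
  <Header description="iris neural net"/>
  <DataDictionary numberOfFields="1">
    <DataField name="species" optype="categorical" dataType="string"/>
  </DataDictionary>
</PMML>"""

root = ET.fromstring(pmml_text)
# Element tags carry the namespace prefix; strip it to get the local name.
local_name = root.tag.split("}")[-1]
print(local_name)            # PMML
print(root.get("version"))   # 4.2
```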
Now, to deploy it to the Watson Machine Learning service:
In your Watson Studio project, click Add to Project -> Watson Machine Learning Model, name your model, and select the WML service you want to use.
Select the From File tab.
Drag and drop the XML file.
Click Create; the model is saved to the WML service you selected.
You can now deploy this model to the WML service from the Deployment tab.
Simply name your deployment and click Save.
The model is now deployed, and you can start consuming it via its REST API.
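Consuming the deployment then comes down to an authenticated POST of a JSON body to the deployment's scoring URL. The sketch below only builds and inspects such a body; the fields/values shape assumes the WML v4 input_data format (payload shapes differ between WML API versions), the field names are assumptions based on the iris training data above, and the URL and token are placeholders, not real values.

```python
import json

# Placeholders; real values come from the WML service credentials page.
scoring_url = "https://<wml-host>/v4/deployments/<deployment-id>/predictions"

# Assumed WML v4 tabular payload: column names plus rows of values.
payload = {
    "input_data": [{
        "fields": ["Sepal.L.", "Sepal.W.", "Petal.L.", "Petal.W."],
        "values": [[5.1, 3.5, 1.4, 0.2]],
    }]
}

body = json.dumps(payload)
print(body)

# A real call would look something like:
# requests.post(scoring_url, data=body,
#               headers={"Authorization": "Bearer <token>",
#                        "Content-Type": "application/json"})
```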

Upload Saved ML Model in R (local) to Azure Machine Learning Studio

I am trying to reduce my development headaches for creating an ML web service on Azure ML Studio. One of the things that stumped me was whether we can just upload .rda files in the workbench and load them via an R Script module.
But an uploaded .rda file can't be connected directly to the R Script block. There is another way (it works for uploading packages that aren't available in Azure's R repositories): using a .zip. But I haven't found any resource on how to access an .rda file inside the .zip.
I have two options here: make the .zip approach work, or find any other workaround where I can directly use my .rda model. If someone could guide me on how to go forward, I would appreciate it.
Note: Currently I'm creating models via the "Create R Model" block, training them, and saving them so that I can build a predictive web service. But for models like random forest, I'm not sure how the randomness affects the resulting models (the local and Azure versions differ, and setting the seed isn't very helpful). I'm on a tight schedule, and Azure ML seems boxed in for iterating and automating the ML workflow (or maybe I'm doing it wrong).
Here is an example of uploading a .rda file for scoring:
https://gallery.cortanaintelligence.com/Experiment/Womens-Health-Risk-Assessment-using-the-XGBoost-classification-algorithm-1

Resources