Azure ML: How to retrain an Azure ML model using data from a third-party system every time the Azure ML web service is invoked

I have a requirement to fetch historical data from a third-party system, which is exposed as a web service, and train the model on that data.
I am able to achieve this by using an "Execute Python Script" module and invoking the web service from Python.
The main problem arises because I need to fetch data from the third-party system every time the Azure ML web service is invoked: the data in the third-party system keeps changing, so my Azure ML model should always be trained on the latest data.
I have gone through this link (https://learn.microsoft.com/en-us/azure/machine-learning/machine-learning-retrain-a-classic-web-service), but I am not sure how to apply it to my requirement, since for me a new historical data set has to be obtained every time the Azure ML web service is invoked.
Please suggest.
Thanks.

I recommend that you:
look into the newer Azure Machine Learning service; Azure ML Studio (classic) is quite limited in what you can do, and
consider keeping a historical training set in Azure Blob Storage for the purposes of training, so that you only need to fetch from the third-party system when you already have a trained model and want to score new records. To do so, check out the high-level guidance on how to use Azure Data Factory to create datasets for Azure Machine Learning.
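The second suggestion can be sketched roughly as follows: fetch a snapshot from the third-party system on a schedule and append it to Blob Storage, then train from the blobs. This is only a sketch assuming the `azure-storage-blob` package; the connection string, container name, and the third-party fetch itself are placeholders you would fill in.

```python
import datetime

def training_blob_name(prefix="training-data"):
    """Build a dated blob name so each snapshot of the third-party
    data is kept separately in the container."""
    today = datetime.date.today().isoformat()
    return f"{prefix}/{today}.csv"

def upload_snapshot(conn_str, container, csv_bytes):
    # azure-storage-blob is assumed to be installed; conn_str and
    # container are placeholders for your storage account details.
    from azure.storage.blob import BlobServiceClient
    service = BlobServiceClient.from_connection_string(conn_str)
    blob = service.get_blob_client(container=container, blob=training_blob_name())
    blob.upload_blob(csv_bytes, overwrite=True)
```

A scheduler (Azure Data Factory, a WebJob, or a cron job) would call `upload_snapshot` with the bytes fetched from the third-party web service, and the training pipeline would read from the container instead of calling the third-party system on every scoring request.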

Related

How to get Azure Analysis Service Size across subscription level

We have 60+ Azure Analysis Services instances in our subscription; how can we get the size of each one? We want to automate this and publish it in front-end reports where users can see the information.
It is difficult to get the size of each cube in each Azure Analysis Services instance by logging in to every server using SSMS.
Going by the Azure metrics memory option is not an accurate option either.
I followed the blog below, but it did not let me run the script in PowerShell ISE; I got an error.
How we get analysis services database size in azure analysis services Tabular model
Is there any option to get the size of all Azure Analysis Services instances using a single script or a REST API?
Thanks for your help.
Regards,
Brahma
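On the REST API part of the question: Azure Resource Manager does expose a single call that enumerates every Analysis Services server in a subscription, which at least removes the need to log in to each one. A sketch, assuming the `requests` package and a valid Azure AD bearer token (per-database size would still need a DMV query against each server):

```python
def list_servers_url(subscription_id, api_version="2017-08-01"):
    """ARM endpoint that enumerates all Azure Analysis Services servers
    in a subscription in one call."""
    return (
        "https://management.azure.com/subscriptions/"
        f"{subscription_id}/providers/Microsoft.AnalysisServices/servers"
        f"?api-version={api_version}"
    )

def list_servers(subscription_id, bearer_token):
    # requests is assumed installed; bearer_token is an Azure AD token
    # with Reader access on the subscription.
    import requests
    resp = requests.get(
        list_servers_url(subscription_id),
        headers={"Authorization": f"Bearer {bearer_token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("value", [])
```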

Which API should be used for querying Application Insights trace logs?

Our ASP.NET Core app logs trace messages to App Insights. We need to be able to query them and filter by some customDimensions. However, I have found 3 APIs and am not sure which one to use:
App Insights REST API
Azure Log Analytics REST API
Azure Data Explorer .NET SDK (Preview)
Firstly, I don't understand the relationships between these options. I thought that App Insights persisted its data to Log Analytics; but if that's the case I would expect to only be able to query through Log Analytics.
Regardless, I just need to know which is the best to use and I wish that documentation were clearer. My instinct says to use the App Insights API, since we only need data from App Insights and not from other sources.
The difference between #1 and #2 is mostly historical and converging.
Application Insights existed as a product before Log Analytics, and the two were based on different underlying database technologies.
Both Application Insights and Log Analytics have converged on the same underlying database, based on ADX (Azure Data Explorer), and on the exact same REST API service to query either. So while your #1 and #2 links are different, they point to effectively the same service backend run by the same team; only the pathing/semantics of where the service looks differ subtly depending on the inbound request.
Both AI and LA introduce the concept of multi-tenancy and a specific set of tables/schema on top of their Azure resources. They effectively hide the underlying database from you and make it look like one giant database.
There is now the possibility (and it is the suggested approach) to have your Application Insights data placed in a Log Analytics workspace:
https://learn.microsoft.com/en-us/azure/azure-monitor/app/create-workspace-resource
This lets you put the data for multiple AI applications/components into the SAME Log Analytics workspace, to simplify querying across different apps, etc.
Think of ADX as any other kind of database offering. If you create an ADX cluster instance, you have to create databases, manage schema, manage users, etc. AI and LA do all of that for you. So in your question above, the third link (the ADX SDK) would be used to talk to an ADX cluster/database directly. I don't believe you can use it to talk directly to any AI/LA resources, but there are ways to enable an ADX cluster to query AI/LA data:
https://learn.microsoft.com/en-us/azure/data-explorer/query-monitor-data
And there are ways to have a LA/AI query also join with an ADX cluster, using the adx keyword in your query:
https://learn.microsoft.com/en-us/azure/azure-monitor/logs/azure-monitor-data-explorer-proxy
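For the original question (filtering traces by customDimensions from an app that only uses App Insights), option #1 is the natural fit. A minimal sketch assuming the `requests` package; the app id and API key come from the API Access blade of the Application Insights resource, and the dimension name/value are placeholders:

```python
def appinsights_query_url(app_id):
    """Query endpoint of the Application Insights REST API (option #1)."""
    return f"https://api.applicationinsights.io/v1/apps/{app_id}/query"

def query_traces(app_id, api_key, dim_name, dim_value):
    # requests is assumed installed; the KQL filters the traces table
    # on a customDimensions entry, as asked in the question.
    import requests
    kql = (
        "traces "
        f"| where customDimensions['{dim_name}'] == '{dim_value}' "
        "| take 50"
    )
    resp = requests.get(
        appinsights_query_url(app_id),
        headers={"x-api-key": api_key},
        params={"query": kql},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```

Because AI and LA share the same backend, essentially the same KQL would also work through the Log Analytics endpoint if the resource is workspace-based.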

Is it possible to train Azure OCR?

I am trying out Azure Cognitive Services OCR to scan in an identity document. It works fairly well, but I was wondering if it is possible to train the OCR engine, or somehow link it to a learning service, to improve character recognition?
I don't think you can train Azure OCR, but there is a newer Azure service called Form Recognizer which gives better results than the previous OCR service, and you can also train it on custom data.
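Once a custom Form Recognizer model is trained, analyzing a document against it is an HTTP round trip. A sketch assuming the `requests` package and the v2.1 REST API; the endpoint, key, and model id are placeholders from your own Cognitive Services resource:

```python
def analyze_url(endpoint, model_id, api_version="v2.1"):
    """Form Recognizer analyze endpoint for a custom model trained
    on your own labeled documents."""
    return f"{endpoint}/formrecognizer/{api_version}/custom/models/{model_id}/analyze"

def analyze_document(endpoint, key, model_id, pdf_bytes):
    # requests is assumed installed; endpoint/key/model_id come from
    # your Form Recognizer resource and training run.
    import requests
    resp = requests.post(
        analyze_url(endpoint, model_id),
        headers={
            "Ocp-Apim-Subscription-Key": key,
            "Content-Type": "application/pdf",
        },
        data=pdf_bytes,
        timeout=30,
    )
    resp.raise_for_status()
    # The service replies 202 Accepted; poll the Operation-Location
    # header URL to retrieve the extraction result.
    return resp.headers["Operation-Location"]
```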

How do you kick off an Azure ML experiment based on a scheduler?

I created an experiment within Azure ML Studio and published it as a web service. I need the experiment to run nightly, or possibly several times a day. I currently have Azure Mobile Services and Azure WebJobs as part of the application, and I need to create an endpoint to retrieve data from the published web service. Obviously, the whole point is to make sure I have updated data.
I see answers like "use Azure Data Factory", but I need specifics on how to actually set up the scheduler.
I explain my dilemma further at https://social.msdn.microsoft.com/Forums/en-US/e7126c6e-b43e-474a-b461-191f0e27eb74/scheduling-a-machine-learning-experiment-and-publishing-nightly?forum=AzureDataFactory
Thanks.
Can you clarify what you mean by "experiment to run nightly"?
When you publish the experiment as a web service, it should give you an API key and the endpoint to consume the service. From that point on you should be able to call this API with the key, and it will return the result after processing it through the model you initially trained. So all you have to do is make the call from your web/mobile/desktop etc. application at the desired times.
If the issue is retraining the data model nightly to improve the prediction, then that is a different process. It used to be available only through the UI; now you can achieve it programmatically by using the retraining API.
Kindly find the usage of this here.
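The scoring call described above can be sketched as follows, assuming the `requests` package. The endpoint URL and API key come from the web service dashboard in ML Studio (classic); "input1" is the default input name, but check your own service's help page:

```python
def build_payload(column_names, rows):
    """Request body shape used by Azure ML Studio (classic)
    request-response endpoints."""
    return {
        "Inputs": {"input1": {"ColumnNames": column_names, "Values": rows}},
        "GlobalParameters": {},
    }

def score(endpoint_url, api_key, column_names, rows):
    # requests is assumed installed; a scheduler (e.g. a WebJob) can
    # call this at the desired times.
    import requests
    resp = requests.post(
        endpoint_url,
        json=build_payload(column_names, rows),
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()
```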
Hope this helps!
Mert

How to use data mining feature of SQL Server 2008 with ASP.Net

How can I use the data mining features of SQL Server 2008 with ASP.NET?
Take a look at SqlServerDataMining.com, a site run by Microsoft's SQL Server Data Mining team.
In a nutshell, you want to:
Build cubes to model your data
Build a prediction calculator (or whatever kind of calculator you're looking to use)
Expose that via a web service
Call the web service in your app
For example, if you want to model whether or not a customer is likely to abandon their shopping cart, you would figure out which characteristics of a shopper you want to capture and analyze. You set up your cubes to model which characteristics are indicative of a soon-to-be-bailing-out shopper. During the shopping process, your web app sends the shopper's characteristics to the SSAS server, which returns a guess about whether or not the shopper is going to abandon the cart. Your web app can then take proactive measures before they leave.
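The round trip in steps 3-4 can be sketched as below (shown in Python for brevity; the same HTTP call works from ASP.NET). The service URL, feature names, and response field are all hypothetical placeholders for whatever web service you expose in front of the SSAS mining model:

```python
def abandonment_features(shopper):
    """Flatten the shopper characteristics you chose to model into the
    payload the (hypothetical) prediction web service expects."""
    return {
        "PagesViewed": shopper["pages_viewed"],
        "MinutesOnSite": shopper["minutes_on_site"],
        "CartValue": shopper["cart_value"],
    }

def predict_abandonment(service_url, shopper):
    # requests is assumed installed; service_url points at the web
    # service wrapping your SSAS prediction model.
    import requests
    resp = requests.post(service_url, json=abandonment_features(shopper), timeout=10)
    resp.raise_for_status()
    return resp.json()["WillAbandon"]
```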
All of the steps in here are kinda complicated - your best bet is probably to refine your question to focus on the areas you're responsible for.
