How do you kick off an Azure ML experiment based on a scheduler? - azure-machine-learning-studio

I created an experiment within Azure ML Studio and published it as a web service. I need the experiment to run nightly, or possibly several times a day. I currently have Azure Mobile Services and Azure WebJobs as part of the application and need to create an endpoint to retrieve data from the published web service. Obviously, the whole point is to make sure I have updated data.
I see answers like "use Azure Data Factory", but I need specifics on how to actually set up the scheduler.
I explain my dilemma further here: https://social.msdn.microsoft.com/Forums/en-US/e7126c6e-b43e-474a-b461-191f0e27eb74/scheduling-a-machine-learning-experiment-and-publishing-nightly?forum=AzureDataFactory
Thanks.

Can you clarify what you mean by "experiment to run nightly"?
When you publish the experiment as a web service, it should give you an API key and the endpoint to consume the service. From that point on you should be able to call this API with the key, and it will return the result by processing it through the model you initially trained. So all you have to do is make the call from your web/mobile/desktop application at the desired times.
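For example, since the application already includes Azure WebJobs, a scheduled WebJob (e.g. a CRON schedule in settings.job) could make this call nightly. Below is a minimal C# sketch assuming a classic request/response endpoint; the endpoint URL, API key, and input columns are placeholders, so substitute the real values and input schema from your web service's API help page.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

// Minimal sketch of a scheduled job that calls a published Azure ML Studio (classic)
// request/response endpoint. Endpoint URL, API key and input columns are placeholders;
// copy the real values from the web service's API help page.
class MlScoringJob
{
    private static readonly HttpClient Http = new HttpClient();

    public static async Task RunNightlyAsync()
    {
        const string endpoint = "https://<region>.services.azureml.net/workspaces/<workspace>/services/<service>/execute?api-version=2.0&details=true"; // placeholder
        const string apiKey = "<your-api-key>"; // placeholder

        // The payload must match the columns your experiment's web service input expects.
        const string payload = @"{
          ""Inputs"": {
            ""input1"": {
              ""ColumnNames"": [""feature1"", ""feature2""],
              ""Values"": [[""1.0"", ""2.0""]]
            }
          },
          ""GlobalParameters"": {}
        }";

        var request = new HttpRequestMessage(HttpMethod.Post, endpoint)
        {
            Content = new StringContent(payload, Encoding.UTF8, "application/json")
        };
        request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", apiKey);

        var response = await Http.SendAsync(request);
        response.EnsureSuccessStatusCode();

        var result = await response.Content.ReadAsStringAsync();
        Console.WriteLine(result); // persist or forward the scored output as needed
    }
}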
If the issue is retraining the data model nightly to improve the predictions, then that is a different process. It used to be available only through the UI; now you can achieve it programmatically by using the retraining API.
Kindly find the usage of this here.
Hope this helps!
Mert

Related

How to get Azure Analysis Service Size across subscription level

We have 60+ Azure Analysis Services instances in our subscription, so how can we get the size of each one? We want to automate this and publish it in front-end reports where users can see the information.
It is difficult to get the size of each cube by logging into each Azure Analysis Services instance using SSMS.
Using the Azure Metrics memory option is also not an accurate approach.
I am following the blog below, but it did not let me run the script in PowerShell ISE; I got an error.
How we get analysis services database size in azure analysis services Tabular model
Is there any option to get the size of all Azure Analysis Services instances using a single script or a REST API?
Thanks for your help.
Regards,
Brahma

Which API should be used for querying Application Insights trace logs?

Our ASP.NET Core app logs trace messages to App Insights. We need to be able to query them and filter by some customDimensions. However, I have found 3 APIs and am not sure which one to use:
App Insights REST API
Azure Log Analytics REST API
Azure Data Explorer .NET SDK (Preview)
Firstly, I don't understand the relationships between these options. I thought that App Insights persisted its data to Log Analytics; but if that's the case I would expect to only be able to query through Log Analytics.
Regardless, I just need to know which is the best to use and I wish that documentation were clearer. My instinct says to use the App Insights API, since we only need data from App Insights and not from other sources.
The difference between #1 and #2 is mostly historical and converging.
Application Insights existed as a product before Log Analytics, and the two were based on different underlying database technologies.
Both Application Insights and Log Analytics have converged to use the same underlying database, based on ADX (Azure Data Explorer), and the exact same REST API service to query either. So while your #1 and #2 links are different, they point to effectively the same backend service, run by the same team; the pathing/semantics are just subtly different in where the service looks depending on the inbound request.
Both AI and LA introduce the concept of multi-tenancy and a specific set of tables/schema on top of their Azure resources. They effectively hide the underlying database from you and make it look like one giant database.
There is now the option (and it is the suggested approach) to have your Application Insights data placed in a Log Analytics workspace:
https://learn.microsoft.com/en-us/azure/azure-monitor/app/create-workspace-resource
This lets you put the data for multiple AI applications/components into the SAME Log Analytics workspace, to simplify querying across different apps, etc.
Think of ADX as any other kind of database offering. If you create an ADX cluster instance, you have to create databases, manage schema, manage users, etc. AI and LA do all of that for you. So in your question above, the third link to the ADX SDK would be used to talk to an ADX cluster/database directly. I don't believe you can use it to talk directly to any AI/LA resources, but there are ways to enable an ADX cluster to query AI/LA data:
https://learn.microsoft.com/en-us/azure/data-explorer/query-monitor-data
And there are ways to have an LA/AI query also join with an ADX cluster, using the adx keyword in your query:
https://learn.microsoft.com/en-us/azure/azure-monitor/logs/azure-monitor-data-explorer-proxy
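As a concrete illustration of option #1, here is a minimal C# sketch that runs a KQL query against the Application Insights REST API and filters on a customDimensions field. The application id and API key come from the resource's API Access blade; the 'Category' dimension and the query itself are just assumed examples, so adapt them to your own schema.

using System;
using System.Net.Http;
using System.Threading.Tasks;

// Minimal sketch: query trace logs via the Application Insights REST API (option #1).
// The app id, API key and the customDimensions field name are placeholders.
class TraceQuery
{
    public static async Task<string> GetRecentTracesAsync()
    {
        const string appId = "<application-id>"; // from the API Access blade
        const string apiKey = "<api-key>";       // from the API Access blade

        // KQL query; adjust the customDimensions filter to your own schema.
        var kql = Uri.EscapeDataString(
            "traces | where customDimensions.Category == 'Orders' | order by timestamp desc | take 50");

        using (var http = new HttpClient())
        {
            http.DefaultRequestHeaders.Add("x-api-key", apiKey);

            var url = $"https://api.applicationinsights.io/v1/apps/{appId}/query?query={kql}";
            var response = await http.GetAsync(url);
            response.EnsureSuccessStatusCode();

            // The result is a JSON payload of tables/columns/rows.
            return await response.Content.ReadAsStringAsync();
        }
    }
}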

Azure ML: How to retrain the Azure ML model using data from a third-party system every time the Azure ML web service is invoked

I have a requirement wherein I need to fetch historical data from a third party system which is exposed as a web service and train the model on that data.
I am able to achieve the above requirement by using the "Execute Python Script" node and invoking the web service from Python.
The main problem arises because I need to fetch data from the third-party system every time the Azure ML web service is invoked: the data in the third-party system keeps changing, so my Azure ML model should always be trained on the latest data.
I have gone through the link (https://learn.microsoft.com/en-us/azure/machine-learning/machine-learning-retrain-a-classic-web-service), but I am not sure how it applies to my requirement, since in my case a new historical data set should be obtained every time the Azure ML web service is invoked.
Please suggest.
Thanks.
I recommend that you:
look into the new Azure Machine Learning Service. Azure ML Studio (classic) is quite limited in what you can do, and
consider creating a historical training set stored in Azure blob storage for the purposes of training, so that you only need to fetch from the 3rd party system when you have a trained model and would like to score the new records. To do so, check out this high-level guidance on how to use Azure Data Factory to create datasets for Azure Machine Learning
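A rough sketch of that second suggestion, assuming the third-party service can return its history over HTTP (the URL, connection string, container and blob names are all placeholders): fetch the data on a schedule and store it as a dated blob, so training reads from blob storage instead of calling the third-party system on every scoring request.

using System;
using System.IO;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Azure.Storage.Blobs;

// Minimal sketch: snapshot third-party history into a blob container used as the
// historical training set. URL, connection string and names are placeholders.
class TrainingDataSnapshot
{
    public static async Task CaptureAsync()
    {
        const string thirdPartyUrl = "https://thirdparty.example.com/api/history"; // placeholder
        const string storageConnectionString = "<storage-connection-string>";      // placeholder

        using var http = new HttpClient();
        var data = await http.GetStringAsync(thirdPartyUrl); // assuming CSV/JSON text

        var container = new BlobContainerClient(storageConnectionString, "training-data");
        await container.CreateIfNotExistsAsync();

        // One dated blob per snapshot; Azure Data Factory (or your training pipeline)
        // can then pick up the latest file for retraining.
        var blobName = $"history-{DateTime.UtcNow:yyyyMMdd}.csv";
        using var stream = new MemoryStream(Encoding.UTF8.GetBytes(data));
        await container.UploadBlobAsync(blobName, stream);
    }
}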

SOAP API method for SSRS report usage statistics

I've developed a .Net web application using the SOAP API (ReportingService2010) to list details on SSRS reports.
For the next step, I need to get some usage statistics, such as which reports are accessed the most and how frequently they are run.
I know you can get some of this from the ExecutionLog table, but I'd like to avoid the SQL approach. Is there a way to get usage statistics like this directly through the SOAP API?
Thanks.
Nope. Best you can get from the stock API is snapshot/cache history information. You could, however, extend the existing API (pulling the information from ExecutionLogStorage). Even though you'd still be building the methods yourself, at least you could wrap them up nicely within the existing webservice.
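If you do end up extending the API, the helper behind your custom method could be as simple as the sketch below, which reads usage counts from the ExecutionLog3 view in the ReportServer catalog database (the connection string and the 30-day window are placeholders).

using System;
using System.Data.SqlClient;

// Minimal sketch of a helper a custom web service method could call: report usage
// counts over the last 30 days from the ExecutionLog3 view in the ReportServer database.
class ReportUsage
{
    public static void PrintMostUsedReports(string reportServerConnectionString)
    {
        const string sql = @"
            SELECT TOP 20 ItemPath, COUNT(*) AS Executions
            FROM dbo.ExecutionLog3
            WHERE TimeStart >= DATEADD(day, -30, GETUTCDATE())
            GROUP BY ItemPath
            ORDER BY Executions DESC;";

        using (var connection = new SqlConnection(reportServerConnectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine("{0}: {1}", reader["ItemPath"], reader["Executions"]);
                }
            }
        }
    }
}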

Looking for guidance on WF4

We have a rather large document routing framework that's currently implemented in SharePoint (with a large set of cumbersome SP workflows), and it's running into the edge of what SP can do easily. It's slated for a rewrite into .NET.
I've spent the past week or so reading and watching WF4 discussions and demonstrations to get an idea of WF4, because I think it's the right solution. I'm having difficulty envisioning how the system will be configured, though, so I need guidance on a few points from people with experience:
Let's say I have an approval that has to be made on a document. When the wf starts, it'll decide who should approve, and send that person an email notification. Inside the notification, the user would have an option to load an ASP.NET page to approve or reject. The workflow would then have to be resumed from the send email step. If I'm planning on running this as a WCF WF Service, how do I get back into the correct instance of the paused service? (considering I've configured AppFabric and persistence) I somewhat understand the idea of a correlation handle, but don't think it's meant for this case.
Logging and auditing will be key for this system. I see the AppFabric makes event logs of this data, but I haven't cracked the underlying database--is it simple to use for reporting, or should I create custom logging activities to put around my actions? From experience, which would you suggest?
Thanks for any guidance you can provide. I'm happy to give further examples if necessary.
To send messages to a specific workflow instance you need to set up message correlation between your different Receive activities. In order to do that you need some unique value as part of your message data.
The AppFabric logging works well, but if you want to create a custom logging solution you don't need to add activities to your workflow. Instead, you create a custom TrackingParticipant to do the work for you. How you store the data is then up to you.
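For example, a minimal custom TrackingParticipant could look like the sketch below; the console output is only a stand-in for whatever audit store you choose, and the registration shown is for a self-hosted WorkflowServiceHost (under AppFabric/IIS you would add it through a behavior or configuration instead).

using System;
using System.Activities.Tracking;

// Minimal sketch: every tracking record emitted by the workflow runtime flows through
// Track(), where you can write it to your own audit store. Console output is a stand-in.
public class AuditTrackingParticipant : TrackingParticipant
{
    protected override void Track(TrackingRecord record, TimeSpan timeout)
    {
        // record may be a WorkflowInstanceRecord, ActivityStateRecord, CustomTrackingRecord, etc.
        Console.WriteLine("{0:o} [{1}] {2}", record.EventTime, record.GetType().Name, record);
    }
}

// Registration for a self-hosted WorkflowServiceHost:
//   var host = new WorkflowServiceHost(workflowService, baseAddress);
//   host.WorkflowExtensions.Add(new AuditTrackingParticipant());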
Your scenario is very similar to the one I used for the Introduction to Workflow Services Hands On Lab in the Visual Studio 2010 Training Kit. I suggest you take a look at the hands on lab or the Windows Server AppFabric / Workflow Services Demo - Contoso HR sample code.
