How to get Azure Analysis Services size across a subscription - azure-analysis-services

We have 60+ Azure Analysis Services instances in our subscription, so how can we get the size of each one? We want to automate this and publish the results in front-end reports where users can see the information.
It is difficult to get the size of each cube by logging in to every Azure Analysis Services instance with SSMS.
Using the Azure metrics memory option is also not an accurate approach.
I followed the blog below, but it did not let me run the script in PowerShell ISE; I got an error.
How we get analysis services database size in azure analysis services Tabular model
Is there any option to get the size of all Azure Analysis Services instances using a single script or a REST API?
Thanks for your help.
Regards,
Brahma
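One possible starting point, not a complete answer: enumerate every Analysis Services server in the subscription through the Azure Resource Manager REST API and then drill into each server for per-database size (which, as far as I know, still has to come from each server's DMVs via XMLA/TOM). A minimal sketch in Python, assuming the azure-identity and requests packages; the subscription ID is a placeholder:

```python
# Minimal sketch: list every Azure Analysis Services server in a subscription
# via the Azure Resource Manager REST API. The subscription ID is a placeholder;
# per-database size would still need a DMV/XMLA query against each server.
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION_ID = "<your-subscription-id>"

# Acquire an ARM token (DefaultAzureCredential picks up CLI/managed identity/env credentials).
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

url = (
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
    "/providers/Microsoft.AnalysisServices/servers?api-version=2017-08-01"
)
resp = requests.get(url, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()

for server in resp.json().get("value", []):
    # Each entry carries the server name, SKU, and state.
    print(server["name"], server["sku"]["name"], server["properties"].get("state"))
```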

Related

Which API should be used for querying Application Insights trace logs?

Our ASP.NET Core app logs trace messages to App Insights. We need to be able to query them and filter by some customDimensions. However, I have found three APIs and am not sure which one to use:
App Insights REST API
Azure Log Analytics REST API
Azure Data Explorer .NET SDK (Preview)
Firstly, I don't understand the relationships between these options. I thought that App Insights persisted its data to Log Analytics; but if that's the case I would expect to only be able to query through Log Analytics.
Regardless, I just need to know which is the best to use and I wish that documentation were clearer. My instinct says to use the App Insights API, since we only need data from App Insights and not from other sources.
The difference between #1 and #2 is mostly historical, and the two are converging.
Application Insights existed as a product before Log Analytics, and the two were based on different underlying database technologies.
Both Application Insights and Log Analytics have since converged on the same underlying database, based on ADX (Azure Data Explorer), and the exact same REST API service to query either. So while your #1 and #2 links are different, they point to effectively the same service backend built by the same team; only the pathing/semantics differ subtly in where the service looks depending on the inbound request.
Both AI and LA introduce the concept of multi-tenancy and a specific set of tables/schema on top of their Azure resources. They effectively hide the entire database from you and make it look like one giant database.
There is now the option (and it is the suggested approach) to have your Application Insights data placed in a Log Analytics workspace:
https://learn.microsoft.com/en-us/azure/azure-monitor/app/create-workspace-resource
This lets you put the data for multiple AI applications/components into the same Log Analytics workspace, which simplifies querying across different apps, etc.
Think of ADX as any other kind of database offering. If you create an ADX cluster instance, you have to create databases, manage schema, manage users, etc. AI and LA do all that for you. So in your question above, the third link (the ADX SDK) would be used to talk to an ADX cluster/database directly. I don't believe you can use it to talk directly to any AI/LA resources, but there are ways to enable an ADX cluster to query AI/LA data:
https://learn.microsoft.com/en-us/azure/data-explorer/query-monitor-data
And there are ways to have an LA/AI query also join with an ADX cluster using the adx() keyword in your query:
https://learn.microsoft.com/en-us/azure/azure-monitor/logs/azure-monitor-data-explorer-proxy
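For the original question (filtering traces by customDimensions), here is a minimal sketch of the Log Analytics route using the azure-monitor-query Python package, assuming a workspace-based Application Insights resource; the workspace ID and the custom dimension key are placeholders:

```python
# Minimal sketch, assuming a workspace-based Application Insights resource and the
# azure-monitor-query and azure-identity packages. Workspace ID and the
# customDimensions key are placeholders.
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# KQL against the workspace: App Insights traces land in the AppTraces table
# (the same data the classic App Insights endpoint exposes as 'traces').
query = """
AppTraces
| where tostring(Properties["MyCustomKey"]) == "some-value"
| project TimeGenerated, Message, Properties
| take 50
"""

response = client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",
    query=query,
    timespan=timedelta(days=1),
)

for table in response.tables:
    for row in table.rows:
        print(row)
```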

Azure Data Explorer vs Azure Synapse Analytics (a.k.a SQL DW)

I am designing a data management strategy for a big IoT company. Our use case is fairly typical: we ingest large quantities of data, analyze them, and produce datasets that customers can query to learn about the insights they need.
I am looking at both Azure Data Explorer and the Data Warehouse side of Azure Synapse Analytics (a.k.a. Azure SQL Data Warehouse) and find many commonalities. Yes, they use different languages and different query engines on the backend, but both serve as a "serving layer" that customers use to query read-only data at large scale.
I could not find any clear guidance from Microsoft about how to choose between the two, or whether it makes sense to use them together. If so, what is the best use case or type of data for each of the services?
If you can enlighten me please share your thoughts here. If you know about some guidance about the matter please reply with a link.
Both the classic and the modern data warehouse patterns involve first designing a well-curated data model, with documented entities and their attributes, and creating a scheduled ETL pipeline that transforms and aggregates the raw data, big and small, into that model. Then you load and serve it. The curated data model provides stability, consistency, and reliability when consuming these entities across an enterprise.
Azure Data Explorer was designed as an analytical data platform for telemetry. In this workload you do not aggregate the data first; you keep it close to the raw format because you do not want to lose data. It allows you to deal with the unexpected nature of security attacks, malfunctions, competitive behaviors, and, in general, the unknowns, because you can look at the fresh raw data from different angles with a lot of flexibility.
This is why Azure Data Explorer is the storage for Microsoft Telemetry and also a growing set of analytical solutions like: Azure Monitor, Azure Security Center, Azure Sentinel, Azure Time Series Insights, IoT Central, PlayFab gaming analytics, Windows Intune Analytics, Customer Insights, Teams Education analytics and more.
It provides high-performance analytics on raw data, with schema-on-read capability over textual, semi-structured, and structured data.
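To make the schema-on-read point concrete, here is a minimal sketch of querying raw JSON telemetry in ADX with the azure-kusto-data Python package; the cluster URL, database, table, and column names are illustrative only:

```python
# Minimal sketch of schema-on-read in ADX, assuming the azure-kusto-data package;
# cluster URL, database, table, and column names are illustrative only.
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(
    "https://<cluster>.<region>.kusto.windows.net"
)
client = KustoClient(kcsb)

# The raw telemetry stays close to its original shape (a dynamic/JSON column);
# structure is imposed at query time rather than by an upfront ETL step.
query = """
RawTelemetry
| extend payload = parse_json(Payload)
| where tostring(payload.deviceId) == "device-42"
| summarize avg(todouble(payload.temperature)) by bin(Timestamp, 1h)
"""

response = client.execute("<database-name>", query)
for row in response.primary_results[0]:
    print(row)
```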
Quite a few of our partners and customers are adopting ADX for the same reasons.
Check out the overview webinar that describes these concepts in detail.
Azure Synapse Analytics packages SQL DW, ADF, and Spark so that all the data warehouse pattern components are highly integrated and easier to work with and manage. As we announced at the Azure Data Explorer Virtual Event, Azure Data Explorer is being integrated into Azure Synapse Analytics alongside the SQL and Spark pools to cater to telemetry workloads: real-time analytics on high-velocity, high-volume, high-variety data.
Check out some of the IoT cases: Buhler, Daimler (video, story), Bosch, AGL; more leading IoT platforms are adopting Azure Data Explorer for this purpose. Reach out to us if you need additional help.

Azure ML: How to retrain the Azure ML model using data from a third-party system every time the Azure ML web service is invoked

I have a requirement wherein I need to fetch historical data from a third-party system, which is exposed as a web service, and train the model on that data.
I am able to achieve this by using the "Execute Python Script" node and invoking the web service from Python.
The main problem arises when I need to fetch data from the third-party system every time the Azure ML web service is invoked: the data in the third-party system keeps changing, so my Azure ML model should always be trained on the latest data.
I have gone through the link (https://learn.microsoft.com/en-us/azure/machine-learning/machine-learning-retrain-a-classic-web-service), but I am not sure how to apply it to my requirement, since in my case a new historical data set should be obtained every time the Azure ML web service is invoked.
Please suggest.
Thanks.
I recommend that you:
look into the new Azure Machine Learning service, since Azure ML Studio (classic) is quite limited in what you can do, and
consider creating a historical training set stored in Azure Blob Storage for the purposes of training, so that you only need to fetch from the third-party system when you have a trained model and would like to score new records (a minimal sketch of that fetch-and-store step follows below). To do so, check out this high-level guidance on how to use Azure Data Factory to create datasets for Azure Machine Learning.
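Here is a minimal sketch of that second recommendation in Python, assuming the requests and azure-storage-blob packages; the third-party URL, storage account, container, and blob names are placeholders:

```python
# Minimal sketch: pull historical records from the third-party web service once
# (or on a schedule) and persist them to Azure Blob Storage as the training set.
# URL, storage account, container, and blob names are placeholders.
import requests
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

# 1. Fetch the historical data from the third-party system.
resp = requests.get("https://thirdparty.example.com/api/history", timeout=60)
resp.raise_for_status()

# 2. Land it in blob storage so training jobs read a stable snapshot
#    instead of hitting the third-party system on every scoring call.
blob_service = BlobServiceClient(
    account_url="https://<storage-account>.blob.core.windows.net",
    credential=DefaultAzureCredential(),
)
container = blob_service.get_container_client("training-data")
container.upload_blob("history/latest.json", resp.content, overwrite=True)
```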

Deploy Azure Face API for IoT Edge

Is it possible to deploy an Azure Face API trained model to IoT Edge, like Custom Vision?
If it is, how can I do that?
Updating this topic...
Now you can download a Docker image with the Face API and run it on-premises.
Here you can find the documentation for testing this feature, which is currently in public preview.
Here you can see the list of all the Azure Cognitive Services that are available as Docker Containers.
This new feature is basically targeting enterprises that:
Are not willing or able to load all their data into the cloud for processing or storage;
Are subject to regulatory requirements on handling customer data;
Have data that they aren’t comfortable sharing and processing in a cloud, regardless of security;
Have weak bandwidth or disconnected environments with high latency and TPS issues.
Model export is not a feature supported by the Face API.
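Once the container is running on the edge device, you call it the same way you would call the cloud service, just against the local endpoint. A minimal sketch in Python, assuming the container exposes the same REST surface as the cloud Face API and listens on port 5000; the host and file name are placeholders:

```python
# Minimal sketch of calling the Face container running locally on an edge device.
# Assumes the container exposes the same REST surface as the cloud service on
# port 5000; host and image file name are placeholders.
import requests

LOCAL_ENDPOINT = "http://localhost:5000"

with open("photo.jpg", "rb") as f:
    resp = requests.post(
        f"{LOCAL_ENDPOINT}/face/v1.0/detect",
        headers={"Content-Type": "application/octet-stream"},
        data=f.read(),
    )
resp.raise_for_status()
print(resp.json())  # list of detected faces
```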

How do you kick off an Azure ML experiment based on a scheduler?

I created an experiment within Azure ML Studio and published it as a web service. I need the experiment to run nightly or possibly several times a day. I currently have Azure Mobile Services and Azure WebJobs as part of the application and need to create an endpoint to retrieve data from the published web service. Obviously, the whole point is to make sure I have updated data.
I see answers like "use Azure Data Factory", but I need specifics on how to actually set up the scheduler.
I explain my dilemma further at https://social.msdn.microsoft.com/Forums/en-US/e7126c6e-b43e-474a-b461-191f0e27eb74/scheduling-a-machine-learning-experiment-and-publishing-nightly?forum=AzureDataFactory
Thanks.
Can you clarify what you mean by "experiment to run nightly"?
When you publish the experiment as a web service, it should give you an API key and the endpoint to consume the service. From that point on you should be able to call this API with the key, and it will return the result after processing the input through the model you initially trained. So all you have to do is make the call from your web/mobile/desktop application at the desired times.
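A minimal sketch of that call in Python; the endpoint URL, API key, and input columns are placeholders, and the exact request format for your experiment is shown on the classic web service dashboard:

```python
# Minimal sketch of calling a published Azure ML Studio (classic) request-response
# web service on a schedule. Endpoint URL, API key, and input schema are placeholders.
import requests

ENDPOINT = "https://<region>.services.azureml.net/workspaces/<ws>/services/<svc>/execute?api-version=2.0"
API_KEY = "<api-key>"

payload = {
    "Inputs": {
        "input1": {
            "ColumnNames": ["feature1", "feature2"],
            "Values": [["1.0", "2.0"]],
        }
    },
    "GlobalParameters": {},
}

resp = requests.post(
    ENDPOINT,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
)
resp.raise_for_status()
print(resp.json())

# To run this nightly, trigger the script from whatever scheduler you already have
# (an Azure WebJob on a CRON schedule, a timer-triggered Function, etc.).
```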
If the issue is retraining the data model nightly to improve the predictions, then that is a different process. It used to be available through the UI only, but now you can achieve it programmatically by using the retraining API.
You can find the usage details here.
Hope this helps!
Mert
