ETA of Cosmos DB Autopilot being available for existing database/ARM template - azure-cosmosdb

When will Autopilot be available for existing databases and containers?
When will Autopilot be configurable via ARM template?
Is there an ETA for these features? We need to decide whether to wait for the official Autopilot or implement it ourselves if it takes a long time to reach GA.

ARM support for Autopilot, as well as migration capability for existing database and container resources, will be available when this feature goes GA.
We are not disclosing the exact date for GA, but it will be fairly soon.
Thanks.

This is now GA; the ARM template below works for me:
Create an Azure Cosmos DB account for Core (SQL) API with autoscale
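If you'd rather do it from code, autoscale throughput can also be set at container creation with the .NET SDK (v3.9 or later). A minimal sketch; the endpoint, key, database, and container names are illustrative:

```csharp
using Microsoft.Azure.Cosmos;

// Illustrative endpoint, key, and names; substitute your own.
CosmosClient client = new CosmosClient(
    "https://<account>.documents.azure.com:443/", "<account-key>");

Database database = await client.CreateDatabaseIfNotExistsAsync("mydb");

// Autoscale with a maximum of 4000 RU/s (the container scales between 400 and 4000 RU/s).
Container container = await database.CreateContainerIfNotExistsAsync(
    new ContainerProperties(id: "mycontainer", partitionKeyPath: "/pk"),
    ThroughputProperties.CreateAutoscaleThroughput(autoscaleMaxThroughput: 4000));
```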

I still get 'Error updating offer for resource dbs/... {"code":500,"body":{"code":"InternalServerError","message":"Message: {\"Errors\":[\"An unknown error occurred while processing this request. If the issue persists, please contact Azure Support:
Existing container, UK South

Related

How to operate the meta in NebulaGraph Dashboard?

I am using NebulaGraph Dashboard. Why can't I select the meta service when scaling?
The Metad service stores the metadata of the NebulaGraph database. If the Metad service fails, the entire cluster may break down. In addition, the amount of data processed by the Metad service is small, so scaling it is not recommended. For these reasons, we disabled operations on the Metad service in Dashboard, to prevent the cluster from becoming unavailable due to user misoperation.

Which API should be used for querying Application Insights trace logs?

Our ASP.NET Core app logs trace messages to App Insights. We need to be able to query them and filter by some customDimensions. However, I have found 3 APIs and am not sure which one to use:
App Insights REST API
Azure Log Analytics REST API
Azure Data Explorer .NET SDK (Preview)
Firstly, I don't understand the relationships between these options. I thought that App Insights persisted its data to Log Analytics; but if that's the case I would expect to only be able to query through Log Analytics.
Regardless, I just need to know which is the best to use and I wish that documentation were clearer. My instinct says to use the App Insights API, since we only need data from App Insights and not from other sources.
The difference between #1 and #2 is mostly historical, and the two are converging.
Application Insights existed as a product before Log Analytics, and the two were based on different underlying database technologies.
Both Application Insights and Log Analytics have since converged on the same underlying database, based on ADX (Azure Data Explorer), and the exact same REST API service to query either. So while your #1 and #2 links are different, they point to effectively the same backend service, run by the same team; the pathing/semantics differ subtly in where the service looks depending on the inbound request.
Both AI and LA introduce the concept of multi-tenancy and a specific set of tables/schema on top of their Azure resources. They effectively hide the entire database from you and make it look like one giant database.
There is now the possibility (and it is the suggested approach) to have your Application Insights data placed in a Log Analytics workspace:
https://learn.microsoft.com/en-us/azure/azure-monitor/app/create-workspace-resource
This lets you put the data for multiple AI applications/components into the SAME Log Analytics workspace, which simplifies querying across different apps, etc.
Think of ADX as any other kind of database offering. If you create an ADX cluster instance, you have to create databases, manage schemas, manage users, etc. AI and LA do all of that for you. So in your question above, the third link, to the ADX SDK, would be used to talk to an ADX cluster/database directly. I don't believe you can use it to talk directly to any AI/LA resources, but there are ways to enable an ADX cluster to query AI/LA data:
https://learn.microsoft.com/en-us/azure/data-explorer/query-monitor-data
And there are ways to have an LA/AI query join against an ADX cluster, using the adx keyword in your query:
https://learn.microsoft.com/en-us/azure/azure-monitor/logs/azure-monitor-data-explorer-proxy
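To make the convergence above concrete: for a workspace-based AI resource you can query trace logs through the shared Azure Monitor query endpoint with the Azure.Monitor.Query .NET package. A minimal sketch, assuming a workspace-based resource whose traces land in the AppTraces table (customDimensions surfaced as the Properties column); the workspace ID and the 'Category' dimension key are illustrative:

```csharp
using System;
using Azure;
using Azure.Identity;
using Azure.Monitor.Query;
using Azure.Monitor.Query.Models;

// Illustrative workspace ID and dimension key; substitute your own.
string workspaceId = "<log-analytics-workspace-id>";

var client = new LogsQueryClient(new DefaultAzureCredential());

// KQL against the workspace-based AI schema: trace logs land in the
// AppTraces table, and customDimensions is exposed as the Properties column.
string kql = @"AppTraces
| where tostring(Properties['Category']) == 'Orders'
| project TimeGenerated, Message, Properties
| take 50";

Response<LogsQueryResult> result = await client.QueryWorkspaceAsync(
    workspaceId, kql, new QueryTimeRange(TimeSpan.FromHours(24)));

foreach (LogsTableRow row in result.Value.Table.Rows)
{
    Console.WriteLine($"{row["TimeGenerated"]}: {row["Message"]}");
}
```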

Cosmos DB - Should I set CosmosClientOptions.ApplicationRegion for my application?

I have enabled multi-region writes for the Cosmos account in the Azure portal. I don't understand whether it is also mandatory to set the ApplicationRegion using the SDK. If it is mandatory, what is the purpose of this property? I see the documentation below, but it is still not clear to me.
Documentation
The purpose of ApplicationRegion is for scenarios where the SDK detects that the region is no longer responsive: it uses that information to determine the next closest region to fail over to. If you do not set this value, the SDK will fail over to the next-highest region listed in the portal (or by failoverPriority in your Cosmos account configuration), which may not be the closest.
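In practice, setting it is optional but recommended for multi-region accounts. A minimal sketch with the .NET SDK v3; the endpoint and key are illustrative:

```csharp
using Microsoft.Azure.Cosmos;

// Illustrative endpoint/key; substitute your own.
CosmosClient client = new CosmosClient(
    "https://<account>.documents.azure.com:443/",
    "<account-key>",
    new CosmosClientOptions
    {
        // The region this application is deployed in. On a regional outage the
        // SDK fails over to the next closest region instead of simply walking
        // the failoverPriority list.
        ApplicationRegion = Regions.UKSouth
    });
```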

Deploy Azure Face API for IoT Edge

Is it possible to deploy a trained Azure Face API model to IoT Edge, as with Custom Vision?
If it is, how can I do that?
Updating this topic...
Now you can download a Docker image with the Face API to run it on-premises (a client-side sketch follows the list below).
Here you can find the documentation for testing this feature, which is currently in public preview.
Here you can see the list of all the Azure Cognitive Services that are available as Docker containers.
This new feature primarily targets enterprises that:
Are not willing or able to load all their data into the cloud for processing or storage;
Are subject to regulatory requirements on handling customer data;
Have data that they aren’t comfortable sharing and processing in a cloud, regardless of security;
Have weak bandwidth or disconnected environments with high latency and TPS issues.
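Because the container exposes the same REST surface as the cloud Face API, you can point the existing client SDK at the local endpoint. A minimal sketch with the Microsoft.Azure.CognitiveServices.Vision.Face package, assuming the container is listening on http://localhost:5000 (key, host, and port are illustrative):

```csharp
using System;
using System.Collections.Generic;
using Microsoft.Azure.CognitiveServices.Vision.Face;
using Microsoft.Azure.CognitiveServices.Vision.Face.Models;

// Illustrative key and local endpoint; substitute your own.
var client = new FaceClient(new ApiKeyServiceClientCredentials("<api-key>"))
{
    // Point the SDK at the on-premises container instead of the cloud endpoint.
    Endpoint = "http://localhost:5000"
};

// The container serves the same Detect operation locally.
IList<DetectedFace> faces = await client.Face.DetectWithUrlAsync(
    "https://example.com/photo.jpg", returnFaceId: true);

Console.WriteLine($"Detected {faces.Count} face(s).");
```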
Model export is not a feature supported by the Face API.

WSO2 API Manager as 2 instance all-in-one setup

I have recently deployed WSO2 API Manager (2.0.0) as a 2-instance all-in-one cluster (using the Hazelcast AWS scheme) with a MySQL datasource, as specified in this link.
Since I was not able to find a complete step-by-step installation guide for this setup, I would like to clarify a few areas that I am not too sure of.
Depsync via SVN - since this will be manager-to-manager nodes (instead of manager-to-worker nodes), both will have <AutoCommit>true</AutoCommit>. Should we have any concerns about this?
DAS - Having DAS as a separate node, should both WSO2AM and WSO2DAS share the same WSO2AM_STATS_DB database?
Publisher - Can we use both publishers (i.e., one at a time)? We noticed that once we publish an API, it takes time for the other publisher to sync the state to Published (even though the new API appears almost immediately on the other publisher as Created).
Thank you.
1) If you enable <AutoCommit>true</AutoCommit> on both nodes, it can cause SVN conflicts if there is parallel publishing from the 2 nodes. Instead, you can publish to multiple gateways from the publisher. For that, you can configure multiple environments in the <Environments> section of api-manager.xml.
2) Yes. DAS writes summarized data to that DB, and the APIM dashboards read data from the same DB.
3) All publisher/store nodes should be in the same cluster; only then can they communicate about API state changes etc. To be in the same cluster, all of these nodes should have the same clustering domain, which you can configure in the clustering section of axis2.xml.
