How to reconfigure a Cosmos database to have shared container throughput

I have a database with several containers which have their own configured throughput. There is a new Cosmos DB feature which allows all containers within a database to share throughput. I can create a new database which has this feature enabled, however I cannot seem to change my existing database to leverage this feature. Is there a way to enable this feature on an existing database, or do I have to create a new database and migrate all containers to it?

You have to create a new database. Changing the existing database is not supported:
A container with provisioned throughput cannot be converted to a shared database container. Conversely, a shared database container cannot be converted to have dedicated throughput.
See: Set throughput on a database and a container
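For reference, here is a minimal sketch of creating a replacement database with database-level (shared) throughput using the Java v4 SDK (the same SDK used elsewhere on this page). The database name, container name, partition key path, the 400 RU/s figure, and the environment variables holding the endpoint and key are all placeholders:

import com.azure.cosmos.CosmosClient;
import com.azure.cosmos.CosmosClientBuilder;
import com.azure.cosmos.CosmosDatabase;
import com.azure.cosmos.models.CosmosContainerProperties;
import com.azure.cosmos.models.ThroughputProperties;

public class SharedThroughputExample {
    public static void main(String[] args) {
        CosmosClient client = new CosmosClientBuilder()
                .endpoint(System.getenv("COSMOS_ENDPOINT"))   // placeholder endpoint
                .key(System.getenv("COSMOS_KEY"))             // placeholder key
                .buildClient();

        // Provision throughput at the database level so containers can share it.
        client.createDatabaseIfNotExists("SharedDb",
                ThroughputProperties.createManualThroughput(400));
        CosmosDatabase database = client.getDatabase("SharedDb");

        // Create the container WITHOUT its own ThroughputProperties,
        // so it draws from the shared database throughput.
        database.createContainerIfNotExists(
                new CosmosContainerProperties("Items", "/partitionKey"));

        client.close();
    }
}

Containers created without their own ThroughputProperties inherit the shared database throughput; migrating the data out of the old containers still has to be done separately.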

Related

Does Firebase Realtime Database support auto data syncing between multiple databases?

Here is our use case:
We have well over 200,000 clients that need to connect to the Firebase Realtime Database, so we created multiple databases with the same data and load balance the connections across them.
Here is the problem:
If we update one database, we have to open connections to the rest of the databases and update them as well. I would like to check if there is a way to automatically sync data between multiple databases.
Docs I have gone through:
https://firebase.google.com/docs/database/usage/limits
https://firebase.google.com/docs/database/usage/sharding
I also checked security rules, and it seems that rules are not meant to be used to sync data.
Thanks
firebaser here
There is nothing built into Firebase to automatically synchronize data between multiple database instances. If you write through a server-side process, a common way to implement this is to simply have that process write to each database in turn.
If the data you want to write comes from a client-side SDK, I'd have the client write it to a staging area (just a temporary node in the database), and then use Cloud Functions to write the data to its permanent location in all database instances.
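To illustrate the server-side option, here is a minimal sketch using the Firebase Admin SDK for Java (Java, to match the other code on this page). The database URLs, the /users path, and the payload are placeholders:

import com.google.auth.oauth2.GoogleCredentials;
import com.google.firebase.FirebaseApp;
import com.google.firebase.FirebaseOptions;
import com.google.firebase.database.FirebaseDatabase;

import java.util.List;
import java.util.Map;

public class FanOutWrite {
    public static void main(String[] args) throws Exception {
        FirebaseApp app = FirebaseApp.initializeApp(FirebaseOptions.builder()
                .setCredentials(GoogleCredentials.getApplicationDefault())
                .build());

        // Placeholder URLs for the sharded database instances.
        List<String> databaseUrls = List.of(
                "https://shard-1.firebaseio.com",
                "https://shard-2.firebaseio.com");

        Map<String, Object> user = Map.of("name", "Ada", "plan", "pro");

        // Write the same data to each database instance in turn.
        for (String url : databaseUrls) {
            FirebaseDatabase db = FirebaseDatabase.getInstance(app, url);
            db.getReference("/users/ada").setValueAsync(user).get(); // block until the write completes
        }
    }
}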

Create database inside Azure Cosmos DB account with RBAC

I use the Java v4 SDK for Azure Cosmos DB. I want to create a database inside an Azure Cosmos DB account with a service principal, not with the master key.
I assigned the DocumentDB Account Contributor and Cosmos DB Operator built-in role definitions to the service principal, according to this documentation:
https://learn.microsoft.com/pl-pl/azure/role-based-access-control/built-in-roles#cosmos-db-operator
I was not able to create the CosmosAsyncClient until I added a new custom role that just contains reading metadata. The above-mentioned built-in role definitions do not contain it...
TokenCredential ServicePrincipal = new ClientSecretCredentialBuilder()
        .authorityHost("https://login.microsoftonline.com")
        .tenantId(tenant_here)
        .clientId(clientid_here)
        .clientSecret(secret_from_above_client)
        .build();

client = new CosmosClientBuilder()
        .endpoint(AccountSettings.HOST)
        .credential(ServicePrincipal)
        .buildAsyncClient();
After I added this role, the client was created, but I am not able to create a database instance, nor a container inside it as the next step. In Access control I can see that the roles are assigned, so the service principal is correct here.
What is more, when I first create the database and container with the master key and then read/write data using the service principal, it works (obviously after also adding a custom role for writing).
So I do not know why DocumentDB Account Contributor and Cosmos DB Operator do not work for creating the database.
It looks like a bug in the Java SDK. The DocumentDB Account Contributor role is enough to create the database and container, as it has the Microsoft.DocumentDb/databaseAccounts/* permission (* is a wildcard, so it also includes the Microsoft.DocumentDB/databaseAccounts/readMetadata permission you mentioned).
When I test using a service principal with this role to create the database with the PowerShell New-AzCosmosDBSqlDatabase cmdlet, it works fine. When the service principal runs this command, it essentially uses the Azure AD client credential flow to get a token, then uses the token to call the REST API - PUT https://management.azure.com/subscriptions/xxxx/resourceGroups/xxxx/providers/Microsoft.DocumentDB/databaseAccounts/xxxx/sqlDatabases/testdb1?api-version=2020-04-01 - to create the database. The Java SDK essentially does the same thing, so it should also work.
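For completeness, here is a minimal sketch of the calls the question is attempting once the client is built with the service principal credential from the question; the database name testdb1, container name items, and partition key path /pk are placeholders:

import com.azure.cosmos.CosmosAsyncClient;
import com.azure.cosmos.CosmosAsyncDatabase;
import com.azure.cosmos.CosmosClientBuilder;

CosmosAsyncClient client = new CosmosClientBuilder()
        .endpoint(AccountSettings.HOST)
        .credential(ServicePrincipal)   // the TokenCredential built in the question
        .buildAsyncClient();

// The creation operations the role assignment should allow:
client.createDatabaseIfNotExists("testdb1").block();
CosmosAsyncDatabase database = client.getDatabase("testdb1");
database.createContainerIfNotExists("items", "/pk").block();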

When to create multiple containers in Azure Cosmos db

I am creating multiple microservice APIs. Can we use one single container to store all the different schemas from the different APIs? If so, how do we differentiate the schemas while retrieving documents? When should we create multiple containers in Azure Cosmos DB? Are there any disadvantages/caveats to using multiple containers?
Please explain.

Can Application Insights automatically track queries to Cosmos Db with connection mode Direct?

Currently in Application Insights we only see these operations between our .NET Core application and Cosmos DB,
but the queries that actually read and insert data are not seen. We are using Direct connection mode as per the performance tips: https://learn.microsoft.com/en-us/azure/cosmos-db/performance-tips.
Do we have to track these queries manually, or can this be done automatically, as when using SQL Server?
It turns out that queries to Cosmos DB using Direct connection mode are not automatically tracked, as stated here: https://learn.microsoft.com/en-us/azure/azure-monitor/app/asp-net-dependencies#automatically-tracked-dependencies
So all queries to Cosmos DB must be tracked manually.
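The application in the question is .NET Core, but as a rough sketch of the manual-tracking idea, here is the equivalent with the Application Insights Java SDK's TelemetryClient (Java, to match the other code on this page); the dependency name and command text are placeholders, and the corresponding TrackDependency call exists in the .NET SDK as well:

import com.microsoft.applicationinsights.TelemetryClient;
import com.microsoft.applicationinsights.telemetry.Duration;

public class CosmosDependencyTracking {
    private static final TelemetryClient telemetry = new TelemetryClient();

    static void runTrackedQuery() {
        long start = System.currentTimeMillis();
        boolean success = true;
        try {
            // ... execute the Cosmos DB query or insert here ...
        } catch (RuntimeException e) {
            success = false;
            throw e;
        } finally {
            // Record the call as a dependency so it shows up alongside the
            // automatically collected dependencies (as SQL Server calls do).
            telemetry.trackDependency(
                    "Azure Cosmos DB",                   // dependency name (placeholder)
                    "SELECT * FROM c WHERE c.id = @id",  // command text (placeholder)
                    new Duration(System.currentTimeMillis() - start),
                    success);
        }
    }
}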

Any problems accessing sqlite files directly from Azure file storage?

We have a legacy system we're planning on migrating to Azure. The system uses sqlite files to store the data we need to access. After bouncing around with numerous solutions, we've decided to store the sqlite files in Azure file storage and access them via a UNC path from a cloud worker role (we can't use Azure functions or app services as they don't have the ability to use SMB).
This all seems to work ok, but what I'm nervous about is how sqlite is likely to react when trying to access a large file (effectively over a network) this way.
Does anyone have experience with this sort of thing and if so did you come across any problems?
The alternative plan was to use a web worker role and to store the sqlite files in blob storage. In order to access the data though, we'd have to copy the blob to a temp file on the web server machine.
You can certainly use Azure File Storage: it's effectively an SMB share backed by blob storage (which means it's durable), so you can access it from your various worker role instances.
As for your alternative choice (storing in blob and copying to temporary storage) - that won't work, since each worker role instance is independent, and you'd then have multiple, unsynchronized copies of your database, one on each VM. And if a VM rebooted, you would immediately lose all data on that temporary drive.
Note: With web/worker role instances, as well as VMs, you can attach a blob-backed disk and store content durably there. However, you'd still have the issue of dealing with multiple instances (because a disk cannot be attached to multiple VMs).
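If it helps to picture the setup, here is a minimal sketch of opening one of those SQLite files over the share from Java with the xerial sqlite-jdbc driver (Java, to match the other code on this page). The storage account, share, file path, and table name are all placeholders, and it assumes a Windows worker role from which the UNC path (or a mounted drive letter) is reachable:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SqliteOverAzureFiles {
    public static void main(String[] args) throws Exception {
        // Hypothetical UNC path to the Azure Files share; a mounted drive
        // letter (e.g. "Z:/data/legacy.db") would work the same way.
        String dbPath = "\\\\myaccount.file.core.windows.net\\legacy-share\\data\\legacy.db";

        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:" + dbPath);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM customers")) {
            if (rs.next()) {
                System.out.println("rows: " + rs.getLong(1));
            }
        }
    }
}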
