Does Cosmos DB automatically set PreferredLocations? For example, when new regions are added or deleted.
Or do users have to set it themselves?
According to the docs, the optimal endpoint is chosen by the SQL SDK for write and read operations: https://learn.microsoft.com/en-us/azure/cosmos-db/tutorial-global-distribution-sql-api#connecting-to-a-preferred-region-using-the-sql-api
This is based on your account configuration and on region availability. If you don't specify the PreferredLocations property, then all requests (read and write) will be served from your account's current write region.
Hope this helps :)
PreferredLocations must be specified for high availability. EnableEndpointDiscovery together with PreferredLocations lets you leverage Cosmos DB's failover capabilities.
When EnableEndpointDiscovery is true, the SDK automatically discovers the current write and read regions and routes requests to the correct region based on the regions listed in the PreferredLocations property. The default value is true, meaning endpoint discovery is enabled.
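The routing behavior described above can be sketched in plain Python. This is an illustrative model of the documented behavior (first matching preferred region wins, otherwise fall back to the write region), not the SDK's actual code; the region names and function name are made up for the example.

```python
# Illustrative model: walk PreferredLocations in order and take the
# first region the account actually serves; if nothing matches, fall
# back to the account's first (write) region, as the docs describe.
# This is a sketch of the documented behavior, not real SDK code.

def pick_read_region(account_regions, preferred_locations):
    """account_regions: regions enabled on the account, in
    failover-priority order. preferred_locations: the app's wish list."""
    available = set(account_regions)
    for region in preferred_locations:
        if region in available:
            return region
    # No preference matched: requests are served from the write region.
    return account_regions[0]

# The app prefers West Europe, which the account serves.
print(pick_read_region(
    ["East US", "West Europe", "North Europe"],
    ["West Europe", "East US"],
))  # -> West Europe
```

With an empty or non-matching preference list, the function degrades to the write region, which matches the "all requests go to the current write region" behavior described above.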
Currently I am trying to design an application where we have a Cosmos DB account representing a group of customers with:
One container used as an overall metadata store that contains all customers
Other containers that contain data specific to one customer, partitioned according to different categories of customer history, etc.
When we onboard a new customer (which will not happen often, and only once per customer) we'd like to create a row in the overall customer metadata and then provision the customer-specific container, rolling back the transaction if provisioning fails. (In the future we'd like to remove customers as well.)
Unfortunately, the Cosmos DB NoSQL API only supports transactions within a single logical partition of one container, and does not support multi-container transactions. Our own POC indicates the MongoDB API does support this, but unfortunately MongoDB does not fit our use case, as we need support for Azure Functions.
The heart of the problem here isn't whether Cosmos DB supports distributed transactions. The core problem is you can't enlist an Azure Control Plane action (in this case, creating a container resource) into a transaction.
Since you're building in the cloud, my recommendation would be to employ the outbox pattern to manage the provisioning state for your customers. There's an easy-to-understand example here you can read.
Given you are building a multi-tenant application for Cosmos DB and using containers as your tenant boundary, please note that the maximum number of databases and/or containers in an account is 500. Please see Service Quotas for more information.
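The outbox approach recommended above can be sketched with a minimal in-memory model: the metadata row and an outbox record are written together (in Cosmos DB both can live in one container under the same partition key, so that write can be a transactional batch), and a background worker later performs the control-plane call and retries on failure. All names here (`Store`, `onboard`, `process_outbox`) are illustrative, not a real Cosmos DB API.

```python
# Minimal in-memory sketch of the outbox pattern for tenant
# provisioning. Names are illustrative, not a real Cosmos DB API.

class Store:
    def __init__(self):
        self.customers = {}      # stands in for the metadata container
        self.outbox = []         # pending provisioning work items
        self.containers = set()  # provisioned tenant containers

    def onboard(self, customer_id):
        # Step 1: write the metadata row and the outbox record
        # together. In Cosmos DB both can share a partition key in one
        # container, so this pair of writes can be transactional.
        self.customers[customer_id] = {"status": "provisioning"}
        self.outbox.append({"customer": customer_id, "done": False})

    def process_outbox(self, provision_container):
        # Step 2: a background worker drains the outbox. The
        # control-plane call may fail; the record then stays pending
        # and is retried, so provisioning must be idempotent.
        for entry in self.outbox:
            if entry["done"]:
                continue
            try:
                provision_container(entry["customer"])
                self.containers.add(entry["customer"])
                self.customers[entry["customer"]]["status"] = "active"
                entry["done"] = True
            except Exception:
                pass  # leave pending; retry on the next sweep

store = Store()
store.onboard("contoso")
store.process_outbox(lambda c: None)  # provisioning succeeds
print(store.customers["contoso"]["status"])  # -> active
```

The key property is that a customer is never "active" in metadata without its container existing: a failed control-plane call leaves the record in "provisioning" to be retried or compensated, which sidesteps the need for a cross-container transaction.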
I have enabled multi-region writes for the Cosmos account in the Azure portal. I don't understand whether it is mandatory to also set ApplicationRegion using the SDK. If it is mandatory, what is the purpose of this property? I see the documentation below, but it is still not clear to me.
Documentation
ApplicationRegion is for scenarios where the SDK detects that the current region is no longer responsive: the SDK uses that information to determine the next closest region to fail over to. If you do not set this value, the SDK fails over to the next region listed in the portal (that is, by failoverPriority in your Cosmos account configuration), which may not be the closest one.
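The difference between the two failover orderings can be sketched in plain Python. This is only an illustrative model of the behavior described above; the distance table and function name are invented for the example, and the real SDK derives proximity itself.

```python
# Illustrative model: with ApplicationRegion set, failover prefers
# regions closest to the app; without it, the account's
# failoverPriority order is used. The "closest first" table below is
# made up for the example.

CLOSEST_FROM = {
    "West Europe": ["West Europe", "North Europe", "East US"],
}

def next_region(healthy, failover_priority, application_region=None):
    """Return the first healthy region in the applicable ordering."""
    order = (CLOSEST_FROM[application_region]
             if application_region else failover_priority)
    for region in order:
        if region in healthy:
            return region
    return None

priority = ["East US", "North Europe", "West Europe"]
healthy = {"North Europe", "West Europe"}  # East US is down

print(next_region(healthy, priority))                 # -> North Europe
print(next_region(healthy, priority, "West Europe"))  # -> West Europe
```

Both calls skip the unhealthy East US region, but only the second one lands on the region nearest the app, which is the benefit of setting ApplicationRegion.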
In my Azure Cosmos DB account, I can add multiple databases (each containing multiple collections).
However, I only seem to find account-level connection strings (secrets), which are valid for every database and differ only in the database name section.
I find this odd. Is this expected? If I want more granular control, do I need to create separate accounts for each database?
PS: I'm using the Mongo API if it's somehow relevant.
Cheers
The account-level connection strings you mention in the question are master keys. Based on this document, Azure Cosmos DB uses two types of keys to authenticate users and provide access to its data and resources.
Master keys cannot be used to provide granular access to containers and documents.
If you want more granular control, please take a look at resource tokens, which provide access to specific containers, partition keys, documents, attachments, stored procedures, triggers, and UDFs. For more details, please refer to this link.
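The usual shape of the resource-token approach is a small trusted broker service that holds the master key and hands out short-lived tokens scoped to one resource. The sketch below models that flow in plain Python; the class and method names are invented for illustration, and real tokens come from the Cosmos DB permissions API, not from `secrets.token_hex`.

```python
# Sketch of the resource-token broker pattern: a trusted service holds
# the master key and issues tokens scoped to a single container;
# clients never see the master key. All names are illustrative.

import secrets

class TokenBroker:
    def __init__(self):
        self._tokens = {}  # token -> (container, mode)

    def issue(self, container, mode="read"):
        token = secrets.token_hex(8)
        self._tokens[token] = (container, mode)
        return token

    def authorize(self, token, container, operation):
        scope = self._tokens.get(token)
        if scope is None:
            return False
        scoped_container, mode = scope
        if container != scoped_container:
            return False  # token is scoped to a different container
        return operation == "read" or mode == "all"

broker = TokenBroker()
t = broker.issue("orders", mode="read")
print(broker.authorize(t, "orders", "read"))     # -> True
print(broker.authorize(t, "customers", "read"))  # -> False
print(broker.authorize(t, "orders", "write"))    # -> False
```

The point is that a leaked resource token only exposes one container in one mode, whereas a leaked master key exposes the whole account.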
I want to enable encryption in transit and encryption at rest for all the content stored in media storage.
As I read here,
No encryption is used. This is the default value. When using this
option your content is not protected in transit or at rest in storage.
But since all my data resides in storage itself, won't it be encrypted by default at rest because of SSE in Azure Storage? Am I missing something here? Also, how is the metadata (asset information, locator information, etc.) about the content stored?
With Media Services v2, you can use the AssetCreationOptions.StorageEncrypted option, as shown here: https://learn.microsoft.com/en-us/azure/media-services/previous/media-services-dotnet-upload-files#upload-files-using-net-sdk-extensions.
See the "Storage side encryption" table for more information: https://learn.microsoft.com/en-us/azure/media-services/previous/media-services-rest-storage-encryption#considerations
Please, let us know if you have other questions.
In Cosmos DB, I am using document-level Time to Live (TTL) and Cosmos does not appear to be expiring documents. Does this feature work in Cosmos DB using the MongoDB API? If it does, what am I missing?
I am using Cosmos DB with the MongoDB API.
A "ttl" field is set in each document for my collection.
In Azure, Time to Live is set to "On (no default)" for my collection.
I am doing this without the emulator because the emulator defaults to the SQL API. In the emulator, I see "_ts" set and I do not see this field in Azure.
I can switch to collection level expiration by setting Time to Live to "On" and documents expire as expected. When I do this, my "ttl" field is ignored and the value I set for "second(s)" in Azure is followed. I still see my "ttl" field in the document.
Although I don't see a "_ts" field in my documents, an article about indexing mentions that it is a reserved property. This makes me think that it is set behind the scenes and not returned in queries.
https://learn.microsoft.com/en-us/azure/cosmos-db/mongodb-indexing
"_ts is a Cosmos DB-specific field and is not accessible from MongoDB clients. It is a reserved (system) property that contains the timestamp of the document's last modification."
Update:
I checked the MongoDB support page (https://learn.microsoft.com/en-us/azure/cosmos-db/mongodb-feature-support) and it indicates that collection level TTL is available and says nothing about document level.
Azure Cosmos DB supports a relative time-to-live (TTL) based on the timestamp of the document. TTL can be enabled for MongoDB API collections through the Azure portal.
Update:
My Azure Portal Preview Features now show this:
I got document-level Time to Live working in Cosmos DB using the MongoDB API. I had to ask Microsoft support for help to get this working. The response from Microsoft's big data team was the following:
Before enabling the document-level TTL feature, I would like to clarify the following details about it.
The TTL feature is controlled by TTL properties at two levels: the collection level and the document level.
Right now, per-document TTL for MongoDB accounts is not available by default. However, we can enable this feature for specific customers, and it is set at the account level.
TTL is set at the document level, but the feature is enabled at the account level. This means that for all collections under the account, if a document has a TTL set, it will take effect. Documents without a TTL value are not affected.
You need to have an index on the _ts field for this to work.
To summarize: this feature works at the Cosmos DB account level, and we need to enable the document TTL feature in the Cosmos DB backend on our side.
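Once the feature is enabled, the interaction between the collection-level setting and a per-document ttl follows the usual Cosmos DB TTL rules. The sketch below models those rules in plain Python, assuming the semantics documented for the core API ("Off" means nothing expires, "On (no default)" means only documents with a ttl expire, a numeric default is overridden per document); the helper name is invented for the example.

```python
# Sketch of Cosmos DB TTL semantics:
#   collection TTL "Off" (None)          -> nothing expires, even with a ttl field
#   collection TTL "On (no default)" (-1) -> only items carrying a ttl expire
#   collection TTL "On" (n seconds)       -> default n; per-item ttl overrides
# A per-item ttl of -1 means "never expire" for that item.

def expires_after(collection_ttl, item_ttl=None):
    """Seconds after the last write (_ts) that a document expires,
    or None if it never expires."""
    if collection_ttl is None:   # TTL off at the collection level wins
        return None
    if item_ttl == -1:           # explicit per-item "never expire"
        return None
    if item_ttl is not None:     # per-document override
        return item_ttl
    if collection_ttl == -1:     # "On (no default)": no item ttl, no expiry
        return None
    return collection_ttl        # collection-level default applies

print(expires_after(None, 60))  # -> None (collection TTL off wins)
print(expires_after(-1, 60))    # -> 60
print(expires_after(-1))        # -> None
print(expires_after(3600, 60))  # -> 60
```

This also explains the symptom in the question: with the collection set to "On (no default)" but the account-level MongoDB feature not yet enabled, the per-document ttl was simply never consulted, so nothing expired.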