Partitioned collection partition key - azure-cosmosdb

I'm confused about what to choose for the partition key and what effect it has. If I use a partitioned collection, then I must define a partition key that DocumentDB can use to distribute the data among multiple servers. But let's say I choose a partition key that is always the same for all documents. Will I still be able to get up to 250k RU/s for a single partitioned collection?
In my case, the main query is to get all documents with paging, ordered as a timeline (newest first):
SELECT TOP 10 c.id, c.someValue, u.id AS userId
FROM c
JOIN u IN c.users
ORDER BY c.createdDate DESC
A minified version of the document looks like this:
{
  "id": "1",
  "someValue": "Foo",
  "createdDate": "2016-14-4-14:38:00.00",
  // Max 100 users
  "users": [{ "id": "1" }, { "id": "2" }]
}

No, you need to have multiple distinct partition key values in order to achieve high throughput levels in DocumentDB.
A partition in DocumentDB supports up to 10,000 RU/s, so you need at least 25* distinct partition key values to reach 250,000 RU/s. DocumentDB divides the partition keys evenly across the available partitions, i.e., a partition might contain documents with multiple partition keys, but the data for a single partition key is guaranteed to stay within one partition. You must also structure your workload so that reads and writes are distributed across these partition keys.
*You may need a slightly higher number of partition keys than 25 (50-100) in practice, since some of the partition keys might hash to the same partition.
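As a rough sketch of that advice (Python azure-cosmos SDK; the endpoint, key, and the /deviceId partition key path are illustrative assumptions), provisioning a partitioned collection and spreading writes over many distinct key values might look like this:

# Rough sketch with the Python azure-cosmos SDK. The endpoint, key, and
# the "/deviceId" partition key path are placeholder assumptions.
import uuid
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
db = client.create_database_if_not_exists(id="mydb")

# High provisioned throughput only helps if reads/writes are spread over
# many distinct partition key values, since each value is served by a
# single physical partition. (Very high values may need a quota increase.)
container = db.create_container_if_not_exists(
    id="events",
    partition_key=PartitionKey(path="/deviceId"),
    offer_throughput=250000,
)

for i in range(1000):
    container.upsert_item({
        "id": str(uuid.uuid4()),
        "deviceId": f"device-{i % 100}",   # 100 distinct key values, so load spreads out
        "someValue": "Foo",
    })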

So, we have a partitioned (10 partitions) collection with a throughput of 10,000 RU/s. The partition key is CountryCode and we only have data for 5 countries. Data for two countries was hashed into the same physical partition. As per the documentation at the following link, we were expecting the data to be reorganized onto the empty partitions once the 10GB limit was hit for that partition. That didn't happen, and we could no longer add data for those two countries.
Obviously, the right thing to do would be to choose a partition key with higher cardinality that distributes the data more evenly, but the documentation is misleading.
https://learn.microsoft.com/en-us/azure/cosmos-db/partition-data
When a physical partition p reaches its storage limit, Cosmos DB seamlessly splits p into two new partitions p1 and p2 and distributes values corresponding to roughly half the keys to each of the partitions. This split operation is invisible to your application.
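One possible workaround sketch for this situation, with illustrative property names and bucket count, is a synthetic partition key that suffixes CountryCode with a bucket number so that no single key value has to hold more than 10 GB:

# Sketch of a synthetic partition key: CountryCode plus a bucket suffix.
# The property names and the bucket count are illustrative assumptions.
import random

BUCKETS = 20  # pick so that (largest country's data / BUCKETS) stays well under 10 GB

def synthetic_pk(country_code: str) -> str:
    return f"{country_code}-{random.randint(0, BUCKETS - 1)}"

doc = {
    "id": "1",
    "countryCode": "SE",
    "pk": synthetic_pk("SE"),   # e.g. "SE-7"; the container's key path would be /pk
    "payload": "...",
}
# Reads then either target one bucket ("SE-7") or fan out over all
# "SE-0" .. "SE-19" buckets with a cross-partition query.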

Related

Azure Cosmos DB Partition

I have a collection in Cosmos DB that will store 8 million records monthly, which comes to about 5GB of data per month.
I want to use a date-based partition key.
So the question is: should I keep the partition key as Year_Month, or divide it further into Year_Month_Day?
How many logical partitions are supported by Cosmos DB? Is there any limit?
There is no limit to the number of logical partitions in Cosmos DB. It will keep scaling and splitting the underlying physical partitions to support as many as you need.
The only limitation is that each logical partition can hold up to 10GB of data. Once that amount is reached, you cannot add more data to that logical partition and you have to migrate to a collection with a different partition key.
So with that in mind, the decision goes like this.
Will you ever have 10GB worth of documents with the same Year_Month value? If not, then that should be your partition key. If yes, then you should widen the scope and add the day: will you ever have 10GB worth of documents with the same Year_Month_Day value? If yes, then you need a different key definition.
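A small sketch of what that looks like at write time (Python; the property names are illustrative assumptions); only the granularity of the key string changes:

# Sketch: deriving the partition key value from the document's date.
# Property names (partitionKey, createdDate) are illustrative assumptions.
from datetime import datetime, timezone

def month_key(dt: datetime) -> str:
    return dt.strftime("%Y_%m")        # e.g. "2024_05"    -> Year_Month

def day_key(dt: datetime) -> str:
    return dt.strftime("%Y_%m_%d")     # e.g. "2024_05_17" -> Year_Month_Day

now = datetime.now(timezone.utc)
doc = {
    "id": "1",
    "createdDate": now.isoformat(),
    # Use month_key(now) only if a whole month stays under 10 GB,
    # otherwise fall back to the finer-grained day_key(now).
    "partitionKey": day_key(now),
}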

Differentiate between partition keys & partition key ranges in Azure Cosmos DB

I'm having difficulty understanding the difference between partition keys and partition key ranges in Cosmos DB. I understand generally that a partition key in Cosmos DB is a JSON property/path within each document that is used to evenly distribute data among multiple partitions and avoid uneven "hot partitions", and that the partition key decides the physical placement of documents.
But it's not clear to me what a partition key range is. Is it just a range of literal partition keys, from first to last, grouped by each individual partition in the collection? I know the ranges can be found by performing a GET request to the endpoint https://{databaseaccount}.documents.azure.com/dbs/{db-id}/colls/{coll-id}/pkranges, but I just want to be sure I understand it conceptually. I'm also still not clear on how to granularly view the specific partition key that a specific document belongs to.
https://learn.microsoft.com/en-us/rest/api/cosmos-db/get-partition-key-ranges
You define a property on your documents that you want to use as the partition key.
Cosmos DB hashes the value of that property for every document in the collection and maps different partition keys to different physical partitions.
Over time, your collection will grow, and you might end up having, for example, 100 logical partitions distributed over 5 physical partitions.
Partition key ranges are just collections of partition keys grouped by the physical partitions they are mapped to.
So, in this example, you would get 5 pkranges, each with a min/max partition key value.
Note that pkranges can change over time: as your collection grows, physical partitions get split, which moves some partition keys to a new physical partition and therefore moves part of the previous range to a new location.
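On the last part of the question: a document's partition key is simply the value of the container's partition key property on that document. A sketch with the Python SDK (the account, database, container, and id values are placeholders):

# Sketch: inspect the container's partition key definition and read the
# corresponding property off a document. Names are placeholder assumptions.
from azure.cosmos import CosmosClient

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
container = client.get_database_client("mydb").get_container_client("mycoll")

props = container.read()                     # container metadata
pk_path = props["partitionKey"]["paths"][0]  # e.g. "/tenantId"

# Look the document up by id without knowing its partition key in advance.
docs = list(container.query_items(
    query="SELECT * FROM c WHERE c.id = @id",
    parameters=[{"name": "@id", "value": "1"}],
    enable_cross_partition_query=True,
))
doc = docs[0]
# Assumes a top-level key path; strip the leading "/" to get the property name.
print("partition key path:", pk_path)
print("this document's partition key value:", doc[pk_path.lstrip("/")])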

DocumentDB partition sizes

According to the docs, documents with different partition keys may end up in the same partition, but documents with the same partition key are guaranteed to end up in the same partition.
Now, let's consider a case where you have a partition key with cardinality = 100 (for example, 100 tenants).
Initially, all data is roughly equally distributed across partitions.
Let's say you end up with partitions of about 50GB in size. I would assume that in that case you might have a few partition keys contained within the same partition. Then, all of a sudden, 2 of your tenants grow exponentially and reach 200GB in size.
Since a partition has a 250GB limit, you're now in trouble.
Questions:
How is this being solved?
Does DocumentDB partitioning handle this by moving data to separate partitions?
Should we (and are we even able to) view data/storage consumption per partitionKey (not partition)?
Could someone shed a bit of light on these dilemmas? I couldn't find answers to these specific questions in the docs.
Currently, the logical partition for a single partition key value cannot exceed 10GB. That means you have to ensure that, at any given point in time, your logical partition does not exceed 10GB.
Source: MSDN
A logical partition is a partition within a physical partition that stores all the data associated with a single partition key value. A logical partition has a 10 GB max.
On your question.
How is this being solved?
Choose an appropriate partition key and ensure it is well balanced. If you anticipate that a tenant's data might grow beyond 10GB, then having the tenant id as the partition key is not an option. You have to use something else as the partition key, something that can scale.
Is DocumentDB partitioning handling this moving to separate partitions?
Yes, Cosmos DB takes care of physical partition handling.
Should we (and are we even able to) view data/storage consumption per partitionKey (not partition)?
Yes. In the Azure portal, go to your Azure Cosmos DB account and click Metrics in the Monitoring section, then on the right pane click the Storage tab to see how your data is distributed across the different physical partitions.
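The portal view is per physical partition rather than per partition key value. A rough client-side approximation (a sketch that assumes a /tenantId partition key path; note that a full scan like this consumes significant RUs) is to total document sizes per key value yourself:

# Rough client-side approximation of data volume per partition key value.
# Assumes the partition key path is /tenantId; adjust to your schema.
import json
from collections import defaultdict
from azure.cosmos import CosmosClient

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
container = client.get_database_client("mydb").get_container_client("mycoll")

bytes_per_key = defaultdict(int)
for doc in container.query_items(
    query="SELECT * FROM c",
    enable_cross_partition_query=True,   # full scan: expensive on large collections
):
    # Serialized size is only an approximation of billed storage.
    bytes_per_key[doc["tenantId"]] += len(json.dumps(doc))

for tenant, size in sorted(bytes_per_key.items(), key=lambda kv: -kv[1]):
    print(tenant, f"{size / 1024**2:.1f} MiB (approx.)")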

Can the wrong partition-key cause excessive partitioning in CosmosDb?

The Microsoft advice for partition key selection encourages choosing a key that will lead to hundreds or thousands of partitions. The general theme is "more is better".
My question is: can a Cosmos DB collection suffer from a partition key that leads to an excessive number of highly fragmented logical partitions?
I am considering using a partition key that identifies a team/workgroup id and also equates to a customer tenant boundary. This partition key maps very well onto the data query and transaction boundary access patterns in my application. However, I am concerned that with just 100 stored docs per tenant and an estimated 50 kB of storage per tenant, by the time my Cosmos DB collection reaches 10GB it would have 200,000 logical partitions.
Please note: I already understand that a logical partition does not map 1:1 to a physical Cosmos DB partition, and that in my proposed case a physical partition is likely to contain 1000+ logical partitions.
There is no practical limit to the number of logical partitions you are allowed to have. The system can scale to millions or billions of logical partitions. It's just a simple hash operation on your partition key to determine which physical partition holds the logical partition that your document lives in.
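Purely to illustrate that mapping idea (Cosmos DB's actual hash function and range assignment are internal service details, not this code), here is a toy sketch of many logical partition key values hashing onto a handful of physical partitions:

# Toy illustration only: Cosmos DB's real hashing and range assignment are
# internal service details. This just shows many logical keys mapping onto
# a handful of physical partitions.
import hashlib
from collections import Counter

PHYSICAL_PARTITIONS = 5

def physical_partition(partition_key_value: str) -> int:
    digest = hashlib.md5(partition_key_value.encode()).hexdigest()
    return int(digest, 16) % PHYSICAL_PARTITIONS

counts = Counter(physical_partition(f"workgroup-{i}") for i in range(200_000))
print(counts)   # ~200,000 logical partitions spread over 5 physical ones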

How many partitions can an Azure DocumentDB collection have?

I am planning to create a partitioned collection and am working on identifying the correct partition key for it.
However, I am not sure how many partitions a partitioned collection can have. Is there any limit?
There is no hard limit on the partition count; DocumentDB is positioned as infinitely scalable.
Your partition key should be diverse enough that no single partition key value has to store too much data (10 GB appears to be the limit per logical partition), and it should match your query patterns.
As this official document states about Single partition and partitioned collections:
Partitioned collections can span multiple partitions and support unlimited storage and throughput. You must specify a partition key for the collection.
Partitioning in DocumentDB:
The number of partitions is determined by DocumentDB based on the storage size and the provisioned throughput of the collection. Every partition in DocumentDB has a fixed amount of SSD-backed storage associated with it, and is replicated for high availability. Partition management is fully managed by Azure DocumentDB, and you do not have to write complex code or manage your partitions. DocumentDB collections are practically unlimited in terms of storage and throughput.
For identifying the correct partition key for your collection, I recommend referring to Designing for partitioning.
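To make the "match your query patterns" point concrete, here is a sketch (Python SDK; the endpoint, names, and the /tenantId key are illustrative assumptions) comparing an in-partition query, served by the single partition that owns the key, with a cross-partition fan-out:

# Sketch: in-partition vs cross-partition query with the Python SDK.
# Endpoint, names, and the /tenantId partition key are placeholder assumptions.
from azure.cosmos import CosmosClient

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
container = client.get_database_client("mydb").get_container_client("orders")

# Served by the single partition that owns "tenant-42": cheap and fast.
in_partition = container.query_items(
    query="SELECT * FROM c WHERE c.status = 'open'",
    partition_key="tenant-42",
)

# No partition key given: the query fans out to every partition.
fan_out = container.query_items(
    query="SELECT * FROM c WHERE c.status = 'open'",
    enable_cross_partition_query=True,
)

print(len(list(in_partition)), len(list(fan_out)))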
