DynamoDB partition splitting for high throughput table

I am trying to understand DynamoDB partitioning behaviour in a specific circumstance. I'd like to know what happens to my partitions if my read/write throughput exceeds 3000 RCU or 1000 WCU for a single partition (assuming I have one or more very popular items being queried/written). Say that on this partition only a single partition key is present, with many items under different sort keys. What is DynamoDB's behaviour when my usage rises above 3000 / 1000? Will DDB automatically split the partition into two smaller ones? Where can I find documentation about this specific circumstance?
Thanks

DynamoDB automatically supports your access patterns using the throughput you have provisioned, as long as the traffic against a given partition key does not exceed 3000 read capacity units or 1000 write capacity units. (Source)
It does not support more than 3000 RCU or 1000 WCU per partition key, so if you are exceeding that, some of your requests for that partition key will be throttled.
If you need to write more than 1000 WCU to a single partition key, you can use write sharding. If you need to read more than 3000 RCU, you can create a GSI that is an exact copy of the table to distribute your reads, or it's a good use case for DAX.
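For illustration, here is a minimal write-sharding sketch in Python with boto3. The table name, the pk/sk attribute names, and the shard count are assumptions for the example, not anything from the question:

```python
import random

import boto3
from boto3.dynamodb.conditions import Key

# Hypothetical table with generic "pk" (partition key) and "sk" (sort key) attributes.
table = boto3.resource("dynamodb").Table("orders")

SHARD_COUNT = 10  # one hot partition key value fans out over 10 shards


def put_sharded(hot_key: str, sort_key: str, attrs: dict) -> None:
    """Write with a random shard suffix so writes spread over several partitions."""
    shard = random.randint(0, SHARD_COUNT - 1)
    table.put_item(Item={**attrs, "pk": f"{hot_key}#{shard}", "sk": sort_key})


def query_all_shards(hot_key: str) -> list:
    """Reads have to fan out over every shard and merge the results."""
    items = []
    for shard in range(SHARD_COUNT):
        resp = table.query(KeyConditionExpression=Key("pk").eq(f"{hot_key}#{shard}"))
        items.extend(resp["Items"])
    return items
```

The trade-off is that reads now need one query per shard, so this pattern pays off mainly when the write side is the bottleneck.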

Related

How the partition limit of DynamoDB works for small databases?

I have read that a single partition of DynamoDB has a size limit of 10 GB. Does this mean that if all my data is smaller than 10 GB, I have only one partition?
There is also a limit of 3000 RCUs or 1000 WCUs on a single partition. Does this mean this is also the limit for a small database which has only one partition?
I use the billing mode PAY_PER_REQUEST. On the database there are short usage peaks of approximately 50 MB of data, and then there is nothing for hours. How can I design the database to get the best peak performance? Or is DynamoDB a bad option for this use case?
How to design a database to get the best performance, and picking the right database... these are deep questions.
DynamoDB works well for a wide variety of use cases. On the back end it uses partitions. You rarely have to think about partitions until you're at the high-end of scale. Are you?
Partition keys are used as a way to map data to partitions, but it's not 1-to-1. If you don't follow the best practice guidance and use one PK value, the database may still split the items across back-end partitions to spread the load. Just don't use a Local Secondary Index (LSI), because an LSI requires all items with the same partition key to stay on a single partition, which prohibits this ability. The details of the mapping depend on your usage pattern.
One physical partition will be 10 GB or less, and has the 3,000 read unit and 1,000 write unit limit, which is why the database will spread load across partitions. If you use a lot of PK values, you make it more straightforward for the database to do this.
If you're at a high enough scale to hit the performance limits, you'll have an AWS account manager you can ask to hook you up with a DynamoDB specialist.
A given partition key can't receive more than 3k RCUs / 1k WCUs worth of requests at any given time, and can't store more than 10 GB in total if you're using an LSI (without an LSI, an item collection can grow past 10 GB, assuming you're using a sort key). If your data definitely fits within those limits, there's no reason you can't use DDB with a single partition key value (and thus a single partition). It would still be better to plan for a design that could scale.
The right design for you will depend on what your data model and access patterns look like. Given what you've described of some kind of periodic job, a timestamp could be used (although timestamps create hotspots you should be careful of). If you've got some other kind of unique id, like user_id or device_id, etc., that would be a better choice. There is some great documentation on that here.
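As a rough illustration of that advice, here is a hedged sketch of a table definition with on-demand billing and a high-cardinality device_id partition key plus a timestamp sort key; all names are made up for the example:

```python
import boto3

client = boto3.client("dynamodb")

# On-demand billing matches the "short peaks, then nothing for hours" pattern,
# and a high-cardinality partition key lets DynamoDB spread load across partitions.
client.create_table(
    TableName="readings",
    BillingMode="PAY_PER_REQUEST",
    AttributeDefinitions=[
        {"AttributeName": "device_id", "AttributeType": "S"},
        {"AttributeName": "ts", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "device_id", "KeyType": "HASH"},  # partition key
        {"AttributeName": "ts", "KeyType": "RANGE"},        # sort key (timestamp)
    ],
)
```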

How do you synchronize related collections in Cosmos Db?

My application needs to support lookups for invoices by invoice id and by customer. For that reason I created two collections in which I store the (exact) same invoice documents:
InvoicesById, with partition key /InvoiceId
InvoicesByCustomerId, with partition key /CustomerId
Apparently you should use partition keys when doing queries, and since there are two query patterns I need two collections. I guess there may be more in the future.
Updates are primarily done to the InvoicesById collection, but then I need to replicate the change to InvoicesByCustomer (and others) as well.
Are there any best practice or sane approaches how to keep collections in sync?
I'm thinking change feeds and whatnot. I want to avoid writing this sync code and risking inconsistencies due to missing transactions between collections (etc). Or maybe I'm missing something crucial here.
Change feed will do the trick, though I would suggest taking a step back before brute-forcing the problem.
Please find a detailed article describing the split issue here: Azure Cosmos DB. Partitioning.
Based on the Microsoft recommendation, for maintainable data growth you should select the partition key with the highest cardinality (in your case I assume it will be InvoiceId), for the main reason:
Spread request unit (RU) consumption and data storage evenly across all logical partitions. This ensures even RU consumption and storage distribution across your physical partitions.
You don't need to create a separate container with a CustomerId partition key, as it won't give you the desired (and, most importantly, maintainable) performance in the future, and it might result in physical partition data skew when too many invoices are linked to the same customer.
To get optimal and scalable query performance you most probably need InvoiceId as the partition key and an indexing policy that covers CustomerId (and others in the future).
There will be a slight RU overhead (definitely not a multiplication of RUs, but rather a couple of additional RUs per request) when the data you're querying is distributed across a number of physical partitions (PPs), but it will be negligible compared to the issues that occur once the data grows beyond 50, 100, 150 GB.
Why might CustomerId not be the best partition key for data sets that are expected to grow beyond 50 GB?
The main reason is that Cosmos DB is designed to scale horizontally, and the provisioned throughput per PP is limited to [total provisioned per container (or DB)] / [number of PPs].
Once a PP split occurs due to exceeding the 50 GB size, the max throughput for the existing PPs as well as the two newly created PPs will be lower than it was before the split.
So imagine the following scenario (consider days as the measure of time between actions):
[Day 1] You've created a container with 10k RUs provisioned and CustomerId as the partition key (which will generate one underlying physical partition, PP1). Maximum throughput per PP is 10k/1 = 10k RUs.
[Day 2] Gradually adding data to the container, you end up with 3 big customers holding C1 [10GB], C2 [20GB] and C3 [10GB] of invoices.
[Day 3] When another customer is onboarded to the system with C4 [15GB] of data, Cosmos DB has to split PP1's data into two newly created partitions, PP2 (30GB) and PP3 (25GB). Maximum throughput per PP is 10k/2 = 5k RUs.
[Day 4] Two more customers, C5 [10GB] and C6 [15GB], are added to the system and both end up in PP2, which leads to another split -> PP4 (20GB) and PP5 (35GB). Maximum throughput per PP is now 10k/3 = 3.333k RUs.
IMPORTANT: As a result, on [Day 2] C1's data could be queried with up to 10k RUs, but on [Day 4] with at most 3.333k RUs, which directly impacts the execution time of your queries.
This is the main thing to remember when designing partition keys in the current version of Cosmos DB (as of 12.03.21).
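To make the recommendation above concrete, here is a hedged sketch with the azure-cosmos Python SDK: partition on /InvoiceId and serve the customer lookup as a cross-partition query instead of a second container. The account URL, key, and database/container names are placeholders:

```python
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
db = client.create_database_if_not_exists("billing")

# Partition on the high-cardinality /InvoiceId, as suggested above.
invoices = db.create_container_if_not_exists(
    id="invoices",
    partition_key=PartitionKey(path="/InvoiceId"),
)

# Point reads by invoice id stay inside one logical partition:
# invoice = invoices.read_item(item=invoice_id, partition_key=invoice_id)

# Lookups by customer become cross-partition queries (a few extra RUs per request).
results = invoices.query_items(
    query="SELECT * FROM c WHERE c.CustomerId = @cid",
    parameters=[{"name": "@cid", "value": "customer-42"}],
    enable_cross_partition_query=True,
)
for doc in results:
    print(doc["InvoiceId"])
```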
What you are doing is a good solution. Different queries require different partition keys on different Cosmos DB containers holding the same data.
How to sync the two containers: use triggers from the first container.
https://devblogs.microsoft.com/premier-developer/synchronizing-azure-cosmos-db-collections-for-blazing-fast-queries/
Cassandra has a feature called Materialized Views for this exact problem, abstracting away the sync work. Maybe some day the same feature will be included in Cosmos DB.
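If you do go with two containers, a minimal change-feed sync sketch with the azure-cosmos Python SDK could look like the following. The names are placeholders, and a production worker would checkpoint a continuation token and poll rather than re-reading the feed from the beginning:

```python
from azure.cosmos import CosmosClient

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
db = client.get_database_client("billing")
source = db.get_container_client("InvoicesById")
target = db.get_container_client("InvoicesByCustomerId")

# Read everything the change feed has recorded so far and mirror it into the
# second container, which uses a different partition key path.
for doc in source.query_items_change_feed(is_start_from_beginning=True):
    target.upsert_item(doc)
```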

DocumentDB partitions sizes

According to the docs, documents with different partition keys may end up in the same partition, but documents with the same partition key are guaranteed to end up in the same partition.
Now, let's consider a case where you have a partition key with cardinality = 100 (for example, 100 tenants).
Initially, all data is roughly equally distributed across partitions.
Let's say you end up with partitions of about 50 GB in size. I would assume in that case you might have a few partition keys contained within the same partition. Then, all of a sudden, 2 of your tenants grow exponentially and go to 200 GB in size.
Since a partition has a 250 GB limit, now you're in trouble.
Questions:
How is this being solved?
Does DocumentDB partitioning handle this by moving data to separate partitions?
Should we (and are we even able to) view data/storage consumption per partitionKey (not partition)?
Could someone shed a bit of light on these dilemmas? I couldn't find answers to these specific questions in the docs.
Currently, the logical partition for a single partition key cannot exceed 10 GB. It means you have to ensure that at any given point in time your logical partition does not exceed 10 GB.
Source MSDN
A logical partition is a partition within a physical partition that stores all the data associated with a single partition key value. A logical partition has a 10 GB max.
On your questions:
How is this being solved?
Choose an appropriate partition key and ensure it is well balanced. If you anticipate that a tenant's data might grow beyond 10 GB, then having the tenant id as the partition key is not an option. You have to have something else as a partition key that can scale.
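One common way to get such a scalable key is a synthetic partition key that combines the tenant id with another component. A hedged sketch (the property names and the month bucket are assumptions for illustration):

```python
import datetime


def synthetic_partition_key(tenant_id: str, created: datetime.date) -> str:
    """Combine tenant id with a time bucket so no single tenant is forced into one 10 GB logical partition."""
    return f"{tenant_id}-{created.strftime('%Y-%m')}"  # e.g. "tenant42-2024-03"
```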
Does DocumentDB partitioning handle this by moving data to separate partitions?
Yes, Cosmos DB will take care of physical partition handling.
Should we (and are we even able to) view data/storage consumption per partitionKey (not partition)?
Yes. In the Azure portal, go to your Azure Cosmos DB account, click on Metrics in the Monitoring section, and then on the right pane click on the Storage tab to see how your data is partitioned across the different physical partitions.
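To get a rough per-partition-key view yourself, one option is to count documents per partition key value from your own code. A hedged sketch; the account, database, container names, the tenantId property, and the list of tenants are all assumptions:

```python
from azure.cosmos import CosmosClient

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
container = client.get_database_client("appdb").get_container_client("tenants")

tenant_ids = ["tenant-1", "tenant-2", "tenant-3"]  # your known partition key values

for tid in tenant_ids:
    # Scope the aggregate to a single partition key value.
    count = list(container.query_items(
        query="SELECT VALUE COUNT(1) FROM c",
        partition_key=tid,
    ))[0]
    print(tid, "documents:", count)
```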

What is number of partitions after DynamoDB restore?

There is a backup and restore feature for DynamoDB. The documentation says that when you restore a table, the read and write capacity will remain the same as on the source table at the time you took the backup.
The destination table is set with the same provisioned read capacity
units and write capacity units as the source table, as recorded at the
time the backup was requested.
But what is the total number of partitions for the destination table in this case? Your original source table can have a lot of partitions with small read and write capacity. How is this going to be reflected?
An interesting side effect of this is that you can use it to reduce your partition count. For instance, suppose you did a rapid table load with a high WCU count and now need to reduce the partition count to improve performance. You can reduce your WCU and RCU levels to what you need, take the backup, and then restore it. This will reset your partition count.
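A hedged sketch of that backup-then-restore trick with boto3; the table names here are made up:

```python
import boto3

ddb = boto3.client("dynamodb")

# 1. (After lowering the table's provisioned WCU/RCU to the steady-state values you need.)
backup = ddb.create_backup(TableName="events", BackupName="events-resize")

# 2. Restore into a new table; it comes back with the provisioned capacity recorded
#    at backup time, and its partitions are rebuilt for that capacity.
ddb.restore_table_from_backup(
    TargetTableName="events-restored",
    BackupArn=backup["BackupDetails"]["BackupArn"],
)
```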

Physical partitions - Azure CosmosDB

We are evaluating Azure Cosmos DB as a MongoDB replacement. We have a huge collection of 5 million documents, and each document is about 20 KB in size. The total size of the collection in Mongo is around 50 GB and we expect it to be 15% more in Cosmos because of JSON size. Also, there is a yearly increase of about 1.6 million documents. Our throughput requirement is around 10000 queries per second. The queries can be for a single document or a group of documents. A query for a single document takes around 5 RU, and for multiple documents around 10 to 20 RU.
To get the required throughput, we need to partition the collection. 
I would like to get answers to the questions below:
How many physical partitions are used by Cosmos DB internally? The portal metrics shows only 10 Partitions. Is this always the case?
What is the maximum size of each physical partition? The portal metrics say it is 10 GB. How can we store more than 100 GB of data?
What is the maximum RU per partition? Do we get throttled, when a single partition becomes very hot to query?
These are the starting hurdles we want to overcome before we can actually make further headway into Cosmos DB adoption.
The number of physical partitions is managed by the Cosmos service. Generally you start out with 10 but if more are required the system will add them for you transparently.
The maximum size of a physical partition shouldn't be a concern of your application. When you create a partitioned collection you are dealing with "logical partitions" not physical ones. Cosmos will ensure that all documents that are part of a logical partition (have the same partition key) will always be placed together on one of the physical partitions. However as indicated in part 1 Cosmos will take care of ensuring that you have an appropriate number of physical partitions to store your data. Put another way, any given physical partition will be home to many logical partitions and these can be load balanced and moved around as needed.
Maximum RU per physical partition is your total RU/s divided by the number of physical partitions. So if you have a 10000 RU collection with 10 physical partitions you're actually limited to 1000 RU per physical partition. For this reason it is important to pick appropriate logical partition keys for your documents. If you create hot spots you can be throttled below your total provisioned RUs.
I recommend that you spend some time reading about partitioning and scale with Cosmos. The documentation and video available on this page are quite helpful. Here is some additional information copied directly from that page:
You provision a Cosmos DB container with T requests/s throughput
Behind the scenes, Cosmos DB provisions partitions needed to serve T requests/s. If T is higher than the maximum throughput per partition t, then Cosmos DB provisions N = T/t partitions
Cosmos DB allocates the key space of partition key hashes evenly across the N partitions. So, each partition (physical partition) hosts 1-N partition key values (logical partitions)
When a physical partition p reaches its storage limit, Cosmos DB seamlessly splits p into two new partitions p1 and p2 and distributes values corresponding to roughly half the keys to each of the partitions. This split operation is invisible to your application.
Similarly, when you provision throughput higher than t*N throughput, Cosmos DB splits one or more of your partitions to support the higher throughput
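Plugging in the numbers used in this answer, the arithmetic looks like this (a simple illustration, not an API call):

```python
# With T = 10,000 RU/s spread over N = 10 physical partitions,
# each physical partition can serve at most T / N RU/s.
T = 10_000                  # provisioned throughput on the collection (RU/s)
N = 10                      # physical partitions reported by the portal
per_partition_limit = T / N
print(per_partition_limit)  # 1000.0 -> a single hot partition is throttled at ~1,000 RU/s
```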
