Impact of Decreasing DynamoDB WCU - amazon-dynamodb

I have a requirement where I need to initialise my DynamoDB table with a large volume of data, say around 1M items in 15 minutes, so I will have to provision about 10k WCU. After that my load is ~1k writes per second, so I will decrease the WCU from 10k to 1k. Is there any performance drawback or issue in decreasing WCU?
Thanks

In general, assuming the write requests don't exceed the provisioned write capacity units (you have not mentioned the item size), there should not be any performance issue.
If at any point you anticipate traffic growth that may exceed your provisioned throughput, you can simply update your provisioned throughput values via the AWS Management Console or Amazon DynamoDB APIs. You can also reduce the provisioned throughput value for a table as demand decreases. Amazon DynamoDB will remain available while scaling its throughput level up or down.
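If you are adjusting capacity programmatically rather than through the console, a minimal boto3 sketch of the scale-down step could look like this; the table name and the 1k steady-state values are just the figures from the question, and keep in mind that DynamoDB limits how many times per day you can decrease a table's provisioned throughput.

    import boto3

    dynamodb = boto3.client("dynamodb")

    # Drop provisioned capacity from the bulk-load level back to steady state.
    # "MyTable" and the capacity values are placeholders for this scenario.
    dynamodb.update_table(
        TableName="MyTable",
        ProvisionedThroughput={
            "ReadCapacityUnits": 1000,
            "WriteCapacityUnits": 1000,
        },
    )

    # Block until the table leaves the UPDATING state and is ACTIVE again.
    dynamodb.get_waiter("table_exists").wait(TableName="MyTable")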
Consider this scenario: assume the item size is 1.5 KB.
First, you would determine the number of write capacity units required per item, rounding up to the nearest whole number, as shown following:
1.5 KB / 1 KB = 1.5 --> 2
The result is two write capacity units per item. Now, you multiply this by the number of writes per second (i.e. 1K per second).
2 write capacity units per item × 1K writes per second = 2K write capacity units
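The same arithmetic as a tiny Python sketch, using the example numbers above:

    import math

    def required_wcu(item_size_kb, writes_per_second):
        """One WCU covers a write of up to 1 KB, so round the item size up to whole KB."""
        wcu_per_item = math.ceil(item_size_kb / 1.0)
        return wcu_per_item * writes_per_second

    # 1.5 KB items at 1,000 writes per second -> 2 WCU per item * 1,000 = 2,000 WCU
    print(required_wcu(1.5, 1000))  # 2000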
In this scenario, if the table were provisioned with only 1K WCU, DynamoDB would return error code 400 on the extra requests.
If your application performs more reads/second or writes/second than your table’s provisioned throughput capacity allows, requests above your provisioned capacity will be throttled and you will receive 400 error codes. For instance, if you had asked for 1,000 write capacity units and try to do 1,500 writes/second of 1 KB items, DynamoDB will only allow 1,000 writes/second to go through and you will receive error code 400 on your extra requests. You should use CloudWatch to monitor your request rate to ensure that you always have enough provisioned throughput to achieve the request rate that you need.

Yes, there is a potential impact.
Once you write at a high TPS, more partitions get created, and the number of partitions cannot be reduced later on.
If this number ends up higher than what the application eventually needs to run well, it can cause problems, because the reduced provisioned throughput is spread evenly across the larger number of partitions.
Read more about DynamoDB partitions here:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.Partitions.html
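To make the partition concern concrete, here is a rough sketch based on the older rule of thumb from the DynamoDB documentation (roughly 3,000 RCU and 1,000 WCU of capacity per partition, and about 10 GB of data per partition). The constants are approximations, and current DynamoDB behaviour (adaptive capacity, on-demand) softens the effect, so treat this purely as an illustration:

    import math

    def estimated_partitions(rcu, wcu, size_gb):
        """Rough, older rule-of-thumb estimate of partition count."""
        by_throughput = math.ceil(rcu / 3000 + wcu / 1000)
        by_size = math.ceil(size_gb / 10)
        return max(by_throughput, by_size)

    # Bulk loading at 10k WCU creates on the order of ~11 partitions...
    print(estimated_partitions(rcu=1000, wcu=10000, size_gb=2))  # 11
    # ...and the partition count does not shrink when you scale back to 1k WCU,
    # so the reduced throughput is spread across all of them.
    print(estimated_partitions(rcu=1000, wcu=1000, size_gb=2))   # 2 (what you'd actually need)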

Related

Azure Data Factory "write throughput budget" setting in Cosmos sink always caps RU usage to ~10%

We are using a Cosmos dataset as a sink for a data flow in our pipeline. The Cosmos database is configured to use 400 RUs during normal operations, but we upscale it during the pipeline run. The upscaling works flawlessly.
The pipeline consumes 100% of the provisioned throughput, as is expected. We would like to limit this to about 80%, so that our customers don't experience delays and timeout exceptions. According to the documentation, the "Write throughput budget" setting in the Cosmos sink is supposed to be "An integer that represents the RUs you want to allocate for this Data Flow write operation, out of the total throughput allocated to the collection". Unless I am mistaken, this means that you can set a limit on how many RUs the pipeline is allowed to consume.
However, no matter what value we use for "Write throughput budget", the pipeline will always consume ~10% of the total provisioned throughput. We have tested with a wide range of values, and the result is always the same. If we do not set a value, 100% of RUs are consumed, but ~10% is always used whether we set the value to 1, 500, 1000, or even 1200 (out of a total of 1000).
Does anyone know if this is a bug with the ADF Cosmos sink, or have I misunderstood what this setting is supposed to be? Is there any other way of capping how many Cosmos RUs an ADF pipeline is allowed to use?
EDIT:
This is definitely related to data size. Setting provisioned throughput to 10000 RUs and write throughput budget to 7500 uses ~85% of total RUs when we test with 300 000 documents. Using the same settings, but 10 000 000 documents, we see a consistent ~10% RU usage for the pipeline run.
The solution to our problem was to set "write throughput budget" to a much higher value than the provisioned throughput. Data size and the number of partitions used in the pipeline definitely have an effect on what settings you should use. For reference, we had ~10 000 000 documents of 455 bytes each. Setting throughput to 10 000 and write throughput budget to 60 000, ADF used on average ~90% of the provisioned throughput.
I recommend trial and error for your specific case, and to not be afraid to set the write budget to a much higher value than you think is necessary.
It will depend on the number of partitions. Check how many partitions you have at the sink.

Dynamodb Autoscaling not working fast enough

I'm running a simple API that gets an item from a DynamoDB table on each call. I have auto scaling set to a minimum of 25 and a maximum of 10,000.
However, if I send 15,000 requests with a tool like wrk or hey, I get about 1,000 502s:
DynamoDB's metrics show that reads are throttled.
The scaling activities log on the table shows that the RCUs were scaled to 99 but not more than that.
Lambda logs show that the function starts to take longer; it usually takes about 20 ms to run, but it starts running for 500, 1500, 3000 ms and then timing out (I'm assuming that's caused by the throttling).
Why isn't the autoscaling working better? It only scales up to 99 RCUs but my max is 10,000.
We ran into the same problem when testing DynamoDB autoscaling for short periods of time, and it turns out the problem is that the scaling events only happen after 5 minutes of elevated throughput (you can see this by inspecting the CloudWatch alarms that the autoscaling sets up).
This excellent blog post helped us solve this by creating a Lambda that responds to the CloudWatch API events and improves the responsiveness of the alarms to one minute: https://hackernoon.com/the-problems-with-dynamodb-auto-scaling-and-how-it-might-be-improved-a92029c8c10b
from: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/AutoScaling.html
What did you define as "target utilization"?
Target utilization is the ratio of consumed capacity units to provisioned capacity units, expressed as a percentage. Application Auto Scaling uses its target tracking algorithm to ensure that the provisioned read capacity of ProductCatalog is adjusted as required so that utilization remains at or near 70 percent.
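For reference, the scalable target and the target-tracking policy that sit behind the console's autoscaling settings can be set up with boto3 roughly like this; the table name is a placeholder, and the min/max and 70 percent target are the values discussed here:

    import boto3

    autoscaling = boto3.client("application-autoscaling")

    # Register the table's read capacity as a scalable target (min 25, max 10,000).
    autoscaling.register_scalable_target(
        ServiceNamespace="dynamodb",
        ResourceId="table/MyTable",                      # placeholder table name
        ScalableDimension="dynamodb:table:ReadCapacityUnits",
        MinCapacity=25,
        MaxCapacity=10000,
    )

    # Target-tracking policy: keep consumed/provisioned RCU near 70 percent.
    autoscaling.put_scaling_policy(
        PolicyName="MyTable-read-scaling",
        ServiceNamespace="dynamodb",
        ResourceId="table/MyTable",
        ScalableDimension="dynamodb:table:ReadCapacityUnits",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 70.0,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
            },
        },
    )

Note that the CloudWatch alarms this creates still need several minutes of sustained load before they fire, which is the delay described in the answer above.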
Also, I think that the main reason autoscaling does not work for you is that your workload might not stay elevated for long enough:
"DynamoDB auto scaling modifies provisioned throughput settings only when the actual workload stays elevated (or depressed) for a sustained period of several minutes"
DynamoDB auto scaling modifies provisioned throughput settings only when the actual workload stays elevated (or depressed) for a sustained period of several minutes. The Application Auto Scaling target tracking algorithm seeks to keep the target utilization at or near your chosen value over the long term.
Sudden, short-duration spikes of activity are accommodated by the table's built-in burst capacity. For more information, see Use Burst Capacity Sparingly.

Is there a way to overcome the 1 MB scan limit with asynchronous parallel scans in DynamoDB

When I am scanning the data, there is a limit of 1 MB per segment. Is it possible to get all of the data in one pass by scanning the segments asynchronously in parallel in DynamoDB?
When doing a parallel scan, each Scan API call will return at most 1MB of data. So, if you increase the number of segments enough, you will be able to get all of your data in one trip. According to the documentation, the maximum number of segments is 1 million. That means that as long as you have enough provisioned read throughput on your table, and the size of your table is less than 976GB, reading the entire table in one round trip is possible. Each 1MB page will incur 128 RCU if ConsistentRead=false. Therefore, if each partition supports up to 3000 Reads per second, each partition can support reading up to 23 segments in parallel. Dividing 1 million segments by 23 segments per partition yields 43479 partitions required to support a simultaneous read of 976GB.
To create a table with 43479 partitions or more, find the next largest power of 2. In this case, the next largest power of 2 to 43479 is 2^16=65536. Provision a table with 65536*750 = 49152000 WPS and 49152000 RPS to create it with 65536 partitions. The instantaneous RCU consumption of this parallel scan will be 128 * 1000000 = 128 million RCU, so you would need to re-provision your table at 128 million RPS before performing the parallel scan.
For any of this to work, you will need to request increases to your account-level and table level provisioned capacity limits. Otherwise, in us-east-1, default table provisioning limits are 40k rps and 40k wps per table. If you provision a table at 40k reads per second, you can read a maximum of 312 segments in parallel, or 312 MB.
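As a concrete illustration, a minimal parallel Scan with boto3 looks roughly like this; the table name and segment count are placeholders, and the throughput math above still decides how many segments you can realistically run at once:

    import boto3
    from concurrent.futures import ThreadPoolExecutor

    TABLE_NAME = "MyTable"   # placeholder table name
    TOTAL_SEGMENTS = 8       # each Scan call returns at most 1 MB per segment

    def scan_segment(segment):
        """Scan one logical segment, following pagination until it is exhausted."""
        # boto3 resources are not thread safe, so build a session per worker thread.
        table = boto3.session.Session().resource("dynamodb").Table(TABLE_NAME)
        items = []
        kwargs = {
            "Segment": segment,
            "TotalSegments": TOTAL_SEGMENTS,
            "ConsistentRead": False,   # eventually consistent halves the RCU cost
        }
        while True:
            resp = table.scan(**kwargs)
            items.extend(resp.get("Items", []))
            if "LastEvaluatedKey" not in resp:
                return items
            kwargs["ExclusiveStartKey"] = resp["LastEvaluatedKey"]

    with ThreadPoolExecutor(max_workers=TOTAL_SEGMENTS) as pool:
        results = list(pool.map(scan_segment, range(TOTAL_SEGMENTS)))

    all_items = [item for segment_items in results for item in segment_items]
    print(f"Scanned {len(all_items)} items across {TOTAL_SEGMENTS} segments")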

How to calculate Read Capacity Unit and Write Capacity Unit for DynamoDB

How to calculate RCU and WCU with the data given as: reading throughput of 32 GB/s and writing throughput of 16 GB/s.
DynamoDB Provisioned Throughput is based upon a certain size of units, and the number of items being written:
In DynamoDB, you specify provisioned throughput requirements in terms of capacity units. Use the following guidelines to determine your provisioned throughput:
One read capacity unit represents one strongly consistent read per second, or two eventually consistent reads per second, for items up to 4 KB in size. If you need to read an item that is larger than 4 KB, DynamoDB will need to consume additional read capacity units. The total number of read capacity units required depends on the item size, and whether you want an eventually consistent or strongly consistent read.
One write capacity unit represents one write per second for items up to 1 KB in size. If you need to write an item that is larger than 1 KB, DynamoDB will need to consume additional write capacity units. The total number of write capacity units required depends on the item size.
Therefore, when determining your desired capacity, you need to know how many items you wish to read and write per second, and the size of those items.
Rather than seeking a particular GB/s, you should be seeking a given number of items that you wish to read/write per second. That is the functionality that your application would require to meet operational performance.
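Those rules translate directly into a small sketch; the item size and request rates below are made-up example numbers, not values from the question:

    import math

    def read_capacity_units(item_size_kb, reads_per_second, strongly_consistent=True):
        """One RCU = one strongly consistent read/sec (or two eventually consistent) of up to 4 KB."""
        rcu_per_item = math.ceil(item_size_kb / 4.0)
        rcu = rcu_per_item * reads_per_second
        return rcu if strongly_consistent else math.ceil(rcu / 2)

    def write_capacity_units(item_size_kb, writes_per_second):
        """One WCU = one write/sec of an item up to 1 KB."""
        return math.ceil(item_size_kb / 1.0) * writes_per_second

    # 2 KB items: 500 eventually consistent reads/sec and 200 writes/sec
    print(read_capacity_units(2, 500, strongly_consistent=False))  # 250
    print(write_capacity_units(2, 200))                            # 400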
There are also some DynamoDB limits that would apply, but these can be changed upon request:
US East (N. Virginia) Region:
Per table – 40,000 read capacity units and 40,000 write capacity units
Per account – 80,000 read capacity units and 80,000 write capacity units
All Other Regions:
Per table – 10,000 read capacity units and 10,000 write capacity units
Per account – 20,000 read capacity units and 20,000 write capacity units
At 40,000 read capacity units x 4KB x 2 (eventually consistent) = 320MB/s
If my calculations are correct, your requirements are 100x this amount, so it would appear that DynamoDB is not an appropriate solution for such high throughputs.
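Spelling out that arithmetic (decimal units, numbers from the limits listed above):

    rcu_limit = 40_000                            # default per-table RCU limit above
    max_read_mb_per_s = rcu_limit * 4 * 2 / 1000  # 4 KB per RCU, x2 for eventually consistent
    print(max_read_mb_per_s)                      # 320.0 MB/s
    print(32_000 / max_read_mb_per_s)             # the 32 GB/s requirement is ~100x that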
Are your speeds correct?
Then comes the question of how you are generating so much data per second. A full-duplex 10GFC fiber runs at 2550MB/s, so you would need multiple fiber connections to transmit such data if it is going into/out of the AWS cloud.
Even 10Gb Ethernet only provides 10Gbit/s, so transferring 32GB would require 28 seconds -- and that's to transmit one second of data!
Bottom line: Your data requirements are super high. Are you sure they are realistic?
If you click on the Capacity tab of your DynamoDB table, there is a capacity calculator link next to Estimated cost. You can use that to determine the read and write capacity units along with the estimated cost.
Read capacity units depend on the type of read that you need (strongly consistent/eventually consistent), item size, and the throughput that you desire.
Write capacity units are determined by throughput and item size only.
For calculating item size you can refer to this; the original answer also included a screenshot of the calculator.

Read throughput in DynamoDB

Ok, so my understanding of read units is that it costs 1 read unit per item, unless the item exceeds 4 KB, in which case read units = ceiling(item size / 4 KB).
However when I submit a scan asking for 80 items (provisioned throughput is 100), the response returns a ConsumedCapacity of either 2.5 or 3 read units. This is frustrating because 97% of the provisioned hardware is not being used. Any idea why this might be the case?
What is your item size for the 80 items? Looking at the documentation here: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ProvisionedThroughputIntro.html
You can use the Query and Scan operations in DynamoDB to retrieve multiple consecutive items from a table in a single request. With these operations, DynamoDB uses the cumulative size of the processed items to calculate provisioned throughput. For example, if a Query operation retrieves 100 items that are 1 KB each, the read capacity calculation is not (100 × 4 KB) = 100 read capacity units, as if those items were retrieved individually using GetItem or BatchGetItem. Instead, the total would be only 25 read capacity units ((100 * 1024 bytes) = 100 KB, which is then divided by 4 KB).
So if your items are small, that would explain why Scan is not consuming as much capacity as you would expect. Also, note that Scan uses eventually consistent reads by default, which consume half of the read capacity units.
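If you want to see the consumed figure directly, you can ask the Scan to report it (the table name is a placeholder):

    import boto3

    table = boto3.resource("dynamodb").Table("MyTable")  # placeholder table name

    resp = table.scan(
        Limit=80,                         # matches the 80-item scan in the question
        ReturnConsumedCapacity="TOTAL",   # include the RCU cost in the response
        ConsistentRead=False,             # eventually consistent reads: half the RCU
    )
    print(len(resp["Items"]), "items,",
          resp["ConsumedCapacity"]["CapacityUnits"], "read capacity units")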
