We had a period of latency in our application that was directly correlated with latency in DynamoDB, and we are trying to figure out what caused it.
During that time, the consumed reads and consumed writes for the table were normal (much below the provisioned capacity) and the number of throttled requests was also 0 or 1. The only thing that increased was the SuccessfulRequestLatency.
The high latency occurred during a period where we were doing a lot of automatic writes. In our use case, writing to dynamo also includes some reading (to get any existing records). However, we often write the same quantity of data in the same period of time without causing any increased latency.
Is there any way to understand what contributes to an increase in SuccessfulRequestLatency when it seems that we have provisioned enough read capacity? Is there any way to diagnose the latency caused by this set of writes to DynamoDB?
You can dig deeper by checking the Get Latency and Put Latency in CloudWatch.
Since, as you have already mentioned, there was no throttling, your writes involve some reading, and the same volume of writes at other times doesn't cause any latency, you should check what exactly in the read operations is causing this.
Check the SuccessfulRequestLatency metric while including the Operation dimension as well. Start with GetItem and BatchGetItem; if that doesn't help, include Scan and Query as well.
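If it helps, here is a rough boto3 sketch of that metric query (the table name, region, and time window are placeholders you would swap for your own) that pulls SuccessfulRequestLatency broken out by the Operation dimension:

import datetime
import boto3

TABLE_NAME = "my-table"   # placeholder table name
cloudwatch = boto3.client("cloudwatch", region_name="us-west-2")

end = datetime.datetime.utcnow()
start = end - datetime.timedelta(hours=3)   # window covering the latency spike

for operation in ["GetItem", "BatchGetItem", "Query", "Scan"]:
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/DynamoDB",
        MetricName="SuccessfulRequestLatency",
        Dimensions=[
            {"Name": "TableName", "Value": TABLE_NAME},
            {"Name": "Operation", "Value": operation},
        ],
        StartTime=start,
        EndTime=end,
        Period=300,                # 5-minute buckets
        Statistics=["Average"],
    )
    for dp in sorted(stats["Datapoints"], key=lambda d: d["Timestamp"]):
        print(operation, dp["Timestamp"], round(dp["Average"], 2), "ms")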
High request latency can sometimes happen when DynamoDB is doing an internal failover of one of its storage nodes.
Internally, each DynamoDB storage partition is replicated across multiple nodes to provide a high level of fault tolerance. Occasionally one of those nodes will fail and a replacement node has to be introduced, and this can result in elevated latency for a subset of affected requests.
The advice I've had from AWS is to use a short timeout and a fast retry (e.g. 100ms) if your use-case is latency-sensitive. It's my understanding that only requests that hit the affected node experience increased latency, so within one or two retries you'll hit a different node and get a successful response, with minimal impact on your overall latency. Obviously it's hard to verify this, because it's not a scenario you can reproduce!
If you've got a support contract with AWS, it's well worth submitting a support ticket from the AWS console when events like this happen. They are usually able to provide an insight into what actually happened.
Note: If you're doing retries, remember to use exponential backoff to reduce the risk of throttling.
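If you go that route with one of the SDKs, here is a minimal boto3 sketch of the fast-timeout-plus-retry idea (the timeout values, table name, and key are illustrative placeholders, not recommendations):

import boto3
from botocore.config import Config

# Illustrative values: tune the timeouts and attempt count to your own latency
# budget. The "standard" retry mode applies exponential backoff with jitter.
config = Config(
    connect_timeout=0.2,
    read_timeout=0.2,     # fail fast so a retry can land on a healthy node
    retries={"max_attempts": 3, "mode": "standard"},
)

dynamodb = boto3.client("dynamodb", region_name="us-west-2", config=config)
response = dynamodb.get_item(
    TableName="my-table",                  # placeholder table
    Key={"id": {"S": "example-key"}},      # placeholder key schema
)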
Note: There are similar SO questions related to costs associated with writing to Firebase Realtime Database (such as here), but I have a question that specifically relates to the relationship between write volume and the resulting connection cost.
So according to the Firebase documentation, it looks like the Realtime Database does not directly charge for writing data, although "writes can lead to connection costs on your bill".
But I would like to know: are these connection costs variable, depending on the amount of data being written? Or are they the same no matter the size?
For example, if I update 10 MB of data once every minute, will this result in the same cost as writing only 10 KB of data once every minute?
I'm trying to figure out what degree of "write-efficiency" is worth my time.
The documentation you linked suggests an answer. Under "Protocol overhead" it states:
Each time a connection is established, this overhead, combined with any SSL encryption overhead, contributes to the connection costs. Although this isn't a lot of bandwidth for a single request, it can be a substantial part of your bill if your payloads are tiny or you make frequent, short connections.
This is pretty clearly saying that, due to necessary protocol overhead, frequent connections incur more expense than a single connection. It doesn't say that the connection costs are variable based on the size of the payload being written, so it stands to reason that the connection cost is fairly static. Think HTTP headers that are essentially the same for each connection.
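To put rough numbers on it, here is a back-of-envelope sketch (the per-connection overhead figure is an assumption for illustration; the documentation does not publish an exact number):

# Assume ~4 KB of TLS/protocol overhead per new connection (illustrative only).
OVERHEAD_PER_CONNECTION_KB = 4
CONNECTIONS_PER_DAY = 24 * 60            # one connection per minute

overhead_mb_per_day = OVERHEAD_PER_CONNECTION_KB * CONNECTIONS_PER_DAY / 1024
print(f"~{overhead_mb_per_day:.1f} MB/day of connection overhead, regardless of payload size")

for payload_kb in (10, 10 * 1024):       # 10 KB vs 10 MB written per minute
    payload_mb_per_day = payload_kb * CONNECTIONS_PER_DAY / 1024
    print(f"{payload_kb} KB per write -> {payload_mb_per_day:.0f} MB/day of payload")

With numbers like these, the connection overhead only matters when the payloads themselves are tiny; for 10 MB writes it is noise.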
I am seeing some throttles on my updates to a DynamoDB table. I know that throttling works on a per-second basis, that peaks above provisioned capacity can sometimes be absorbed, but that this is not guaranteed. I also know that one is supposed to distribute the load evenly, which I have not done.
BUT please look at the attached 1-minute average graphs from the metrics. The utilized capacity is way below the provisioned capacity. Where are these throttles coming from? Is it because all writes went to a particular shard?
There are no batch writes. The workload distribution is something I cannot easily control.
DynamoDB is built on the assumption that to get the full potential out of your provisioned throughput your reads and writes must be uniformly distributed over space (hash/range keys) and time (not all coming in at the exact same second).
Based on the allocated throughput in your graphs you are most likely still at one shard, but it is possible that there are two or more shards if you have previously raised the throughput above the current level and then lowered it back down to where it is now. While this is something to be mindful of, it likely is not what is causing this throttling behavior directly. If you have a lot of data in your table (over 10 GB), then you will definitely have multiple shards. That would mean you likely have a lot of cold data in your table, which may be contributing to this issue, but that seems less likely.
The most likely issue is that you have some hot keys. Specifically, you have one or just a few records that are receiving a very high number of read or write requests, and this is resulting in throttling. Essentially, DynamoDB can support massive IOPS for both writes and reads, but you can't apply all of those IOPS to just a few records; ideally they need to be distributed uniformly across all of the records.
Since the number of throttles you were showing is on the order of 10s to 100s, it may not be something to worry about. As long as you are using the official AWS SDK, it will automatically retry requests several times with exponential backoff before completely giving up.
While it is difficult in many circumstances to control the distribution of reads and writes to a table, it may be worth taking another look at your hash/range key design to make sure it is really optimal for your pattern of reads and writes. For reads, you can also employ caching through Memcached or Redis, even with a cache that expires after just a few minutes or seconds, to help reduce the impact of hot keys. For writes, you would need to look at the logic in the application to make sure there are not any unnecessary writes being performed that could be causing this issue.
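If the access pattern allows it, one common mitigation for a hot hash key is write sharding: append a calculated suffix to the key so writes to the same logical item fan out across partitions. A rough sketch, where the table, attribute names, and shard count are all made up for illustration:

import random
import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-west-2")
table = dynamodb.Table("events")   # hypothetical table

NUM_SHARDS = 10                    # illustrative fan-out factor

def put_event(customer_id, event):
    # Spread writes for one hot customer_id across NUM_SHARDS hash keys.
    # Readers must then query all suffixes (customer_id#0 .. customer_id#9)
    # and merge the results.
    shard = random.randint(0, NUM_SHARDS - 1)
    table.put_item(Item={
        "pk": f"{customer_id}#{shard}",   # sharded hash key
        "sk": event["timestamp"],         # range key, e.g. an ISO-8601 timestamp
        **event,
    })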
One last point related to batch writes: a batch operation in DynamoDB does not reduce the read or write capacity the individual child requests consume; it simply reduces the overhead of making multiple HTTP requests. So while batch requests generally help with throughput, they are not useful for reducing the likelihood of throttling in DynamoDB.
In the DynamoDB documentation and in many places around the internet I've seen that single-digit ms response times are typical, but I cannot seem to achieve that even with the simplest setup. I have configured a t2.micro EC2 instance and a DynamoDB table, both in us-west-2, and when running the command below from the AWS CLI on the EC2 instance I get responses averaging about 250 ms. The same command run from my local machine (Denver) averages about 700 ms.
aws dynamodb get-item --table-name my-table --key file://key.json
When looking at the CloudWatch metrics in the AWS console it says the average get latency is 12 ms though. If anyone could tell me what I'm doing wrong or point me in the direction of information where I can solve this on my own I would really appreciate it. Thanks in advance.
The response times you are seeing are largely due to the cold-start cost of the AWS CLI. When running your get-item command, the CLI has to get loaded into memory, fetch temporary credentials (if using an EC2 IAM role on your t2.micro instance), and establish a secure connection to the DynamoDB service. Only after all that is completed does it execute the get-item request and finally print the results to stdout. Your command also has to read the key.json file off the filesystem, which adds additional overhead.
My experience running on a t2.micro instance is that the AWS CLI has around 200 ms of overhead when it starts, which seems in line with what you are seeing.
This will not be an issue for long-running programs, as they only pay a similar overhead price at start time. I run a number of web services on t2.micro instances which work with DynamoDB, and the DynamoDB response times are consistently sub-20 ms.
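If you want to see this yourself, a quick timing sketch from a long-lived process (table name and key are placeholders) using a single reused boto3 client shows the pattern: the first call pays the connection and TLS setup, and subsequent calls are much faster:

import time
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-west-2")

for i in range(5):
    start = time.perf_counter()
    dynamodb.get_item(
        TableName="my-table",                # placeholder
        Key={"id": {"S": "example-key"}},    # placeholder key
    )
    elapsed_ms = (time.perf_counter() - start) * 1000
    # Expect the first iteration to be the slowest (connection + TLS handshake);
    # later iterations reuse the connection and should be far lower.
    print(f"request {i}: {elapsed_ms:.1f} ms")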
There are a lot of factors that go into the latency you will see when making a REST API call. DynamoDB can provide latencies in the single digit milliseconds but there are some caveats and things you can do to minimize the latency.
The first thing to consider is distance and the speed of light. Expect the best latency when accessing DynamoDB from an EC2 instance located in the same region. It is normal to see higher latencies when accessing DynamoDB from your laptop or another data center. Note that each region also has multiple data centers.
There are also performance costs from the client side based on the hardware, network connection, and programming language that you are using. When you are talking millisecond latencies the processing time on your machine can make a difference.
Another likely source of latency is the TLS handshake. Establishing an encrypted connection requires multiple round trips and computation on both sides to get the encrypted channel established. However, as long as you are using keep-alive for the connection, you will only pay this overhead for the first query. Successive queries will be substantially faster since they do not incur this initial penalty. Unfortunately the AWS CLI isn't going to keep the connection alive between requests, but the AWS SDKs for most languages will manage this for you automatically.
Another important consideration is that the latency that DynamoDB reports in the web console is the average. While DynamoDB does provide reliable average low double digit latency, the maximum latency will regularly be in the hundreds of milliseconds or even higher. This is visible by viewing the maximum latency in CloudWatch.
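If you want to see that tail, you can ask CloudWatch for the Maximum statistic alongside the Average; a short sketch along the same lines as the earlier metric query (table name and window are placeholders):

import datetime
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-west-2")
end = datetime.datetime.utcnow()

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/DynamoDB",
    MetricName="SuccessfulRequestLatency",
    Dimensions=[
        {"Name": "TableName", "Value": "my-table"},   # placeholder
        {"Name": "Operation", "Value": "GetItem"},
    ],
    StartTime=end - datetime.timedelta(hours=1),
    EndTime=end,
    Period=60,
    Statistics=["Average", "Maximum"],   # compare the mean against the tail
)
for dp in sorted(stats["Datapoints"], key=lambda d: d["Timestamp"]):
    print(dp["Timestamp"], f"avg={dp['Average']:.1f} ms", f"max={dp['Maximum']:.1f} ms")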
They recently announced DAX (Preview).
Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for DynamoDB that delivers up to a 10x performance improvement – from milliseconds to microseconds – even at millions of requests per second. For more information, see In-Memory Acceleration with DAX (Preview).
My application requires periodic updates in the background (Android system). To avoid having continuous connections (which would very quickly saturate the concurrent connections limit), I have my app call goOffline() after an update.
However, doing calculations on my reported bandwidth usage (in the analytics area) is showing much more than I would have expected for the few objects I have listeners registered on. Is the data usage being dominated by calling goOnline() due to extra data being transferred when re-registering the active listeners when my app updates? If this is true I'm going to have problems scaling this app.
Yes, but it's almost certainly negligible.
For example, there's going to be a little overhead in establishing the new TCP session, and even more involved in the SSL negotiation. There's a little more overhead that's specific to Firebase, but nothing beyond the order of magnitude involved in the lower level networking stuff.
So, if you're counting individual bytes, yes, you're going to notice. However, in the vast majority of use cases you aren't going to notice.
We're currently having an issue on our production servers and would like to try to replicate it in our dev environment. I'm currently awaiting access to our Performance Monitoring Tool, and while waiting I would like to play with it a little.
Since I suspect host throttling in prod, I'm thinking of forcing hosts to throttle in dev to see if that recreates the issue.
Is there a way to do this?
As others have mentioned, monitoring the throttling counters and other counters like memory and WIP messages is a must to see what is going on in your production server. I would also recommend setting up a SCOM alert on throttling states of 3+ (publishing + delivery states), if you have SCOM.
Message throughput can grind to a halt, especially in the memory (4, 5) and Queue Size (6) states. States 1 and 2 are generally short-lived (e.g. the arrival of a large batch of messages), and BizTalk recovers within a few seconds.
Simulating the memory state in your Dev environment should be straightforward by tweaking the throttling thresholds (obviously not something to be taken lightly in production!)
e.g. to trigger the Memory threshold states - AFAIK the lowest memory usage threshold you can set is 101 MB. Running a load test in dev should then be able to reproduce the throttling.
There is also apparently a user-based throttling override to set states 10 and 11, although I haven't actually tried this.
Some other experience on avoiding throttling:
(Caveat - I don't have an active BizTalk 2006/R2 setup - this is for 2009 / 2010)
If you do a lot of asynchronous processing (e.g. queue receives), ensure that you have split functionality into separate Receive, Processing and Send hosts. This way you can adjust the throttling so the asynchronous Receive hosts trigger much earlier than the Processing and Send hosts - this should have the effect of constricting new incoming messages to the MessageBox while allowing existing messages to complete processing.
On 64-bit hosts, the default 25% memory host usage throttling level is usually an unnecessary liability - we increased this to 50% on a 4 GB server, following Yossi Dahan's recommendation.
Note that suspended messages count toward throttling state 6 - ensure that you have a strategy for dealing with suspended messages (and obviously ensure that the SQL Agent jobs are running!).