Billing in CosmosDb when increasing RU for 5 minutes - azure-cosmosdb

How would billing work in Azure CosmosDb if I use the SDK to increase the throughput for a small amount of time, like 5 minutes?
Will I be charged one hour of the max RU or just a fraction of the hour?

Indeed, Cosmos DB charges you for the highest provisioned throughput within each hour. It is also cycle-based, so if you increase at 01:58 and decrease at 02:03 (the hour boundary might not be the actual cycle time) you could be charged for 2 hours at the higher rate.
Reserved RUs/second (per 100 RUs, 400 RUs minimum) £0.006/hour
"You're billed the flat rate for each hour the container or database exists, regardless of usage or if the container or database is active for less than an hour. For example, if you create a container or database and delete it 5 minutes later, your bill will reflect 1 hour."
More info here: https://azure.microsoft.com/en-us/pricing/details/cosmos-db/
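For reference, a minimal sketch of the scale-up/scale-down flow with the azure-cosmos Python SDK (4.x); the account URL, key, and database/container names are placeholders, and the container-level throughput methods are assumed from the 4.x SDK:

```python
from azure.cosmos import CosmosClient

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
container = client.get_database_client("<db>").get_container_client("<container>")

print(container.get_throughput().offer_throughput)  # current provisioned RU/s

container.replace_throughput(10000)  # scale up for the burst workload
# ... run the ~5-minute burst ...
container.replace_throughput(400)    # scale back down

# Billing note: any hour in which 10000 RU/s was the highest provisioned value
# is charged at 10000 RU/s, even though it was only set for a few minutes.
```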

Related

DynamoDB ConsumedWriteCapacityUnits vs Consumed CloudWatch Metrics for 1 second Period

I am confused by this live chart: ConsumedWriteCapacityUnits is exceeding the provisioned units, while "consumed" is way below. Do I have a real problem or not?
This seems to only show for the one-second period, not the one-minute period.
Your period is wrong for the metrics. DynamoDB emits metrics at the following periods:
ConsumedCapacity: 1 min
ProvisionedCapacity: 5 min
For ConsumedCapacity you should divide the metric by the period, but only use a period of at least 1 minute.
Exceeding provisioned capacity for short periods of time is fine, as burst capacity will allow you to do so. But if you exceed it for long periods it will lead to throttling.
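As an illustration, a small boto3 sketch that pulls ConsumedWriteCapacityUnits at the 1-minute period the metric is emitted at, then divides by the period to get an average per-second rate (table name and region are placeholders):

```python
import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
end = datetime.now(timezone.utc)

resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/DynamoDB",
    MetricName="ConsumedWriteCapacityUnits",
    Dimensions=[{"Name": "TableName", "Value": "my-table"}],
    StartTime=end - timedelta(hours=1),
    EndTime=end,
    Period=60,              # ConsumedCapacity is emitted at 1-minute granularity
    Statistics=["Sum"],
)

for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    # divide the per-minute sum by the period to compare with provisioned WCU/s
    print(point["Timestamp"], round(point["Sum"] / 60.0, 2))
```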

Am I going to be charged for 8 GB memory for a Cloud Function even if the function is used only once or twice per month?

I have a Cloud Function and I reserve 8 GB RAM for it, but I only call it a few times per month. Will I be charged for the whole month, or just for every time I hit the function?
The Cloud Functions pricing page says:
Fees for compute time are variable based on the amount of memory and CPU provisioned for the function. Units used in this calculation are:
GB-Seconds
1 GB-second is 1 second of wallclock time with 1GB of memory provisioned
GHz-Seconds
1 GHz-second is 1 second of wallclock time with a 1GHz CPU provisioned
So you're charged per second that the memory and CPU are active. While the Cloud Function is not active, you are not charged for that time.
If your function is only active twice per month, you will only be charged for the time period it is active those two times.
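A back-of-the-envelope sketch of what that means for this case; the per-invocation runtime below is an assumed example number, not something from the question or the pricing page:

```python
# Assumed example numbers: 8 GB of memory, 2 invocations per month,
# 30 seconds of wallclock time per invocation.
memory_gb = 8
invocations_per_month = 2
seconds_per_invocation = 30

gb_seconds = memory_gb * seconds_per_invocation * invocations_per_month
print(gb_seconds)  # 480 GB-seconds billed for the whole month

# You pay for these GB-seconds (plus the matching GHz-seconds and the
# per-invocation fee), not for the 8 GB sitting reserved all month.
```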

What is the total number of tables that you can use in the DynamoDB free tier?

Ignoring capacity as my tables won't exceed 25 GB, since I have 25 RCU and 25 WCU free per month, does that mean I can have a maximum of 25 tables with 1 RCU and 1 WCU per table?
For my own development and learning purposes I may not need more tables, but for the sake of understanding, if I create the 26th table, then would I exceed the free tier?
From AWS Free Tier:
Amazon DynamoDB
25 GB of storage
25 Units of Write Capacity
25 Units of Read Capacity
Enough to handle up to 200M requests per month
The free tier is applied as a pricing discount. The first usage of the above quantities each month has no cost.
So, yes, you could create 25 tables with 1 RCU, 1 WCU and 1 GB of storage each and this would stay within the free tier. Any usage beyond these amounts would be charged at normal rates.
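To illustrate, capacity is provisioned per table, so each of those 25 tables would be created with 1 RCU / 1 WCU. A minimal boto3 sketch (table, key, and region names are placeholders):

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# one of the 25 tables, each carrying 1 RCU and 1 WCU of its own
dynamodb.create_table(
    TableName="table-01",
    KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
    AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
    ProvisionedThroughput={"ReadCapacityUnits": 1, "WriteCapacityUnits": 1},
)
```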

Why is Cosmos DB throttling when the RU limit is not crossed

From my understanding, if the consumed RUs exceed what is provisioned, there will be throttling. But from what I see below, this has not even crossed the RU threshold (actual value 2457), yet I am getting HTTP 429.
But in another collection I see that it has crossed the RU threshold and there is no throttling.
I guess "Max consumed RU/s per partition key range" is not aggregated over 1 minute. So you have more than one request in that exact throttled minute, and their cost sums up with the 2457 RU request, pushing you over the provisioned limit.
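If the bursts cannot be avoided by raising RU/s, the usual mitigation is to back off and retry on 429. A minimal sketch with the azure-cosmos Python SDK (the SDK already retries throttled requests internally; this only shows handling the case where retries are exhausted, and the function and document names are placeholders):

```python
import time
from azure.cosmos import exceptions

def upsert_with_backoff(container, doc, attempts=5):
    for _ in range(attempts):
        try:
            return container.upsert_item(doc)
        except exceptions.CosmosHttpResponseError as err:
            if err.status_code != 429:
                raise
            time.sleep(1)  # back off briefly before retrying the throttled write
    raise RuntimeError("still throttled after retries")
```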

Impact of Decrease of dynamodb WCU

I have a requirement where I need to initialise my DynamoDB table with a large volume of data, say around 1M items in 15 minutes, so I will have to provision around 10k WCU. But after that my load is ~1k writes per second, so I will decrease the WCU from 10k to 1k. Is there any performance drawback or issue in decreasing WCU?
Thanks
In general, assuming the write requests don't exceed the provisioned write capacity units (you have not mentioned the item size), there should not be any performance issue.
If at any point you anticipate traffic growth that may exceed your provisioned throughput, you can simply update your provisioned throughput values via the AWS Management Console or Amazon DynamoDB APIs. You can also reduce the provisioned throughput value for a table as demand decreases. Amazon DynamoDB will remain available while scaling its throughput level up or down.
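For reference, a minimal boto3 sketch of the scale-up/scale-down described in the question (the table name and the read-capacity value are placeholders; only the WCU numbers come from the question):

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# before the 15-minute bulk load: raise write capacity to 10k WCU
dynamodb.update_table(
    TableName="my-table",
    ProvisionedThroughput={"ReadCapacityUnits": 100, "WriteCapacityUnits": 10000},
)

# ... load ~1M items ...

# after the load: drop back to the steady-state 1k WCU
# (the table must be back in ACTIVE state before this second update)
dynamodb.update_table(
    TableName="my-table",
    ProvisionedThroughput={"ReadCapacityUnits": 100, "WriteCapacityUnits": 1000},
)
```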
Consider this scenario:
Assume the item size is 1.5 KB.
First, you would determine the number of write capacity units required per item, rounding up to the nearest whole number, as shown following:
1.5 KB / 1 KB = 1.5 --> 2
The result is two write capacity units per item. Now, you multiply this by the number of writes per second (i.e. 1K per second).
2 write capacity units per item × 1K writes per second = 2K write capacity units
In this scenario, with only 1K write capacity units provisioned, DynamoDB would return error code 400 on your extra requests.
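The same arithmetic as a small sketch (item size and write rate are the example numbers above):

```python
import math

item_size_kb = 1.5
writes_per_second = 1000
provisioned_wcu = 1000

wcu_per_item = math.ceil(item_size_kb / 1.0)       # 1 WCU per 1 KB, rounded up -> 2
required_wcu = wcu_per_item * writes_per_second    # 2000 WCU needed

# with only 1000 WCU provisioned, the excess writes get throttled (HTTP 400)
print(required_wcu, required_wcu > provisioned_wcu)
```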
If your application performs more reads/second or writes/second than your table's provisioned throughput capacity allows, requests above your provisioned capacity will be throttled and you will receive 400 error codes. For instance, if you had asked for 1,000 write capacity units and try to do 1,500 writes/second of 1 KB items, DynamoDB will only allow 1,000 writes/second to go through and you will receive error code 400 on your extra requests. You should use CloudWatch to monitor your request rate to ensure that you always have enough provisioned throughput to achieve the request rate that you need.
Yes, there is a potential impact.
Once you write at a high TPS, more partitions get created, and the partition count cannot be reduced later on.
If this number ends up higher than what the application eventually needs, the provisioned throughput is spread across those partitions, which can cause problems.
Read more about DynamoDB partitions here:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.Partitions.html
