Cosmos DB pricing [closed] - azure-cosmosdb

I do not understand the logic of Cosmos DB pricing. Let's say I have a database of the Provisioned Throughput type, where the price is measured in RU/s, and I run a specific query three times a day. In total I will read about 10 GB of data per month, i.e. 90 reads of ~111 MB each; let's assume each read costs 2,423 request units. Does that mean my cost will be 2423 RU / 100 units * $0.008 * 3 hours * 30 days = ~$17? Do I need to make more requests for it to be cheaper? Thanks.

Based on the pricing calculator, provisioned throughput of 2,500 RU/s will cost you $146.00 = 2500 RU / 100 * $0.008 * 730, where 730 is the number of hours in a month.
Please note that with the provisioned throughput model it doesn't matter how many times you query your container. Once the throughput is provisioned, you pay for it 24x7 whether you use it or not.
If you expect to query your Cosmos DB only occasionally, a better pricing model for you would be the Serverless model, where you pay for Cosmos DB resources only when you actually use them.
More information on difference between Provisioned Throughput and Serverless model can be found here: https://learn.microsoft.com/en-us/azure/cosmos-db/throughput-serverless.
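The arithmetic in this answer can be sketched as a small helper. This is a sketch only; the $0.008 per 100 RU/s per hour rate and the 730-hour month are the figures quoted above and may differ by region or over time:

```python
# Provisioned-throughput cost: you pay for what you provision, not what you use.
RATE_PER_100_RUS_PER_HOUR = 0.008  # assumed standard single-region rate ($)
HOURS_PER_MONTH = 730

def monthly_cost(provisioned_rus: int) -> float:
    """Monthly cost of provisioned throughput, regardless of query volume."""
    return provisioned_rus / 100 * RATE_PER_100_RUS_PER_HOUR * HOURS_PER_MONTH

print(monthly_cost(2500))  # 146.0 (approximately, up to float rounding)
```

Note that the number of queries never appears in the formula, which is exactly the point of the answer above.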

Related

Understanding Azure Cosmos DB Free Tier

The Azure Cosmos DB free tier gives us 1,000 RU/s and 25 GB of storage.
I created a database with one container, added some data to it and executed some queries.
Looking at the insights, I can see this:
Does that mean I used 99.68 RUs out of the 1000 RUs from the free tier?
Are those 1000 RUs in a month?
Does that mean I used 99.68 RUs out of the 1000 RUs from the free tier?
It means that you used 99.68 request units in total. It is unclear over what time period you used these, but even if that was all in one second, it is well within the 1,000 RU/s limit.
Are those 1000 RUs in a month?
No. When you remember that RU/s means "Request Units per second", this doesn't make sense. It is an ongoing allowance, based on the RU/s provisioned (whether manually or via autoscale), not the RUs used.
I imagine you may be concerned that you have already used 10% of your free allowance? That is absolutely not the case. For a container provisioned at 1,000 RU/s you can use 1,000 RUs each second, and the allowance starts fresh every second. Any request units used historically have no importance here.
In terms of actual request units, the free account means you could potentially use 2,592,000,000 per month without paying anything extra, if you were somehow able to keep the collection constantly processing operations worth 1,000 request units every second for the whole month.
If you exceed the free tier limits by provisioning more than 1,000 RU/s, you will be billed accordingly at hourly granularity.
With the free account, if you want to be certain to avoid additional charges, you should currently steer clear of autoscale. The minimum autoscale range that can be configured is now 100-1,000 RU/s (previously 400-4,000), but autoscaled RUs are charged at a 50% premium, so 1,000 autoscaled RU/s is billed as 1,500.
With the free account you can provision up to 1,000 RU/s without charge via fixed provisioning, across all containers and databases in the account. With fixed provisioning you are charged based on what you provision rather than what you use (the system will throttle you if your usage would exceed the provisioned limit).
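A minimal sketch of the free-tier arithmetic described above, assuming the 1,000 RU/s free allowance and the 50% autoscale premium are as stated (check current Azure pricing before relying on it):

```python
# Free-tier accounting: autoscaled RU/s count at 1.5x against the allowance.
FREE_TIER_RUS = 1000       # assumed free-tier allowance (RU/s)
AUTOSCALE_PREMIUM = 1.5    # autoscale billed at a 50% premium

def billable_rus(provisioned_rus: int, autoscale: bool) -> int:
    """RU/s billed beyond the free allowance, after any autoscale premium."""
    billed = provisioned_rus * AUTOSCALE_PREMIUM if autoscale else provisioned_rus
    return max(0, int(billed) - FREE_TIER_RUS)

print(billable_rus(1000, autoscale=False))  # 0: fully covered by the free tier
print(billable_rus(1000, autoscale=True))   # 500: billed as 1,500 RU/s
```

This is why the answer recommends fixed provisioning for a free account: the same 1,000 RU/s that is free when fixed incurs charges when autoscaled.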

dynamodb efficient read capacity

For DynamoDB tables with only a couple of requests per hour, what is the most efficient provisioning?
Which is more expensive, "On-Demand" capacity or fixed capacity?
By default AWS proposes a capacity of 5 reads and 5 writes for a table. See image below.
Right now I find it hard to estimate or compare the costs of different tables. Is there a way to get an overview of the costs per table, or perhaps some kind of trick just to make an estimate?
Edit:
I wanted to add some charts to show and quantify how little this database is actually used.
For my users table:
For the vouchers table:
https://aws.amazon.com/dynamodb/pricing/on-demand/
For US East (Ohio):
$1.25 per million writes
$0.25 per million reads
https://aws.amazon.com/dynamodb/pricing/provisioned/
For US East (Ohio)
$0.00065 per WCU per hour
$0.00013 per RCU per hour
So assuming a minimally provisioned table (1 RCU and 1 WCU) 24 hours a day, 30 days a month...
$0.57 per month...
That 57 cents would get you about 2 million on-demand reads in a month... or about 500K writes.
You don't mention the ratio of reads/writes.
One last thing to consider: the AWS free tier for DDB includes 25 GB of storage, 25 RCUs, and 25 WCUs. So assuming all your DDB usage fits there... it's free.
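A rough breakeven sketch using the US East (Ohio) rates quoted above (assumed figures; verify against the current AWS price list):

```python
# Compare DynamoDB provisioned vs on-demand pricing for a lightly used table.
ON_DEMAND_WRITE = 1.25 / 1_000_000   # $ per on-demand write request
ON_DEMAND_READ = 0.25 / 1_000_000    # $ per on-demand read request
WCU_HOUR = 0.00065                   # $ per provisioned WCU-hour
RCU_HOUR = 0.00013                   # $ per provisioned RCU-hour
HOURS_PER_MONTH = 730

def provisioned_monthly(rcu: int, wcu: int) -> float:
    """Monthly cost of provisioned capacity, used or not."""
    return (rcu * RCU_HOUR + wcu * WCU_HOUR) * HOURS_PER_MONTH

def on_demand_monthly(reads: int, writes: int) -> float:
    """Monthly cost of on-demand requests actually made."""
    return reads * ON_DEMAND_READ + writes * ON_DEMAND_WRITE

# A minimally provisioned table (1 RCU, 1 WCU) runs about $0.57/month:
print(round(provisioned_monthly(1, 1), 2))
# A couple of requests per hour (~1,500/month each way) is fractions of a cent:
print(on_demand_monthly(reads=1500, writes=1500))
```

For a table seeing only a few requests per hour, on-demand comes out orders of magnitude cheaper than even minimal provisioning.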

Azure CosmosDB AutoPilot Cost calculation and estimation by hour basis?

I am running an experiment on two Cosmos DBs, one with fixed and one with AutoPilot RU/s. The request load and RU consumption are exactly the same, as are all other parameters except the throughput setting. But there is a big leap in the hourly cost chart: AutoPilot is consuming one dollar per hour whereas fixed is consuming seven dollars per hour for the same throughput. I have checked all the parameters multiple times and both experiments have exactly the same settings, yet the cost chart does not make any sense.
It would be really helpful if someone can shed some light on this.
AutoPilot has since been renamed to autoscale, and its behavior has changed.
I read an official blog post, and there is a sentence in it:
"Billing is done on a per-hour basis, for the highest RU/s the system scaled to within the hour."
The reason autoscale consumes only one dollar may be that your RU/s consumption was low in that hour.
Here is the pricing page.
Hope this helps.
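A sketch of the billing rule quoted from the blog: each hour is billed for the highest RU/s the system scaled to within that hour. The $0.008 per 100 RU/s rate and the 1.5x autoscale premium are assumptions for illustration, not confirmed figures:

```python
# Autoscale billing: the peak RU/s reached within the hour drives that hour's bill.
RATE_PER_100_RUS_PER_HOUR = 0.008  # assumed base rate ($)
AUTOSCALE_PREMIUM = 1.5            # assumed autoscale surcharge

def autoscale_hour_cost(rus_samples_in_hour: list[int]) -> float:
    """Cost of one hour: highest RU/s the system scaled to in that hour."""
    peak = max(rus_samples_in_hour)
    return peak / 100 * RATE_PER_100_RUS_PER_HOUR * AUTOSCALE_PREMIUM

# A mostly idle hour that briefly spiked to 4,000 RU/s is billed at the peak:
print(autoscale_hour_cost([400, 400, 4000, 400]))
```

This also explains the experiment above: if consumption stayed low all hour, autoscale bills for a low peak, while fixed throughput bills for the full provisioned amount every hour regardless.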

Here API Request Per Second limits

I'm testing out the Here API for geocoding purposes. I'm currently in the evaluation period, and some of my tests geocode as many as 400 addresses at a time (later I may rarely hit 1,000). When I tried this with Google Maps, it gave me an error indicating I'd gone over the rate limit, but I have not gotten such an error from the Here API, despite not limiting the rate of my requests (beyond waiting for one to finish before sending the next).
But in the Developer FAQ the Requests Per Second limit is given as:
Plan       Public Plans   Business Plans
Basic      1              N/A
Starter    1              1
Standard   2              2
Pro        3              3
Which seems ridiculously slow. 1 request per second? 3 per second on the highest plan? Is this chart a typo? If so, what are the actual limits? If not, what kind of error should I expect if I exceed that limit?
Their documentation states that the RPS means "for each Application the number of Requests per second to HERE Services calculated as an average (number of Requests during a period of 5 minutes) to all of the APIs used to access the features listed for each subscription plan".*
They say later in the documentation that quota is calculated monthly: "When a usage record is loaded into our billing system that results in a plan crossing its monthly quota, the price applied to that usage record is pro-rated to account for the portion that is included in your monthly quota for free and the portion that is billable. Subsequent usage records above your monthly quota will show at the per transaction prices listed on this website."*
Overages are billed at 200 requests per $1 USD for Business plans or 2,000 per $1 USD for Public plans. So on the Pro plan you will hit your limit if you use more than 7.779 million API requests in a given month; any usage beyond that is billed at the rates above.
Excerpts taken from Developer FAQ linked above.
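A small sketch of the averaging rule quoted from the FAQ: the RPS limit is an average over a 5-minute window, not a hard per-second cap. The 300-second window comes from the quoted definition; the plan numbers are from the table above:

```python
# HERE-style rate limiting: requests are averaged over a 5-minute window.
WINDOW_SECONDS = 300  # "a period of 5 minutes" per the quoted FAQ definition

def within_limit(requests_in_window: int, plan_rps: int) -> bool:
    """True if the 5-minute average stays at or under the plan's RPS limit."""
    return requests_in_window / WINDOW_SECONDS <= plan_rps

# 400 geocoding calls in one burst still average well under 3 RPS (Pro plan):
print(within_limit(400, plan_rps=3))   # True
# A sustained 1,000 requests in 5 minutes would exceed a 3 RPS plan:
print(within_limit(1000, plan_rps=3))  # False
```

This is likely why the asker's bursts of 400 requests never triggered an error: averaged over five minutes, they stay comfortably inside even the 1 RPS plans.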

Gracenote (GNSDK): what are the rate limits after upgrading to Accelerator plan?

What would the rate limits be for GNSDK after joining the Accelerator Beta plan (max number of calls per second per user, and max number of calls per day per user)?
Since the Accelerator plan is for commercial use, we try to keep the query limit flexible to meet your needs. Typically the limit starts at several thousand calls per end user per day, and can be adjusted accordingly. Of course, Accelerator plan developers can notify us in advance to set a query limit that fits their use case.
