For DynamoDB tables that receive only a couple of requests per hour, what is the most efficient provisioning?
Which is more expensive: "On-Demand" capacity or fixed (provisioned) capacity?
By default, AWS proposes a capacity of 5 read units and 5 write units for a table. See image below.
Right now, I find it hard to estimate or compare the costs of different tables. Is there a way to get an overview of the costs per table, or perhaps some kind of trick to make an estimate?
Edit:
I wanted to add some charts to show and quantify how little this database is actually used.
For my users table:
For the vouchers table:
https://aws.amazon.com/dynamodb/pricing/on-demand/
For US East (Ohio):
$1.25 per million write request units
$0.25 per million read request units
https://aws.amazon.com/dynamodb/pricing/provisioned/
For US East (Ohio):
$0.00065 per WCU per hour
$0.00013 per RCU per hour
So assuming a minimally provisioned table (1 RCU & 1 WCU) running 24 hours a day for a month (~730 hours):
(1 × $0.00065 + 1 × $0.00013) × 730 ≈ $0.57 per month.
That 57 cents would buy you about 2 million on-demand reads in a month, or roughly 450K writes.
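To make the comparison concrete, here is a rough sketch in Python, using the Ohio prices quoted above and AWS's ~730-hour billing month; the request counts are hypothetical examples, not anything from your charts:

```python
# Rough DynamoDB cost comparison, using the US East (Ohio) prices quoted
# above. The request counts below are hypothetical examples for a
# "couple of requests per hour" workload.

WRITE_PER_MILLION = 1.25   # on-demand: $ per million write request units
READ_PER_MILLION = 0.25    # on-demand: $ per million read request units
WCU_PER_HOUR = 0.00065     # provisioned: $ per WCU per hour
RCU_PER_HOUR = 0.00013     # provisioned: $ per RCU per hour
HOURS_PER_MONTH = 730

def on_demand_cost(reads, writes):
    """Monthly on-demand cost for a given number of requests."""
    return reads / 1e6 * READ_PER_MILLION + writes / 1e6 * WRITE_PER_MILLION

def provisioned_cost(rcu, wcu):
    """Monthly cost of keeping capacity provisioned around the clock."""
    return (rcu * RCU_PER_HOUR + wcu * WCU_PER_HOUR) * HOURS_PER_MONTH

# ~2 reads and 2 writes per hour for a month:
print(f"on-demand:   ${on_demand_cost(1_500, 1_500):.4f}/month")
print(f"provisioned: ${provisioned_cost(1, 1):.2f}/month (1 RCU + 1 WCU)")
```

At that request rate, on-demand comes out to a fraction of a cent per month, well below even the minimum provisioned table.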
You don't mention the ratio of reads/writes.
One last thing to consider: the AWS Free Tier for DDB allows for 25 GB of storage, 25 RCU, and 25 WCU per month. So assuming all your DDB usage fits there, it's free.
The Azure Cosmos DB free tier gives us 1,000 RU/s and 25 GB of storage.
I created a database with one container, added some data to it and executed some queries.
Looking at the insights, I can see this:
Does that mean I used 99.68 RUs out of the 1000 RUs from the free tier?
Are those 1000 RUs in a month?
Does that mean I used 99.68 RUs out of the 1000 RUs from the free tier?
It means that you used 99.68 request units in total. It is unclear over what time period you used them, but even if it was all in one second, that is well within the 1,000 RU/s limit.
Are those 1000 RUs in a month?
No. When you remember that RU/s is "Request Units per Second", this doesn't make sense. It is an ongoing allowance, and it is based on the RU/s provisioned (whether manually or via autoscale), not the RUs used.
I imagine you may be concerned that you have already used 10% of your free allowance. This is absolutely not the case. For a container provisioned at 1,000 RU/s you can use 1,000 RUs each second, and the allowance starts fresh every second. Any historic request units used have no bearing here.
In terms of actual request units, the free account means you could potentially use 1,000 × 86,400 × 30 = 2,592,000,000 per month without paying anything extra, if you were somehow able to keep the collection constantly processing operations worth 1,000 request units every second for the whole month.
If you exceed the free tier limits by provisioning more than 1,000 RU/s, you will be billed accordingly at hourly granularity.
With the free account, if you want to be certain to avoid additional charges, you should currently steer clear of autoscale. The minimum autoscale range that can be configured is now 100-1,000 RU/s (previously 400-4,000), but autoscaled RUs are charged at a 50% premium, so 1,000 autoscaled RU/s is billed as 1,500.
With the free account you can provision up to 1,000 RU/s without charge via fixed provisioning, across all containers and databases in the account. With fixed provisioning you are charged for what you provisioned rather than what you used (the system will throttle you if your usage would exceed the provisioned limit).
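As a sanity check on those figures, here is a small sketch of the arithmetic (plain Python; the 30-day month is an assumption):

```python
# Cosmos DB free-tier arithmetic from the answer above (assumes a 30-day month).

FREE_TIER_RU_S = 1_000
SECONDS_PER_MONTH = 60 * 60 * 24 * 30

# Theoretical maximum request units in a month at a constant 1,000 RU/s:
max_monthly_ru = FREE_TIER_RU_S * SECONDS_PER_MONTH
print(f"{max_monthly_ru:,} RU per month")  # 2,592,000,000

# Autoscale is billed at a 50% premium on the highest RU/s scaled to,
# so 1,000 autoscaled RU/s is billed as if it were 1,500 fixed RU/s:
AUTOSCALE_PREMIUM = 1.5
print(f"billed as {FREE_TIER_RU_S * AUTOSCALE_PREMIUM:.0f} RU/s")  # 1500
```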
I am running an experiment on two Cosmos DB containers, one with fixed and one with Autopilot throughput. The request load and RU consumption are exactly the same, as are all other parameters except the throughput setting. But there is a big gap in the hourly cost chart: Autopilot costs about one dollar per hour whereas fixed costs about seven dollars per hour for the same throughput. I have checked all the parameters multiple times and both experiments have exactly the same settings, yet the cost chart is not making any sense.
It would be really helpful if someone could shed some light on this.
Autopilot has been renamed to autoscale, and its billing model has changed.
I read an official blog post that says:
"Billing is done on a per-hour basis, for the highest RU/s the system scaled to within the hour."
The reason autoscale only costs one dollar may be that your RU/s consumption was low during those hours, so autoscale scaled down, while the fixed container is billed for its full provisioned throughput whether you use it or not.
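To illustrate, here is a minimal sketch of that billing rule; the per-RU rate is a placeholder, not a current Azure price:

```python
# Sketch of fixed vs. autoscale hourly billing. RATE_PER_100_RU_S_HOUR is a
# placeholder, not a current Azure price; check the pricing page for real rates.

RATE_PER_100_RU_S_HOUR = 0.012   # hypothetical $ per 100 RU/s per hour

def fixed_hourly_cost(provisioned_ru_s):
    # Fixed throughput bills on what you provisioned, whether used or not.
    return provisioned_ru_s / 100 * RATE_PER_100_RU_S_HOUR

def autoscale_hourly_cost(peak_ru_s_in_hour):
    # Autoscale bills on the highest RU/s the system scaled to within the
    # hour, at a 50% premium over the fixed rate.
    return peak_ru_s_in_hour / 100 * RATE_PER_100_RU_S_HOUR * 1.5

print(f"${fixed_hourly_cost(50_000):.2f}/hour")     # pays for all 50,000 RU/s
print(f"${autoscale_hourly_cost(5_000):.2f}/hour")  # quiet hour: pays only for the peak
```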
Here is the pricing page.
Hope this helps.
The Firebase Spark plan allows a project 50K Firestore reads a day. We are planning to move to the Blaze plan because we expect to exceed that limit on high-traffic days. My question is how these extra reads work. It's $0.06 per 100K document reads, but is that daily? What I am asking is: if I use my 50K free reads halfway through the day and then incur a 6-cent charge to read more, do I get to keep using those reads the next day if I only used a couple of them, or will I be charged 6 cents again each time I run through the daily 50K and need more reads?
You don't pay 6 cents up front to unlock a block of 100K reads after your free quota runs out. You pay 6/100K cents per read (that is, 6 cents divided by 100K, a very tiny amount per read). Stating the price in terms of 100K reads is just easier to reason about than dealing with extremely tiny numbers.
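In other words, the charge is metered per document read, not in 6-cent blocks. A quick sketch of the arithmetic (the read counts are hypothetical):

```python
# Firestore Blaze-plan read pricing from the question above.

FREE_READS_PER_DAY = 50_000
PRICE_PER_100K_READS = 0.06   # $0.06 per 100,000 reads beyond the free quota

def daily_read_cost(reads_today):
    """Cost for one day's reads; only reads past the free quota are billed."""
    billable = max(0, reads_today - FREE_READS_PER_DAY)
    return billable / 100_000 * PRICE_PER_100K_READS

print(f"${daily_read_cost(50_002):.8f}")   # 2 extra reads cost ~$0.0000012
print(f"${daily_read_cost(150_000):.2f}")  # 100K extra reads cost $0.06
```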
I am trying to get my mind around the metrics charts in the Azure portal for Cosmos DB, and I find them a bit confusing.
For example, I get charts like this:
What confuses me in particular is how to read the combination of charts 1 and 3.
Chart 1 shows a spike of roughly 100 RU. That would suggest that at 4 times that load, requests would start being throttled.
On the other hand, chart 3 suggests there is still a lot of capacity left until the provisioned 400 RU limit is reached.
So what should be concluded about when the first throttled request will occur: at roughly 3x more than the spike, or at ~100x more, as chart 3 suggests?
Graph 3 shows the average, which is pretty flat. Graph 1 shows the actual RU/s consumed. It looks as though you had a temporary spike in RU consumption, perhaps even a single query. Throttling is performed on a per-second basis. To answer your question: if you had 3x more consumption in a single second, you'd be throttled.
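As a rough mental model (a simplified sketch, not the actual Cosmos DB implementation), the throttling decision looks something like this:

```python
# Simplified model of Cosmos DB throttling: the check is against the RU
# budget of the current second, not an hourly or daily average.

PROVISIONED_RU_S = 400

def is_throttled(ru_consumed_this_second):
    # Requests beyond this second's budget are rejected (HTTP 429).
    return ru_consumed_this_second > PROVISIONED_RU_S

print(is_throttled(100))  # False: the spike in chart 1 fits within one second
print(is_throttled(500))  # True: the same spike at 5x would exceed 400 RU/s
```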
I'm testing out the HERE API for geocoding purposes. Currently in the evaluation period, some of my tests involve geocoding as many as 400 addresses at a time (later I may occasionally hit 1,000). When I tried this with Google Maps, it would give me an error indicating I'd gone over the rate limit, but I have not gotten such an error from the HERE API despite not limiting the rate of my requests (beyond waiting for one to finish before sending the next).
But in the Developer FAQ the requests-per-second limit is given as:
Plan       Public Plans   Business Plans
Basic      1              N/A
Starter    1              1
Standard   2              2
Pro        3              3
Which seems ridiculously slow. One request per second? Three per second on the highest plan? Is this chart a typo? If so, what are the actual limits? If not, what kind of error should I expect if I exceed the limit?
Their documentation states that the RPS means "for each Application the number of Requests per second to HERE Services calculated as an average (number of Requests during a period of 5 minutes) to all of the APIs used to access the features listed for each subscription plan".*
They say later in the documentation that quota is calculated monthly: "When a usage record is loaded into our billing system that results in a plan crossing its monthly quota, the price applied to that usage record is pro-rated to account for the portion that is included in your monthly quota for free and the portion that is billable. Subsequent usage records above your monthly quota will show at the per transaction prices listed on this website."*
Overages are billed at 200 requests per $1 USD for Business plans or 2,000 requests per $1 USD for Public plans. So for the Pro plan, you will hit your limit if you use more than roughly 7.78 million API requests in any given month; any usage beyond that would be billed at the rates above.
Excerpts taken from Developer FAQ linked above.
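To see where the Pro plan figure comes from, a rough sketch (assuming a 30-day month, which gives 7.776 million, in line with the number above; the 8-million-request example is hypothetical):

```python
# HERE API monthly quota and overage arithmetic (assumes a 30-day month).

SECONDS_PER_MONTH = 60 * 60 * 24 * 30

def monthly_quota(rps):
    """Included requests per month for a plan's average RPS limit."""
    return rps * SECONDS_PER_MONTH

def overage_cost(requests, rps, requests_per_dollar):
    """Cost of requests beyond the monthly quota."""
    extra = max(0, requests - monthly_quota(rps))
    return extra / requests_per_dollar

print(f"{monthly_quota(3):,} requests")            # Pro plan: 7,776,000
print(f"${overage_cost(8_000_000, 3, 200):.2f}")   # Business overage rate: 200/$1
```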