I am running an experiment on two Cosmos DB containers, one with fixed (manual) RU/s and one with autopilot RU/s. The request load and RU consumption are exactly the same, as are all the other parameters except the throughput setting. But there is a big gap in the hourly cost chart: autopilot is costing about one dollar per hour while fixed is costing seven dollars per hour for the same throughput. I have checked all the parameters multiple times and both experiments have exactly the same settings, yet the cost chart does not make sense to me.
It would be really helpful if someone could shed some light on this.
Autopilot has been renamed to autoscale, and its billing model has changed.
I read an official blog post about it, and it states:
"Billing is done on a per-hour basis, for the highest RU/s the system scaled to within the hour."
The reason autoscale only costs one dollar may be that your RU/s consumption was low during that hour, so the highest RU/s the system scaled to was also low.
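To make that concrete, here is a rough back-of-the-envelope comparison. The per-100-RU/s rates below are illustrative assumptions only, not official figures; check the pricing page for the actual rates in your region.

    # Rough hourly-cost comparison for manual (fixed) vs autoscale throughput.
    # The rates are assumed values for illustration; verify against the pricing page.

    FIXED_RATE_PER_100RU_HOUR = 0.008      # assumed rate for manual provisioned throughput
    AUTOSCALE_RATE_PER_100RU_HOUR = 0.012  # assumed rate (roughly 1.5x the manual rate)

    def fixed_hourly_cost(provisioned_rus):
        # Manual throughput bills the full provisioned RU/s every hour,
        # no matter how little of it you actually consume.
        return provisioned_rus / 100 * FIXED_RATE_PER_100RU_HOUR

    def autoscale_hourly_cost(highest_rus_scaled_to):
        # Autoscale bills only the highest RU/s the system scaled to within that hour.
        return highest_rus_scaled_to / 100 * AUTOSCALE_RATE_PER_100RU_HOUR

    # Example: a container provisioned at 10,000 RU/s that only ever needed 1,000 RU/s
    print(fixed_hourly_cost(10_000))     # ~0.80 per hour, every hour
    print(autoscale_hourly_cost(1_000))  # ~0.12 for an hour with low traffic

So if your workload only occasionally pushes the RU/s up, autoscale hours are billed at whatever peak the system actually reached, which can be far cheaper than paying for the full fixed provision every hour.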
Here is the pricing page.
Hope this helps.
I'm reading my reports from Firebase web performance monitoring and I don't understand what the percentage value in the distribution graph means.
Below is a screenshot of the loadEventEnd report with the graph.
[screenshot: loadEventEnd distribution report]
Does the 95% value mean that 95% of users were affected by a load time of 11.85s?
Thanks
What you're seeing there is the 95th Percentile number. That means that if you ordered all events from fastest to slowest, five percent of all requests took 11.85s or longer.
Percentile metrics are very useful for measuring performance problems in the margins -- your median load time may be great, but if 5% of users are experiencing a very long load time, it might be worth trying to dig into why and optimizing it more.
A great exercise might be to use the filters that are available to try to find a cohort of users for whom the median number is closer to that 95p number -- for instance, a specific browser or geographical region. That will give you more insight into who exactly is having a slower experience on your site.
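If it helps to see how a percentile is derived, here is a rough nearest-rank sketch; Firebase's exact aggregation method may differ, and the sample load times are made up.

    # Nearest-rank sketch of how a 95th-percentile load time falls out of raw samples.
    import math

    def percentile(samples, p):
        ordered = sorted(samples)
        rank = math.ceil(p / 100 * len(ordered))  # nearest-rank method
        return ordered[rank - 1]

    load_times_s = [1.2, 1.4, 1.9, 2.3, 2.5, 3.0, 3.1, 4.8, 6.2, 11.85]
    print(percentile(load_times_s, 95))  # 11.85 -> 95% of loads finished at or below this value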
Firebase's Spark plan allows a project 50K Firestore reads a day. We are planning to move to the Blaze plan because we expect to exceed that limit on high-traffic days. My question is how these extra reads work. It's $0.06 per 100K document reads, but is that daily? What I am asking is: if I use up my 50K free reads halfway through the day and am then charged 6 cents to read more, do I get to continue using those reads the next day if I only used a couple of them, or will I get charged 6 cents again once I run through the daily 50K and need more reads?
You don't pay 6 cents to buy a block of 100K reads after your initial free-tier quota. You pay 6/100K cents per read (that is to say, 6 divided by 100K, a very tiny amount per read), metered only for the reads you actually perform. Stating it in terms of 100K reads is just easier to reason about than dealing with extremely tiny numbers.
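A quick back-of-the-envelope sketch of that billing, using the 50K free daily reads and $0.06 per 100K figures from the question (double-check them against the current pricing page):

    # Per-day Firestore read cost on Blaze: free quota first, then per-document metering.
    FREE_READS_PER_DAY = 50_000
    PRICE_PER_READ = 0.06 / 100_000  # $0.0000006 per document read

    def daily_read_cost(reads_today):
        billable = max(0, reads_today - FREE_READS_PER_DAY)
        return billable * PRICE_PER_READ

    print(daily_read_cost(50_000))   # 0.0    -- still inside the free quota
    print(daily_read_cost(60_000))   # ~0.006 -- 10K extra reads cost well under a cent
    print(daily_read_cost(150_000))  # ~0.06  -- 100K extra reads cost six cents

The free quota resets each day, so there is nothing to "carry over": each day you simply pay for whatever reads exceed that day's free allowance.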
I am trying to get my head around the metrics charts in the Azure portal for Cosmos DB, and I find them a bit confusing.
For example, I get charts like this:
What confuses me in particular is how to read the combination of charts 1 and 3.
Chart 1 shows a spike of roughly 100 RU. That would mean that with 4 times as much load, requests would start being throttled.
On the other hand, chart 3 suggests that there is still a lot of capacity left until the provisioned 400 RU limit is reached.
So what should be concluded about when the first throttled request will occur? At roughly 3x more than the spike, or at ~100x more, as chart 3 suggests?
Graph 3 shows the average, which is pretty flat. Graph 1 shows the actual RU/s consumed. It looks as though you had a temporary spike in RU consumption, perhaps even a single query. Throttling is applied on a per-second basis. To answer your question: if you had 3x more consumption in a single second, you'd be throttled.
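A tiny sketch of that per-second accounting may make it clearer; the provisioned limit matches your 400 RU/s, but the individual request costs are made-up numbers for illustration.

    # Minimal sketch of per-second RU accounting leading to throttling (HTTP 429s).
    PROVISIONED_RUS = 400  # RU/s provisioned on the container

    def process_second(request_costs_this_second):
        consumed = 0.0
        for cost in request_costs_this_second:
            if consumed + cost > PROVISIONED_RUS:
                print(f"throttled: request needing {cost} RU exceeds this second's budget")
            else:
                consumed += cost
        return consumed

    # A quiet second followed by a spike: the average over a minute can look tiny
    # even though one second came close to the 400 RU/s limit.
    print(process_second([2, 3, 2]))        # ~7 RU consumed, nowhere near the limit
    print(process_second([100, 150, 200]))  # third request pushes past 400 and is throttled

That is why the flat average in chart 3 and the spike in chart 1 can both be true at once: throttling cares about what happens within a single second, not the average over the whole window.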
When using https://www.linkedin.com/countserv/count/share?format=json&url= to access an article's share count, is there a daily API limit?
We noticed that retrieving the count data was taking as much as 20 seconds on our production server. We added logic to cache the counts, and the 20-second delay stopped the next day. We are still left wondering what the limit might be (we can't seem to find it in the documentation).
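For context, a simplified sketch of the kind of caching we added is below; the one-hour TTL and in-process dictionary are illustrative choices, not our exact production code, and the countserv endpoint's behavior and limits are exactly what we're unsure about.

    # Simple in-process TTL cache in front of the share-count lookup.
    import time
    import requests

    CACHE = {}                 # article_url -> (fetched_at, count)
    CACHE_TTL_SECONDS = 3600   # illustrative one-hour TTL

    def share_count(article_url):
        now = time.time()
        cached = CACHE.get(article_url)
        if cached and now - cached[0] < CACHE_TTL_SECONDS:
            return cached[1]  # serve from cache, avoiding another slow or limited API call
        resp = requests.get(
            "https://www.linkedin.com/countserv/count/share",
            params={"format": "json", "url": article_url},
            timeout=5,
        )
        count = resp.json().get("count", 0)
        CACHE[article_url] = (now, count)
        return count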
I have been a happy user of Graphite+Grafana for a few months now and I have been advocating it around my firm.
My approach has been to measure data of interest, collect it into 1-minute or 5-minute buckets, and send that information to Graphite. I was recently contacted by a group that processes quotes (billions a day!) whose approach has been to write a log line each time their applications process 1 million quotes. The problem is that the interval between two log lines can be highly erratic, from 1 second to a few hours.
The dilemma is then: should I set my retention policy to a 1-second bucket so that I can see all measurements associated with spikes, or should I use, say, a 1-minute bucket so that the number of data points to be saved and later queried is much more manageable? FYI, when I set it to 1 second, showing the data for 8 or 10 charts over a few days was bringing the system (or at least my browser) to a crawl because of the number of data points (mostly NULL) being pushed from Graphite to Grafana.
Here's my retention policy: 1s:10d,1m:36d,5m:180d
Alternatively, is there a way to configure Grafana+Graphite to only retrieve non-NULL data points?
What do you recommend?
You can always specify a shorter retention period for the 1s archive, so that when you display a longer time range Graphite sends you only the coarser resolution.
For example, you can specify: 1s:2d, 1m:7d, 5m:180d
This way, if you show a range more than 2 days in the past you will get 1m resolution (and so on), which won't make your browser crawl, while you will still be able to inspect spikes in the last 2 days.
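For reference, that retention would go in Graphite's storage-schemas.conf. The section name and metric pattern below are placeholders for whatever prefix the quote-processing metrics actually use:

    [quotes]
    pattern = ^quotes\.
    retentions = 1s:2d,1m:7d,5m:180d

Note that changing the schema only affects newly created Whisper files; existing metrics would need to be resized (for example with whisper-resize) to pick up the new retention.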