HERE API Request Per Second limits

I'm testing out the HERE API for geocoding purposes. Currently in the evaluation period, some of my tests include geocoding as many as 400 addresses at a time (later I may rarely hit 1000). When I tried this with Google Maps, it would give me an error indicating I'd gone over the rate limit, but I have not gotten such an error from the HERE API despite not limiting the rate of my requests (beyond waiting for one request to finish before sending the next).
But in the Developer FAQ the Requests Per Second limit is given as:
Plan       Public Plans   Business Plans
Basic      1              N/A
Starter    1              1
Standard   2              2
Pro        3              3
Which seems ridiculously slow. 1 request per second? 3 per second on the highest plan? Is this chart a typo? If so, what are the actual limits? If not, what kind of error should I expect if I exceed that limit?

Their documentation states that the RPS means "for each Application the number of Requests per second to HERE Services calculated as an average (number of Requests during a period of 5 minutes) to all of the APIs used to access the features listed for each subscription plan".*
They say later in the documentation that quota is calculated monthly: "When a usage record is loaded into our billing system that results in a plan crossing its monthly quota, the price applied to that usage record is pro-rated to account for the portion that is included in your monthly quota for free and the portion that is billable. Subsequent usage records above your monthly quota will show at the per transaction prices listed on this website."*
Overages are billed at 200 requests/$1 USD for Business plans or 2000 requests/$1 USD for Public plans. So on the Pro plan (3 RPS average), you will hit your limit at roughly 7.78 million API requests in a 30-day month (3 x 86,400 seconds x 30 days = 7,776,000); any usage beyond that would be billed at the rates above.
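Since the limit is calculated as a 5-minute average, a simple client-side throttle is enough to stay under it during a batch of geocoding calls. A minimal sketch, assuming the Pro plan's 3 RPS figure; the endpoint, parameter names, and API key handling are placeholders to adapt from the HERE geocoding docs, not the exact request signature:

import time
import requests

RPS_LIMIT = 3                     # Pro plan average from the FAQ table above
MIN_INTERVAL = 1.0 / RPS_LIMIT    # minimum spacing between requests, in seconds

# Placeholder endpoint and parameters; check the HERE geocoding documentation.
GEOCODER_URL = "https://geocode.search.hereapi.com/v1/geocode"
API_KEY = "YOUR_API_KEY"

def geocode_batch(addresses):
    """Geocode a list of addresses while keeping the average rate under RPS_LIMIT."""
    results = []
    last_request = 0.0
    for address in addresses:
        # Sleep just long enough so requests average out to RPS_LIMIT per second.
        wait = MIN_INTERVAL - (time.monotonic() - last_request)
        if wait > 0:
            time.sleep(wait)
        last_request = time.monotonic()

        resp = requests.get(GEOCODER_URL, params={"q": address, "apiKey": API_KEY})
        resp.raise_for_status()   # an HTTP 429 here would indicate the limit was enforced
        results.append(resp.json())
    return results

For a batch of 400 addresses this adds at most a couple of minutes of waiting, which is usually an acceptable trade-off against being throttled server-side.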
Excerpts taken from Developer FAQ linked above.

Related

Resolve stackdriver incident when no more timeseries with available data violate the policy

I have stackdriver alerts/incidents on metrics like cloud run revision request latencies.
If a few calls a long time ago had high latency, but no new requests have come in since then, the incident will keep firing permanently. This is because when there are no new requests coming in, there are no new data points for the metric.
Is there a way to automatically stop an incident from firing when there are no recent data points for the underlying metrics? Or is there an alternative way to alert on high request latencies in Cloud Run that automatically resolves again when no new high-latency requests are coming in?
The solution of https://stackoverflow.com/a/63997540/6473907 does not work as-is, because the Cloud Run built-in metric for the request count does not go to zero when there are no more requests coming in; instead, it just stops providing any data points. The solution for us was to create a custom logs-based metric that counts the log entries written for every request by Cloud Run, because the logs-based metric does indeed go to zero. We then combined it with AND_WITH_MATCHING_RESOURCE as described in https://stackoverflow.com/a/63997540/6473907
The chart compares the request count as obtained from the google pre-defined metric run.googleapis.com/request_count (in violet) with the metric generated by a custom logs-based metric (in blue). Only the latter goes to zero when no more requests are coming in.
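For reference, the logs-based counter metric can be created in the console or programmatically. A minimal sketch with the google-cloud-logging client; the metric name and the log filter below are assumptions to adapt to your own service:

from google.cloud import logging

client = logging.Client()

# Count every request log entry emitted by the Cloud Run service.
# The filter is an assumption; adjust it to the revision/log you actually use.
metric = client.metric(
    "cloud_run_request_count",
    filter_=(
        'resource.type="cloud_run_revision" '
        'AND log_name:"run.googleapis.com%2Frequests"'
    ),
    description="Requests counted from Cloud Run request logs",
)
metric.create()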
Edit: This solution will not work because the request count stops being sent to Stackdriver instead of dropping to zero. As explained in the other (more correct) answer, the solution is to create a logs-based metric for the requests, and this will properly drop to zero when there are no additional requests.
This behaviour is documented in the alerting docs:
If measurements are missing (for example, if there are no HTTP
requests for a couple of minutes), the policy uses the last recorded
value to evaluate conditions.
There are a few recommendations in there to mitigate this issue, but all of them assume you're still collecting metrics; they don't cover your situation, where there are no metrics at all (because you stopped receiving requests).
This is probably by design: even if you are not receiving additional requests, you might still want to check why all the latest requests had this increased latency.
To work around this feature, you could try to use multiple conditions in your alert policy:
One condition related to the latency: if latency > X
One condition related to the existence of requests: if request count > 1
If you combine those with AND_WITH_MATCHING_RESOURCE, the policy should only trigger if there's high latency and there are requests. The incident should be resolved when one of the two conditions is not met: even if no new latency metrics are ingested (so the alerting policy still thinks the latency is high), the request count will stop matching after the specified duration.
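As a sketch of what such a policy could look like programmatically with the google-cloud-monitoring client; the metric types, thresholds, durations, and aggregations below are illustrative assumptions, not a drop-in policy:

from google.cloud import monitoring_v3

# Illustrative only: adjust project, filters, thresholds and durations
# to your actual Cloud Run service and latency target.
PROJECT = "projects/your-project-id"

latency_condition = monitoring_v3.AlertPolicy.Condition(
    display_name="Request latency above threshold",
    condition_threshold=monitoring_v3.AlertPolicy.Condition.MetricThreshold(
        filter=(
            'resource.type = "cloud_run_revision" AND '
            'metric.type = "run.googleapis.com/request_latencies"'
        ),
        aggregations=[
            monitoring_v3.Aggregation(
                alignment_period={"seconds": 300},
                per_series_aligner=monitoring_v3.Aggregation.Aligner.ALIGN_PERCENTILE_99,
            )
        ],
        comparison=monitoring_v3.ComparisonType.COMPARISON_GT,
        threshold_value=1000,          # e.g. 1000 ms
        duration={"seconds": 300},
    ),
)

traffic_condition = monitoring_v3.AlertPolicy.Condition(
    display_name="Requests are still coming in",
    condition_threshold=monitoring_v3.AlertPolicy.Condition.MetricThreshold(
        filter=(
            'resource.type = "cloud_run_revision" AND '
            'metric.type = "run.googleapis.com/request_count"'
        ),
        aggregations=[
            monitoring_v3.Aggregation(
                alignment_period={"seconds": 300},
                per_series_aligner=monitoring_v3.Aggregation.Aligner.ALIGN_SUM,
            )
        ],
        comparison=monitoring_v3.ComparisonType.COMPARISON_GT,
        threshold_value=1,
        duration={"seconds": 300},
    ),
)

policy = monitoring_v3.AlertPolicy(
    display_name="High latency while receiving traffic",
    # Both conditions must hold on the same resource for an incident to open.
    combiner=monitoring_v3.AlertPolicy.ConditionCombinerType.AND_WITH_MATCHING_RESOURCE,
    conditions=[latency_condition, traffic_condition],
)

client = monitoring_v3.AlertPolicyServiceClient()
created = client.create_alert_policy(name=PROJECT, alert_policy=policy)
print(created.name)

Keep in mind the second condition relies on the built-in request_count metric, which (as the other answer points out) stops reporting rather than dropping to zero; with a logs-based request metric instead, the request-count condition resolves cleanly once traffic stops.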

Is there a call limit to accessing LinkedIn share counts?

When using https://www.linkedin.com/countserv/count/share?format=json&url= to access an article's sharecount, is there an api daily limit?
We noticed that retrieving count data was taking as much as 20 seconds on our production server. We added logic to cache the number of counts, and the 20 second delay stopped the next day. We are left wondering, though, what the limit might be (we can't seem to find it in your documentation).
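For reference, a minimal sketch of the kind of caching logic described above; the TTL is arbitrary, and the JSON payload's field names depend on whatever the endpoint actually returns:

import time
import requests

COUNT_URL = "https://www.linkedin.com/countserv/count/share"
CACHE_TTL = 6 * 60 * 60          # refresh a cached count after 6 hours (arbitrary)
_cache = {}                      # article URL -> (fetched_at, payload)

def share_count(article_url):
    """Return the cached share-count payload, refetching only after CACHE_TTL expires."""
    now = time.time()
    cached = _cache.get(article_url)
    if cached and now - cached[0] < CACHE_TTL:
        return cached[1]

    resp = requests.get(COUNT_URL, params={"format": "json", "url": article_url}, timeout=30)
    resp.raise_for_status()
    payload = resp.json()
    _cache[article_url] = (now, payload)
    return payload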

Hits Processed Per Month?

If you refer to http://www.google.com/intl/en_uk/analytics/premium/features.html, you will notice that Standard allows for 10 million hits processed per month and Premium allows for 1 billion.
I have a website on an account, with multiple "folders" for different sub-domains, and also different "Views" or dashboards for some of these sub-domains.
The website I am on recently lost tracking for conversion rates, and everything has plummeted to near 0%, which is an incorrect statistic. I am curious how I can figure out whether this account is reaching the 10 million limit on the standard version, or at least how to determine the actual hits processed per day, week, or month.
Any ideas?
Thanks!
I don't know how Google enforces hit limits in 2015. However in 2013 a Google representative sent one of our bigger clients a document (answering a question about data limits) that contained the following paragraph:
How do data limits impact sampling? Google Analytics does not sample
your clients' data at the point of collection or processing, regardless
of how far they exceed our stated limits. So no hits are discarded.
The only way to sample data at the point of collection is for clients
to use _setSampleRate in their tracking code.
[...]
[...] we reserve the right to shut down their account [sc. if limits are exceeded], but it won't
happen before we have attempted to contact the account Admins multiple times
and we have exhausted all other options.
Unless Google has changed its policy in the last 1.5 years, I would say no, unprocessed hits are not your problem; it seems Google would have contacted you with a request to limit your hits or upgrade to Analytics Premium before problems occur.
Plus, since you mentioned that you have several views: views do not count towards your quota (they display the same data in different ways). However, properties (I think that is what you mean by "folders") do.
Updated 2017: It seems that Google intends to enforce limits more strictly. One of my clients now has the following warning in his GA interface:
Your data volume (XXX hits) exceeds the limit of 10M hits per month as
outlined in our terms of service. If you continue to exceed the limit
you will lose access to future data.
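To get a concrete number for your hit volume, one option is to query the ga:hits metric per day through the Core Reporting API (v3) and sum it over the month. A rough sketch; authorization setup is omitted and the view ID is a placeholder. Note that the quota applies per property, so query an unfiltered view of each property:

from googleapiclient.discovery import build

def monthly_hits(credentials, view_id="ga:XXXXXXXX"):
    """Sum the ga:hits metric per day for the last 30 days of one view."""
    analytics = build("analytics", "v3", credentials=credentials)
    result = (
        analytics.data()
        .ga()
        .get(
            ids=view_id,
            start_date="30daysAgo",
            end_date="today",
            metrics="ga:hits",
            dimensions="ga:date",
        )
        .execute()
    )
    rows = result.get("rows", [])
    for day, hits in rows:
        print(day, hits)                       # daily hit counts
    return sum(int(hits) for _, hits in rows)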
You can create a database table, like this:
create table visits (
    id bigint primary key auto_increment,
    ip text,
    visit_date timestamp default current_timestamp
);
Upon each page visit, you can insert a record into the table:
insert into visits (ip) values ('<visitor ip>');
Later you can view statistics. For instance, the visit count on a given day would look like:
select count(*) as visit_count
from visits
where visit_date >= '2015-07-21 00:00:00' and visit_date < '2015-07-22 00:00:00';

Firebase connections mean

After reading this thread I'm still not very clear.
I'll use the Candle plan for my example.
If every user in my app has one browser tab open, does that mean I can only have 200 users at the same time?
If you're running the free plan, Firebase will cut you off at 50 connections. This means user 51 will be unable to connect to Firebase. Or, if you open 50 tabs that are all identified as unique Firebase connections, tab 51 will not connect.
If you're using any paid plan, your connections will expand and scale automatically, which means users will never be cut off.
"Because we use 95th percentile billing, you won't be charged for your overages 5% of the time (about 1.5 days each month). If you exceed your limits for more than 5% of the month, the following overage fees will be added to the base price of your monthly bill:"
So, even if you exceed your number of active connections, it will not count for billing unless the number of active connections exceeded your maximum for over 5% of the time covered in the billing period.
Any paid plan will never be cut off (as long as you continue to pay your bills and overages!) so you can have more than 200 users simultaneously!
Source: Firebase
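To make the 95th-percentile wording concrete, here is a small sketch of how that style of accounting works in principle; the sampling interval, plan limit, and numbers are made up, and Firebase's actual billing mechanics may differ:

# Hypothetical illustration of 95th-percentile overage accounting.
# Suppose `samples` holds the concurrent-connection count measured once per minute
# over a whole billing month (roughly 43,200 samples).
PLAN_LIMIT = 200    # the plan's included concurrent connections (example value)

def billable_overage(samples, plan_limit=PLAN_LIMIT):
    """Ignore the top 5% of samples, then compare the remaining peak to the plan limit."""
    ordered = sorted(samples)
    p95_index = max(int(len(ordered) * 0.95) - 1, 0)
    p95_value = ordered[p95_index]
    # Only the portion above the plan limit (if any) counts as billable overage.
    return max(p95_value - plan_limit, 0)

# Spiking to 400 connections for about a day (under 5% of the month) yields no overage:
quiet_month = [150] * 42_000 + [400] * 1_200
print(billable_overage(quiet_month))    # prints 0

The point is only what the quote says: short spikes (less than about 1.5 days per month) above your limit don't change the bill.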

Unsampled reports automation for historical data

We have a client who receives 2-4 million visits a day, so off the bat we have to rely on unsampled reports, because the traffic exceeds Google's limit:
500,000 maximum sessions for special queries where the data is not already stored.
We are attempting to collect Unique Visitors and Visits for a 1 day period. Using the Google API has proved fruitless, as the data is sampled.
We have set up unsampled reports on a daily basis that get dumped into Google Drive, and our application picks up the new files and downloads them just fine. The problem we are running into is that we need 2 years worth of daily data for 20 reports. The maximum range we can run an unsampled report for in the Google Analytics web interface is 1 week before we exceed a query limit. So 52 weeks of reports x 2 years x 20 different reports to set up is 2080 scheduled unsampled reports, and this is for 1 client only.
EDIT: Can we automate unsampled reports using the GA API, or any programming method, to pull historical data with the constraints previously mentioned? Also, we do have Google Analytics Premium.
Cris G, the only way to avoid data sampling in Google Analytics without access to Premium is the day-parting technique: you split a data request for the selected time period into shorter-period queries (typically days) and then add all the numbers up (see the sketch after this answer). If your profiles/views are not sampled when you look at daily numbers, this could solve your issue.
However, this doesn't work for Unique Visitors, since they are counted separately for every query (you are running data requests on a daily basis), so there will most likely be duplicates and inflated totals if your site attracts lots of returning visitors.
To automate some of the work, I suggest using tools like Analytics Canvas. It can make your life much easier, and I think it could be the perfect tool for what you need. Bear in mind the limitations around unique visitors (and some other metrics).
Having said that, I still think the best choice would be to use the benefits of Premium and the ability to get unsampled data for your reports.
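As a sketch of that day-parting approach against the Core Reporting API (v3): the view ID is a placeholder, authorization setup is omitted, and, as noted above, summing daily unique visitors would over-count returning visitors:

from datetime import date, timedelta
from googleapiclient.discovery import build

def daily_sessions(credentials, view_id, start, end):
    """Query one day at a time so each request stays under the sampling threshold."""
    analytics = build("analytics", "v3", credentials=credentials)
    totals = {}
    day = start
    while day <= end:
        result = (
            analytics.data()
            .ga()
            .get(
                ids=view_id,
                start_date=day.isoformat(),
                end_date=day.isoformat(),
                metrics="ga:sessions",
            )
            .execute()
        )
        totals[day] = int(result["totalsForAllResults"]["ga:sessions"])
        day += timedelta(days=1)
    return totals

# Example: two years of daily session counts for one view (placeholder ID).
# counts = daily_sessions(creds, "ga:XXXXXXXX", date(2013, 7, 1), date(2015, 6, 30))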
