User Rate Limit Exceeded - Freebase

As I am not coming close to 100,000 queries per day, I am assuming that Google is referring to the Freebase limit of 10 requests per second per user. (I am passing in my Google API key.)
If I am running a query that crosses multiple Freebase domains, is that considered more than one request? Or is a single query considered one request regardless of its size?
thanks
Scott

Yes, it sounds like you're exceeding the per-second rate limit. You'll need to introduce some delays in your application so that you don't exceed it. The rate limit applies only to HTTP requests, so you can query as much data as you like as long as it fits in one request.
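For example, a minimal throttle in Python that spaces out successive requests; run_query here is a stand-in for whatever function issues your Freebase HTTP request:

import time

MIN_INTERVAL = 0.11  # slightly more than 1/10 s, to stay safely under 10 requests/second
_last_request = 0.0

def throttled(send):
    # Sleep just long enough that calls to send() are at least
    # MIN_INTERVAL seconds apart, then issue the request.
    global _last_request
    wait = MIN_INTERVAL - (time.time() - _last_request)
    if wait > 0:
        time.sleep(wait)
    _last_request = time.time()
    return send()

# usage: result = throttled(lambda: run_query(my_mql))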

Related

Resolve Stackdriver incident when no more time series with available data violate the policy

I have stackdriver alerts/incidents on metrics like cloud run revision request latencies.
If a few calls a long time ago had high latency, but no new requests with low latency have come in since then, the incident fires permanently. This is because when no new requests are coming in, there are no data points for the metric.
Is there a way to automatically stop an incident from firing when there are no recent data points for the underlying metrics? Or is there an alternative way to alert on high request latencies in Cloud Run that automatically switches the alarm off again when no new high-latency requests are coming in?
The solution of https://stackoverflow.com/a/63997540/6473907 does not work as-is, because the built-in Google Cloud Run request-count metric does not go to zero when no more requests are coming in; instead, it simply stops providing data points. The solution for us was to create a custom logs-based metric that counts the log entries Cloud Run writes for every request, because the logs-based metric does indeed go to zero, and then combine it with AND_WITH_MATCHING_RESOURCE as described in https://stackoverflow.com/a/63997540/6473907
The chart compares the request count as obtained from Google's pre-defined metric run.googleapis.com/request_count (in violet) with the custom logs-based metric (in blue). Only the latter goes to zero when no more requests are coming in.
Edit: This solution will not work because the request count stops being sent to Stackdriver instead of dropping to zero. As explained in the other (more correct) answer, the solution is to create a logs-based metric for the requests, and this will properly drop to zero when there are no additional requests.
This behaviour is documented in the alerting docs:
If measurements are missing (for example, if there are no HTTP requests for a couple of minutes), the policy uses the last recorded value to evaluate conditions.
There are a few recommendations in there to mitigate this issue, but all of them assume you are actually still collecting metrics, not your situation, where there are no metrics at all (because you stopped receiving requests).
This is probably by design: even if you are not receiving additional requests, you might still want to check why all the latest requests had this increased latency.
To work around this feature, you could try to use multiple conditions in your alert policy:
One condition related to the latency: if latency > X
One condition related to the existence of requests: if request count > 1
If you combine those with AND_WITH_MATCHING_RESOURCE, it should only trigger when there is high latency and there are requests. The incident should be resolved when either of the two conditions is no longer met: even if no new latency metrics are ingested (so the alerting policy still thinks the latency is high), the request-count condition will stop matching after the specified duration window.
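For illustration, a sketch of such a two-condition policy with the google-cloud-monitoring Python client. The project ID, thresholds, filters, and the logs-based metric name (logging.googleapis.com/user/request_log_count) are all placeholders, not values from the question:

from google.cloud import monitoring_v3

client = monitoring_v3.AlertPolicyServiceClient()
project = "projects/my-project-id"  # placeholder

def threshold_condition(name, metric_filter, threshold):
    # One metric-threshold condition: fires while the filtered
    # time series stays above `threshold` for 5 minutes.
    return monitoring_v3.AlertPolicy.Condition(
        display_name=name,
        condition_threshold=monitoring_v3.AlertPolicy.Condition.MetricThreshold(
            filter=metric_filter,
            comparison=monitoring_v3.ComparisonType.COMPARISON_GT,
            threshold_value=threshold,
            duration={"seconds": 300},
        ),
    )

policy = monitoring_v3.AlertPolicy(
    display_name="High latency only while traffic exists",
    # both conditions must fire on the same resource
    combiner=monitoring_v3.AlertPolicy.ConditionCombinerType.AND_WITH_MATCHING_RESOURCE,
    conditions=[
        threshold_condition(
            "latency > X",
            'resource.type="cloud_run_revision" AND metric.type="run.googleapis.com/request_latencies"',
            1000,  # placeholder latency threshold
        ),
        threshold_condition(
            "request count > 1",
            # a custom logs-based metric, as the other answer suggests
            'resource.type="cloud_run_revision" AND metric.type="logging.googleapis.com/user/request_log_count"',
            1,
        ),
    ],
)
client.create_alert_policy(name=project, alert_policy=policy)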

How to construct a Google Analytics query to avoid quota limits?

This is one part of my request, and I have 725 requests of this kind, one for each day in a two-year span.
I am getting analytics for 30 days of traffic for certain datasets I am creating.
When I try to query the analytics for all 725 datasets, I get the quota error "Requests per user per 100 seconds", even though I put time.sleep(2) before each request.
Is there something else I can do to avoid hitting the API quota?
{
  "reportRequests": [
    {
      "viewId": "104649158",
      "dateRanges": [
        {
          "startDate": "2017-12-01",
          "endDate": "2017-12-31"
        }
      ],
      "metrics": [
        { "expression": "ga:pageviews" },
        { "expression": "ga:uniquePageviews" },
        { "expression": "ga:pageviewsPerSession" },
        { "expression": "ga:timeOnPage" },
        { "expression": "ga:avgTimeOnPage" },
        { "expression": "ga:entrances" },
        { "expression": "ga:entranceRate" },
        { "expression": "ga:exitRate" },
        { "expression": "ga:exits" }
      ],
      "dimensions": [
        { "name": "ga:pagePathLevel2" }
      ],
      "dimensionFilterClauses": [
        {
          "filters": [
            {
              "dimensionName": "ga:pagePathLevel2",
              "operator": "REGEXP",
              "expressions": [
                "23708|23707|23706|23705|23704|23703|23702|23701|23700|23699|23698|23697|23696|23695|23694|23693|23692"
              ]
            }
          ]
        }
      ]
    }
  ]
}
1) You should increase the user quota to 1,000 requests (if not done already) by going into your Google Cloud Console -> top-left menu -> APIs & Services -> Analytics Reporting API -> Quotas:
https://console.cloud.google.com/apis/api/analyticsreporting.googleapis.com/quotas
2) You could increase the time range and use the ga:yearMonth dimension to still get your monthly breakdown. However, you might face sampling issues: since your query is "custom" (you use a filter + dimension), sampling will apply if, for the given time range, the total number of sessions at property level exceeds 500K (regardless of how many are actually included in the response). In this case there is no absolute answer; you have to find the time ranges that suit you best. samplesReadCounts / samplingSpaceSizes will help you detect sampling, and if required you will need to handle pagination.
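As a sketch of that idea with the google-api-python-client: one batchGet per dataset covering the whole two-year span, with ga:yearMonth added so the monthly breakdown survives. The creds object and the shortened filter regex are placeholders:

from googleapiclient.discovery import build

def monthly_report(creds, view_id, path_regex):
    # creds: an already-authorized google-auth credentials object
    analytics = build("analyticsreporting", "v4", credentials=creds)
    body = {
        "reportRequests": [{
            "viewId": view_id,
            # one wide date range instead of 24 separate monthly queries...
            "dateRanges": [{"startDate": "2016-01-01", "endDate": "2017-12-31"}],
            "metrics": [{"expression": "ga:pageviews"},
                        {"expression": "ga:uniquePageviews"}],
            # ...with ga:yearMonth preserving the monthly breakdown
            "dimensions": [{"name": "ga:yearMonth"},
                           {"name": "ga:pagePathLevel2"}],
            "dimensionFilterClauses": [{
                "filters": [{
                    "dimensionName": "ga:pagePathLevel2",
                    "operator": "REGEXP",
                    "expressions": [path_regex],
                }]
            }],
        }]
    }
    return analytics.reports().batchGet(body=body).execute()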
While it is correct that you can request a quota increase, which will raise the total number of requests you can make, this is still limited.
In the API Console, there is a similar quota referred to as Requests per 100 seconds per user. By default, it is set to 100 requests per 100 seconds per user and can be adjusted to a maximum value of 1,000. But the number of requests to the API is restricted to a maximum of 10 requests per second per user.
Requests per user per 100 seconds
This is a user-based quota, linked to the maximum of 10 requests per second per user (10 queries per second (QPS) per IP address). It is basically flood protection: it prevents a single user from making too many requests against the API and thereby making it hard for the rest of us to use it.
What you need to understand first is that the 100 requests per user per 100 seconds quota is enforced quite loosely. When you run your request there is really no way to know which server it will run on; if you're the only one running on that server, it's possible you could kick off 100 requests in 10 seconds and then be blocked for the next 90 seconds.
quotaUser
The second thing you need to know is that user-based normally means IP-based: these requests may be going against different views, but if they are all running from the same IP address, Google assumes you are the same user. To get around that you can use an alternate parameter called quotaUser, sending a unique string with every request. It can help, though it won't eliminate the problem completely; Google tends to catch on to what you are doing eventually.
quotaUser – An arbitrary string that uniquely identifies a user.
Lets you enforce per-user quotas from a server-side application even in cases when the user's IP address is unknown. This can occur, for example, with applications that run cron jobs on App Engine on a user's behalf.
You can choose any arbitrary string that uniquely identifies a user, but it is limited to 40 characters.
Learn more about Capping API usage.
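For illustration, one way to send quotaUser is as a plain query parameter with the requests library against the v4 endpoint; the token handling and per-user ID scheme here are assumptions, not part of the answer above:

import requests

def batch_get(report_body, access_token, user_id):
    # quotaUser is a standard Google API query parameter; sending a stable,
    # unique string per end user makes the per-user quota count per user_id
    # instead of per originating IP address.
    return requests.post(
        "https://analyticsreporting.googleapis.com/v4/reports:batchGet",
        params={"quotaUser": user_id[:40]},  # the parameter is capped at 40 characters
        headers={"Authorization": "Bearer " + access_token},
        json=report_body,
    )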
Implementing exponential backoff
Google normally recommends that you implement something called exponential backoff. This basically means that you try a request; if it fails, you wait a few seconds and try again; if that also fails, you wait twice as long as before and try again. You do this about 10 times, and normally you are able to get through.
If you are using one of the official Google client libraries, most of them have exponential backoff implemented already.
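If you are making raw HTTP calls yourself, a minimal version looks something like this (a sketch; in practice you would catch only rate-limit errors rather than every exception):

import random
import time

def with_backoff(call, max_tries=10):
    # Retry call(), doubling the wait between attempts: 1s, 2s, 4s, ...
    for attempt in range(max_tries):
        try:
            return call()
        except Exception:  # in real code, catch only quota/5xx errors
            if attempt == max_tries - 1:
                raise
            time.sleep(2 ** attempt + random.random())  # jitter avoids lockstep retries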
flood buster
A while ago I wrote an article about something I called flood buster; it's a way of keeping track of how fast you are going, to try to prevent the user-quota error. The code is in C#; you may find it useful: flood buster.
not really an issue
While getting these errors may be ugly, it doesn't really matter; you should just make the request again. Google does not count this error against you unless you do it constantly for hours at a time.
2000 requests per project per 100 seconds
You need to remember that the number of requests your project can make in total per 100 seconds is 2,000. This cannot be increased.
So if you have two users each eating up 1,000 requests per 100 seconds, you're going to hit the project-based quota, and there is nothing you can do about that. Allowing a single user to eat all your quota is, IMO, not a very good idea unless this is a single-user application.
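If several users do share one project, a process-wide throttle can keep their combined rate under the project quota. A minimal thread-safe sketch (2,000 per 100 seconds averages out to one request every 0.05 s):

import threading
import time

class ProjectThrottle:
    # Cap the combined request rate of all users sharing one API project.
    def __init__(self, max_requests=2000, per_seconds=100):
        self.min_interval = per_seconds / max_requests  # 0.05 s between requests
        self.lock = threading.Lock()
        self.next_slot = time.monotonic()

    def acquire(self):
        # Reserve the next send slot; sleep until it arrives.
        with self.lock:
            now = time.monotonic()
            self.next_slot = max(self.next_slot, now) + self.min_interval
            wait = self.next_slot - self.min_interval - now
        if wait > 0:
            time.sleep(wait)

throttle = ProjectThrottle()
# every worker thread calls throttle.acquire() before issuing a request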

Google Analytics data limits

We are using Google Analytics on a webshop. Recently we added enhanced ecommerce to measure more events so we can optimize the webshop. But now we are seeing fewer pageviews, and other data is missing.
I don't know what causes it, but on a specific page we were no longer measuring anything; I removed some items from the ga:addImpression data, and now the pageview is measured again.
I can find limits for GA, but I can't find anything about the amount of data that can be sent to GA, and this seems to be related to the amount of data sent: if I shorten the name of a product, the pageview is also measured again. GA is practically broken for us now because we are missing huge numbers of pageviews.
Where can I find these limits, or how will I ever know when I'm running into these limits?
On one hand, I'm not sure how you are building your hits, but you should keep in mind the payload limit for sending information to GA (the limit is 8 KB).
On the other hand, there is indeed a documented limit that you should consider (Docs):
This applies to analytics.js, Android iOS SDK, and the Measurement Protocol.
200,000 hits per user per day
500 hits per session
If you go over either of these limits, additional hits will not be processed for that session / day, respectively. These limits apply to Analytics 360 as well.
My best advice is to regulate the number of events you send, really considering which information has value. No doubt EE data is really important, so if the problem is the size, you should partition your productImpression data into multiple hits, as sketched below.
And finally, migrate to GTM.
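A sketch of that partitioning idea, in plain Python for illustration; in analytics.js you would do the equivalent by issuing several smaller hits, one ga('send', ...) call per batch:

def chunk_impressions(products, batch_size=10):
    # Yield the impression list in slices small enough that no single hit's
    # payload approaches the 8 KB limit; batch_size is a guess you would
    # tune against your actual product-name lengths.
    for i in range(0, len(products), batch_size):
        yield products[i:i + batch_size]

# each batch then goes out as its own hit instead of one oversized hit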
A Google Analytics request can send at most about 8 KB of data:
POST: payload_data – The BODY of the post request. The body must include exactly 1 URI encoded payload and must be no longer than 8192 bytes.
URL endpoint: The length of the entire encoded URL must be no longer than 8000 bytes.
If your hit exceeds that limit (which happens e.g. with large product lists in EEC tracking), it is, as far as I can tell, not processed.
There are also restrictions on field length for some fields (e.g. a custom dimension has a max of 150 bytes; others are detailed in the parameter reference).
In some cases the data type is relevant; e.g. if in your event tracking the event value is set to a string, the call might fail.
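One way to catch oversized hits before they are silently dropped is to measure the encoded payload, e.g. for Measurement Protocol hits; a sketch, with the 8192-byte limit taken from the docs quoted above:

from urllib.parse import urlencode

MAX_BODY_BYTES = 8192  # Measurement Protocol POST body limit

def hit_size(params):
    # Byte length of the hit once URL-encoded, i.e. what GA actually receives.
    return len(urlencode(params).encode("utf-8"))

hit = {"v": 1, "tid": "UA-XXXXX-Y", "cid": "555", "t": "pageview", "dp": "/home"}
if hit_size(hit) > MAX_BODY_BYTES:
    raise ValueError("hit would exceed the 8 KB limit and be dropped")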
I think this is the page you are looking for: the Quotas and limits page can help.
These limits apply to the Web Property / Property / Tracking ID.
10 million hits per month per property
If you go over this limit, the Google Analytics team might contact you and ask you upgrade to Analytics 360 or implement client sampling to reduce the amount of data being sent to Google Analytics.

HERE API Requests Per Second limits

I'm testing out the HERE API for geocoding purposes. Currently in the evaluation period, some of my tests include geocoding as many as 400 addresses at a time (later I may rarely hit 1000). When I tried this with Google Maps, they would give me an error indicating I'd gone over the rate limit, but I have not gotten such an error from the HERE API, despite not limiting the rate of my requests (beyond waiting for one to finish before sending the next).
But in the Developer FAQ the Requests Per Second limit is given as:
Plan        Public Plans    Business Plans
Basic       1               N/A
Starter     1               1
Standard    2               2
Pro         3               3
That seems ridiculously slow. 1 request per second? 3 per second on the highest plan? Is this chart a typo? If so, what are the actual limits? If not, what kind of error should I expect if I exceed that limit?
Their documentation states that the RPS means "for each Application the number of Requests per second to HERE Services calculated as an average (number of Requests during a period of 5 minutes) to all of the APIs used to access the features listed for each subscription plan".*
They say later in the documentation that quota is calculated monthly: "When a usage record is loaded into our billing system that results in a plan crossing its monthly quota, the price applied to that usage record is pro-rated to account for the portion that is included in your monthly quota for free and the portion that is billable. Subsequent usage records above your monthly quota will show at the per transaction prices listed on this website."*
Overages are billed at 200 requests/$1 USD for Business or 2000 requests/$1 USD for Public plans. So on the Pro plan, you will hit your limit if you use more than about 7.78 million API requests in a 30-day month (3 requests/second × 86,400 seconds/day × 30 days = 7,776,000); any usage beyond that would be billed at the rates above.
Excerpts taken from Developer FAQ linked above.
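If you do want to stay under those per-second numbers while batch-geocoding, simple pacing between requests is enough. A sketch in Python; the v1 geocode endpoint and apiKey parameter are my assumptions about the HERE Geocoding & Search API, and RPS is your plan's allowance:

import time
import requests

RPS = 1  # your plan's requests-per-second allowance

def geocode_all(addresses, api_key):
    results = []
    for address in addresses:
        start = time.monotonic()
        resp = requests.get(
            "https://geocode.search.hereapi.com/v1/geocode",
            params={"q": address, "apiKey": api_key},
        )
        results.append(resp.json())
        # pace the loop so we never exceed RPS on average
        elapsed = time.monotonic() - start
        if elapsed < 1 / RPS:
            time.sleep(1 / RPS - elapsed)
    return results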

Is there a call limit to accessing LinkedIn share counts?

When using https://www.linkedin.com/countserv/count/share?format=json&url= to access an article's share count, is there a daily API limit?
We noticed that retrieving count data was taking as much as 20 seconds on our production server. We added logic to cache the counts, and the 20-second delay stopped the next day. We are left wondering, though, what the limit might be (we can't seem to find it in the documentation).
