Google clearly explains that:
Use of the Google Geocoding API is subject to a query limit of 2,500 geolocation requests per day. (Users of the Google Maps API for Business may perform up to 100,000 requests per day.) This limit is enforced to prevent abuse and/or repurposing of the Geocoding API, and this limit may be changed in the future without notice. Additionally, we enforce a request rate limit to prevent abuse of the service. If you exceed the 24-hour limit or otherwise abuse the service, the Geocoding API may stop working for you temporarily. If you continue to exceed this limit, your access to the Geocoding API may be blocked.
Let's say I call it client-side:
<script type="text/javascript" src="http://maps.googleapis.com/maps/api/geocode/json?address=Mountain+View,+CA&sensor=false"></script>
and when I call it server-side:
<?php
// Note: the geocode endpoint needs query parameters (an address at minimum) to return useful data.
$mapdata = file_get_contents('http://maps.googleapis.com/maps/api/geocode/json?address=Mountain+View,+CA&sensor=false');
?>
What is the difference in how the query limit is counted?
What does this mean? I am not clear on this. Is the daily limit counted per domain, per server IP, or per client IP?
If your code is running client-side, the request is counted against the requesting client's IP; if your code is server-side, it is counted against the server's IP.
In other words: if you are making all of your requests from your server, you are much more likely to hit that limit, especially if you are not caching the results.
What you need to watch out for is the request rate limit - if you make too many requests within a very short amount of time, they block you.
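For example, a minimal server-side caching sketch (illustrative only; it assumes a Node.js 18+ server so the global fetch is available, and the in-memory Map stands in for a real store such as Redis or a database):

// Cache geocoding responses server-side so repeated lookups of the same
// address don't count against the daily quota.
const cache = new Map();

async function geocode(address) {
  if (cache.has(address)) return cache.get(address); // cache hit: no API call, no quota use
  const url = 'http://maps.googleapis.com/maps/api/geocode/json?address=' +
    encodeURIComponent(address);
  const data = await (await fetch(url)).json();
  cache.set(address, data); // remember the result for next time
  return data;
}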
Related
We provide a traditional client-server software package.
We want to build in a feature that will let us pass 2 addresses to the Google Maps Distance API and get back the travel time between the 2 addresses.
2 Questions:
Would each request be ONE of the 2500 free requests per day?
Could each of my customers get their own API Key so that they would
have their own 2500 requests per day?
Yes, if it is using your API key or client ID, then each request would count as one of your requests for the 24-hour period.
If you mean the client is passing their own API key to the server for each request: you could technically do this, but I think it would violate the Google Terms of Service - https://developers.google.com/maps/terms. If you mean you would provision a server instance for each client with their own API key, then I believe that would be acceptable. There may also be some applicable terms of use regarding how this API key is provided.
When working with Firebase (Firebase Cloud Functions in this case), we have to pay for every byte of bandwidth.
So I wonder: how can we deal with the case where someone somehow finds our endpoint and then intentionally sends continuous requests (with a script or tool)?
I did some searching on the internet but didn't find anything that helps.
Except for this one, but it is not really useful.
Since you didn't specify which type of request, I'm going to assume that you mean HTTP(S) triggers on Firebase Cloud Functions.
There are multiple limiters you can put in place to 'reduce' the bandwidth consumed by the requests. I'll list a few that come to mind:
1) Limit the type of requests
If all you need is GET and, say, you don't need PUT, you can start off by returning a 403 for those before you go any further in your cloud function:
if (req.method === 'PUT') { res.status(403).send('Forbidden!'); return; }
2) Authenticate if you can
Follow Google's example here and allow only authorized users to use your HTTPS endpoints. You can achieve this by verifying Firebase ID tokens, as in the SOF answer to this question.
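As a rough sketch of that pattern (assuming the firebase-admin and firebase-functions SDKs; guardedEndpoint and the greeting are illustrative names, not from the linked sample):

const admin = require('firebase-admin');
const functions = require('firebase-functions');
admin.initializeApp();

// Reject any request that doesn't carry a valid Firebase ID token
// before doing any real work.
exports.guardedEndpoint = functions.https.onRequest(async (req, res) => {
  const header = req.headers.authorization || '';
  if (!header.startsWith('Bearer ')) {
    res.status(401).send('Unauthorized');
    return;
  }
  try {
    // verifyIdToken throws if the token is invalid or expired.
    const decoded = await admin.auth().verifyIdToken(header.split('Bearer ')[1]);
    res.status(200).send(`Hello ${decoded.uid}`);
  } catch (err) {
    res.status(401).send('Unauthorized');
  }
});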
3) Check for origin
You can try checking the origin of the request before going any further in your cloud function. If I recall correctly, Cloud Functions gives you full access to the HTTP request/response objects, so you can set the appropriate CORS headers and respond to pre-flight OPTIONS requests.
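A minimal sketch of such an origin check (the allowed origin is a placeholder; note this mostly deters browsers, since a scripted attacker can forge the Origin header):

const functions = require('firebase-functions');

exports.corsGuarded = functions.https.onRequest((req, res) => {
  const allowedOrigin = 'https://app.example.com'; // assumption: your real front-end origin
  if (req.headers.origin && req.headers.origin !== allowedOrigin) {
    res.status(403).send('Forbidden');
    return;
  }
  res.set('Access-Control-Allow-Origin', allowedOrigin);
  if (req.method === 'OPTIONS') {
    // Answer the pre-flight cheaply and stop.
    res.set('Access-Control-Allow-Methods', 'GET');
    res.status(204).send('');
    return;
  }
  res.status(200).send('ok');
});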
Experimental Idea 1
You can hypothetically put your functions behind a load balancer / firewall, and relay-trigger them. It would more or less defeat the purpose of cloud functions' scalable nature, but if a form of DoS is a bigger concern for you than scalability, then you could try creating an app engine relay, put it behind a load balancer / firewall and handle the security at that layer.
Experimental Idea 2
You can try applying DNS-level attack-prevention to your problem by putting something like Cloudflare in between. Use a CNAME and Cloudflare Page Rules to map URLs to your cloud functions. This could hypothetically absorb the impact. Like this:
*function1.mydomain.com/* -> https://us-central1-etc-etc-etc.cloudfunctions.net/function1/$2
Now if you go to
http://function1.mydomain.com/?something=awesome
you can even pass the URL params to your functions - a tactic I read about in this Medium article during the summer, when I needed something similar.
Finally
In an attempt to make the questions on SOF more linked, and help everyone find answers, here's another question I found that's similar in nature. Linking here so that others can find it as well.
Returning a 403 or an empty body for unsupported methods will not do much for you. Yes, you will waste less bandwidth, but Firebase will still bill you for the invocation; an attacker could just send millions of requests and you would still lose money.
Also, authentication is not a solution to this problem. Any auth process (creating a token, verifying/validating a token) is costly, and again Firebase has thought of this and bills you based on the time it takes your function to return a response. You cannot afford to use auth alone to stop continuous requests.
Plus, a smart attacker would not just fire requests that return a 403. What stops them from hitting the login endpoint a million times? And if they provide correct credentials (which a smart attacker would), you waste bandwidth by returning a token each time; if you are regenerating tokens, you also waste time on each request, which further hurts your bill.
The idea here is to block the attacker completely, before the request ever reaches your API functions.
What I would do is use Cloudflare to proxy my endpoints. In my API I would define a max_req_limit_per_ip and a time_frame, save each request's IP in the DB, and on each request check whether that IP went over the limit for the given time frame; if so, use the Cloudflare API to block that IP at the firewall (a sketch follows the examples below).
Tip:
max_req_limit_per_ip and time_frame can be customized for different requests.
For example:
an ip can hit a 403 10 times in 1 hour
an ip can hit the login successfully 5 times in 20 minutes
an ip can hit the login unsuccessfully 5 times in 1 hour
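A rough sketch of that check (assuming Node.js 18+ for the global fetch; the limit, time frame, in-memory store, and environment variable names are stand-ins for your real DB and config; the endpoint and payload follow Cloudflare's IP Access Rules API):

const MAX_REQ_LIMIT_PER_IP = 100;     // assumption: tune per endpoint
const TIME_FRAME_MS = 60 * 60 * 1000; // one hour
const hits = new Map();               // ip -> array of recent request timestamps

async function checkAndBlock(ip) {
  const now = Date.now();
  // Keep only the hits inside the current time frame, then record this one.
  const recent = (hits.get(ip) || []).filter((t) => now - t < TIME_FRAME_MS);
  recent.push(now);
  hits.set(ip, recent);
  if (recent.length <= MAX_REQ_LIMIT_PER_IP) return false;

  // Over the limit: ask Cloudflare to block this IP at its firewall.
  await fetch(`https://api.cloudflare.com/client/v4/zones/${process.env.CF_ZONE_ID}/firewall/access_rules/rules`, {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.CF_API_TOKEN}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      mode: 'block',
      configuration: { target: 'ip', value: ip },
      notes: 'Exceeded request limit',
    }),
  });
  return true;
}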
There is a solution to this problem where you verify callers at the HTTPS endpoint.
Only users who pass a valid Firebase ID token as a Bearer token in the Authorization header of the HTTP request or in a __session cookie are authorized to use the function.
Checking the ID token is done with an ExpressJs middleware that also passes the decoded ID token in the Express request object.
Check this sample code from Firebase.
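Condensed, that middleware pattern looks roughly like this (error handling trimmed for brevity; assumes firebase-admin is initialized, and cookie-parser is in use if you rely on the __session cookie):

const admin = require('firebase-admin');

const validateFirebaseIdToken = async (req, res, next) => {
  const authHeader = req.headers.authorization || '';
  // Accept the token either as a Bearer token or from the __session cookie.
  const idToken = authHeader.startsWith('Bearer ')
    ? authHeader.split('Bearer ')[1]
    : (req.cookies && req.cookies.__session);
  if (!idToken) {
    res.status(403).send('Unauthorized');
    return;
  }
  try {
    // Make the decoded ID token available to downstream handlers.
    req.user = await admin.auth().verifyIdToken(idToken);
    next();
  } catch (e) {
    res.status(403).send('Unauthorized');
  }
};

You would then register it with app.use(validateFirebaseIdToken) before your routes.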
Putting access-control logic in your function is standard practice for Firebase, BUT the function still has to be invoked to access that logic.
If you don't want your function to fire at all except for authenticated users, you can take advantage of the fact that every Firebase Project is also a Google Cloud Project -- and GCP allows for "private" functions.
You can set project-wide or per-function permissions outside the function(s), so that only authenticated users can cause the function to fire, even if they try to hit the endpoint.
Here's documentation on setting permissions and authenticating users. Note that, as of writing, I believe using this method requires users to use a Google account to authenticate.
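As a sketch of what that looks like with the gcloud CLI (myFunction and the member email are placeholders; 1st-gen Cloud Functions shown):

# Remove public access (allUsers) from the function's invoker role
gcloud functions remove-iam-policy-binding myFunction \
  --member="allUsers" \
  --role="roles/cloudfunctions.invoker"

# Grant invoke permission only to a specific identity
gcloud functions add-iam-policy-binding myFunction \
  --member="user:alice@example.com" \
  --role="roles/cloudfunctions.invoker"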
I tried searching but I didn't find any useful resource that would answer my question.
I'm trying to develop a service for my customers where I will need to connect to their analytics data and combine it with information from other services that I already provide. However, with the quota on API requests, how can I get it to work for several customers?
I mean, the limit is 10,000 requests per month, and I will probably make around 40-50 requests per day per customer. That means that if I get more than 7 customers to use it, I would reach the monthly quota. What is the best approach to make this scalable?
Thank you in advance!
I think you are a little confused about the Google Analytics API limits.
The Management API and the Metadata API have a limit of 10,000 requests per day, at up to 10 requests per second.
The Core Reporting API allows 10,000 requests per day per user and/or view (formerly called a profile), and 50,000 requests per day per application. You can request that the 50k be extended, but you need to show that there aren't a lot of errors coming from your application.
It might be a good idea to also send either userIp or quotaUser with all of your requests; this will ensure that each of your users gets their own 10k requests each day. If you don't send quotaUser or userIp, Google lumps them all under the same quota user, and as a group they are limited to the 10k. This may or may not be a problem if you can ensure that several users won't be requesting the same data from the same view (formerly profile).
Another thing to remember is that next links count toward the limit as well, so you should either refine your requests so that you don't get too many rows back, or set max-results high enough that you don't need too many next-link requests.
You can read more about how and why you should use quotaUser here: Google Analytics quotaUser
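For illustration, attaching quotaUser to a Core Reporting API (v3) request is just one extra query parameter; the view ID, dates, metric, and the customer-42 value below are placeholders:

https://www.googleapis.com/analytics/v3/data/ga?ids=ga:12345678&start-date=2014-01-01&end-date=2014-01-31&metrics=ga:sessions&quotaUser=customer-42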
The quota is 10,000 per day per profile.
You should be fine especially if each of your clients has a separate profile.
https://developers.google.com/analytics/devguides/reporting/core/v2/limits-quotas#core_reporting
I'm investigating using Akamai or other CDNs to deliver video.
It's still unclear to me whether, in my Akamai account, I can limit the number of redirects Akamai will perform for me, for example in a given month.
I wish I could prevent Akamai (or any other CDN) from handling more than 1 million requests per month.
Of course, beyond the quota, queries would be rejected by the CDN; this is what I want.
Besides, since the redirection is distributed among the CDN's points of presence, how precise will the quota be? Can it be precise to the unit (e.g. if the quota is 1,000,000, is it guaranteed I will not get even 1,000,001 requests)?
Thank you!
Akamai has no quotas or caps on your service.
What would you like to have happen when your limit is reached?
Your best option may be to use some type of origin auth to gate the type of response served and keep a count at your origin.
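A minimal sketch of that idea, assuming a Node.js/Express origin (the route, file layout, limit, and in-memory counter are all illustrative; a real setup would persist the count and reset it monthly):

const express = require('express');
const app = express();

const MONTHLY_LIMIT = 1000000; // assumption: the 1M requests/month budget
let servedThisMonth = 0;       // assumption: reset by a scheduled job each month

app.get('/video/:id', (req, res) => {
  if (servedThisMonth >= MONTHLY_LIMIT) {
    res.status(429).send('Monthly quota exceeded'); // reject beyond the quota
    return;
  }
  servedThisMonth += 1;
  res.sendFile(`videos/${req.params.id}.mp4`, { root: '/srv' });
});

app.listen(8080);

Note that with CDN caching in front, only cache misses reach the origin, which is why gating the response with origin auth (so every request must be validated) is part of the suggestion.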
Recently I was developing an application using the LinkedIn people-search API. The documentation says that a developer registration gets 100,000 (1 lakh) API calls per day, but when I registered for the API and ran a Python script, after some 300 calls it said the throttle limit was exceeded.
Has anyone faced this kind of issue using the LinkedIn API? Comments are appreciated.
Thanks in advance.
It's been a while, but the stats suggest people still look at this; I'm experimenting with the LinkedIn API and can provide some more detail.
The typical throttles are stated as both a max (e.g. 100K) and a per-user-token number (e.g. 500). Together, those numbers mean you can make up to 100,000 calls per day to the API, but even as a developer, a single user token is limited to 500 per day.
I ran into this, and after setting up a barebones app and getting some users, I can confirm a daily throttle of several thousand API calls. [Deleted discussion of what was probably, upon further consideration, an accidental back door in the LinkedIn API.]
As per the Throttle Limits published by LinkedIn:
LinkedIn API keys are throttled by default. The throttles are designed
to ensure maximum performance for all developers and to protect the
user experience of all users on LinkedIn.
There are three types of throttles applied to all API keys:
Application throttles: These throttles limit the number of each API call your application can make using its API key.
User throttles: These throttles limit the number of calls for any individual user of your application. User-level throttles serve
several purposes, but in general are implemented where there is a
significant potential impact to the user experience for LinkedIn
users.
Developer throttles: For people listed as developers on their API keys, they will see user throttles that are approximately four times
higher than the user throttles for most calls. This gives you extra
capacity to build and test your application. Be aware that the
developer throttles give you higher throttle limits as a developer of
your application. But your users will experience the User throttle
limits, which are lower. Take care to make sure that your application
functions correctly with the User throttle limits, not just for the
throttle limits for your usage as a developer.
Note: To view current API usage of your application and to ensure you haven't hit any throttle limits, visit
https://www.linkedin.com/developer/apps and click on "Usage & Limits".
The throttle limit for individual users of People Search is 100, with 400 being the limit for the person that is associated with the Application as the developer:
https://developer.linkedin.com/documents/throttle-limits
When you run into a limit, view the API usage for the application on the application page to see which throttle you are hitting.