I'm placing up to 25 markers on a map but when I hit 12 I get an error of "OVER_QUERY_LIMIT".
I have hit nowhere near the 2,500 hits a day limit.
If I try and plot only 11 markers I have no problem.
Anyone know why this is?
Edit:
OK, after a lot of testing I have determined that I can't call geocoder.geocode more than a certain number of times before I have to wait for the outstanding calls to finish.
I have implemented a version that sends a bunch of requests, waits, and then sends more; it's working, but it's a total fudge.
Is there a way to geocode a bunch of addresses at once without this limitation?
My client does not store the latlng of the addresses so I need to get that from the address.
The JS geocoder is rate limited:
"The Google Maps API provides a geocoder class for geocoding addresses dynamically from user input. These requests are rate-limited to discourage abuse of the service. If instead, you wish to geocode static, known addresses, see the Geocoding web service documentation."
From http://code.google.com/apis/maps/documentation/javascript/geocoding.html#Geocoding
The web service documentation also mentions a rate limit, but presumably it's higher:
http://code.google.com/apis/maps/documentation/geocoding/#Limits
You can also cache addresses for a limited amount of time for performance purposes. That would allow you to check your cache first before running into the problem.
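Since the client doesn't store the lat/lngs, one way to apply that caching idea with the web service is to put a small expiring cache in front of the geocode call. Below is a minimal sketch in C#, on the assumption that the geocoding moves server-side; CachedGeocoder and its members are my own illustrative names, not part of any Google library:

// Minimal sketch: cache-first geocoding against the Geocoding web service.
using System;
using System.Collections.Concurrent;
using System.Net.Http;
using System.Threading.Tasks;

public class CachedGeocoder
{
    private static readonly HttpClient Http = new HttpClient();
    // Entries expire so stale coordinates eventually get refreshed.
    private readonly ConcurrentDictionary<string, (string Json, DateTime FetchedAt)> _cache =
        new ConcurrentDictionary<string, (string, DateTime)>();
    private readonly TimeSpan _ttl = TimeSpan.FromHours(24);

    public async Task<string> GeocodeAsync(string address)
    {
        // Serve from the cache while the entry is still fresh.
        if (_cache.TryGetValue(address, out var entry) &&
            DateTime.UtcNow - entry.FetchedAt < _ttl)
            return entry.Json;

        // Otherwise make one web service request per uncached address.
        // Append your API key to the URL if your usage requires one.
        var url = "https://maps.googleapis.com/maps/api/geocode/json?address="
                  + Uri.EscapeDataString(address);
        var json = await Http.GetStringAsync(url);
        _cache[address] = (json, DateTime.UtcNow);
        return json;
    }
}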
My team and I have been at this for 4 full days now, analyzing every log available to us (Azure Application Insights, you name it, we've analyzed it), and we cannot get to the bottom of this issue.
We have a customer who is integrated with our API to make search calls and they are complaining of intermittent but continual 502.3 Bad Gateway errors.
Here is the flow of our architecture:
All resources are in Azure. The endpoint our customers call is a .NET Framework 4.7 Web App Service in Azure that acts as the stateless handler for all the API calls and responses.
This API app sends the calls to an Azure Service Fabric Cluster - that cluster load balances on the way in and distributes the API calls to our Search Service Application. The Search Service Application then generates an ElasticSearch query from the API call, and sends that query to our ElasticSearch cluster.
ElasticSearch then sends the results back to Service Fabric, and the process reverses from there until the results are sent back to the customer from the API endpoint.
What may separate our process from a typical API is that our response payload can be relatively large, depending on the search. On average over these last several days, the payload of a single response can be anywhere from 6MB to 12MB. Our searches simply return a lot of data from ElasticSearch. In any case, a normal search is typically executed and returned in 15 seconds or less. As of right now, we have already increased our timeout window to 5 minutes just to try to handle what is happening and reduce timeout errors, given how long their searches are taking. We increased the timeout via the following code in Startup.cs:
services.AddSingleton<HttpClient>(s =>
{
    // Shared outbound HttpClient with a 5-minute timeout for long-running searches.
    return new HttpClient() { Timeout = TimeSpan.FromSeconds(300) };
});
I've read in some places that you actually have to set this in the web.config file as opposed to here, or at least in addition to it. Not sure if this is true?
So the customer who is getting the 502.3 errors has significantly increased the volume they are sending us over the last week, but we believe we are fully scaled to handle it. They are still trying to put the issue on us, but after many days of research, I'm starting to wonder if the problem is actually on their side. Could it be that they are not equipped to receive the increased payload on their side? Could their integration architecture not be scaled enough to take the return payload from the increased volumes? When we observe our resource usage (CPU/RAM/IO) on all of the above applications, it is all normal, all below 50%. This also makes me wonder if this is on their side.
I know it's a bit of a subjective question, but I'm hoping for some insight from someone who may have experienced this before, and even more importantly, from someone who has experience with a .NET API app in Azure which returns large datasets in its responses.
Any code blocks of our API app, or screenshots from Application Insights are available to post upon request - just not sure what exactly anyone would want to see yet as I type this.
In the documentation for Google Analytics Collection Limits and Quotas, Google gives the rate limits that are implemented by the various Google-provided libraries. I can't seem to find a published rate limit for users who are POSTing directly to the Measurement Protocol (https://www.google-analytics.com/collect).
Is there one and if so what is it?
Edit on 10 July 2015 -
A few commenters asked for an example of the kind of data I am sending in.
I am using a series of calls to wget with a sleep of one second between each call.
Here is an example with the app name and tracking code removed:
wget -nv --post-data 'ul=en&qt=7150000&av=0.0.1&ea=PLET&v=1&tid=<my_tracking_code>&ec=Move+to+Object&cid=1434738538-738-654031&an=<my_app_name>&t=event' -O /dev/null 'https://www.google-analytics.com/collect'
I've tried sending these queries to the /debug endpoint and all of them are valid. My first upload worked as expected and reports looked good. Subsequent uploads of the same data set to different GA properties have had mixed results. Sometimes no data appears in reports. Sometimes partial data appears in reports. During upload, realtime reports always show activity, though.
Directly from the documentation, Google Analytics Collection Limits and Quotas:
These limits apply to the Web Property / Property / Tracking ID:

10 million hits per month per property

Measurement Protocol / Universal Analytics enabled

This applies to analytics.js, the Android and iOS SDKs, and the Measurement Protocol:

200,000 hits per user per day
500 hits per session, not including ecommerce (item and transaction hit types)

If you go over either of these limits, additional hits will not be processed for that session / day, respectively. These limits apply to Premium as well.
Now, I agree it doesn't specifically state the per-second rate for the Measurement Protocol, but the section above lumps the Measurement Protocol in with analytics.js, so I think we can assume it shares the analytics.js rate:
analytics.js:
Each analytics.js tracker object starts with 20 hits that are replenished at a rate of 2 hits per second. Applies to all hits except for ecommerce (item or transaction).
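That quota is effectively a token bucket: a burst allowance of 20 hits, refilled at 2 hits per second. Below is a minimal C# sketch of that scheme just to make the behaviour concrete; the HitRateLimiter class is my own illustrative name, not anything from a Google library.

// Illustrative token bucket matching the analytics.js description above:
// start with 20 hits, replenish at 2 hits per second.
using System;

public class HitRateLimiter
{
    private const double BucketSize = 20.0;      // initial burst allowance
    private const double RefillPerSecond = 2.0;  // replenishment rate
    private double _tokens = BucketSize;
    private DateTime _lastCheck = DateTime.UtcNow;

    // Returns true if the hit may be sent, false if it would be dropped.
    public bool TryConsume()
    {
        var now = DateTime.UtcNow;
        // Credit tokens for the elapsed time, capped at the bucket size.
        _tokens = Math.Min(BucketSize, _tokens + (now - _lastCheck).TotalSeconds * RefillPerSecond);
        _lastCheck = now;

        if (_tokens < 1.0) return false;  // bucket empty: hit would be dropped
        _tokens -= 1.0;
        return true;
    }
}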
But just to make sure, I am sending an email off to the development team; they should make it clearer where the per-second rate of the Measurement Protocol lies. I will repost here when I hear from them.
Response from Google
The Measurement Protocol does not do any kind of rate limiting or quota-ing by IP address or tracking ID or anything like that. However, most of the client libraries do rate limit in some form or another.

As Linda points out in her answer, there are various limits and quotas imposed by the back end, but those are done at processing time, not collection time.
Conclusion
There is no limit on sending data through the Measurement Protocol, but limits may be applied when the data is processed. I think they may be referring to the 10 million hits per month quota above. It seems it's the libraries that apply limits on how fast you can send data, not the Measurement Protocol directly.
Last update: please watch this video, which explains all the GA quota policies:
https://youtu.be/1UfER93ALxo
In particular, your issue might be a result of the 10 requests / 1 second limitation:
https://youtu.be/1UfER93ALxo?t=5m27s
I can confirm the same thing. In my case I had my own buildHitTask, which constructs the URL for a Measurement Protocol request (MPR) and stores it in the hitPayload field. But instead of the original GA reporting, I was saving those URLs into cookies for delayed reporting.
In my experiment, only 10-20% of 2,000 measurement protocol requests were actually "stored".
The rest of the hits are not available in the GA reporting UI, nor via the API or BigQuery. Each request was sent with a 2 second delay via the new Image() method, slowing down in case of errors. The results received are not consistent: both successful and failed hits are randomly distributed across the whole time period.
Please let me know if you find more details on this constraint!
I use calls like
http://maps.googleapis.com/maps/api/geocode/xml?address=Switzerland+Bern+&components=country:CH&sensor=false
to get geocoordinates. This works for some hours, and after that the response is
<?xml version="1.0" encoding="UTF-8"?>
<GeocodeResponse>
  <status>OVER_QUERY_LIMIT</status>
</GeocodeResponse>
I am pretty sure that I don't exceed the 2,500 calls I can make with my server API key, and in the API Console I have only reached 10% of my quota.
I even have billing enabled. Is there a way to debug this further or is there something I am missing?
I have the same issue. From research I understand the geocoder has a request limit of 2,500 per day and 20 per second. These are on an IP address basis.
See https://developers.google.com/maps/documentation/geocoding/index#Limits
That means if your application is on a shared server, and there are other applications sharing the IP address and using the geocode service, their requests will be bundled with yours to provide the total.
I do not know of any way of seeing a request report for the geocoder; the report in the API Console is for Maps, which covers displaying a map, not geocoding an address.
As far as I'm aware, there does not seem to be any way round the geocoder limit, simply having a standard paid API account does not seem to make any difference to the request limit.
Maps for Business accounts however allow a usage limit of up to 100,000 requests per day.
The best option may be for you to change the architecture of your application (if possible) from server-side geocoding to client-side geocoding, which would negate the restriction on your server's IP. See https://developers.google.com/maps/articles/geocodestrat
I think it's a problem on Google's end. Try to avoid server-side geocoding; use client-side geocoding instead.
I am using Google Maps API to geocode locations from asp.net - doing a few queries per day (well under the 2500 limit). This has worked fine for a year and still works on my development server but now I am constantly getting the status returned 'OVER_QUERY_LIMIT'.
I assume, but have no way of knowing for sure, that this is because someone else on the same IP address as my host is doing the same and using up the limit.
The geocoding API does not seem to let me use an API key. If I try to add &key=xxx I get a REQUEST_DENIED error.
How can I identify myself to Google as separate?
You can't identify yourself as separate from your co-hosted competitor (for example, by using a key). The reason for this is probably that people would simply use different keys to get multiple allowances.
The only solution is to change your hosting, or switch to client-side geocoding.
Google also limits the number of requests per second, which may be causing the error. Try switching to http://developer.yahoo.com/geo/placefinder/; its usage limit is 50,000 requests per day.
I am working on a project which requires server-side access to the Google Maps API. I want to calculate distance (actual distance, not straight line). The Google Maps API supports JavaScript and not ASP.NET. Please give suggestions!
You specified Google Maps in your question, but have you looked at Virtual Earth? Specifically, this routing with the Virtual Earth Web Service example sounds exactly like what you want:
server-side access (just Add Service Reference inside visual studio)
actual distance (not straight line) since it is using a route
The concerns raised by others about T&Cs for 'internal/intranet use' apply to VE as well as Google, I think; you'll have to read up on whether your application needs licensing or not.
P.S. If you did just want to calculate straight-line distance, I have instructions using SQL Server 2008, which also link to some straight C# code that does it too.
The Google API allows you to Geocode via a server side call:
http://code.google.com/apis/maps/documentation/services.html#Geocoding_Direct
This would allow you to get the longitude and latitude of the locations. You can then cache these and use them to calculate distance using the techniques CMS suggests.
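To make that concrete, here is a minimal C# sketch of such a server-side geocode call. It assumes the JSON output format of the current web service endpoint, and the parsing is deliberately simple and purely illustrative; a real client should check the response status and handle errors:

// Minimal sketch: fetch coordinates for an address from the Geocoding web service.
using System;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

class GeocodeExample
{
    static async Task Main()
    {
        using (var http = new HttpClient())
        {
            // Append &key=... if your usage requires an API key.
            var url = "https://maps.googleapis.com/maps/api/geocode/json?address="
                      + Uri.EscapeDataString("Bern, Switzerland");
            string json = await http.GetStringAsync(url);

            // The response carries results[0].geometry.location.lat / .lng.
            using (var doc = JsonDocument.Parse(json))
            {
                var location = doc.RootElement.GetProperty("results")[0]
                                  .GetProperty("geometry").GetProperty("location");
                double lat = location.GetProperty("lat").GetDouble();
                double lng = location.GetProperty("lng").GetDouble();
                Console.WriteLine($"lat={lat}, lng={lng}");
            }
        }
    }
}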
You will need to be careful of the Google T&C's though as you are only allowed to store the geocoding data for use on a Google map which is publicly available.
You would probably also run into limitations on the number of requests you could make from a single IP.
However, I think what you mean by non-straight-line distance is distance taking into account roads, one-way streets, etc.
If this is the case I think a commercial service is your only option. Although theoretically you could do it all via screen scraping, I'm almost certain that this would break Google's T&C's.
The simplest solution would probably be just to embed a Google map on a page of your application and let the user calculate the distance. You could pre-fill the to and from fields if required.
Again, if this is for an internal app, i.e. not publicly available, my understanding of the Google T&Cs is that they would forbid this.
Use something like Firebug or Fiddler to look at the requests that are being sent to Google from the JavaScript. You should then be able to build the same request using that information and an HttpWebRequest in .NET, and retrieve the same information.
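For instance, a minimal sketch of replaying a captured request with HttpWebRequest; the URL below is a placeholder for whatever request Fiddler actually shows you:

// Replay a request captured in Fiddler/Firebug.
// The URL is a placeholder; substitute the one you observed.
using System;
using System.IO;
using System.Net;

class ReplayRequest
{
    static void Main()
    {
        var request = (HttpWebRequest)WebRequest.Create("https://example.com/captured/request?param=value");
        request.Method = "GET";

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            // Dump the body so you can compare it with what the browser received.
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}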
HTH
You can calculate the distance between two geographical coordinates (latitude, longitude) using the great-circle distance algorithm.
Here you can find some other formulas for distance calculation.
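For reference, here is a small C# sketch of the haversine form of the great-circle formula; it gives straight-line distance over the Earth's surface, not road distance, and assumes a mean Earth radius of 6,371 km:

using System;

static class GreatCircle
{
    const double EarthRadiusKm = 6371.0; // mean Earth radius

    // Haversine formula: distance in kilometres between two lat/lng points.
    public static double DistanceKm(double lat1, double lon1, double lat2, double lon2)
    {
        double dLat = ToRadians(lat2 - lat1);
        double dLon = ToRadians(lon2 - lon1);
        double a = Math.Sin(dLat / 2) * Math.Sin(dLat / 2)
                 + Math.Cos(ToRadians(lat1)) * Math.Cos(ToRadians(lat2))
                 * Math.Sin(dLon / 2) * Math.Sin(dLon / 2);
        return EarthRadiusKm * 2 * Math.Atan2(Math.Sqrt(a), Math.Sqrt(1 - a));
    }

    static double ToRadians(double degrees) => degrees * Math.PI / 180.0;
}

// Example: GreatCircle.DistanceKm(51.5074, -0.1278, 48.8566, 2.3522)
// gives roughly 344 km for London to Paris.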
Well, you've pretty much identified the key issue: the Google Maps API is a browser-resident JavaScript API, and there's not much getting around that. Most of the API is executed in the browser, so there's not much network traffic to spy on.
As tsaunders mentions, there is a geocoding API call that is RESTfully accessible, but it only does geocoding/reverse geocoding, and if you already have lat/lngs you can use the calculations rms suggested, but those are, as tsaunders points out, 'as the crow flies' distances.
If you are indeed looking for road distance, the API does do routing, but you are back in the browser to get the start/end points from the user.
Perhaps you can be a little more specific about what you are trying to do and why you feel this requires you to access the API from your server. My application, for instance, has features that gather information from the user and send requests back to my server to work on; some of that data is processed by the Google Maps API first.
If I were to use an API platform, I certainly would not use Google, as the free one does not include advanced geocoding, meaning the accuracy is poor. There is also no SLA, support, or rights of service.
The directions are poor, the coverage for Ireland and its geocoding is almost childlike, and the privacy stinks. No professional business would use a Google mapping solution.
They copy everyone else's ideas, say they are their own, and get loads of press (they only added tube stations in 2006 and cycle lanes in 2010; ViaMichelin added these in 2006 and traffic in 2009!).
Any agency or developers looking for an API should stick to Bing or ViaMichelin for better customisation and user experience, which is killer!