I use calls like
http://maps.googleapis.com/maps/api/geocode/xml?address=Switzerland+Bern+&components=country:CH&sensor=false
to get geocoordinates. This works for some hours, and after that the response is
<?xml version="1.0" encoding="UTF-8"?>
<GeocodeResponse> <status>OVER_QUERY_LIMIT</status> </GeocodeResponse>
I am pretty sure that I don't exceed the 2,500 calls I can make with my server API key, and in the API Console I have only reached 10% of my quota.
I even have billing enabled. Is there a way to debug this further, or is there something I am missing?
This is my API Console: [screenshot omitted]
This is the usage data: [screenshot omitted]
I have the same issue. From research I understand the geocoder has a request limit of 2,500 per day and 20 per second. These are on an IP address basis.
See https://developers.google.com/maps/documentation/geocoding/index#Limits
That means if your application is on a shared server, and there are other applications sharing the IP address and using the geocode service, their requests will be bundled with yours to provide the total.
I do not know of any way of seeing a request report for the geocoder; the report in the API Console is for Maps, which covers displaying a map, not geocoding an address.
As far as I'm aware, there does not seem to be any way around the geocoder limit; simply having a standard paid API account does not seem to make any difference to the request limit.
Maps for Business accounts, however, allow a usage limit of up to 100,000 requests per day.
The best option may be for you to change the architecture of your application (if possible) from server-side geocoding to client-side geocoding, which would negate the restriction on your server's IP. See https://developers.google.com/maps/articles/geocodestrat
I think it's a problem on Google's side. Try to avoid server-side geocoding; use client-side geocoding instead.
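In the meantime, if the calls have to stay on the server, it is worth checking the status element explicitly so OVER_QUERY_LIMIT is not silently treated as an empty result. A minimal C# sketch (the URL is the one from the question, and the element names follow the geocoding XML response shown above; this is an illustration, not production code):

using System;
using System.Net.Http;
using System.Threading.Tasks;
using System.Xml.Linq;

class GeocodeStatusCheck
{
    private static readonly HttpClient Client = new HttpClient();

    static async Task Main()
    {
        var url = "http://maps.googleapis.com/maps/api/geocode/xml" +
                  "?address=Switzerland+Bern&components=country:CH&sensor=false";

        var xml = XDocument.Parse(await Client.GetStringAsync(url));
        var status = (string)xml.Root.Element("status");

        if (status == "OVER_QUERY_LIMIT")
        {
            // Daily quota or per-second rate limit reached: back off and retry later,
            // rather than treating this as "address not found".
            Console.WriteLine("Geocoder is over the query limit, try again later.");
            return;
        }

        if (status == "OK")
        {
            var location = xml.Root.Element("result")
                                   .Element("geometry")
                                   .Element("location");
            Console.WriteLine("lat={0}, lng={1}",
                (string)location.Element("lat"),
                (string)location.Element("lng"));
        }
        else
        {
            Console.WriteLine("Geocoder returned status: " + status);
        }
    }
}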
My team and I have been at this for 4 full days now, analyzing every log available to us: Azure Application Insights, you name it, we've analyzed it. And we cannot get to the bottom of this issue.
We have a customer who is integrated with our API to make search calls and they are complaining of intermittent but continual 502.3 Bad Gateway errors.
Here is the flow of our architecture:
All resources are in Azure. The endpoint our customers call is a .NET Framework 4.7 Web App Service in Azure that acts as the stateless handler for all the API calls and responses.
This API app sends the calls to an Azure Service Fabric cluster; that cluster load balances on the way in and distributes the API calls to our Search Service Application. The Search Service Application then generates an ElasticSearch query from the API call and sends that query to our ElasticSearch cluster.
ElasticSearch then sends the results back to Service Fabric, and the process reverses from there until the results are sent back to the customer from the API endpoint.
What may separate our process from a typical API is that our response payload can be relatively large, depending on the search. On average over these last several days, the payload of a single response can be anywhere from 6MB to 12MB. Our searches simply return a lot of data from ElasticSearch. In any case, a normal search is typically executed and returned in 15 seconds or less. As of right now, we have already increased our timeout window to 5 minutes just to try to handle what is happening and reduce timeout errors, given that their searches are taking so long. However, we increased the timeout via the following code in Startup.cs:
// Register a single shared HttpClient with its outgoing-call timeout set to 5 minutes.
services.AddSingleton<HttpClient>(s =>
{
    return new HttpClient() { Timeout = TimeSpan.FromSeconds(300) };
});
I've read in some places that you actually have to do this in the web.config file as opposed to here, or at least in addition to it. I'm not sure if this is true.
So the customer who is getting the 502.3 errors has significantly increased the volume they are sending us over the last week, but we believe we are fully scaled to be able to handle it. They are still trying to put the issue on us, but after many days of research, I'm starting to wonder if the problem is actually on their side. Could it be that they are not equipped to take the increased payload on their side? Could their integration architecture simply not be scaled enough to take the return payload from the increased volumes? When we observe our resource usage (CPU/RAM/IO) on all of the above applications, everything is normal, all below 50%. This also makes me wonder if this is on their side.
I know it's a bit of a subjective question, but I'm hoping for some insight from someone who may have experienced this before, and even more importantly, from someone who has experience with a .NET API app in Azure that returns large datasets in its responses.
Any code blocks from our API app, or screenshots from Application Insights, are available to post upon request; I'm just not sure what exactly anyone would want to see yet as I type this.
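In case it is useful, here is a rough, hypothetical diagnostic middleware (this assumes the API app is ASP.NET Core, which the Startup.cs snippet above suggests; the class and log message are made up). It records how long each call takes and how large the response body is, so those numbers can be lined up against the customer's 502.3 timestamps to see whether the failures correlate with the largest or slowest payloads:

using System.Diagnostics;
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;

public class ResponseDiagnosticsMiddleware
{
    private readonly RequestDelegate _next;
    private readonly ILogger<ResponseDiagnosticsMiddleware> _logger;

    public ResponseDiagnosticsMiddleware(RequestDelegate next,
        ILogger<ResponseDiagnosticsMiddleware> logger)
    {
        _next = next;
        _logger = logger;
    }

    public async Task InvokeAsync(HttpContext context)
    {
        var stopwatch = Stopwatch.StartNew();

        // Buffer the response so its size can be measured. Fine for a temporary
        // diagnostic, but it holds the whole payload in memory, so remove it afterwards.
        var originalBody = context.Response.Body;
        using (var buffer = new MemoryStream())
        {
            context.Response.Body = buffer;

            await _next(context);

            stopwatch.Stop();
            _logger.LogInformation(
                "{Path} completed in {ElapsedMs} ms with a {SizeBytes} byte body",
                context.Request.Path,
                stopwatch.ElapsedMilliseconds,
                buffer.Length);

            buffer.Seek(0, SeekOrigin.Begin);
            await buffer.CopyToAsync(originalBody);
            context.Response.Body = originalBody;
        }
    }
}

// Registered in Startup.Configure with: app.UseMiddleware<ResponseDiagnosticsMiddleware>();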
I have some web services that are called by various clients, including mobile and web. I have no control over the clients' code.
But I need to identify who is calling my web services, via the IP address or something else.
Is there any way to identify that?
A better approach to tracking this sort of thing is to introduce the notion of an API key. That way you know exactly who is using your service and you can track their usage etc.
On every call to your service the user would have to provide their key as a means of authorisation (not authentication). This sort of approach can generally help avoid misuse of an API; however, it can't eradicate it completely. At least with this approach, if you do find a malicious user, it's as simple as disabling that particular API key.
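As a rough illustration of the idea (the header name, key values, and handler are all hypothetical, and this particular sketch assumes classic ASP.NET Web API, where a DelegatingHandler can sit in front of every call), the check can be as simple as:

using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

// Every caller must send an "X-Api-Key" header; each key identifies one client,
// so usage can be tracked and a misbehaving client can be cut off by disabling its key.
public class ApiKeyHandler : DelegatingHandler
{
    // In a real system these would live in a database or configuration store.
    private static readonly Dictionary<string, string> KnownClients =
        new Dictionary<string, string>
        {
            { "key-mobile-demo", "MobileApp" },
            { "key-web-demo",    "WebPortal" }
        };

    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        IEnumerable<string> values;
        if (!request.Headers.TryGetValues("X-Api-Key", out values) ||
            !KnownClients.ContainsKey(values.First()))
        {
            // Unknown or missing key: refuse the call.
            var denied = new HttpResponseMessage(HttpStatusCode.Unauthorized)
            {
                Content = new StringContent("A valid X-Api-Key header is required.")
            };
            return Task.FromResult(denied);
        }

        // Key is known; the client name could be logged here for usage tracking.
        return base.SendAsync(request, cancellationToken);
    }
}

// Registered in WebApiConfig.Register with: config.MessageHandlers.Add(new ApiKeyHandler());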
You should check your IIS logs; these will list all the requests made to your server (provided logging is turned on, which it is by default).
So search through the log for the URL of the service, check the entries around the time of the requests you are having issues with, and they will list the IP address.
Your logs can generally be found at: C:\inetpub\logs\LogFiles
If the folder is empty then you are out of luck for now; you will need to turn logging on in IIS, and then after a few hours you will be able to check the logs and start seeing where requests are coming from.
E.g. a sample from a log:
2012-10-29 04:49:44 129.35.250.132 GET /favicon.ico/sign-in returnUrl=%252ffavicon.ico 82 - 27.x.x.x Mozilla/5.0+(Windows+NT+6.1;+rv:16.0)+Gecko/20100101+Firefox/16.0 200 0 0 514
So the first field is the date and time, and the client IP address is the 27.x.x.x entry (redacted, as it's from a real log).
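If you need to do this across a lot of log files, something like the following C# sketch can pull the client IPs for a given service URL out of the W3C logs. The folder and URL are placeholders, and the field positions assume the default layout shown in the sample line above (the URL stem is field 4 and the client IP is field 8, counting from zero); check the #Fields: header in your own logs:

using System;
using System.IO;
using System.Linq;

class LogIpExtractor
{
    static void Main()
    {
        // Example values: point these at your own log folder and service URL.
        var logFolder = @"C:\inetpub\logs\LogFiles\W3SVC1";
        var serviceUrl = "/MyService.svc";

        var clientIps = Directory.EnumerateFiles(logFolder, "*.log")
            .SelectMany(File.ReadLines)
            .Where(line => !line.StartsWith("#"))              // skip the #Fields/#Date header lines
            .Select(line => line.Split(' '))
            .Where(fields => fields.Length > 8 &&
                             fields[4].StartsWith(serviceUrl, StringComparison.OrdinalIgnoreCase))
            .GroupBy(fields => fields[8])                      // c-ip column in the default W3C layout
            .OrderByDescending(group => group.Count());

        foreach (var group in clientIps)
        {
            Console.WriteLine("{0} requests from {1}", group.Count(), group.Key);
        }
    }
}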
I have multiple AJAX requests going out of my browser.
My UI is made up of multiple views, and the AJAX requests are trying to populate those views simultaneously. In some cases I require more than 10 simultaneous requests to be sent from the client and processed concurrently at the server.
But due to browser limitations on the maximum number of concurrent requests to a single domain, and because of HTTP's "A server MUST send its responses to requests in the same order that the requests were received" constraint (which applies to pipelined requests on a persistent connection), I am not getting as much concurrency in request processing as I would like.
From my application's standpoint, I don't need responses to come in the order in which I sent the requests. I am OK if view8 gets populated before view1, for example.
Async processing using Servlet 3.0 constructs seems to address only one side of the problem (the server side) and hence cannot be fully exploited for maximizing application concurrency.
My question is:
Am I missing out on some proper constructs ('proper' in contrast to workarounds like "host your images from a different subdomain") that could yield me more concurrency?
This seems like something many web UIs would need! If not, then I am designing my UI the wrong way. In either case, I would appreciate your inputs.
Edit1: To my advantage, I don't have to support a huge number of concurrent clients. The maximum number of concurrent clients accessing the app would be < 100. Given that fact, I am basically trying to enhance the experience of these clients when I have plenty of processing power available on my server side.
Edit2: Our application/API is not for 'public' consumption. For example, it is like my company's webmail app: it is hosted on the internet, but it is not meant for everyone's consumption, only for the relevant few.
The reason I am giving that info is to differentiate my app from SO/Twitter, which seem to differentiate their (REST) API users from their normal website users. In our case, we think we should not differentiate that way and want to provide a single set of REST endpoints for both.
The reason behind the limitation in the spec (RFC 2616) seems to be: "These guidelines are intended to improve HTTP response and avoid congestion." However, intranet web apps have more luxuries and should not have to be so constrained!?
The server is exposing a REST API and hence the UI makes specific GETs for various resource categories (e.g. blogs, videos, news, articles). Since each resource category has its own exclusive view, it all fits in nicely. It feels wrong to collate requests to get blogs and videos together in one request. Isn't it?
Well, IMHO being pragmatic is more important. Sure, it makes sense for a service to expose a RESTful API, but it's not always necessary to expose the entire API to the browser. Your API can be separate from your server-side web app. You can always make those multiple API requests on the server side, collate the results, and send them back to the client. For example, look at the SO home page. StackOverflow does expose a RESTful API, but when loading the home page the browser doesn't send multiple requests just to populate the tags, thread listing, etc.
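To illustrate that suggestion (sketched in C# purely for brevity, with made-up endpoint names; the same pattern works with Servlet 3.0 async processing on your stack), the server-side web app can fan the calls out to its own REST API concurrently and hand the browser one combined payload, so the browser's per-domain connection limit never comes into play:

using System.Net.Http;
using System.Threading.Tasks;

public class HomePageAggregator
{
    private static readonly HttpClient Api = new HttpClient();

    // Fetches several REST resources in parallel on the server and returns
    // one combined JSON document for the browser to render.
    public async Task<string> BuildHomePagePayloadAsync(string apiBaseUrl)
    {
        var blogs    = Api.GetStringAsync(apiBaseUrl + "/blogs");
        var videos   = Api.GetStringAsync(apiBaseUrl + "/videos");
        var news     = Api.GetStringAsync(apiBaseUrl + "/news");
        var articles = Api.GetStringAsync(apiBaseUrl + "/articles");

        // The four requests run concurrently; the browser only ever made one call.
        await Task.WhenAll(blogs, videos, news, articles);

        // Crude concatenation for the sketch; a real app would use a JSON serializer.
        return "{ \"blogs\": "    + blogs.Result +
               ", \"videos\": "   + videos.Result +
               ", \"news\": "     + news.Result +
               ", \"articles\": " + articles.Result + " }";
    }
}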
Thanks Sanjay for the suggestion. But we wanted to have a single API for both REST clients and browser clients. Interestingly, the root URI "stackoverflow.com" is not mentioned in SO's REST API, but the browser client uses it. I suppose if they had exposed the root URI, their response would be difficult to process (as it would be a mixture of data). Their REST API is granular (as it is in my application), but their JavaScript code uses some other doors (APIs) to decrease the number of round-trips to the server! Somehow that doesn't feel right (I am a novice in this field though). Feel free to correct me.
SO doesn't use any "other doors". It's just that they simply don't send 10 concurrent requests to populate something on the page. They make an XHR request when you vote, mark a thread as a favorite, comment, etc. For loading the page itself, there are no multiple requests. If you want to hit your RESTful API directly from the browser, you'll have to honor the limitations. Either that, or go the desktop route, which allows you virtually unlimited connections to your server, but I guess you don't want to go that way...
I am using Google Maps API to geocode locations from asp.net - doing a few queries per day (well under the 2500 limit). This has worked fine for a year and still works on my development server but now I am constantly getting the status returned 'OVER_QUERY_LIMIT'.
I assume, but have no way of knowing for sure, that this is because someone else on the same IP address of my host is doing the same and using all the limit.
The geocoding API does not seem to let me use an API key. If I try to add &key=xxx I get a REQUEST_DENIED error.
How can I identify myself to google as separate?
You can't identify yourself as separate from your co-hosted competitor (for example, by using a key). The reason for this is probably that people would simply use different keys to get multiple allowances.
The only solution is to change your hosting, or switch to client-side geocoding.
Google also limits the number of requests per second; this may be causing the error. Try switching to http://developer.yahoo.com/geo/placefinder/ whose usage limit is 50,000 requests per day.
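If switching providers is not an option, one crude server-side workaround is to back off and retry whenever OVER_QUERY_LIMIT comes back, so a burst of calls does not keep tripping the per-second limit. A rough C# sketch (the delay values and retry count are guesses, not documented figures):

using System;
using System.Net.Http;
using System.Threading.Tasks;
using System.Xml.Linq;

class ThrottledGeocoder
{
    private static readonly HttpClient Client = new HttpClient();

    // Retries with a growing pause whenever the geocoder answers OVER_QUERY_LIMIT.
    public static async Task<XDocument> GeocodeAsync(string address)
    {
        var backoff = TimeSpan.FromMilliseconds(250);   // starting pause, an arbitrary guess

        for (var attempt = 0; attempt < 5; attempt++)
        {
            var url = "http://maps.googleapis.com/maps/api/geocode/xml?sensor=false&address="
                      + Uri.EscapeDataString(address);
            var response = XDocument.Parse(await Client.GetStringAsync(url));

            if ((string)response.Root.Element("status") != "OVER_QUERY_LIMIT")
            {
                return response;   // OK, ZERO_RESULTS, etc. are handled by the caller
            }

            // Over the limit: pause, then retry with a longer pause each time.
            await Task.Delay(backoff);
            backoff = TimeSpan.FromTicks(backoff.Ticks * 2);
        }

        throw new InvalidOperationException("Still over the query limit after several retries.");
    }
}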
I'm placing up to 25 markers on a map but when I hit 12 I get an error of "OVER_QUERY_LIMIT".
I have hit nowhere near the 2,500 hits a day limit.
If I try and plot only 11 markers I have no problem.
Anyone know why this is?
Edit:
OK, after a lot of testing I have determined that I can't call geocoder.geocode more than a certain number of times before I have to wait until the calls are done.
I have implemented a version that sends a bunch of requests, waits, and then sends more, and it's working, but it's a total fudge.
Is there a way to geocode a bunch of addresses at once without this limitation?
My client does not store the latlng of the addresses so I need to get that from the address.
The JS geocoder is rate limited:
"The Google Maps API provides a geocoder class for geocoding addresses dynamically from user input. These requests are rate-limited to discourage abuse of the service. If instead, you wish to geocode static, known addresses, see the Geocoding web service documentation."
From http://code.google.com/apis/maps/documentation/javascript/geocoding.html#Geocoding
The web service documentation also mentions a rate limit, but presumably it's higher:
http://code.google.com/apis/maps/documentation/geocoding/#Limits
You can also cache addresses for a limited amount of time for performance purposes. That would allow you to check your cache first before running into the problem.
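A minimal sketch of that caching idea (written here as server-side C# with MemoryCache purely as an illustration; in the browser the same thing can be done with a plain object keyed by address), so each distinct address costs only one geocoder call until its cache entry expires:

using System;
using System.Runtime.Caching;

public class GeocodeCache
{
    private readonly MemoryCache _cache = MemoryCache.Default;
    private readonly Func<string, string> _geocode;   // the real geocoder call, injected

    public GeocodeCache(Func<string, string> geocode)
    {
        _geocode = geocode;
    }

    // Returns the cached result for an address, or geocodes it once and keeps
    // the answer for a limited time so repeated markers don't burn quota.
    public string Lookup(string address)
    {
        var cached = _cache.Get(address) as string;
        if (cached != null)
        {
            return cached;
        }

        var result = _geocode(address);
        _cache.Set(address, result, DateTimeOffset.Now.AddHours(12));   // 12 hours is arbitrary
        return result;
    }
}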