We provide a traditional client-server software package.
We want to build in a feature that will allow us to pass two addresses to the Google Maps Distance API and get back the time to travel between the two addresses.
2 Questions:
Would each request be ONE of the 2500 free requests per day?
Could each of my customers get their own API Key so that they would have their own 2500 requests per day?
Yes, if the request uses your API key or client ID then it would count as one of your requests for the 24-hour period.
If you mean the client would pass its own API key to the server for each request, I think you could technically do this, but I think it would violate the Google terms of service - https://developers.google.com/maps/terms. If you mean you would provision a server instance for each client with its own API key, then I believe that would be acceptable. There may also be applicable terms of use regarding how that API key is provided.
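For reference, and assuming the "Distance API" here means the Distance Matrix web service, a minimal server-side sketch of this kind of request might look like the following in Python (the key and addresses are placeholders, not anything from the original question); each call like this counts as one request against the key's daily quota.

import requests

API_KEY = "YOUR_API_KEY"  # placeholder

def travel_time_seconds(origin, destination):
    # One keyed request to the Distance Matrix endpoint = one request against the quota.
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/distancematrix/json",
        params={"origins": origin, "destinations": destination, "key": API_KEY},
        timeout=10,
    )
    resp.raise_for_status()
    element = resp.json()["rows"][0]["elements"][0]
    if element["status"] != "OK":
        raise ValueError("No route found: %s" % element["status"])
    return element["duration"]["value"]  # travel time in seconds

print(travel_time_seconds("1600 Amphitheatre Pkwy, Mountain View, CA",
                          "1 Infinite Loop, Cupertino, CA"))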
I have set up APIM (2.5.0) and Analytics. I have some users and a few APIs created by those users.
For instance, one of the users onboarded a free API that provides weather information.
Another user subscribed to it from the Store and started consuming it. By the end of the day, there were around 20 hits from that user to the weather API.
But among these 20 hits, 2 did not fetch any result from the weather API URL (the URL configured as the Production and Sandbox URL in the Publisher) because the connection was down. Since APIM was still running and allowed the user to make the calls, the logging continued and recorded 20 hits for the user, though technically it should be only 18 hits (as 2 hits were not completed).
How do I handle this case, where the APIs are external, onboarded in WSO2 APIM (via Swagger or created manually), and logging is maintained, so that TOTAL_REQUEST_COUNT comes out the same as the number of positive hits/responses?
Any information on this topic would be helpful.
Thanks
If you are maintaining TOTAL_REQUEST_COUNT, it reflects the requests you received, which is 20, and that is correct. You should have another parameter, TOTAL_RESPONSE_COUNT, which maintains the response count, so that you know how many requests came in and how many were answered. If you also want to see the successful response count, add something like TOTAL_SUCCESS_RESPONSE_COUNT.
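As an illustration only (these counter names mirror the ones above and are not anything WSO2 APIM ships with), the bookkeeping could look something like this:

# Hypothetical counters; not part of WSO2 APIM, just the idea described above.
counters = {
    "TOTAL_REQUEST_COUNT": 0,           # every request the gateway accepts
    "TOTAL_RESPONSE_COUNT": 0,          # every response the backend actually returns
    "TOTAL_SUCCESS_RESPONSE_COUNT": 0,  # responses with a 2xx status
}

def record(backend_responded, status_code=None):
    counters["TOTAL_REQUEST_COUNT"] += 1
    if backend_responded:
        counters["TOTAL_RESPONSE_COUNT"] += 1
        if status_code is not None and 200 <= status_code < 300:
            counters["TOTAL_SUCCESS_RESPONSE_COUNT"] += 1

# The scenario above: 18 completed calls plus 2 calls where the backend was down.
for _ in range(18):
    record(True, 200)
for _ in range(2):
    record(False)
print(counters)  # request=20, response=18, success=18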
If your backend services run in Tomcat, always make sure that the thread count in WSO2 is greater than the thread count of Tomcat. WSO2 threads can be configured in <WSO2AM_HOME>/repository/conf/nhttp.properties; look at the last couple of lines in that file.
Also, you can change your retry policy in the API Publisher (:9443/publisher) by editing the existing API.
I want to validate that my ARM template was deployed OK and to get an understanding of the telemetry options...
Under what circumstances do the following get logged to Log Analytics?
DataPlaneRequests
MongoRequests
QueryRuntimeStatistics
Metrics
From what I can tell, after arduously connecting in different ways over the last few days:
DataPlaneRequests are logged for:
SQL API calls
Table API calls even when the account was setup for SQL API
Graph API calls against an account setup for Graph API
Table API calls against an account setup for Table API
MongoRequests are logged for:
Mongo requests even when the account was setup for SQL API
However, I haven't been able to see anything for QueryRuntimeStatistics (even when turning on PopulateQueryMetrics), nor have I seen any AzureMetrics appear.
Thanks, Alex, for spending time trying out the different logging options for Azure Cosmos DB.
There are primarily two types of monitoring paths for Azure Cosmos DB.
Metrics: These are low-latency (<5 min), aggregated metrics exposed through the Azure Monitor API for consumption. These metrics are primarily used to diagnose the app during live-site issues.
Logs: These are raw request logs arriving with 2+ hours of latency and are used by customers primarily for audit scenarios, to understand who accessed the data.
Depending on your need you can choose either of the approaches.
DataPlaneRequests by default shows all the requests across all the APIs, and MongoRequests only shows Mongo-specific calls. Please note that Mongo requests would also be seen in DataPlaneRequests.
Metrics would not be seen in Log Analytics due to a known issue which our partner team is fixing.
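To check what is actually landing in the workspace, a query along these lines can help. This is a sketch with assumptions: it uses the azure-monitor-query and azure-identity packages, it assumes the diagnostic logs flow into the AzureDiagnostics table (the usual destination for Cosmos DB diagnostic settings), and the workspace ID is a placeholder.

from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<your-log-analytics-workspace-id>"  # placeholder

client = LogsQueryClient(DefaultAzureCredential())
# Count Cosmos DB diagnostic entries per category (DataPlaneRequests, MongoRequests, ...)
# over the last 24 hours.
query = """
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.DOCUMENTDB"
| summarize count() by Category
"""
result = client.query_workspace(WORKSPACE_ID, query, timespan=timedelta(days=1))
for table in result.tables:
    for row in table.rows:
        print(row)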
Let me know if you have any further questions here.
I have reviewed every topic that seems relevant and I believe I am having a problem because the configuration in which I am attempting to use this service is different from any of the other postings.
I can get acceptable Reverse GeoCode results only without a Key.
But acceptable is not optimal. The guide documents filtering that would be applied on the server side to reduce the number of results I would have to check to determine which result is 'best'.
I do not believe that the ability to get server-side filtering is a Premier Service; I do not have a Premier License.
No matter whether I use a current browser key or a server key, every request results in REQUEST_DENIED status.
At console.cloud.google.com/apis I have enabled "Google Maps JavaScript API" and, just from reading all the other postings, I have added (probably unnecessarily, and with no change in the result) "Google Places API Web Service".
My only remaining guess is that my request is being denied in relation to the terms of the service agreement requiring that this service be used with the display of a Google Map. My application DOES display a Google Map, but I do not see how to let the Google Maps server know that. My API stack uses the JavaScript API loaded via this URL: "http://maps.googleapis.com/maps/api/js?language=en&libraries=places", with XML results requested for geocoding, and the geocoding requests [forward and reverse] work fine via this URL:
http://maps.googleapis.com/maps/api/geocode/xml? But adding a key="" in order to take advantage of server-side filtering is always denied.
What am I missing that needs to be passed in the request in order to have my API key honored and for me to get a better result set that consumes less network bandwidth?
Since you use the Geocoding API, you have to enable it in your project. You have to generate a server API key and use it with your request.
The official documentation covers this subject:
https://developers.google.com/maps/documentation/geocoding/get-api-key
For Maps JavaScript API you have to use a Browser API key:
https://developers.google.com/maps/documentation/javascript/get-api-key
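A sketch of what the keyed, server-side reverse geocode with filtering could look like, assuming Python with the requests library; the key, coordinates, and result_type value are placeholders you would adjust:

import requests

SERVER_KEY = "YOUR_SERVER_API_KEY"  # placeholder

def reverse_geocode(lat, lng):
    # result_type asks the Geocoding service to filter on the server side,
    # which is the bandwidth saving the question is after.
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/geocode/json",
        params={
            "latlng": "%f,%f" % (lat, lng),
            "result_type": "street_address",
            "key": SERVER_KEY,
        },
        timeout=10,
    )
    data = resp.json()
    if data["status"] != "OK":
        raise ValueError("Geocoding failed: %s" % data["status"])
    return data["results"][0]["formatted_address"]

print(reverse_geocode(40.714224, -73.961452))

One more thing worth checking: keyed requests to the web service generally have to be made over https://, so adding a key to the http:// URLs in the question may itself be enough to trigger REQUEST_DENIED.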
Recently I was developing an application using the LinkedIn people-search API. The documentation says that a developer registration gets 100,000 (1 lakh) API calls per day, but when I registered this API and ran a Python script, after some 300 calls it said the throttle limit was exceeded.
Has anyone faced this kind of issue using the LinkedIn API? Comments are appreciated.
Thanks in advance.
It's been a while, but the stats suggest people still look at this; I'm experimenting with the LinkedIn API and can provide some more detail.
The typical throttles are stated as both a max (e.g. 100K) and a per-user-token number (e.g. 500). Those numbers together mean you can get up to a maximum of 100,000 calls per day to the API but even as a developer a single user token means a maximum of 500 per day.
I ran into this, and after setting up a barebones app and getting some users I can confirm a daily throttle of several thousand API calls. [Deleted discussion of what was probably, upon further consideration, an accidental back door in the LinkedIn API.]
As per the Throttle Limits published by LinkedIn:
LinkedIn API keys are throttled by default. The throttles are designed to ensure maximum performance for all developers and to protect the user experience of all users on LinkedIn.

There are three types of throttles applied to all API keys:

Application throttles: These throttles limit the number of each API call your application can make using its API key.

User throttles: These throttles limit the number of calls for any individual user of your application. User-level throttles serve several purposes, but in general are implemented where there is a significant potential impact to the user experience for LinkedIn users.

Developer throttles: For people listed as developers on their API keys, they will see user throttles that are approximately four times higher than the user throttles for most calls. This gives you extra capacity to build and test your application. Be aware that the developer throttles give you higher throttle limits as a developer of your application. But your users will experience the User throttle limits, which are lower. Take care to make sure that your application functions correctly with the User throttle limits, not just for the throttle limits for your usage as a developer.

Note: To view current API usage of your application and to ensure you haven't hit any throttle limits, visit https://www.linkedin.com/developer/apps and click on "Usage & Limits".
The throttle limit for individual users of People Search is 100, with 400 being the limit for the person that is associated with the Application as the developer:
https://developer.linkedin.com/documents/throttle-limits
When you run into a limit, view the API usage for the application on the application page to see which throttle you are hitting.
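Not part of the quoted documentation, but as a general pattern it helps to detect the throttle response and back off instead of continuing to hammer the limit. A rough Python sketch, with the endpoint and token as placeholders:

import time
import requests

TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder

def call_with_backoff(url, params=None, max_retries=3):
    # Retry with exponential backoff when the API signals throttling
    # (LinkedIn returns HTTP 403 with a throttle message).
    delay = 60  # seconds
    for _ in range(max_retries):
        resp = requests.get(url, params=params,
                            headers={"Authorization": "Bearer " + TOKEN},
                            timeout=10)
        if resp.status_code == 403 and "throttle" in resp.text.lower():
            time.sleep(delay)
            delay *= 2
            continue
        resp.raise_for_status()
        return resp.json()
    raise RuntimeError("Still throttled after %d retries" % max_retries)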
Google clearly explains that:
Use of the Google Geocoding API is subject to a query limit of 2,500 geolocation requests per day. (Users of Google Maps API for Business may perform up to 100,000 requests per day.) This limit is enforced to prevent abuse and/or repurposing of the Geocoding API, and this limit may be changed in the future without notice. Additionally, we enforce a request rate limit to prevent abuse of the service. If you exceed the 24-hour limit or otherwise abuse the service, the Geocoding API may stop working for you temporarily. If you continue to exceed this limit, your access to the Geocoding API may be blocked.
Let's say I call it client side:
<script type="text/javascript" src="http://maps.googleapis.com/maps/api/geocode/json"></script>
and I call it server side:
<?php
$mapdata = file_get_contents('http://maps.googleapis.com/maps/api/geocode/json');
?>
What is the difference in query limit count?
What does this mean? I am not clear here. Is the daily count per domain, per server IP, or per client IP?
If your code is running client side, then they will use the requesting client's IP; if your code is server side, they will use the server's IP.
In other words: if you are making your requests from your server, you are more likely to hit that limit if you are not caching the results.
What you need to watch out for is the request rate limit - if you make too many requests within a very short amount of time, they block you.
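As an illustration of the caching point, a minimal server-side sketch that memoizes addresses so repeated lookups do not consume quota, and paces uncached requests to stay under the rate limit (the key is a placeholder):

import time
from functools import lru_cache

import requests

API_KEY = "YOUR_API_KEY"  # placeholder

@lru_cache(maxsize=10000)
def geocode(address):
    # Only addresses not seen before reach this point; repeats are served
    # from the in-process cache and never hit Google.
    time.sleep(0.2)  # crude pacing to avoid the short-term rate limit
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/geocode/json",
        params={"address": address, "key": API_KEY},
        timeout=10,
    )
    data = resp.json()
    if data["status"] != "OK":
        return None
    loc = data["results"][0]["geometry"]["location"]
    return loc["lat"], loc["lng"]

print(geocode("Paris, France"))
print(geocode("Paris, France"))  # second call is served from the cache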