I'm developing a web application (based on the Google Maps API v3). Whenever a user clicks on the map, a marker is placed at that point and a human-readable address is resolved by the geocoding service. This way I can put the corresponding address in an info window attached to that marker.
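For reference, the click-to-address flow looks roughly like this (a minimal sketch, assuming an existing google.maps.Map instance named map):

    // Sketch: click -> marker -> reverse-geocode -> info window.
    // Assumes `map` is an existing google.maps.Map instance.
    const geocoder = new google.maps.Geocoder();
    const infoWindow = new google.maps.InfoWindow();

    map.addListener('click', (event) => {
      const marker = new google.maps.Marker({ position: event.latLng, map: map });
      geocoder.geocode({ location: event.latLng }, (results, status) => {
        if (status === 'OK' && results[0]) {
          infoWindow.setContent(results[0].formatted_address);
          infoWindow.open(map, marker);
        }
      });
    });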
The question is: can I resolve that address just once and store it in an external DB? Is this practice compliant with Google's terms of service?
Regards
The relevant section of the TOS is
10.1.3 Restrictions against Data Export or Copying.
(b) No Pre-Fetching, Caching, or Storage of Content. You must not pre-fetch, cache, or store any Content, except that you may store: (i) limited amounts of Content for the purpose of improving the performance of your Maps API Implementation if you do so temporarily, securely, and in a manner that does not permit use of the Content outside of the Service; and (ii) any content identifier or key that the Maps APIs Documentation specifically permits you to store. For example, you must not use the Content to create an independent database of “places.”
This precludes storing reverse-geocoding results in an external database.
No, this is forbidden by Google's terms of use.
We would have done it your way in another project, but we had to change our approach because of the terms of use.
Related
Imagine the following situation. I have an API, and a developer builds an application that retrieves new content from it on a daily basis. She stores this content and provides it to all the instances of an app she developed, so those apps do not have to call the API directly.
Is there a way to prevent this and force the apps (and therefore the end users) to call the API directly, rather than only the application on her server?
I found many questions about how to cache API data, but not how to prevent that. I am fairly new to this, so maybe I am overlooking something, or maybe it is not possible to prevent this.
Thank you in advance!
Assuming you are using Apigee for API management, you have some options. First, consider what is available to you contractually: if this is that sort of business relationship, you can impose certain API client behavior on a business partner through a contract.
Separate from the legal side of things, remember that you control your API and the credentials you issue for use by your API clients. You cannot, however, practically control what a client developer does with the credentials you issue: she could promise to embed the credentials in the mobile apps' API client, then change her mind and use them centrally, designing her mobile client to call into her central cache. If you really insist that only mobile app clients should be calling your API, and not a hub/cache server, you could consider applying constraint policies on your API (within the Apigee proxy, such as Access Control). For instance, you could blacklist your partner's hub/cache server IP address, although that is weak security at best. Or you could apply a constraint that only clients with certain identifying User-Agent strings (mobile OS, client) are allowed to connect to your API. Or use GeoIP filtering to allow only clients from certain regions, if that applies to your use case.
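To illustrate the idea (this is generic Express-style JavaScript, not Apigee policy syntax; the blocked IP and User-Agent prefix are hypothetical):

    // Illustrative middleware: reject requests from a known hub/cache
    // server IP or from an unexpected User-Agent. Both checks are
    // spoofable, so treat this as a speed bump, not real security.
    const BLOCKED_IPS = new Set(['203.0.113.10']);  // hypothetical partner server IP
    const ALLOWED_UA = /^MyMobileApp\//;            // hypothetical mobile client UA prefix

    function accessControl(req, res, next) {
      const ua = req.get('User-Agent') || '';
      if (BLOCKED_IPS.has(req.ip) || !ALLOWED_UA.test(ua)) {
        return res.status(403).json({ error: 'client not allowed' });
      }
      next();
    }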
Finally, depending on the data model, you might be able to rate-limit such that a bulk cache becomes impractical: if your edge-client use case is to fetch a single record, but a cache would have to hold thousands of records, then you could impose a per-client rate limit (a Quota policy) that is no bother to individual mobile clients but makes the work of a hub/cache server untenable.
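The same quota idea, sketched as in-memory Express-style middleware rather than an actual Apigee Quota policy; the window and limit numbers are hypothetical:

    // Per-client rate limit: fine for single mobile users, untenable
    // for a hub trying to bulk-fetch thousands of records per window.
    const WINDOW_MS = 60 * 60 * 1000; // 1 hour, hypothetical
    const MAX_REQUESTS = 50;          // per client per window, hypothetical
    const counters = new Map();       // clientId -> { count, windowStart }

    function quota(req, res, next) {
      const clientId = req.get('X-Api-Key') || req.ip;
      const now = Date.now();
      const entry = counters.get(clientId) || { count: 0, windowStart: now };
      if (now - entry.windowStart > WINDOW_MS) {
        entry.count = 0;
        entry.windowStart = now;
      }
      entry.count += 1;
      counters.set(clientId, entry);
      if (entry.count > MAX_REQUESTS) {
        return res.status(429).json({ error: 'quota exceeded' });
      }
      next();
    }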
I've looked in a few places, including this post and the Firebase panel.
Is there no way to use these APIs to secure these endpoints using an API key you create per client who uses your Cloud Functions?
I'm able to block everyone by putting a restriction on the browser key, but I would like to create a new API key and use that as a way to authenticate my endpoint for various clients.
Creating a new API key and using it as a parameter in my query doesn't work (I don't know if I'm doing anything wrong).
Is there a way to do this?
Option 1: Handle authentication within the function
https://github.com/firebase/functions-samples/tree/master/authorized-https-endpoint
Adapt the above to use clients/keys stored in Firestore
Option 2: Use an API gateway
Google Cloud Endpoints (no direct support for functions yet, need to implement a proxy)
Apigee (higher cost, perhaps more than you need)
Azure API Management (lower entry cost + easy to implement as a facade for services hosted outside Azure)
there are more..
The above gateways are probably best for your use case, in that the first two would let you keep everything within Google, albeit with more complexity/cost -- hopefully Endpoints will get support for functions soon. Azure would mean having part of your architecture outside Google, but it looks like an easy way to achieve what you're after (an API key per client for your Google Cloud / Firebase functions).
Here's a good walkthrough of implementing Azure API Management:
https://koukia.ca/a-microservices-implementation-journey-part-4-9c19a16385e9
There is no built-in way to achieve what you are after; as far as Firebase and GCP are concerned, your clients are your specific business problem.
One way you could tackle this (with the little information that is provided):
You need somewhere to store a list of clients + their API keys (I would use Firestore).
For the endpoints you want to secure with a client-specific API key, include a check that the key header exists and matches a record in your Firestore client collection (a sketch follows below).
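A minimal sketch of that check in an HTTPS function; the clients collection name, the apiKey field, and the x-api-key header are all assumptions:

    // Sketch: verify a client-specific API key against Firestore.
    const functions = require('firebase-functions');
    const admin = require('firebase-admin');
    admin.initializeApp();

    exports.securedEndpoint = functions.https.onRequest(async (req, res) => {
      const apiKey = req.get('x-api-key'); // assumed header name
      if (!apiKey) {
        return res.status(401).send('Missing API key');
      }
      const snapshot = await admin.firestore()
        .collection('clients')            // assumed collection name
        .where('apiKey', '==', apiKey)
        .limit(1)
        .get();
      if (snapshot.empty) {
        return res.status(403).send('Invalid API key');
      }
      // Key is valid; handle the request for this client.
      res.json({ client: snapshot.docs[0].id });
    });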
Considerations:
Depending on your expected traffic loads and the number of Firestore reads you'll be adding, you might want to double-check that this kind of solution will work for your budget.
Is the API-key approach really the only option you can go with? You could probably get pretty far using https://github.com/firebase/firebaseui-web and doing user checks in your function, with no extra DB read required (see the sketch below). If you go down this path, most of the user signup / email / account-creation logic is ready to go.
https://firebase.google.com/docs/auth/web/password-auth#before_you_begin
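A minimal sketch of that no-extra-read alternative, verifying a Firebase Auth ID token instead of a custom API key (the endpoint name is hypothetical):

    // Sketch: authenticate via a Firebase Auth ID token sent by the
    // client in the Authorization header; no Firestore read needed.
    const functions = require('firebase-functions');
    const admin = require('firebase-admin');
    admin.initializeApp();

    exports.userEndpoint = functions.https.onRequest(async (req, res) => {
      const idToken = (req.get('Authorization') || '').replace('Bearer ', '');
      try {
        const decoded = await admin.auth().verifyIdToken(idToken);
        res.json({ uid: decoded.uid }); // authenticated user
      } catch (err) {
        res.status(401).send('Invalid or missing ID token');
      }
    });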
Curious to see what some other Firebase users suggest.
I am selling stuff online, and I would like to geocode my customers' delivery addresses before delivery to make sure each delivery address is correct and avoid wrong deliveries. If I use the Google Maps API, after I query an address, can I save the returned attributes (such as building and street names, lat/lon) in my own storage so that I don't need to re-query every time? Some customer addresses repeat or are written in an incorrect format. If I could search my own archive before querying the Google Maps API, it would save time and reduce the number of queries required.
The terms of service allow temporary caching for up to 30 days for the purpose of improving the performance of your application. Permanent storage is prohibited.
For further details refer to section 10.5 of the terms of service:
No caching or storage. You will not pre-fetch, cache, index, or store any Content to be used outside the Service, except that you may store limited amounts of Content solely for the purpose of improving the performance of your Maps API Implementation due to network latency (and not for the purpose of preventing Google from accurately tracking usage), and only if such storage:
is temporary (and in no event more than 30 calendar days);
is secure;
does not manipulate or aggregate any part of the Content or Service; and
does not modify attribution in any way.
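In practice this means checking your own store first and treating anything older than 30 days as expired. A minimal sketch, where the db helpers and YOUR_API_KEY are placeholders for your own storage layer and key, and a runtime with fetch is assumed:

    // Sketch of a 30-day cache in front of the Geocoding web service.
    // `db.findByAddress` and `db.save` are hypothetical storage helpers.
    const THIRTY_DAYS_MS = 30 * 24 * 60 * 60 * 1000;

    async function geocodeWithCache(address) {
      const cached = await db.findByAddress(address);
      if (cached && Date.now() - cached.fetchedAt < THIRTY_DAYS_MS) {
        return cached.result; // still within the allowed caching window
      }
      const url = 'https://maps.googleapis.com/maps/api/geocode/json'
        + `?address=${encodeURIComponent(address)}&key=YOUR_API_KEY`;
      const data = await (await fetch(url)).json();
      await db.save({ address, result: data.results[0], fetchedAt: Date.now() });
      return data.results[0];
    }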
I have reviewed every topic that seems relevant and I believe I am having a problem because the configuration in which I am attempting to use this service is different from any of the other postings.
I can get acceptable reverse-geocoding results only without a key.
But acceptable is not optimal. The guide documents filtering that is applied on the server side to reduce the number of results I receive and have to check to determine which result is 'best'.
I do not believe that server-side filtering is a Premier feature; I do not have a Premier license.
No matter whether I use a current browser key or server key, every request results in REQUEST_DENIED status.
At console.cloud.google.com/apis I have enabled "Google Maps JavaScript", and just from reading all the other postings I have added, probably unnecessarily and with no change in the result, "Google Places API Web Service".
My only remaining guess is that my request is being denied in relation to the terminology of the service agreement requiring that this service include the display of a Google Map. My application DOES display a Google Map, but I do not see how to let the Google Maps server know that. My API stack is using the JavaScript API with XML results requested via this URL: "http://maps.googleapis.com/maps/api/js?language=en&libraries=places", and the geocoding requests [forward and reverse] work fine via this URL:
http://maps.googleapis.com/maps/api/geocode/xml?, but adding a key="" in order to take advantage of server-side filtering is always denied.
What am I missing that needs to be passed in the request in order to have my API key honored, so that I can get a better result set while consuming less network bandwidth?
As you are using the Geocoding API, you have to enable it in your project. You also have to generate a server API key and use it with your requests.
The official documentation covers this subject:
https://developers.google.com/maps/documentation/geocoding/get-api-key
For the Maps JavaScript API you have to use a browser API key:
https://developers.google.com/maps/documentation/javascript/get-api-key
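For example, a reverse-geocoding web-service request with a server key might look like this (YOUR_SERVER_KEY is a placeholder; result_type is one of the documented server-side filters for reverse geocoding):

    // Reverse-geocode a coordinate via the Geocoding web service,
    // filtering results to street addresses on the server side.
    const lat = 40.714224;
    const lng = -73.961452;
    const url = 'https://maps.googleapis.com/maps/api/geocode/json'
      + `?latlng=${lat},${lng}&result_type=street_address&key=YOUR_SERVER_KEY`;

    fetch(url)
      .then((res) => res.json())
      .then((data) => {
        if (data.status === 'OK') {
          console.log(data.results[0].formatted_address);
        } else {
          console.error('Geocoding failed:', data.status, data.error_message);
        }
      });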
I'm working on building my first web/mobile app with Meteor, using JavaScript for both the client and server.
Essentially, the app will allow users to rate restaurants based on a variety of factors, such as how loud it is or how nice it smells. The averages of each of these attributes would then be stored in my database along with the Google ID of the associated restaurant. Other users can then search for places near them and sort the results based on any of the rated attributes.
So if a user requests a list of places, a request is made to the Google Places library API, and those places are matched against data in my database, how are the limits applied? Since the server also runs JavaScript, can I call the API from the server? And if I do, is the API able to distinguish between different users and apply individual limits? Or, if it's all coming from a single server, will it give me a total limit equivalent to that of a single user?
Thanks for any help and guidance.
The Google Maps JavaScript Places library does not have a documented limit. However, if you perform requests that go over its request quota, you will get OVER_QUERY_LIMIT. The JavaScript API usage limits documentation can help you learn more about the limits of this API.
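A minimal sketch of detecting that status in a Places library callback (assuming an existing google.maps.Map instance named map):

    // Sketch: check for OVER_QUERY_LIMIT when using the Places library.
    const service = new google.maps.places.PlacesService(map);

    service.nearbySearch(
      { location: map.getCenter(), radius: 500, type: 'restaurant' },
      (results, status) => {
        if (status === google.maps.places.PlacesServiceStatus.OK) {
          results.forEach((place) => console.log(place.name, place.place_id));
        } else if (status === google.maps.places.PlacesServiceStatus.OVER_QUERY_LIMIT) {
          // Quota exceeded; back off and retry after a delay.
          console.warn('Over query limit, retry later.');
        }
      }
    );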
Check also this related SO ticket.