I'm working on a delivery web app and can't figure out how to reduce the number of requests the app sends to the Google Distance Matrix API to calculate the distance between the requested delivery address and each store address.
I have a catalogue page with around 1,000 stores. Each time a user enters a delivery address, I send 1,000 requests to the Google Maps API to check whether the delivery address is within each store's delivery range, so Google charges me for 1,000 requests every time a user enters a new address.
Any suggestions on how to optimise the Google API usage and show only those stores that deliver to the selected address? The current approach is way too expensive. I'm also wondering how large on-demand delivery services with tens of thousands of stores handle this.
You could calculate the direct-line distance (using a formula) and only request stores whose direct-line distance is within the allowed range, since the travel distance can't be shorter than the direct line.
If you don't need exactly the shortest travel distance, you can also sort the candidates, request them in order, and stop as soon as you get an acceptable one. That will occasionally return a store that's physically closer but further away by road, which may or may not be acceptable.
In most programming languages the direct-line distance is available in a "geo" library or similar, under the name "great-circle distance". You can also search for it here on SO.
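As a minimal sketch of the prefilter idea above (the store dictionary fields `lat`, `lon`, and `range_km` are assumptions about your data model, not a real API):

```python
import math

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance in kilometres."""
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def candidate_stores(stores, user_lat, user_lon):
    """Keep only stores whose straight-line distance is within their delivery
    range. Only these candidates need a (paid) Distance Matrix request."""
    return [s for s in stores
            if great_circle_km(s["lat"], s["lon"], user_lat, user_lon) <= s["range_km"]]
```

This runs entirely on your own server, so the 1,000-store check costs nothing; you only pay for the handful of candidates that survive the filter.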
Update: I have switched to HERE Routing API because I was told that it would be faster. I am trying to use HerePy to get a routing matrix, but I am getting the following error message:
AttributeError: 'RoutingApi' object has no attribute 'matrix'
Regardless of whether I find out how to get past this error, it's also not clear whether this API accepts multiple departure times (each of my origins has its own departure time). I have a feeling I will also run into the matrix-size issue again. Does anyone know how to fix this error and/or know more about what I'm able to do? I had a phone call with someone from the sales department, but they didn't know the answers to these questions.
Original Question: I am trying to use the Google Maps Distance Matrix API. I have an array of origins, an array of destinations, and an array of arrival times. Each destination has its own arrival time. From what I have read in the documentation, it is not clear whether I can use an array of arrival times, or just one arrival time per request. Does anyone know?
I suppose that if I can only use one arrival time per request, I would group together the destinations that share an arrival time into one request. I will need multiple requests anyway due to the limits of 100 elements and 25 origins or destinations per request.
Thanks!
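The batching idea above — group destinations that share an arrival time, then split each group to stay under the per-request limits — could be sketched like this (the limits are the ones quoted in the question; the request dictionaries are placeholders, not a real client call):

```python
from collections import defaultdict

MAX_ELEMENTS = 100  # origins * destinations per request
MAX_DESTS = 25      # destinations per request

def plan_requests(origins, destinations, arrival_times):
    """Group destinations sharing an arrival time, then split each group into
    chunks small enough for a single Distance Matrix request."""
    groups = defaultdict(list)
    for dest, t in zip(destinations, arrival_times):
        groups[t].append(dest)
    # Each request is bounded by both the element and the destination limit.
    per_request = min(MAX_DESTS, max(1, MAX_ELEMENTS // max(1, len(origins))))
    requests = []
    for t, dests in groups.items():
        for i in range(0, len(dests), per_request):
            requests.append({"origins": origins,
                             "destinations": dests[i:i + per_request],
                             "arrival_time": t})
    return requests
```

Each planned request then carries exactly one arrival time, which sidesteps the one-arrival-time-per-request restriction.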
You can check out the Matrix Routing API offered by the HERE platform.
The Matrix Routing service is an HTTP JSON API that calculates routing matrices, travel times and/or distances, of up to 10,000 origins and 10,000 destinations. A routing matrix is a matrix with rows labeled by origins and columns by destinations. Each entry of the matrix is the travel time or distance from the origin to the destination.
For more information, please visit the following link.
I know that durationInTraffic is deprecated now.
I just can't find a way to get the duration without traffic using the Google API.
I already tried the Distance Matrix with drivingOptions and trafficModel, but neither duration nor duration_in_traffic matched the without-traffic duration shown on maps.google.com.
Any help on how to get the duration without traffic data using the API?
duration gives the average time and duration_in_traffic gives the time in current traffic. If you run the Google API at midnight you will sometimes find that duration_in_traffic is less than duration, which shows that duration is not the without-traffic time.
According to this, the Google Maps API should return an element called duration as well as an element called duration_in_traffic. duration is the time without traffic, while duration_in_traffic is the time under traffic conditions.
Just to set the correct expectations, you shouldn't expect the Web Services API and the Google Maps website to work in the exact same way. These are different products managed by different teams at Google. The search stack is also different, so results may differ.
The value in the duration field is an approximation of the average travel time for the route. It takes into account the average traffic conditions of the last several weeks for the time at which you execute the request, which means the duration can change during the day.
By adding departure_time and traffic_model you can enrich your request to get the travel time in current traffic; this is returned in the duration_in_traffic field.
In summary, there is currently no way to get the duration without traffic; the API gives you the average travel time.
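To illustrate a request that returns both fields, here is a minimal sketch of composing the Distance Matrix URL; the helper name is mine, and you would still fetch the URL yourself (e.g. with `requests`) and read `rows[0]["elements"][0]["duration"]` vs `["duration_in_traffic"]` from the JSON response:

```python
from urllib.parse import urlencode

BASE = "https://maps.googleapis.com/maps/api/distancematrix/json"

def build_matrix_url(origin, destination, api_key,
                     departure_time="now", traffic_model="best_guess"):
    """Compose a Distance Matrix request URL. With departure_time set, the
    response contains both `duration` (average travel time) and
    `duration_in_traffic` (travel time under the chosen traffic model)."""
    params = {
        "origins": origin,
        "destinations": destination,
        "departure_time": departure_time,
        "traffic_model": traffic_model,
        "key": api_key,
    }
    return BASE + "?" + urlencode(params)
```

Comparing the two fields for off-peak departure times is the closest you can get to a "without traffic" figure via the API.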
I'd like to fetch results from Google using curl to detect potential duplicate content.
Is there a high risk of being banned by Google?
Google disallows automated access in their TOS, so if you accept their terms you would break them.
That said, I know of no lawsuit from Google against a scraper.
Even Microsoft scraped Google; they powered their search engine Bing with it. They got caught red-handed in 2011 :)
There are two options to scrape Google results:
1) Use their API
UPDATE 2020: Google has deprecated previous APIs (again) and has new prices and new limits. Now (https://developers.google.com/custom-search/v1/overview) you can query up to 10k results per day at 1,500 USD per month; more than that is not permitted, and the results are not what they display in normal searches.
You can issue around 40 requests per hour. You are limited to what they give you; it's not really useful if you want to track ranking positions or what a real user would see. That's something you are not allowed to gather.
If you want a higher number of API requests you need to pay.
60 requests per hour cost 2,000 USD per year; more queries require a custom deal.
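As a rough sketch of using the official Custom Search JSON API from option 1 (the key and cx values are placeholders for your own credentials; the limits and prices are those described above):

```python
from urllib.parse import urlencode

def cse_query_url(api_key, cx, query, start=1, num=10):
    """Build a Custom Search JSON API request URL.
    api_key and cx are placeholders for your own credentials."""
    params = {"key": api_key, "cx": cx, "q": query, "start": start, "num": num}
    return "https://www.googleapis.com/customsearch/v1?" + urlencode(params)

def page_starts(wanted=100, per_page=10):
    """The API serves results 10 per page; compute the `start` offsets
    needed to page through up to `wanted` results (capped at ~100)."""
    return [1 + i * per_page for i in range(min(wanted, 100) // per_page)]
```

Fetching each URL (e.g. with `requests`) and reading the `items` array of the JSON response gives you the result titles and links.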
2) Scrape the normal result pages
Here comes the tricky part. It is possible to scrape the normal result pages.
Google does not allow it.
If you scrape at a rate higher than 8 (updated from 15) keyword requests per hour you risk detection; higher than 10/h (updated from 20) will get you blocked, in my experience.
By using multiple IPs you can up the rate, so with 100 IP addresses you can scrape up to 1000 requests per hour. (24k a day) (updated)
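The capacity arithmetic above can be written down as a tiny helper (the per-IP rate is the figure quoted above, not a guarantee):

```python
def daily_capacity(ips, per_ip_per_hour=10, hours=24):
    """Rough scraping-capacity estimate: total requests per day across an
    IP pool, assuming each IP stays under the quoted safe rate."""
    return ips * per_ip_per_hour * hours
```

With 100 IPs at 10 requests per hour each, this gives the 24,000 requests a day mentioned above.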
There is an open-source search engine scraper written in PHP at http://scraping.compunect.com
It scrapes Google reliably, parses the results properly, and manages IP addresses, delays, etc.
So if you can use PHP it's a nice kickstart; otherwise the code is still useful for learning how it's done.
3) Alternatively use a scraping service (updated)
Recently a customer of mine had a huge search-engine scraping requirement, but it was not 'ongoing'; it was more like one huge refresh per month.
In this case I could not find a self-made solution that's 'economic'.
I used the service at http://scraping.services instead.
They also provide open-source code, and so far it's running well (several thousand result pages per hour during the refreshes).
The downside is that such a service means your solution is "bound" to one professional supplier; the upside is that it was a lot cheaper than the other options I evaluated (and faster in our case).
One option to reduce the dependency on one company is to make two approaches at the same time. Using the scraping service as primary source of data and falling back to a proxy based solution like described at 2) when required.
Google will eventually block your IP when you exceed a certain amount of requests.
Google itself thrives on scraping the websites of the world, so if scraping were "so illegal" then even Google wouldn't survive. Other answers mention ways of mitigating IP blocks by Google; one more way to explore avoiding captchas could be scraping at random times (I didn't try it). Moreover, I feel that if we provide novelty or some significant processing of the data, then it sounds fine, at least to me; if we are simply copying a website, or hampering its business or brand in some way, then it is bad and should be avoided. On top of it all, if you are a startup, no one will fight you, as there is no benefit; but if your entire premise rests on scraping even once you are funded, then you should think of more sophisticated ways and, eventually, alternative APIs. Also, Google keeps releasing (or deprecating) fields in its API, so what you want to scrape now may be on the roadmap of new Google API releases.
This is a question specifically for the Google Developer Relations team. I have read the Geocode API T&Cs and I am aware that I am not allowed to store data except by way of a temporary cache (e.g. for performance). Is this the end of the matter? I am developing a product which requires a search with results sorted by distance from a place, meaning that all my records need a lat/long. I was intending to use the Geocode API to get the lat/long when a user adds a record, and then adding that lat/long info to the record. We would then use the Haversine formula to calculate the relative distances and sort the results.
If I follow this approach, will I be in breach of the T&Cs? If so, is there another approach using the Geocode API which will allow me to hold onto lat/long data so that I can sort my results by distance, within the letter of the T&Cs?
For anyone else commenting, please observe the following restrictions: (1) we don't have a budget to buy a postcode-lat/long dataset; (2) we don't want to use a static dataset of our own, eg GeoNames, because we don't want to have to maintain data which is, effectively, public; (3) we have to support users who have javascript disabled.
To be absolutely clear, what I need here is to have the lat/long for all of my records in hand so that I can do effective searching and sorting by distance relative to another lat/long as provided, e.g. by a user searching.
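The sort-by-distance step described above could be sketched as follows (the record fields `lat` and `lon` are assumptions about your schema, and the Haversine formula is the one mentioned in the question):

```python
import math

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def sort_by_distance(records, search_point):
    """Sort stored records (each with its cached lat/lon) by distance
    from the user's search point."""
    return sorted(records,
                  key=lambda r: haversine_km((r["lat"], r["lon"]), search_point))
```

This runs entirely server-side against the stored coordinates, so no JavaScript is required on the client.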
Google Team, please respond to this message with contact details so we can speak.
One way to get this detail is to fire a query on the Visitor Location Register (VLR), which keeps track of all the active mobiles present in its area at a given time. But frequently querying this database might hinder the performance of the network. Is there any other way to get the total count of active mobiles? Can I get this data directly from the base stations?
Actually, each cell is not identified by a "unique Base Station" but by its CGI (e.g. 294-02-100-223). What you can get is the KPI (Key Performance Indicator) TCH Traffic (Half-Rate + Full-Rate) for a certain period of time, in Erlangs: that is the average number of occupied TCH channels. If you multiply this number by the length of the period, you get the total number of call minutes. There's no way the BTS (Base Transceiver Station) will provide the number (total/average) of mobile users by means of their MSISDN/IMSI/IMEI; you can only get the traffic. The only way to get the total number of active mobile subscribers is by querying the VLR, or via an appropriate KPI (my specialty is the access part, not the core).
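The Erlang arithmetic above is simple enough to show as a worked example (the numbers below are illustrative, not from any real network):

```python
def call_minutes(traffic_erlangs, period_minutes):
    """Erlangs measure the average number of simultaneously occupied TCH
    channels, so traffic * period length = total channel-minutes,
    i.e. total call minutes in the period."""
    return traffic_erlangs * period_minutes
```

For instance, a cell carrying 2.5 Erlangs over a one-hour period accumulated 2.5 * 60 = 150 call minutes.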