Modifying Google Maps elevation return data - google-maps-api-3

I'm wondering if there is a preferred way to slightly modify the returned elevation data from Google's Maps v3 Elevation API. Given two points, each with a lat/lng, I'd like to add 20 meters to each point. If the response below is the elevation of one point,
{
  "status": "OK",
  "results": [ {
    "location": {
      "lat": 45.371093,
      "lng": -114.381159
    },
    "elevation": 2255.52
  } ]
}
I was thinking of just modifying the returned result above - the "elevation" key's value. I can't seem to find or think of any other way. It seems to work, but it feels like a hack.

It's understandable to feel a bit iffy about messing with data that you have gathered from Google, but I think you are fine in this case. Whenever I have the same concern, I check myself by making sure of two things:
1. You remain solidly within the TOS constraints (which you say you do).
2. You don't modify objects you don't own.
In your scenario, you have a set of JSON data that you have retrieved from Google. As long as you don't modify the data and then pass it through to the user (or, if you do, make it clear that it has been modified), you are free to work with the result data to satisfy a requirement or implement a use case in the context of a Google Map.
The response from the ElevationService has simply become application state data that you are using to make some calculations. It is a completely detached data set, and you can always go retrieve the data again if needed. It doesn't belong to Google in the way a JavaScript library, a map tile overlay, or even an image file clearly does. Many applications call the Geocoder or the Places API and then adjust the result data to change the bounds of the map, set the center of the map, or add a custom overlay generated on the fly - all based on result data that has been adjusted in some way.
Of course, you can't change the values and then display them in an InfoWindow or use them as a marker label; in those cases, you are making the data available to the user. As long as you stay on the good side of the TOS and don't monkey with stuff you don't own, it feels like you are okay.
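For what it's worth, the adjustment can live in a copy rather than in Google's own objects. A minimal sketch, assuming a standard ElevationService request; the 20-meter offset and the variable names are mine, not anything from the API:

var elevator = new google.maps.ElevationService();
var OFFSET_METERS = 20; // application-specific adjustment, not from Google

elevator.getElevationForLocations({
  locations: [new google.maps.LatLng(45.371093, -114.381159)]
}, function (results, status) {
  if (status !== google.maps.ElevationStatus.OK) return;
  // Copy into new objects so the original response stays untouched.
  var adjusted = results.map(function (r) {
    return { location: r.location, elevation: r.elevation + OFFSET_METERS };
  });
  // ... use `adjusted` for internal calculations only, per the caveats above
});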

Related

Mapping GPS coordinates to an area

I have devices moving across the entire country that report their GPS positions back to me. What I would like to do is have a system that maps these coordinates to a named area.
I see two approaches to this:
Have a database that defines areas as polygons stretching between various GPS coords.
Use some form of web service that can provide the info for me.
Either will be fine. It doesn't have to be very accurate at all, as I only need to know the region involved so that I know which regional office to call if something goes wrong with the device.
In the first approach, how would you build an SQL table that contained the data? And what would be your approach for matching a GPS coordinate to one of the defined areas? There wouldn't be many areas to define, and they'd be quite large, so manually inputting the values defining the areas wouldn't be a problem.
In the case of the second approach, does anyone know a way of programmatically pulling this info off the web on demand? (I'd probably go for Perl's WWW::Mechanize in this case.) "Close to Somecity" would be enough.
-
PS: This is not a "do the work for me" kind of question, but more of a brainstorming request. Pseudo-code is fine. General theorizing on the subject is also fine.
In the first approach, how would you build an SQL table that contained the data? And what would be your approach for matching a GPS coordinate to one of the defined areas?
Assume an area is defined as a closed polygon.
You match the GPS coordinate by simply calling a point-in-polygon method, like
boolean isInside = polygon.contains(latitude, longitude);
If you have few polygons, you can do a brute-force search through all existing polygons.
If you have many of them, each with (tens of) thousands of points, then you want to use a spatial index, such as a quadtree or k-d tree, to reduce the search to the relevant polygons.
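A minimal ray-casting sketch of such a contains method, assuming each polygon is an array of {lat, lng} vertices (all names here are hypothetical):

// Ray-casting point-in-polygon test (crossing-number parity).
function contains(polygon, lat, lng) {
  var inside = false;
  for (var i = 0, j = polygon.length - 1; i < polygon.length; j = i++) {
    var xi = polygon[i].lng, yi = polygon[i].lat;
    var xj = polygon[j].lng, yj = polygon[j].lat;
    // Does edge (j, i) cross the horizontal ray at this latitude?
    if (((yi > lat) !== (yj > lat)) &&
        (lng < (xj - xi) * (lat - yi) / (yj - yi) + xi)) {
      inside = !inside;
    }
  }
  return inside;
}

// Brute-force search over a small set of large regions.
function findRegion(regions, lat, lng) {
  for (var k = 0; k < regions.length; k++) {
    if (contains(regions[k].polygon, lat, lng)) return regions[k].name;
  }
  return null; // no region matched
}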
This process is called reverse geocoding. Many service providers, such as Google, Yahoo, and Esri, offer services that let you do this.
They will return the closest point of interest or address, but you can keep just the administrative level you are interested in.
Check the terms of use to see which service is compatible with your intended usage.
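With Google's Maps API v3, for instance, the Geocoder does reverse geocoding directly. A sketch, assuming you only care about the first-level administrative area (variable names are hypothetical):

var geocoder = new google.maps.Geocoder();
geocoder.geocode({ location: new google.maps.LatLng(lat, lng) },
  function (results, status) {
    if (status !== google.maps.GeocoderStatus.OK || !results.length) return;
    var region = null;
    results[0].address_components.forEach(function (c) {
      if (c.types.indexOf('administrative_area_level_1') !== -1) {
        region = c.long_name; // e.g. the state or province the device is in
      }
    });
    // `region` tells you which regional office to call
  });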

Find nearest locations from JSON in Sencha Touch 2

I'm trying to set up a portion of my app where the user's mobile location (which I have working) is used to find the nearest locations (let's say within 15 miles) against a JSON store of locations (let's say 40 or so, and it will increase).
I've been racking my brain over how to go about this. I believe the Distance Matrix (looking at the Google API, but with no idea how to implement it from the docs) is something I need, but I can't figure out how to load the JSON list and use it against the location (still a n00b).
If there's a tutorial or some details on how to go about it, that would be great. I just need a point in the right direction.
Any info is great, thanks ahead of time.
I'm using Sencha Touch 2.0.1, a JSON store, and the Google Maps JavaScript API v3.
Matt
You'd be better off doing that processing on the backend and sending only the close records back to the client. This will keep your client app from getting bogged down.
You can use PostgreSQL/PostGIS for this if you store the lat/long points as spatial data. You can also do this with the MySQL spatial extensions: if you build the haversine formula into a MySQL function, you can draw 15-mile radii around all of your points and use MySQL's built-in 'within' function. If you'd rather use a flavor of NoSQL, CouchDB has a nice GeoCouch extension which can perform the function you desire.
Yea, figured out a way. More than likely not the best, but it'll work for now.
I'm loading all the latitudes and longitudes, along with the locations, into the store file.
Then, on initialize, it takes the current location, matches it against the stored locations, and runs the haversine formula with each one. Once it has all of the distances, it sorts them, and done.
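A rough sketch of that approach in plain JavaScript; the field and variable names are hypothetical, so plug in whatever your store's records actually expose:

// Great-circle distance in miles between two lat/lng points (haversine).
function haversineMiles(lat1, lng1, lat2, lng2) {
  var R = 3958.8; // mean Earth radius in miles
  var toRad = function (deg) { return deg * Math.PI / 180; };
  var dLat = toRad(lat2 - lat1), dLng = toRad(lng2 - lng1);
  var a = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
          Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) *
          Math.sin(dLng / 2) * Math.sin(dLng / 2);
  return 2 * R * Math.asin(Math.sqrt(a));
}

// Filter to locations within 15 miles, sorted nearest first.
function nearest(userLat, userLng, locations) {
  return locations
    .map(function (loc) {
      return { loc: loc, miles: haversineMiles(userLat, userLng, loc.lat, loc.lng) };
    })
    .filter(function (x) { return x.miles <= 15; })
    .sort(function (a, b) { return a.miles - b.miles; });
}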

Working with the Google Maps API

I am trying to build a map-based query interface for my website and I am having difficulty finding a starting point besides http://developer.google.com. I assume this is a rather simple task, but I feel as though I am on a wild goose chase. Anyway, the problem is that the existing site places people into a category based on their address (primarily the zip code); this is not working out because of odd shapes and user density, so I would like to solve the problem by creating custom zones.
I am not looking for a proprietary solution, because I would really like to accomplish this on my own; I just need some better places to start or better suggestions for searches.
I understand that I will need to create a map with my predetermined polygons.
I understand how to create a map with polygons via JS.
I do not understand how the data will determine which zone it is within and how it will be returned as a hash I can store, e.g. user=>####, zone=>####, section=>#####.
http://blog.appdelegateinc.com./point-in-polygon-checking-with-google-maps.html
has some JS you can add to give the ability to test whether a point is within a polygon (sample: http://blog.appdelegateinc.com./static/samples/point_in_polygon.html ) using this approach: http://en.wikipedia.org/wiki/Point_in_polygon#Ray_casting_algorithm
I think as you place the markers, you'll hold them in an array (of objects)... then loop through, doing some sort of reduction of which polygons to test; for those that remain, if the point is in the polygon, set marker.zone and marker.section to whatever suits your needs.
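If loading the API with the geometry library (libraries=geometry) is an option, containsLocation does the point-in-polygon test for you. A sketch, where `zones` is a hypothetical array of { polygon, id, section } objects:

markers.forEach(function (marker) {
  for (var i = 0; i < zones.length; i++) {
    if (google.maps.geometry.poly.containsLocation(
          marker.getPosition(), zones[i].polygon)) {
      marker.zone = zones[i].id;       // stash the hit on the marker itself
      marker.section = zones[i].section;
      break;                           // first matching zone wins
    }
  }
});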

Dynamic loading of KML placemarks on Google maps

I have a static .csv-type file that lists 4000+ addresses, each with a unique ID and the corresponding latitude and longitude. I want to query a database that has events stored for each venue ID and then display on a Google map only those addresses that have events matching the query.
This would be one thing were it not for Google's query limit (when it goes live, there is the potential for hundreds of thousands to millions of queries daily). The limit for KML files is sufficient, however (I believe it is only file size that is counted - am I wrong?), and I would just convert the .csv-type file to a .kml file were it not for the fact that I don't want all 4000+ addresses to be loaded on the map every time, only those that correspond to the search query.
There has to be a way of selectively loading certain placemarks from a single .kml file, right? I would prefer not to use a server-side approach (ASP.NET) if possible, but will if absolutely necessary.
~~~~~
I think I'll use the server-side approach. I would still like to use KML, as I was running into the query limit trying pure JavaScript (although I may have been doing something wrong then, as that was when I was just learning how to use the Google Maps API). The KML consists of venues and their relevant locations for events one might have to buy tickets to. A search term might be 'wicked New York' or 'concerts FL'. The database will return venue IDs, which correlate to placemark IDs in the KML file. What I would like to do is take the array of venue IDs returned by the search query, scan through the KML file, and return only those placemark IDs that match the venue IDs in the array. I would then like to have the KML placemarks loaded into a div tag on the same page and have this be what Google uses to put the pointers on the map. Is there a way of using # named anchors instead of complete URLs to load the KML into Google Maps (var kmlVar = new google.maps.KmlLayer('#kmlDivTagOnSamePage'); doesn't work)? That would make the server-side ASP part easy to write.
If you are just using 2D maps, there is probably no need to use KML. And since you don't need to display all 4000+ locations at once, you probably don't need Google's server-side rendering (you can just render the markers in JavaScript), so although Fusion Tables could work, I'd instead recommend one of these alternatives:
- Load all the data up front as a JSON dictionary with the IDs as your key; when a search is run, find the matches and display just those (sketched below)
- Don't load anything on the client initially: just query your server and have it return all the data you need for display for each search
If these suggestions don't seem to make sense, perhaps try providing the exact type of query someone might ask, what type of response you expect from the database, whether you control that database, etc.
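A sketch of the first alternative, with made-up venue IDs and fields; the dictionary is built once from your static file, and only the search matches get markers:

// Keyed by venue ID so lookups from search results are O(1).
var venues = {
  "v1001": { name: "Venue A", lat: 40.7625, lng: -73.9847 },
  "v1002": { name: "Venue B", lat: 40.7575, lng: -73.9881 }
  // ... the rest of the 4000+ records, converted from the .csv file
};

function showVenues(map, ids) {
  ids.forEach(function (id) {        // ids come back from your event search
    var v = venues[id];
    if (!v) return;                  // unknown ID: skip it
    new google.maps.Marker({
      position: new google.maps.LatLng(v.lat, v.lng),
      title: v.name,
      map: map
    });
  });
}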

Bulk Collection Manipulation through a REST (RESTful) API

I'd like some advice on designing a REST API which will allow clients to add/remove large numbers of objects to a collection efficiently.
Via the API, clients need to be able to add items to the collection and remove items from it, as well as manipulating existing items. In many cases the client will want to make bulk updates to the collection, e.g. adding 1000 items and deleting 500 different items. It feels like the client should be able to do this in a single transaction with the server, rather than requiring 1000 separate POST requests and 500 DELETEs.
Does anyone have any info on the best practices or conventions for achieving this?
My current thinking is that one should be able to PUT an object representing the change to the collection URI, but this seems at odds with the HTTP 1.1 RFC, which suggests that the data sent in a PUT request should be interpreted independently of the data already present at the URI. This implies that the client would have to send a complete description of the new state of the collection in one go, which may well be much larger than the change, or even more than the client knows when making the request.
Obviously, I'd be happy to deviate from the RFC if necessary but would prefer to do this in a conventional way if such a convention exists.
You might want to think of the change task as a resource in itself. So you're really PUT-ing a single object, which is a Bulk Data Update object. Maybe it's got a name, owner, and big blob of CSV, XML, etc. that needs to be parsed and executed. In the case of CSV you might want to also identify what type of objects are represented in the CSV data.
List jobs, add a job, view the status of a job, update a job (probably in order to start/stop it), delete a job (stopping it if it's running), etc. Those operations map easily onto a REST API design.
Once you have this in place, you can easily add different data types that your bulk data updater can handle, maybe even mixed together in the same task. There's no need to have this same API duplicated all over your app for each type of thing you want to import, in other words.
This also lends itself very easily to a background-task implementation. In that case you probably want to add fields to the individual task objects that allow the API client to specify how they want to be notified (a URL they want you to GET when it's done, or send them an e-mail, etc.).
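In HTTP terms, the job lifecycle might look something like the sketch below; every path and field name here is hypothetical:

// POST /bulk-updates        create a job (body: the CSV/XML payload)
// GET  /bulk-updates/{id}   poll the job's status
fetch('/bulk-updates', {
  method: 'POST',
  headers: { 'Content-Type': 'text/csv' },
  body: csvPayload // e.g. 'op,id\nadd,1001\ndelete,42'
}).then(function (res) {
  return res.json(); // assume the server answers { "id": "...", "status": "queued" }
}).then(function (job) {
  return fetch('/bulk-updates/' + job.id); // check progress later
});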
Yes, PUT creates/overwrites, but does not partially update.
If you need partial update semantics, use PATCH. See http://greenbytes.de/tech/webdav/draft-dusseault-http-patch-14.html.
You should use AtomPub. It is specifically designed for managing collections via HTTP. There might even be an implementation for your language of choice.
For the POSTs, at least, it seems like you should be able to POST to a list URL and have the body of the request contain a list of new resources instead of a single new resource.
As far as I understand it, REST means REpresentational State Transfer, so you should transfer the state from client to server.
If that means too much data going back and forth, perhaps you need to change your representation. A collectionChange structure would work, with a series of deletions (by ID) and additions (with embedded full XML representations), POSTed to a handling interface URL. The interface implementation can choose its own method for deletions and additions server-side.
The purest version would probably be to define the items by URL and have the collection contain a series of URLs. The new collection can be PUT after changes by the client, followed by a series of PUTs for the items being added, and perhaps a series of deletions if you want to actually remove the items from the server rather than just remove them from that list.
You could introduce a meta-representation of existing collection elements that don't need their entire state transferred, so in some abstract code your update could look like this:
{existing elements 1-100}
{new element foo with values "bar", "baz"}
{existing element 105}
{new element foobar with values "bar", "foo"}
{existing elements 110-200}
Adding (and modifying) elements is done by defining their values, deleting elements is done by not mentioning them in the new collection, and reordering elements is done by specifying the new order (if order is stored at all).
This way you can easily represent the entire new collection without having to re-transmit the entire content. Using an If-Unmodified-Since header makes sure that your idea of the content indeed matches the server's idea (so that you don't accidentally remove elements that you simply didn't know about when the request was submitted).
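A hypothetical JSON encoding of that mixed representation (the field names are mine):

{ "collection": [
  { "ref": "items/1-100" },
  { "item": { "name": "foo",    "values": ["bar", "baz"] } },
  { "ref": "items/105" },
  { "item": { "name": "foobar", "values": ["bar", "foo"] } },
  { "ref": "items/110-200" }
] }

You would PUT this against the collection URI with If-Unmodified-Since (or If-Match and an ETag), letting the server expand the refs against its current state and reject the request if the collection changed in the meantime.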
The best way is:
1. Pass only an ID array of the deletable objects from the front-end application to the Web API (a sketch follows below).
2. Then you have two options:
2.1. Web API way: find all collections/entities using the ID array and delete them in the API, but you need to take care of dependent entities, like foreign-key relational table data, too.
2.2. Database way: pass the IDs to your database side, find all records in the foreign-key tables and the primary-key tables, and delete them in the same order, i.e. F-key table records, then P-key table records.
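The front-end half of step 1 is a single request carrying the ID array; a sketch with a hypothetical endpoint and payload:

// Ship all deletable IDs in one call instead of N separate DELETE requests.
fetch('/api/items/bulk-delete', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ ids: [42, 1001, 7] })
}).then(function (res) {
  if (!res.ok) throw new Error('bulk delete failed: ' + res.status);
});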
