I am scraping data from some interactive maps, including marker locations, and would like to acquire and reconstruct the map image to use with those markers. For example, consider the map for the videogame Elden Ring provided by Map Genie here.
From what I understand, the tiles are 256x256 and are fetched depending on the user's view position and their zoom.
I am not sure how to programmatically acquire all the map tiles (and put them together) in a way that still relates to the longitude/latitude data of the scraped markers. I generally use Selenium in Python to scrape web data, but in this instance the data is fetched dynamically, and I'm not sure how to start.
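One way to see the idea: interactive map sites commonly serve tiles on the standard XYZ scheme ({zoom}/{x}/{y}.png). The actual base URL has to be read from the browser's network tab while panning the map, so the one below is only a placeholder. A small Node sketch that downloads one zoom level and records where each 256x256 tile sits in the stitched image:

// Node 18+ sketch: fetch every tile of one zoom level of an XYZ pyramid.
// BASE_URL is a placeholder - find the real pattern in the dev tools Network tab.
import { mkdir, writeFile } from "node:fs/promises";

const BASE_URL = "https://example.com/tiles"; // hypothetical
const TILE_SIZE = 256;

async function downloadZoomLevel(zoom) {
  const n = 2 ** zoom; // tiles per axis at this zoom
  await mkdir(`tiles/${zoom}`, { recursive: true });
  for (let x = 0; x < n; x++) {
    for (let y = 0; y < n; y++) {
      const res = await fetch(`${BASE_URL}/${zoom}/${x}/${y}.png`);
      if (!res.ok) continue; // many game maps only publish tiles that contain content
      const buf = Buffer.from(await res.arrayBuffer());
      await writeFile(`tiles/${zoom}/${x}_${y}.png`, buf);
      // In the stitched image this tile's top-left corner is pixel (x * TILE_SIZE, y * TILE_SIZE).
    }
  }
}

downloadZoomLevel(5).catch(console.error);

Stitching is then just pasting each tile at (x*256, y*256), and marker longitude/latitude can be projected into that same pixel space with the standard Web Mercator tile formulas, so the markers stay aligned with the reconstructed image.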
I am currently investigating the possibility of utilising the HERE APIs as source data inputs for solutions we would like to develop as a company on ESRI.
I understand that for any data we retrieve from the API, I would need to create ESRI objects on the fly and add them to an ESRI map layer.
However, one thing that is not clear in my mind is how to calculate the tiles for my particular map extent.
The REST API gives some basic information about calculating tiles from lat/lon - but what is the lat/lon of? Is it the map center? Is it the bottom left or bottom right?
Are there any JS helper methods from HERE that would calculate the tiles required for a particular extent?
Are you referring to this documentation part: https://developer.here.com/documentation/map-tile/common/map_tile/topics/mercator-projection.html ?
It is the coordinate of the part of the map which you want to display e.g. the center of your map view.
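For reference, the formulas on that page are the standard Web Mercator tiling math. A plain JavaScript sketch (not a HERE helper method, just the same equations) that turns a lat/lon into a tile column/row, and covers a whole extent by taking the tiles of two opposite corners:

// Standard Web Mercator lat/lng -> tile column/row at a given zoom
function latLngToTile(lat, lng, zoom) {
  const n = 2 ** zoom;
  const latRad = (lat * Math.PI) / 180;
  const col = Math.floor(((lng + 180) / 360) * n);
  const row = Math.floor(
    ((1 - Math.log(Math.tan(latRad) + 1 / Math.cos(latRad)) / Math.PI) / 2) * n
  );
  return { col: col, row: row };
}

// For an extent, request every column/row between the top-left and
// bottom-right corner tiles.
function tilesForExtent(south, west, north, east, zoom) {
  const tl = latLngToTile(north, west, zoom);
  const br = latLngToTile(south, east, zoom);
  const tiles = [];
  for (let col = tl.col; col <= br.col; col++) {
    for (let row = tl.row; row <= br.row; row++) {
      tiles.push({ col: col, row: row, zoom: zoom });
    }
  }
  return tiles;
}

So in practice you don't feed it the map centre alone: for an extent you convert two opposite corners and fetch everything in between.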
Background:
Leaflet map
a vector tile layer
MapBox Studio to generate vector tiles (protocol buffer format)
MapBox.com for tile hosting
using Leaflet.MapboxVectorTile plugin to parse and render MapBox vector tiles
Issues:
Cannot get original feature geometry: I wanted to fetch the feature geometry so I can zoom to it, e.g. when the user clicks on it. But this plugin (Leaflet.MapboxVectorTile) doesn't have a clear way to do so. The geom values (including BBOX) are all in tile-relative coordinates (i.e., not the original geometry).
Poor capability in identifying features (click/hover): Another problem with this plugin is that its identification algorithm is not robust enough. The authors disabled hover-identify as it turned out to be very slow. For clicking, I noticed it is sometimes hard to select a polygon (it happens for some polygons and depends on where you click within the polygon); you have to click at certain locations for the plugin to be able to identify the feature.
This plugin is very good in my experience except for the above two issues.
My Questions:
Is there a counterpart MapBox solution that does something similar to this plugin? In other words, one that parses and renders a vector tile layer service on the map, with robust feature identification and the ability to fetch feature geometry?
Currently the tiles generated in MapBox Studio are in .pbf, which is great in terms of efficient data transmission. I wonder if it supports a JSON format (GeoJSON or TopoJSON) out of the box? If it does, I could probably try using MapBox.JS featureLayer to consume vector tile JSON data.
I understand Mapbox GL JS is designed for vector tile based maps. I am only looking to add a vector tile layer, though.
I appreciate any response. Thanks!
You'll want to try out mapbox-gl-leaflet, a library that integrates the efficient Mapbox GL library into Leaflet as a layer.
Cannot get original feature geometry
Vector tiles do not contain the original feature geometry. Mapbox GL does provide an API that delivers GeoJSON, but it isn't going to be your raw data: if it were, the map would be slow and inefficient, since raw data is over-detailed.
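A minimal sketch of that combination, assuming your own Mapbox token and a style layer id (my-vector-layer is made up here); getMapboxMap() is the accessor recent versions of mapbox-gl-leaflet expose for the underlying GL map, so check it against the plugin version you install:

var map = L.map('map').setView([40, -74.5], 9);

var glLayer = L.mapboxGL({
  accessToken: 'YOUR_MAPBOX_TOKEN',               // placeholder
  style: 'mapbox://styles/mapbox/streets-v11'
}).addTo(map);

map.on('click', function (e) {
  var glMap = glLayer.getMapboxMap();
  // Convert the Leaflet click to a pixel in the GL canvas and ask GL which
  // rendered features are under it.
  var point = glMap.project([e.latlng.lng, e.latlng.lat]);
  var features = glMap.queryRenderedFeatures(point, {
    layers: ['my-vector-layer']                   // hypothetical style layer id
  });
  if (features.length) {
    // feature.geometry is GeoJSON, but clipped and simplified to the tile,
    // not your original source geometry.
    console.log(features[0].properties, features[0].geometry);
  }
});

queryRenderedFeatures gives you robust hit-testing for clicks (and hover, if you attach it to mousemove), which addresses the identification issue; the geometry it returns is still tile-level, per the caveat above.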
I need a polygon for every German state. I got all the GeoPoints in one JavaScript file, but because of the number of points the file is about 4 MB. I've been googling and thinking about this problem all day but couldn't figure out a solution...
How can I use Google Maps polygons without forcing the user to download a huge js-file with the coordinates?
Thanks!
Ron
You can encode the polygon points to vastly reduce the size of the javascript file. To do this, you must include the geometry library.
https://developers.google.com/maps/documentation/javascript/reference#encoding
https://developers.google.com/maps/documentation/javascript/geometry#Encoding
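A minimal sketch of the round trip, assuming the API is loaded with libraries=geometry, an existing google.maps.Map named map, and placeholder coordinates:

// Encode a path into one compact string instead of shipping raw coordinates
var path = [
  new google.maps.LatLng(52.5200, 13.4050),
  new google.maps.LatLng(48.1351, 11.5820),
  new google.maps.LatLng(50.1109, 8.6821)
];
var encoded = google.maps.geometry.encoding.encodePath(path);

// On the client, decode the string and build the polygon from it
var decoded = google.maps.geometry.encoding.decodePath(encoded);
var polygon = new google.maps.Polygon({
  paths: decoded,
  strokeWeight: 1,
  fillOpacity: 0.3,
  map: map
});

You would do the encoding once (server-side or in a one-off script) and only ship the encoded strings to the browser, which is where the size saving comes from.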
One option is to use a FusionTablesLayer to display the polygons. They are available in the Natural Earth data set that is publicly available.
Example
You could do the same with your points and a KmlLayer if you convert your data to KML.
Well... somehow the points have to get to the user. You could think of the following solutions to reduce the data usage:
Use different polygons for different zoom levels. For example, when zoomed out you won't need the full detail.
Only send parts of your polygon back to the user. You can, for example, send the viewport coordinates to an AJAX script; it queries your database and/or shapefile and only returns the polygon parts that are visible in the user's viewport (see the sketch after this list).
Preprocess tiles. If you can generate images from your shape, you can overlay these on Google Maps.
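A rough sketch of the second option, assuming a hypothetical /polygons.php endpoint that returns GeoJSON for the requested bounds and an existing google.maps.Map named map:

var dataLayer = new google.maps.Data({ map: map });

google.maps.event.addListener(map, 'idle', function () {
  var b = map.getBounds();
  var url = '/polygons.php' +                       // hypothetical endpoint
    '?south=' + b.getSouthWest().lat() +
    '&west='  + b.getSouthWest().lng() +
    '&north=' + b.getNorthEast().lat() +
    '&east='  + b.getNorthEast().lng() +
    '&zoom='  + map.getZoom();                      // lets the server pick a simplification level

  fetch(url)
    .then(function (res) { return res.json(); })
    .then(function (geojson) {
      dataLayer.forEach(function (f) { dataLayer.remove(f); }); // drop the old parts
      dataLayer.addGeoJson(geojson);
    });
});

Passing the zoom as well lets the server combine the first two options: return simplified polygons when zoomed out and full detail only when zoomed in.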
I have a scenario where I have to highlight borders and shade a state or city after geocoding it (when I have the lat and lng).
How can I do this? Do I need complete boundary information for a city to surround it with polylines, or is there a way the Maps API can do this for me?
True. Google does not provide this feature. So what we can do is get the lat/longs of the state borders and draw the polygons ourselves.
I used this JS object and converted its entries to Google Maps objects (google.maps.LatLng).
For example:
var statesobj = {"AK": [new google.maps.LatLng(70.0187, -141.0205),
new google.maps.LatLng(70.1292, -141.7291),
new google.maps.LatLng(70.4515, -144.8163)]}
So, it's easy now: loop over these lat/lngs and you can draw a polygon for every US state.
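For example, a minimal sketch assuming an existing google.maps.Map named map:

// One polygon per state, using the object above
for (var state in statesobj) {
  new google.maps.Polygon({
    paths: statesobj[state],
    strokeColor: '#FF0000',
    strokeWeight: 1,
    fillColor: '#FF0000',
    fillOpacity: 0.2,
    map: map
  });
}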
So this is the solution I came up with. If you guys know a better way to do it, please share.
You can also try Google Geo Charts:
http://code.google.com/apis/chart/interactive/docs/gallery/geochart.html
Google Maps API doesn't allow you to retrieve city borders. There are a couple other places from which you can get the coordinates, though:
Flickr API
There is a Flickr API based on photos that people tag, but it's only as accurate as the people who tag photos: so it's good enough for bootstrapping but probably not for production: http://karya-blog.blogspot.com/2012/12/fetching-city-polygons-with-flickr-api.html
Natural Earth Data
An accurate alternative is www.naturalearthdata.com. To get that data from there you just need to make two requests: one with the city name and one with its ID to get the footprint:
unlock.edina.ac.uk/ws/search?name=berlin&gazetteer=naturalearth&format=json
and then
unlock.edina.ac.uk/ws/footprintLookup?format=json&identifier=14126951
and you're set :)
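The same two requests in code; the endpoints are exactly the ones above, and the identifier 14126951 is the one from the example, so in practice you would pick it out of the first response:

fetch('http://unlock.edina.ac.uk/ws/search?name=berlin&gazetteer=naturalearth&format=json')
  .then(function (res) { return res.json(); })
  .then(function (searchResults) {
    console.log(searchResults); // pick the identifier of the match you want
    return fetch('http://unlock.edina.ac.uk/ws/footprintLookup?format=json&identifier=14126951');
  })
  .then(function (res) { return res.json(); })
  .then(function (footprint) {
    console.log(footprint);     // the boundary geometry for Berlin
  });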
Mapzen
If it's possible for you to pre-fetch the data, go for Mapzen, they have a full and pretty accurate database: https://mapzen.com/data/borders/
I'm afraid the Google Maps API doesn't provide any means to access region (country, state, city, ...) shapes.
If you want to highlight regions you have to create custom overlays based on data acquired elsewhere.
Now the basic map example includes a "mashup" of data. When identifying data is fed to the web service, the resulting output can pinpoint locations on the map.
It shows how a geographic Map Marker is placed on the map to identify a specific location. Map Markers can use the default icon (shown) or a custom image, gauge, or even a chart. Optionally, the map can be configured to display a Map Marker Info window, containing additional location-specific data, when the marker is clicked.
It includes data-driven, colored regions (in this case, representing postal codes) overlaid on a map of, for example, Washington, DC. Logi Info can work with GIS boundary data to produce region overlays for states, counties, cities, school districts, and other areas. Like the Map Marker, regions can be clicked to display a pop-up information window with detail data.
I am looking for advice for an application I am developing that uses Google Map.
Summary:
A user has a list of criteria for searching for street segments that fulfill those criteria. The street segments will be colored with 3 colors to show those below average, average and above average. The user then clicks on a street segment to see an information window showing the properties of that specific segment, hiding the segments not selected until he/she closes the window, at which point the other polylines become visible again. This looks quite like the Monopoly City Streets game Hasbro made some months ago, the differences being that I do not use Flash, I can't use OpenStreetMap because it doesn't list street segments (and if it did, the IDs wouldn't be the same anyway), and I do not have to show Google SketchUp buildings on top.
Information:
I have a database of street segments with IDs, polyline points and centroid.
The database has 6,000,000 street segment records in it. To narrow the generated data a bit we focus on one city at a time. The largest city we must show has 250,000 street segments. This means 250,000 line-segment polylines to show.
Our longest polyline uses 9,600 characters, stored across two varchar(8000) columns in SQL Server 2008.
We need to use API v3 because it is faster than API v2 and the application will be ported to the iPhone. For now it's an ASP.NET 3.5 application with SQL Server 2008.
Performance is a priority.
Problems:
Most of the demo projects that do this are made with API v2. So besides the tutorials on the Google API v3 reference page, I have nothing to compare performance or technology choices against to achieve my goal.
There is no available .NET wrapper for the API v3 yet.
Generating 250,000 line-segment polylines creates a heavy file which takes time to transfer and parse. (I have found a demo of one polyline with 390,000 points. I think the encoder would be far less efficient with more polylines of fewer points each, since there will be less rounding.)
Since streets segments are shown based on criteria, polylines must be dynamically created and cache can't be used.
Some thoughts:
KML/KMZ:
Pros:
Since it is a standard, we can easily load Bing Maps, Yahoo! Maps, Google Maps, and Google Earth with the same KML file. The data generation would be the same.
Cons:
LineString in KML cannot be an encoded polyline like the Google Maps API can handle, so it would probably be bigger and slower to display. Zipping a file of that size will take more processing time and require the client side to uncompress the data, and I am not quite sure, with 250,000 records, how an iPhone would handle this or how a server would handle 40 users browsing at the same time.
JavaScript file:
Pros:
JavaScript file can have encoded polyline and would significantly reduce the file to transfer.
Cons:
I would have to create my own stripped-down code on top of API v3 to add overlays, create polylines, etc. It is more complex than just creating a KML file and pointing to the source.
GeoRSS:
This option isn't adapted for my needs I think, but I could be wrong.
MapServer:
I saw some posts suggesting using MapServer to generate overlays. I'm not quite sure about the connection with our database or the performance it would give. Plus it requires a plugin for generating KML. It seems to me that it wouldn't let me do better than creating my own KML or JavaScript file, and maintenance would be simpler without it.
Monopoly City Streets:
The game is now over, but for those who know what I am talking about, Monopoly City Streets showed, at max zoom level, only the streets whose centroid was inside the bounds of the window. Moving the map sent a request to the server for the new streets to show. While I think this was ingenious, I have no idea how to implement something similar. The only thing I thought about was to compare whether the longitude is inside the bounds of the map area in X, and the same in Y. While this could improve performance significantly at high zoom levels, it would give nothing when showing a whole city.
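A rough sketch of that centroid-in-bounds idea with the v3 API; the endpoint name and the response fields (encodedPath, color) are made up for illustration, and the geometry library is assumed to be loaded for decoding:

google.maps.event.addListener(map, 'idle', function () {
  if (map.getZoom() < 15) return;                 // only at high zoom, as in the game
  var sw = map.getBounds().getSouthWest();
  var ne = map.getBounds().getNorthEast();

  fetch('/segments.ashx?swLat=' + sw.lat() + '&swLng=' + sw.lng() +
        '&neLat=' + ne.lat() + '&neLng=' + ne.lng())
    .then(function (res) { return res.json(); })
    .then(function (segments) {
      segments.forEach(function (s) {
        new google.maps.Polyline({
          path: google.maps.geometry.encoding.decodePath(s.encodedPath),
          strokeColor: s.color,                   // one of the 3 category colours
          map: map
        });
      });
    });
});

Server-side this is just a WHERE centroidLat BETWEEN @swLat AND @neLat AND centroidLng BETWEEN @swLng AND @neLng query against the stored centroids.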
Clustering:
While clustering is awesome for markers, it seems we cannot cluster polylines. I would have liked something like MarkerClusterer for polylines, with the ability to cluster by my 3 polyline colors. This will probably stay as a "would have been freaking awesome but forget it".
Arrow:
In a future version I will have to show a direction for each polyline and will have to show an arrow at the centroid. Loading an image or marker will only double my data, so creating a custom overlay will probably be my only option. I have found that demo for something similar to what I would like to achieve. Unfortunately, the demo is very slow, but I only wish to show 1 arrow per polyline and not multiple like the demo. This functionality will depend on the format of the data, since I don't think KML supports custom overlays.
Criteria:
While the application is done with ASP.NET 3.5, the iPhone port won't use the web to show the application and will be limited in screen size for selecting the criteria. This is why I was leaning toward a service or page that generates the file based on criteria passed as parameters. The service would then generate the file I need to display the polylines on the map. I could also create an aspx page that does this. The aspx page approach is better documented than the service approach; there should be a reason for that.
Questions:
Should I create a web service that returns the street segments file, or an aspx page that returns the file?
Should I create a JavaScript file with encoded polylines, or a KML file with longitude/latitude, given that the largest polyline has 9,600 characters and I have to render at most 250,000 line-segment polylines? Or should I go with a MapServer that generates the overlay?
Will I be able to display a simple arrow on the polylines in the next version?
In case of KML generation, is it faster to create the file with XDocument, XmlDocument, XmlWriter and so on manually, or to just serialize the street segments into the stream?
This is more of a brainstorming Stack Overflow question than an actual code problem. Any answer helping narrow the possibilities is as good as someone having all the knowledge to point me to a better choice.
Large numbers of short GPolylines run massively slower than small numbers of long GPolylines.
The speed difference between Google Maps v2 and Google Maps v3 is not going to be significant, because most of the CPU time will be taken up by the actual graphics system of the browser. Google Maps uses the VML, SVG or Canvas graphics systems, depending on the browser. Of these, VML is by far the slowest, and that gets used whenever the browser is MSIE.
Before embarking on tackling 250,000 line segments, I suggest you take a look at this quick speed test of 200 random polylines. Try zooming and panning that map in MSIE.
Then, also consider the amount of data that needs to be sent from the server to the client to specify 250,000 line segments. The amount of data will vary depending on whether you choose KML or JSON or GeoRSS, but if you end up with 20 bytes per line segment, that's about 5 MB, which would take around 40 seconds to fetch on a 1 megabit broadband connection. Consider whether your users would be prepared to sit around that long.
The only solution that really makes sense is to do what Google do for their traffic overlay, and draw the lines onto tiles in the server, and have those tiles be displayed as a GTileLayerOverlay in the client.
What you need is a spatially aware database and a server-side graphics library like gd or ImageMagick. The client asks for a tile from the server. If the zoom is above a certain level, the server scans the database for line segments whose bounding boxes overlap the bounding box of the requested tile and uses the graphics library to draw them.
The zoom level limit is there to limit the amount of work that your database and server needs to do. You don't want to end up drawing 250,000 line segments onto a single zoomed out tile because that's an awful lot of hard work for the server, and isn't going to mean very much to the user.
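As an illustration of that tile-drawing idea, here is a minimal Node sketch using Express and node-canvas instead of gd/ImageMagick; querySegments() is a hypothetical stand-in for the spatial query against your database:

const express = require('express');
const { createCanvas } = require('canvas');

const TILE = 256;
const MIN_ZOOM = 14;                              // below this, serve empty tiles

// Web Mercator: lat/lng -> global pixel coordinates at a zoom level
function toPixels(lat, lng, zoom) {
  const scale = TILE * 2 ** zoom;
  const x = ((lng + 180) / 360) * scale;
  const latRad = (lat * Math.PI) / 180;
  const y = ((1 - Math.log(Math.tan(latRad) + 1 / Math.cos(latRad)) / Math.PI) / 2) * scale;
  return { x: x, y: y };
}

// Tile (z, x, y) -> lat/lng bounding box, used to query the database
function tileBounds(z, x, y) {
  const n = 2 ** z;
  const lngW = (x / n) * 360 - 180;
  const lngE = ((x + 1) / n) * 360 - 180;
  const latN = (Math.atan(Math.sinh(Math.PI * (1 - (2 * y) / n))) * 180) / Math.PI;
  const latS = (Math.atan(Math.sinh(Math.PI * (1 - (2 * (y + 1)) / n))) * 180) / Math.PI;
  return { latS: latS, lngW: lngW, latN: latN, lngE: lngE };
}

// Hypothetical stand-in for the real spatially indexed query;
// should return { lat1, lng1, lat2, lng2, color } records.
async function querySegments(bbox) {
  return [];
}

const app = express();
app.get('/tiles/:z/:x/:y.png', async (req, res) => {
  const z = +req.params.z, tx = +req.params.x, ty = +req.params.y;
  const canvas = createCanvas(TILE, TILE);
  const ctx = canvas.getContext('2d');

  if (z >= MIN_ZOOM) {
    const segments = await querySegments(tileBounds(z, tx, ty));
    for (const s of segments) {
      const a = toPixels(s.lat1, s.lng1, z);
      const b = toPixels(s.lat2, s.lng2, z);
      ctx.strokeStyle = s.color;
      ctx.beginPath();
      ctx.moveTo(a.x - tx * TILE, a.y - ty * TILE); // shift into this tile's frame
      ctx.lineTo(b.x - tx * TILE, b.y - ty * TILE);
      ctx.stroke();
    }
  }
  res.type('png').send(canvas.toBuffer('image/png'));
});

app.listen(3000);

The client then just points a tile overlay (GTileLayerOverlay in v2, or an ImageMapType in v3) at /tiles/{z}/{x}/{y}.png.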
Regarding click handling:
The easy thing to do is to listen for clicks on the map, rather than on the objects, and send the click details to a server. The server then uses the click location to search the spatially aware database and returns the details of the clicked object if there is one. The client code does this:
GEvent.addListener(map,"click",function(overlay,point) {
var url="clickserver.php?lat=" + point.lat() + "&lng=" +point.lng();
GDownloadUrl(url, function(html) {
if (html.length) {
map.openInfoWindow(html)
}
});
});
The harder thing to do is to handle the changing of the cursor when the pointer is over the polylines. There's a known technique for doing cursor changes for small markers, which works like this:
Whenever a tile is fetched, the .getTileUrl() also makes a call to a server that returns a list of hotspot boxes for that tile. As the mouse moves, the client constantly calculates which tile the mouse is over, and then scans the corresponding list of hotspot boxes.
Google themselves, in their GLayer() code, add the sophistication of performing a quadtree search to speed up the search for hotspots within a tile, but other people who have implemented this strategy in their own code reckon that's not necessary, and a linear scan of the hotspot list is fast enough.
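A rough sketch of that mousemove scan with the v2 API, assuming hotspotsByTile has already been filled with each tile's boxes (in global pixel coordinates) at the time the tile URL was requested, and that the map container has id "map":

var hotspotsByTile = {};  // "zoom/x/y" -> array of {minX, minY, maxX, maxY} boxes

GEvent.addListener(map, "mousemove", function (latlng) {
  var zoom = map.getZoom();
  // Global pixel position of the cursor
  var p = G_NORMAL_MAP.getProjection().fromLatLngToPixel(latlng, zoom);
  var key = zoom + "/" + Math.floor(p.x / 256) + "/" + Math.floor(p.y / 256);
  var boxes = hotspotsByTile[key] || [];

  var over = false;
  for (var i = 0; i < boxes.length; i++) {  // linear scan: fast enough in practice
    var b = boxes[i];
    if (p.x >= b.minX && p.x <= b.maxX && p.y >= b.minY && p.y <= b.maxY) {
      over = true;
      break;
    }
  }
  document.getElementById("map").style.cursor = over ? "pointer" : "";
});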
I've no idea how to extend that to handling cursor over polyline detection.