Need guidance on a Google Maps application that has to show 250,000 polylines - asp.net

I am looking for advice for an application I am developing that uses Google Maps.
Summary:
A user has a list of criteria for searching for street segments that fulfill those criteria. The matching street segments are colored with 3 colors to show which are below average, average, and above average. The user then clicks on a street segment to see an information window showing the properties of that specific segment; the segments not selected are hidden until he/she closes the window, at which point the other polylines become visible again. This is quite like the Monopoly City Streets game Hasbro released some months ago, the differences being that I do not use Flash, I can't use OpenStreetMap because it doesn't list street segments (and even if it did, the IDs wouldn't be the same), and I do not have to show Google SketchUp buildings on top.
Information:
I have a database of street segments with IDs, polyline points and centroid.
The database has 6,000,000 street segment records in it. To narrow the generated data a bit we focus on one city at a time. The largest city we must show has 250,000 street segments, which means 250,000 line-segment polylines to show.
Our longest polyline uses 9,600 characters, which is stored across two varchar(8000) columns in SQL Server 2008.
We need to use API v3 because it is faster than API v2 and the application will be ported to the iPhone. For now it's an ASP.NET 3.5 application with SQL Server 2008.
Performance is a priority.
Problems:
Most of the demo projects that do this are made with API v2, so besides the tutorials on the Google API v3 reference page I have nothing against which to compare performance or technology choices for achieving my goal.
There is no available .NET wrapper for the API v3 yet.
Generating 250,000 line-segment polylines creates a heavy file which takes time to transfer and parse. (I have found a demo with one polyline of 390,000 points. I think the encoder would be far less efficient with many polylines of fewer points each, since there would be less rounding; see the encoding sketch after this list.)
Since street segments are shown based on criteria, polylines must be created dynamically and a cache can't be used.
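For reference, here is a minimal sketch (plain JavaScript, illustration only, not production code) of the delta-based algorithm behind Google's encoded polylines. It shows why many short polylines compress less well than one long one: every new polyline restarts from full absolute coordinate values instead of small deltas.

// Encode one signed delta (already scaled by 1e5) into the polyline alphabet.
function encodeNumber(delta) {
  var v = delta < 0 ? ~(delta << 1) : (delta << 1);        // sign folded into the low bit
  var out = "";
  while (v >= 0x20) {
    out += String.fromCharCode((0x20 | (v & 0x1f)) + 63);  // 5-bit chunks with continuation bit
    v >>= 5;
  }
  return out + String.fromCharCode(v + 63);
}

// Encode an array of {lat, lng} points as one encoded polyline string.
function encodePath(points) {
  var lastLat = 0, lastLng = 0, result = "";
  for (var i = 0; i < points.length; i++) {
    var lat = Math.round(points[i].lat * 1e5);
    var lng = Math.round(points[i].lng * 1e5);
    result += encodeNumber(lat - lastLat) + encodeNumber(lng - lastLng);
    lastLat = lat;                                          // subsequent points are tiny deltas
    lastLng = lng;
  }
  return result;
}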
Some thoughts:
KML/KMZ:
Pros:
Since it is a standard, we can easily load Bing Maps, Yahoo! Maps, Google Maps, and Google Earth with the same KML file. The data generation would be the same.
Cons:
LineString in KML cannot be an encoded polyline the way the Google Maps API can handle, so the file would probably be bigger and slower to display. Zipping a file of that size would take more processing time and require the client side to uncompress the data, and I am not quite sure, with 250,000 segments, how an iPhone would handle this or how the server would handle 40 users browsing at the same time.
JavaScript file:
Pros:
A JavaScript file can contain encoded polylines, which would significantly reduce the amount of data to transfer.
Cons:
I would have to create my own stripped-down layer over API v3 to add overlays, create polylines, etc. It is more complex than just creating a KML file and pointing the map at it.
GeoRSS:
This option isn't suited to my needs, I think, but I could be wrong.
MapServer:
I saw some posts suggesting using MapServer to generate overlays. I am not quite sure about connecting it to our database or about the performance it would give, and it requires a plugin for generating KML. It seems to me that it wouldn't let me do better than creating my own KML or JavaScript file, and maintenance would be simpler without it.
Monopoly City Streets:
The game is now over, but for those who know what I am talking about, Monopoly City Streets showed, at maximum zoom level, only the streets whose centroid was inside the bounds of the window. Moving the map sent a request to the server for the new streets to show. While I think this was ingenious, I have no idea how to implement something similar. The only thing I thought of was comparing whether the centroid's longitude falls inside the map bounds on the X axis and likewise for the latitude on the Y axis. While this could improve performance significantly at high zoom levels, it would give nothing when showing a whole city.
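For illustration, a minimal sketch (Maps API v3, with a hypothetical getsegments.aspx handler) of that Monopoly-style approach: whenever the map stops moving, send the visible bounds to the server and request only the street segments whose centroid falls inside them.

// Reload segments whenever the viewport settles; skip city-wide views.
google.maps.event.addListener(map, 'idle', function () {
  var bounds = map.getBounds();
  if (!bounds || map.getZoom() < 15) return;
  var sw = bounds.getSouthWest();
  var ne = bounds.getNorthEast();
  // getsegments.aspx is a hypothetical handler that would run something like:
  // WHERE CentroidLng BETWEEN @west AND @east AND CentroidLat BETWEEN @south AND @north
  var url = 'getsegments.aspx?west=' + sw.lng() + '&south=' + sw.lat() +
            '&east=' + ne.lng() + '&north=' + ne.lat();
  // ...fetch url and draw the returned segments as google.maps.Polyline objects.
});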
Clustering:
While clustering is awesome for markers, it seems we cannot cluster polylines. I would have liked something like MarkerClusterer for polylines, with the ability to cluster by my 3 polyline colors. This will probably stay a “would have been freaking awesome, but forget it”.
Arrow:
In a future version I will have to show a direction for each polyline, with an arrow at the centroid. Loading an image or marker would only double my data, so creating a custom overlay will probably be my only option. I have found a demo of something similar to what I would like to achieve. Unfortunately, that demo is very slow, but I only wish to show 1 arrow per polyline and not multiple arrows like the demo. This functionality will depend on the data format, since I don't think KML supports custom overlays.
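As a side note, later revisions of the Maps API v3 added symbol icons on polylines, which can draw an arrow without loading an image per segment. A hedged sketch, where segmentPath is a hypothetical array of LatLngs for one segment:

// Draw one segment with a single arrow placed at its midpoint.
var segmentLine = new google.maps.Polyline({
  path: segmentPath,
  strokeColor: '#FF0000',
  icons: [{
    icon: { path: google.maps.SymbolPath.FORWARD_CLOSED_ARROW, scale: 2 },
    offset: '50%'                 // middle of the polyline, roughly the centroid
  }],
  map: map
});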
Criteria:
While the application is built with ASP.NET 3.5, the iPhone port won't use the web UI to show the application and will be limited in screen size for selecting the criteria. This is why I was leaning toward a service or page that generates the file based on criteria passed as parameters. The service would then generate the file I need to display the polylines on the map. I could also create an .aspx page that does this; the .aspx approach is better documented than the service approach, and there should be a reason for that.
Questions:
Should I create a web service that returns the street segment file, or an .aspx page that returns it?
Should I create a JavaScript file with encoded polylines, or a KML file with longitude/latitude coordinates, given that the largest polyline takes 9,600 characters and I have to render up to 250,000 line-segment polylines? Or should I go with MapServer generating the overlay?
Will I be able to display a simple arrow on each polyline in the next version?
In the case of KML generation, is it faster to create the file manually with XDocument, XmlDocument, or XmlWriter, or just to serialize the street segments into the stream?
This is more a brainstorming Stack Overflow question than an actual code problem. Any answer that helps narrow the possibilities is as good as someone with all the knowledge pointing me toward a better choice.

Large numbers of short GPolylines run massively slower than small numbers of long GPolylines.
The speed difference between Google Maps v2 and Google Maps v3 is not going to be significant, because most of the CPU time will be taken up by the actual graphics system of the browser. Google Maps uses the VML, SVG or Canvas graphics systems, depending on the browser. Of these, VML is by far the slowest, and that gets used whenever the browser is MSIE.
Before embarking on tackling 250,000 line segments, I suggest you take a look at this quick speed test of 200 random polylines. Try zooming and panning that map in MSIE.
Then, also consider the amount of data that needs to be sent from the server to the client to specify 250,000 line segments. The amount of data will vary depending on whether you choose KML or JSON or GeoRSS, but if you end up with 20 bytes per line segment that's about 5 MB, which would take around 40 seconds or more to fetch on a 1 megabit broadband connection. Consider whether your users would be prepared to sit around that long.
The only solution that really makes sense is to do what Google does for its traffic overlay: draw the lines onto tiles on the server, and have those tiles displayed as a GTileLayerOverlay in the client.
What you need is a spatially aware database and a server-side graphics library like gd or ImageMagick. The client asks for a tile from the server. If the zoom is above a certain level, the server scans the database for line segments whose bounding boxes overlap the bounding box of the requested tile and uses the graphics library to draw them.
The zoom level limit is there to limit the amount of work that your database and server needs to do. You don't want to end up drawing 250,000 line segments onto a single zoomed out tile because that's an awful lot of hard work for the server, and isn't going to mean very much to the user.
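A minimal sketch of the client side of that approach (Maps API v2, matching the GTileLayerOverlay suggestion above); tileserver.aspx is a hypothetical handler that would query the spatial database for segments intersecting the tile's bounding box and return them rendered as a transparent PNG:

// Custom tile layer that only serves tiles above a minimum zoom level.
var tileLayer = new GTileLayer(new GCopyrightCollection(''), 12, 19);
tileLayer.getTileUrl = function (tile, zoom) {
  // tileserver.aspx is hypothetical; it draws the segments for this tile server-side.
  return 'tileserver.aspx?x=' + tile.x + '&y=' + tile.y + '&z=' + zoom;
};
tileLayer.isPng = function () { return true; };      // transparent overlay tiles
tileLayer.getOpacity = function () { return 1.0; };
map.addOverlay(new GTileLayerOverlay(tileLayer));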
Regarding click handling:
The easy thing to do is to listen for clicks on the map, rather than on the objects, and send the click details to a server. The server then uses the click location to search the spatially aware database and returns the details of the clicked object if there is one. The client code does this:
GEvent.addListener(map, "click", function (overlay, point) {
  // Ignore clicks that landed on an existing overlay or that carry no coordinate.
  if (overlay || !point) return;
  var url = "clickserver.php?lat=" + point.lat() + "&lng=" + point.lng();
  GDownloadUrl(url, function (html) {
    if (html.length) {
      map.openInfoWindowHtml(point, html);   // anchor the window at the clicked point
    }
  });
});
The harder thing to do is to handle the changing of the cursor when the pointer is over the polylines. There's a known technique for doing cursor changes for small markers, which works like this:
Whenever a tile is fetched, the .getTileUrl() also makes a call to a server that returns a list of hotspot boxes for that tile. As the mouse moves, the client constantly calculates which tile the mouse is over, and then scans the corresponding list of hotspot boxes.
Google themselves, in their GLayer() code, add the sophistication of performing a quadtree search to speed up the search for hotspots within a tile, but other people who have implemented this strategy in their own code reckon that's not necessary, and a linear scan of the hotspot list is fast enough.
I've no idea how to extend that to handling cursor over polyline detection.
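For the marker case described above, a rough sketch of the hotspot scan (all names such as hotspotsByTile and tileKeyForLatLng are hypothetical; the hotspot lists are assumed to be filled in as each tile is fetched):

GEvent.addListener(map, 'mousemove', function (latlng) {
  var tileKey = tileKeyForLatLng(latlng, map.getZoom());   // hypothetical helper
  var boxes = hotspotsByTile[tileKey] || [];
  var hit = false;
  for (var i = 0; i < boxes.length; i++) {
    var b = boxes[i];                                       // {south, west, north, east, id}
    if (latlng.lat() >= b.south && latlng.lat() <= b.north &&
        latlng.lng() >= b.west && latlng.lng() <= b.east) {
      hit = true;
      break;
    }
  }
  // Simplest possible cursor switch: style the map container directly.
  map.getContainer().style.cursor = hit ? 'pointer' : '';
});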

Related

Totally offline app - SQLite - show POIs

Can I use osmdroid to save my offline maps in a SQLite database, then show some POIs offline and get the distances between my location and the locations of the POIs?
From my research I figured out that osmdroid does not support offline searching for POIs. Is that true?
Would it be better to look at the Mapsforge library?
With osmdroid, you can
View downloaded, prerendered map tiles offline via a database or several other mechanisms
Plot icons, lines, polygons, etc. and attach on-click handlers to each item
Show your location on the map
So that said, you can get the point for a specific item on the map. If you can get your location, then it's a simple equation to calculate straight line distance.
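For what it's worth, a sketch of that straight-line (great-circle) calculation using the haversine formula; this is plain JavaScript rather than Android/Java, purely to show the math.

// Straight-line distance in metres between two lat/lng points.
function distanceMeters(lat1, lng1, lat2, lng2) {
  var R = 6371000;                       // mean Earth radius in metres
  var toRad = Math.PI / 180;
  var dLat = (lat2 - lat1) * toRad;
  var dLng = (lng2 - lng1) * toRad;
  var a = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
          Math.cos(lat1 * toRad) * Math.cos(lat2 * toRad) *
          Math.sin(dLng / 2) * Math.sin(dLng / 2);
  return 2 * R * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
}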
If you want the points to be offline and searchable, you'll need a database of some sort, a way to populate it and some strategy to search it. This is pretty much what mapsforge is doing. I've done similar things with wikimapia data and it can definitely be done, but there's nothing provided out of the box since osmdroid only handles raster images.
If online is an option, osmbonuspack provides a number of drivers to search several online resources for POIs, including turn by turn directions.

Loading a KML file - is there a limit to the number of points?

I was loading 6,000 points of interest from a single KML file, but it wasn't loading. What I did was split it into 4 KML files. It loads, but it's quite slow. My questions are:
Is there a limit on the number of points I can put in the KML file?
Is there any way to speed it up in code? I just used these calls to load the KML files:
kmlManager.parseKML("./SOURCE_KML/Part1.KML")
kmlManager2.parseKML("./SOURCE_KML/Part2.KML", onParsed);
kmlManager3.parseKML("./SOURCE_KML/Part3.kml", onParsed);
kmlManager4.parseKML("./SOURCE_KML/Part4.kml", onParsed);
Obviously there must be some limit to the number of points read/displayed from a KML file. Since you haven't mentioned how large your file(s) are or how long each stage of processing takes, it is difficult to tell where in the chain of actions your file or files are being slowed down.
It could be any of the following:
Network connection/download of the KML. Speed is based on your network.
KML processing as done by the HERE Maps JavaScript Library. Speed is based on the browser used.
Actual rendering of Points on the map as done by the HERE Maps JavaScript Library. Speed is based on the browser used.
Programmatically there is nothing you can do about point 1. If it takes time to load then so be it. This is basically a user interface issue (managing expectations); your technique of chunking up the data and loading smaller files is probably a good one. Combine it with a wait icon to show the user that something is happening.
Regarding point 2 - you should consider whether KML is the correct (i.e. flexible enough) format for processing your data. Other file formats could be shorter or could hold your data more concisely. Maybe you need to use some custom processing prior to display. This example uses AJAX and the KML Manager parse() method prior to displaying the file. This allows customization of the KML prior to rendering.
Regarding point 3 - there is something you can do about this - adding and rendering 6000 Markers directly is bound to take time. This can be alleviated by marker clustering - i.e. only rendering a fraction of the markers at any one time.
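As a library-agnostic illustration of the clustering idea (this is not the HERE Maps clustering API; clusterPoints is a hypothetical helper), points can be bucketed into a coarse grid and one representative rendered per occupied cell instead of all 6,000 markers:

// Bucket points into a grid and return one cluster per occupied cell.
function clusterPoints(points, cellSizeDeg) {
  var cells = {};
  points.forEach(function (p) {
    var key = Math.floor(p.lat / cellSizeDeg) + '_' + Math.floor(p.lng / cellSizeDeg);
    (cells[key] = cells[key] || []).push(p);
  });
  return Object.keys(cells).map(function (key) {
    var members = cells[key];
    // Use the first member as the cluster anchor and keep the count for a label.
    return { lat: members[0].lat, lng: members[0].lng, count: members.length };
  });
}

// Usage idea: at low zoom levels, render clusterPoints(poiList, 0.01) instead of poiList.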
Consider the data visualisation KML Earthquake example from developer.here.com - the example blindly renders a given KML file with approximately 300 points. At a typical display size the points overlap and can't be easily distinguished anyway.
Now if you want to modify the rendered result it would be better to preprocess or use another format such as GeoJSON, and customize the response. An example combining GeoJSON parsing and marker clustering can be found in the HERE Maps community examples. This renders a fraction of the data and hence displays the data file more quickly.
Obviously if you have 6000 points rather than 300 the improvement will be even more noticeable.

Google Earth Plugin not exact coordinates

I've got quite strange Google Earth plugin behaviour. I get the camera position from the plugin to create some KML with coordinates, then store it in a database. When I reload the page, it reads the KML and inserts it inside another string, so I end up with the whole KML document as a string inside my JavaScript code. Then I load it into the plugin. Usually everything works; however, after loading I see two things:
The coordinates returned by the API are not the same as the ones in the KML I'm loading.
The camera position is sometimes moved a little bit, which causes errors like this: I've got a camera inside a building, and after a couple of page refreshes, the camera is suddenly outside the building.
Do you have any hints how this could be fixed?
Example:
I've created a document, and inserted this camera tag inside:
<Camera>
<gx:ViewerOptions><gx:option name='streetview'></gx:option></gx:ViewerOptions>
<longitude>2.1201209999999993</longitude>
<latitude>48.80452499999986</latitude>
<altitude>2.4999999991174264</altitude>
<heading>22.795249807940547</heading>
<tilt>82.25987544961218</tilt>
<altitudeMode>relativeToGround</altitudeMode>
</Camera>
Then I loaded it into the plugin and asked it to fly there. When it stopped flying, I got the coordinates using copyAsCamera(), and the latitude had changed to 48.8044078508718.
The difference is not huge, just 0.000117149, but as a result it shows a totally different place (a different room in the palace).
I'm trying to get exactly the same place, as written in the coordinates.
I have rewritten the answer to cover the various points you have made and the example you have provided.
street view
The KML data is setting <gx:ViewerOptions> to enter street view mode based on the camera. The key words being based on - a street view is an approximation. Things like the camera tilt and heading are no longer applicable as they are replaced by a StreetView POV object. Further to that, you can't guarantee that a camera at any given latitude and longitude will actually enter street view at the same latitude and longitude.
relativeToGround and terrain data
Using altitude mode relativeToGround can cause the issue you are seeing. This is because the terrain data hasn't always finished streaming when the relatively positioned element (in your case a camera) is added.
To be clear you should use <altitudeMode>absolute</altitudeMode> and ge.ALTITUDE_ABSOLUTE.
The example you provided uses both <altitudeMode>relativeToGround</altitudeMode> and ge.ALTITUDE_RELATIVE_TO_GROUND.
You could also try disabling the terrain data by turning off the terrain layer, i.e.
ge.getLayerRoot().enableLayerById(ge.LAYER_TERRAIN, false);
multiple viewchangeend events
The viewchangeend event may fire in the middle of a view change, especially if the plugin pauses for a brief period during the change. Your markup is triggering street view mode, which causes this to happen.
You can resolve this by using setTimeout to throttle the viewchangeend event, like so:
// Debounce viewchangeend: only call the real handler ("eventHandler", defined elsewhere)
// once the view has been stable for 100 ms.
var timer = null;
google.earth.addEventListener(ge.getView(), 'viewchangeend', function () {
  if (timer) {
    clearTimeout(timer);
  }
  timer = setTimeout(eventHandler, 100);
});
see: https://developers.google.com/earth/documentation/events#event_listeners
Tilt discrepancy
The plugin automatically "swoops" at ground level so that it moves from looking straight down (0 degrees tilt) to straight along the horizon (90 degrees tilt). This is what is causing the discrepancy in the tilt value in the view. You are viewing objects at ground level and so the view is being automatically set - this can't be disabled.
Storing and outputting KML data
Take a look through this document; it gives some really good information on storing coordinate data and covers points like the one I mentioned - A Database Driven Earth App.

How to detect on client side if FusionMap tiles cached and ready to be displayed?

My webpage displays runtime generated FusionTables data on a Google Map.
The problem is: when you create a Fusion Table with a geometry-type column and display it for the first time, Google has to load all related map tiles into its server-side cache. This takes a while - sometimes 2-3 seconds, sometimes 15-20 seconds.
During caching, the map should display a grey overlay saying "Data may still be loading...". I'd like to avoid this screen, because it's very buggy. Sometimes the overlay is displayed, sometimes not.
I'm looking for a way to detect whether all map tiles are cached so that I can display the map to the user.
I already tried to refresh the map tiles periodically, but this will give me no feedback when to stop refreshing:
// Re-request every Google-hosted tile image by appending a cache-busting timestamp.
$("img[src*='googleapis']").each(function () {
  $(this).attr("src", $(this).attr("src") + "&" + (new Date()).getTime());
});
For this reason I'm looking for other solutions.
Try simplifying your geographic data. The message appears when the server-side process misses a deadline for serving the map tile in a reasonable time. By reducing the complexity of the polygons, you're much more likely to not see the "Data may still be loading...." message tiles.
You can reduce the complexity in two ways: reduce the number of vertices (points) that define the polygons, and reduce the precision of the lat/long locations.
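A small, hypothetical illustration of both ideas in plain JavaScript (simplifyRing is not part of any Fusion Tables API): keep only every nth vertex and round the surviving coordinates to 5 decimal places, which is roughly metre precision.

// Drop vertices and reduce coordinate precision before uploading a polygon.
function simplifyRing(points, step) {
  var out = [];
  for (var i = 0; i < points.length; i += step) {
    out.push({
      lat: Math.round(points[i].lat * 1e5) / 1e5,
      lng: Math.round(points[i].lng * 1e5) / 1e5
    });
  }
  // Always keep the last original vertex so the ring stays closed.
  out.push(points[points.length - 1]);
  return out;
}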
Please also note, as an FYI, that as the exact same map gets called again and again by different viewers, the results are cached server-side and the message becomes much less likely to appear; when it does, it is usually down to the public caching.
-Rebecca
Answering my own question: there's simply no way to do it as of now.
As a side note: I'd never advise anyone to consider using Fusion Tables to display dynamically generated geographic data. It's not designed for that. Only use FT if you have a somewhat static dataset that changes rarely.

GTFS/NextBus/Google Maps - transit distance traveled

I am trying to get the distance traveled on a transit route -- particularly San Francisco MUNI, but the standards NextBus, GTFS, and Google Maps API appear to be universal. I'm comfortable using any of these APIs, I'm just not sure how to go about this problem.
The easy way - ask Google Maps (this using webservices, but there is also the javascript API):
http://maps.googleapis.com/maps/api/directions/json?origin=37.7954199,-122.397&destination=37.7873299,-122.44691&sensor=false&mode=transit&departure_time=1348109609&alternatives=true
This JSON includes the distance traveled, but there are two issues:
Google does not allow you to use this data unless you're displaying a map, which I don't want to do
I would need to ensure that the distance returned is for the correct route/line, since it can/will give multiple routing options. This is probably doable but would require more logic.
EDIT: using alternatives=true (or provideRouteAlternatives: true using the javascript API) only returns a maximum of 3 routes, which here in SF often doesn't include the route I'm looking for (other transit agencies, multiple lines on the same route, etc). So this isn't such a great option.
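For issue 2, a hedged sketch of how the correct route could be picked out of the alternatives and its distance read; field names follow the documented Directions JSON structure, and distanceForLine is just an illustrative helper.

// Return the leg distance (metres) of the first alternative that uses the given transit line.
function distanceForLine(directionsJson, lineShortName) {
  var routes = directionsJson.routes || [];
  for (var i = 0; i < routes.length; i++) {
    var leg = routes[i].legs[0];
    var usesLine = leg.steps.some(function (step) {
      return step.travel_mode === 'TRANSIT' &&
             step.transit_details &&
             step.transit_details.line.short_name === lineShortName;
    });
    if (usesLine) {
      return leg.distance.value;
    }
  }
  return null;   // none of the returned alternatives used that line
}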
NextBus:
example route config:
http://webservices.nextbus.com/service/publicXMLFeed?command=routeConfig&a=sf-muni&r=1
The coordinates for each stop are given, but connecting the dots on those is not the same as the route taken -- it will cut corners, etc, and I need this to be accurate. The actual route taken is given under <path>/<point>, but I don't see any obvious correlation between stop and path coordinates. Plus, NextBus says in their documentation (p.10 near the bottom) that you should NOT connect points between <path> segments, they're only meant for drawing on a map and can overlap.
GTFS:
The GTFS data also separates stop and "shape" coordinates (like NextBus paths). Unfortunately, the coordinates are slightly different for the same stops between NextBus and GTFS (rounding), though the stop IDs/tags are the same. Also, the data files are in the megabytes, and I need to use this for a mobile app. I suppose I could put all the data in a database and query that, but that still leaves figuring out how to correlate the stops with the shapes. The "shape_dist_traveled" column in the shapes.txt file is especially promising. MUNI chooses to leave the optional "shape_dist_traveled" field out of stop_times.txt, though.
Any advice would be appreciated, I understand this seems like an epic task to get a simple value. Maybe I'll just throw a map in to legitimately use the distance :)
Instead of using Google Maps, I would look into the unencumbered licensing of OpenStreetMap. There are multiple routing engines that can use OSM data. Personally, I would use routing in PostGIS or SQLite, but depending on your skillset you might choose another.
You've clearly done your research (+1), and as you said, the easy way is to ask Google. If it is worth it for you, you might want to look into purchasing a business licence for the Google Maps API and negotiate with them about the requirement of displaying a map. That's the only legal way I can think of with the Google API. Alternatively, you can try building your own routing engine with data from the TIGER data set, which is freely available from the US Census Bureau, but again, as you said, it may seem like an epic task. :-)
