I'm new to the Google Maps API and have just taken over an RoR application that plots up to 2,000 markers on a map using MarkerClusterer and each marker has an associated infowindow.
The current implementation creates an array of infowindow content strings in JavaScript and downloads the JavaScript to the browser. Uncompressed (without server content compression), the array and JavaScript can be as large as 9 MB.
The performance bottlenecks associated with this implementation that I have found are:
1. Time on server to create 2000 strings and put them in the JavaScript array. (~4-5 seconds)
2. Time on server to compress the multi-megabyte JavaScript before sending to the browser. (~2-3 seconds)
My initial thought is to create a template for the infowindow content string that contains all of the HTML formatting, so that the only data contained in the array of infowindow contents is the actual raw numbers to be displayed. This should greatly reduce the compute time of assembling the content-strings array on the server, and correspondingly reduce the memory consumed by the array in the browser. Also, only one infowindow needs to be open at a time.
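A sketch of what I have in mind, with a hypothetical template and field names:

var infoTemplate = '<div class="info"><h3>{name}</h3><p>Value: {value}</p></div>';
// The server would now only emit the raw values, e.g.:
var infoData = [{name: "Site A", value: 42} /* ... up to 2,000 entries ... */];
// One shared InfoWindow, since only one needs to be open at a time.
var sharedInfoWindow = new google.maps.InfoWindow();
function openInfo(marker, i) {
  var html = infoTemplate
    .replace('{name}', infoData[i].name)
    .replace('{value}', String(infoData[i].value));
  sharedInfoWindow.setContent(html);
  sharedInfoWindow.open(marker.getMap(), marker);
}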
Being unfamiliar with the Google Maps v3 API, I'm looking for any guidance as to whether or not this is the best strategy for optimization. And, any pointers to code samples that implement this type of strategy.
Thanks in advance,
-Scott
I think you should not load all 2,000 datasets into your array, but instead do one of two alternatives:
Only load the markers (and therefore the infowindows) that are in the current viewport.
Load the infowindow content via AJAX when the marker is clicked (a sketch follows).
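A minimal sketch of the second alternative, assuming a hypothetical server endpoint /infowindow?id=... that returns the HTML for a single infowindow:

var infoWindow = new google.maps.InfoWindow(); // one shared window
function attachLazyInfoWindow(marker, markerId) {
  google.maps.event.addListener(marker, 'click', function () {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/infowindow?id=' + encodeURIComponent(markerId)); // hypothetical endpoint
    xhr.onload = function () {
      infoWindow.setContent(xhr.responseText); // raw HTML from the server
      infoWindow.open(marker.getMap(), marker);
    };
    xhr.send();
  });
}

This way the browser only ever holds the content of the infowindow that is actually open.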
I was loading 6,000 points of interest from a single KML file, but it wasn't loading. What I did was split it into 4 KML files. It loads, but it's quite slow. My questions are:
Is there a limit on the number of points I can put in the KML file?
Is there a way to speed it up? This is the code I used to load the KML files:
kmlManager.parseKML("./SOURCE_KML/Part1.KML", onParsed);
kmlManager2.parseKML("./SOURCE_KML/Part2.KML", onParsed);
kmlManager3.parseKML("./SOURCE_KML/Part3.kml", onParsed);
kmlManager4.parseKML("./SOURCE_KML/Part4.kml", onParsed);
Obviously there must be some limit to the number of points read and displayed from a KML file. Since you haven't mentioned how large your files are or how long each stage of processing takes, it is difficult to tell where in the chain of actions your files are slowing down.
It could be any of the following:
Network connection/download of the KML. Speed is based on your network.
KML processing as done by the HERE Maps JavaScript Library. Speed is based on the browser used.
Actual rendering of Points on the map as done by the HERE Maps JavaScript Library. Speed is based on the browser used.
Programmatically there is nothing you can do about point 1: if it takes time to load, then so be it. This is basically a user-interface issue (managing expectations), and your technique of chunking up the data and loading smaller files is probably a good one. Combine it with a wait icon to show the user that something is happening.
Regarding point 2 - you should consider whether KML is the correct (i.e. flexible enough) format for processing your data. Other file formats could be shorter or could hold your data more concisely, and maybe you need some custom processing prior to display. This example uses AJAX and the KML Manager parse() method prior to displaying the file, which allows customization of the KML prior to rendering.
Regarding point 3 - there is something you can do about this: adding and rendering 6,000 markers directly is bound to take time. This can be alleviated by marker clustering - i.e. only rendering a fraction of the markers at any one time.
Consider the data visualisation KML Earthquake example from developer.here.com - the example blindly renders a given KML file with approximately 300 points. At the size the example is displayed, the points overlap and can't be easily distinguished anyway.
Now if you want to modify the rendered result it would be better to preprocess or use another format such as GeoJSON, and customize the response. An example combining GeoJSON parsing and marker clustering can be found in the HERE Maps community examples. This renders a fraction of the data and hence displays the data file more quickly.
Obviously if you have 6000 points rather than 300 the improvement will be even more noticeable.
My webpage displays runtime-generated FusionTables data on a Google Map.
The problem is: when you create a FusionTable with a geometry-type column and display it for the first time, Google has to load all related map tiles into its server-side cache. This takes a while - sometimes 2-3 sec, sometimes 15-20 sec.
During caching, the map is supposed to display a grey overlay saying "Data may still be loading...". I'd like to avoid this screen, because it's very buggy: sometimes the overlay is displayed, sometimes not.
I'm looking for a way to detect whether all map tiles have been cached, so that I can then display the map to the user.
I already tried refreshing the map tiles periodically, but this gives me no feedback about when to stop refreshing:
$("img[src*='googleapis']").each(function(){
$(this).attr("src",$(this).attr("src")+"&"+(new Date()).getTime());
});
For this reason I'm looking for other solutions.
Try simplifying your geographic data. The message appears when the server-side process misses its deadline for serving a map tile in a reasonable time. By reducing the complexity of the polygons, you are much less likely to see the "Data may still be loading..." message tiles.
You can reduce the complexity in two ways: reduce the number of vertices (points) that define the polygons, and reduce the precision of the lat/long locations.
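As an illustration of the second point, a sketch of rounding coordinates before uploading them to FusionTables; 5 decimal places (roughly 1 m of precision) is usually more than enough for display purposes:

function roundRing(ring, decimals) {
  var f = Math.pow(10, decimals);
  return ring.map(function (pt) {
    return {lat: Math.round(pt.lat * f) / f, lng: Math.round(pt.lng * f) / f};
  });
}
// Example: roundRing([{lat: 47.60620919, lng: -122.33207070}], 5)
//          => [{lat: 47.60621, lng: -122.33207}]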
Please also note, as an FYI, that as the exact same map gets requested again and again by different viewers, the results are cached server-side, so the message becomes much less likely to appear; when it does, it is then usually due to public caching.
-Rebecca
Answering my own question: there's simply no way to do it as of now.
As a side note: I'd never advise anyone to use FusionTables to display dynamically generated geographic data; it's not designed for that. Only use FT if you have a somewhat static dataset that changes rarely.
I have loaded 6 KML layers via URL to my website, to be toggled on/off by checkboxes. I have recently noticed, though, that it will only allow me to show 4 KMLs at a given time. When I select more than 4, the 5th and 6th do not show. It does not matter which ones I choose; it seems to limit me to showing only 4. Can someone point me to what may be causing this, or should I be coding this some other way? The KMLs by themselves do work and are under 800 KB each. It just seems very weird that I can only have 4 KMLs showing at a given time.
This is the site - www.gbnrtc.org/bikemap
If you check the KML Support page, which describes the level of KML support provided by Google Maps and Google Maps for mobile, it lists the following size and complexity restrictions:
Maximum fetched file size (raw KML, raw GeoRSS, or compressed KMZ): 3 MB
Maximum uncompressed KML file size: 10 MB
Maximum number of Network Links: 10
Maximum number of total document-wide features: 1,000
Given that you estimate each file at ~800 KB, four of your files would put you right around 3.2 MB. Without knowing more about your KML content, it seems plausible that these limits are what gate you after loading four.
This answer is quite old. I have since found a great library for loading large KML/KMZ files (even when Google throws an error). Please note that this library is itself quite old and has not received any updates.
Here are more details about it:
Using GeoXML, I was able to add all the required KMLs to the map successfully! https://github.com/geocodezip/geoxml3
Step 1: Download and include the geoxml3.js script:
<script src="geoxml3.js"></script> <!-- include after the Google Maps API script -->
Step 2: Instantiate and initialize the parser in JavaScript:
var myParser = new geoXML3.parser({map: map}); // render parsed KML onto 'map'
myParser.parse('/path/to/data.kml');           // fetch and parse the KML file
And this will load the KML file.
I use google.maps.KmlLayer('http://mywebsite.com/my.kml') to set objects from a KML file. It is working, but when I change the KML and refresh the website, I still see the same state as before, without my changes. When I change the file name to my2.kml, it works... Is Google caching my KML? What do I need to do to see changes with the same KML file name?
The Google servers do in fact cache KML data. Because the Google servers are processing your KML, not your browser, clearing your browser cache will not help; this is why changing the file name works. To prevent caching, add a cache-buster to the KML URL you create the layer with, such as a random number or the current timestamp. http://mywebsite.com/my.kml?rev=323626252346, where the value of rev changes every time you refresh the page, would work. You could also write JavaScript so that you can click a button that updates the URL on the KML layer object, removing the need to refresh the page; a sketch follows.
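A sketch of that last idea, reusing the asker's URL; the button id and the in-scope map variable are assumptions:

var kmlLayer = null;
function reloadKml(map) {
  if (kmlLayer) kmlLayer.setMap(null); // drop the stale layer
  var url = 'http://mywebsite.com/my.kml?rev=' + (new Date()).getTime();
  kmlLayer = new google.maps.KmlLayer({url: url, map: map});
}
// Hypothetical button with id="refresh-kml":
document.getElementById('refresh-kml').onclick = function () { reloadKml(map); };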
Yes, Google's servers cache the KML data. To avoid this caching, change the KML URL to
"http://www.kmlsource.com/foo.kml?dummy=" + (new Date()).getTime();
This will always generate a fresh URL, so the caching problem is avoided.
I would like to add a strong caveat to the advice to add cache-busting to the KML URLs: please note that if you use a timestamp like (new Date()).getTime(), this means that Google will try to fetch the KML file from your server almost every time a user tries to display your KML layer.
Two consequences:
the bandwidth and CPU used to serve the simultaneous file fetches can get very high if you have a peak of visitors on your website (this will of course also depend on the size of your KML);
this in turn can add big delays to show the KML on your map.
A better strategy would be to add a cache-busting parameter only when you know the KML has changed. For example, the browser could receive a hash of your KML file, or a version number, from your server, and add that as an extra parameter to the request. You'd need to recompute the hash, or generate a new version number, every time the KML file gets updated; a sketch follows.
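A sketch of the version-number variant, assuming a hypothetical endpoint /kml-version that returns the current version string of the KML file, and a map variable in scope:

var xhr = new XMLHttpRequest();
xhr.open('GET', '/kml-version'); // hypothetical endpoint
xhr.onload = function () {
  var url = 'http://mywebsite.com/my.kml?v=' + encodeURIComponent(xhr.responseText);
  new google.maps.KmlLayer({url: url, map: map}); // stays cached until the version changes
};
xhr.send();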
A lazier, far less efficient idea would be for the server to generate a new token at regular intervals (e.g. once every 10 minutes; make it a reasonable period for your use case) and have the browser use that token when it needs to display the KML file.
A much worse idea, but still a little better than using a browser-side millisecond timestamp for cache-busting, is to change the parameter at most once every 10 minutes on the browser side.
For example, instead of (new Date()).getTime(), you could use something like Math.floor((new Date()).getTime()/1000/(10*60))*(10*60), which rounds the current time down to the nearest 10-minute boundary, in seconds.
Note that this is still a pretty bad solution, because it is computed from the end user's computer clock: if end users have incorrect clocks on their machines, many different timestamps can still be generated within a short period of time.
One of the advantages of using KML layers in the Google Maps JavaScript API is that they get cached on Google's side; if you don't need that, you might be better served with layers built directly on the browser side, e.g. using GeoJSON. There is even documentation for that in the Google JS API: https://developers.google.com/maps/documentation/javascript/datalayer#load_geojson
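For completeness, loading GeoJSON browser-side with the v3 Data layer is a one-liner (the path is a placeholder):

map.data.loadGeoJson('/path/to/data.geojson'); // fetched and rendered entirely client-side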
I am looking for advice for an application I am developing that uses Google Map.
Summary:
A user has a list of criteria for searching for street segments that fulfill those criteria. The street segments will be colored with 3 colors showing those below average, average, and above average. The user then clicks on a street segment to see an information window with the properties of that specific segment, hiding the segments not selected until he/she closes the window, at which point the other polylines become visible again. This looks quite like the Monopoly City Streets game Hasbro made some months ago, the differences being that I do not use Flash, I can't use OpenStreetMap because it doesn't list street segments (and even if it did, the IDs wouldn't be the same), and I do not have to show Google SketchUp buildings on top.
Information:
I have a database of street segments with IDs, polyline points, and centroids.
The database has 6,000,000 street segment records in it. To narrow the generated data a bit, we focus on one city at a time. The largest city we must show has 250,000 street segments, which means 250,000 line-segment polylines to show.
Our longest polyline takes 9,600 characters of point data, which is stored in two 8,000-character varchar columns in SQL Server 2008.
We need to use API v3 because it is faster than API v2 and the application will be ported to the iPhone. For now it's an ASP.NET 3.5 application with SQL Server 2008.
Performance is a priority.
Problems:
Most of the demo projects that do this are made with API v2, so besides the tutorials on the Google API v3 reference page I have nothing against which to compare performance or technology choices to achieve my goal.
There is no available .NET wrapper for the API v3 yet.
Generating 250,000 line-segment polylines creates a heavy file which takes time to transfer and parse. (I have found a demo with one polyline of 390,000 points. I think the encoder would be far less efficient with many polylines of fewer points each, since there would be less rounding.)
Since street segments are shown based on criteria, the polylines must be created dynamically and a cache can't be used.
Some thoughts:
KML/KMZ:
Pros:
Since it is a standard, we could easily load Bing Maps, Yahoo! Maps, Google Maps, or Google Earth with the same KML file. The data generation would be the same.
Cons:
LineString in KML cannot be an encoded polyline the way the Google Maps API can handle, so it would probably be bigger and slower to display. Zipping a file of that size would take more processing time and require the client side to uncompress the data, and I am not quite sure how an iPhone would handle 250,000 records, or how a server would handle 40 users browsing at the same time.
JavaScript file:
Pros:
A JavaScript file can contain encoded polylines, which would significantly reduce the file to transfer (a decoding sketch follows this section).
Cons:
I would have to create my own stripped-down layer on top of API v3 to add overlays, create polylines, etc. It is more complex than just creating a KML file and pointing to the source.
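For what it's worth, decoding and drawing one encoded polyline in v3 is fairly little code once the geometry library is loaded (add libraries=geometry to the API URL); a sketch, where encodedString is a placeholder for data from the server:

var path = google.maps.geometry.encoding.decodePath(encodedString);
var segment = new google.maps.Polyline({
  path: path,
  strokeColor: '#ff0000', // one of the 3 criteria colors
  map: map
});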
GeoRSS:
This option isn't suited to my needs, I think, but I could be wrong.
MapServer:
I saw some posts suggesting using MapServer to generate overlays. I am not quite sure about the connection with our database or the performance it would give; plus, it requires a plugin for generating KML. It seems to me that it wouldn't let me do better than creating my own KML or JavaScript file, and maintenance would be simpler without it.
Monopoly City Streets:
The game is now over, but for those who know what I am talking about: Monopoly City Streets showed, at maximum zoom level, only the streets whose centroid was inside the bounds of the window. Moving the map sent a request to the server for the new streets to show. While I think this was ingenious, I have no idea how to implement something similar. The only thing I thought of was to compare whether each centroid's longitude and latitude fall inside the bounds of the visible map area (a sketch follows). While this could improve performance significantly at high zoom levels, it would give nothing when showing a whole city.
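A sketch of the bounds comparison I have in mind, using the v3 'idle' event and a hypothetical /segments endpoint:

google.maps.event.addListener(map, 'idle', function () {
  var b = map.getBounds();
  var sw = b.getSouthWest(), ne = b.getNorthEast();
  // Ask the server only for segments whose centroid falls inside the viewport.
  var url = '/segments?swLat=' + sw.lat() + '&swLng=' + sw.lng() +
            '&neLat=' + ne.lat() + '&neLng=' + ne.lng();
  // ...fetch url and draw the returned polylines...
});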
Clustering:
While clustering is awesome for markers, it seems we cannot cluster polylines. I would have liked something like MarkerClusterer for polylines, with the ability to cluster by my 3 polyline colors. This will probably stay a "would have been freaking awesome, but forget it".
Arrow:
In a future version I will have to show a direction for each polyline, with an arrow at its centroid. Loading an image or marker would only double my data, so creating a custom overlay will probably be my only option. I have found a demo of something similar to what I would like to achieve. Unfortunately, the demo is very slow, but I only wish to show 1 arrow per polyline, not multiple arrows like the demo. This functionality will depend on the data format, since I don't think KML supports custom overlays.
Criteria:
While the application is done with ASP.NET 3.5, the iPhone port won't use the web to show the application and will be limited in screen size for selecting the criteria. This is why I was leaning towards a service or page that generates the file based on criteria passed as parameters; the service would then generate the file I need to display the polylines on the map. I could also create an ASPX page that does this. The ASPX approach is better documented than the service approach; there should be a reason for that.
Questions:
Should I create a web service that returns the street-segments file, or an ASPX page that returns the file?
Should I create a JavaScript file with encoded polylines, or a KML file with longitude/latitude, given that the largest polyline takes 9,600 characters and I have to render at most 250,000 line-segment polylines? Or should I go with a MapServer that generates the overlay?
Will I be able to display a simple arrow on each polyline in the next version?
In the case of KML generation, is it faster to create the file manually with XDocument, XmlDocument, or XmlWriter, or to just serialize the street segments into the stream?
This is more of a brainstorming Stack Overflow question than an actual code problem. Any answer that helps narrow the possibilities is as good as someone having all the knowledge to point me to a better choice.
Large numbers of short GPolylines run massively slower than small numbers of long GPolylines.
The speed difference between Google Maps v2 and Google Maps v3 is not going to be significant, because most of the CPU time will be taken up by the actual graphics system of the browser. Google Maps uses the VML, SVG or Canvas graphics systems, depending on the browser. Of these, VML is by far the slowest, and that gets used whenever the browser is MSIE.
Before embarking on tackling 250,000 line segments, I suggest you take a look at this quick speed test of 200 random polylines. Try zooming and panning that map in MSIE.
Then consider the amount of data that needs to be sent from the server to the client to specify 250,000 line segments. The amount will vary depending on whether you choose KML, JSON, or GeoRSS, but even at 20 bytes per line segment that comes to 5 MB, which takes around 40 seconds to fetch on a 1-megabit broadband connection. Consider whether your users would be prepared to sit around that long.
The only solution that really makes sense is to do what Google does for their traffic overlay: draw the lines onto tiles on the server, and have those tiles displayed as a GTileLayerOverlay in the client.
What you need is a spatially aware database and a server-side graphics library like gd or ImageMagick. The client asks the server for a tile; if the zoom is above a certain level, the server scans the database for line segments whose bounding boxes overlap the bounding box of the requested tile, and uses the graphics library to draw them.
The zoom-level limit is there to cap the amount of work your database and server need to do. You don't want to end up drawing 250,000 line segments onto a single zoomed-out tile: that's an awful lot of hard work for the server, and it isn't going to mean very much to the user.
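As an illustration, a sketch of converting a requested tile's x/y/zoom into the lat/lng bounding box used for the database scan; this is standard Web Mercator tile math, shown in JavaScript for illustration, and not tied to any particular server stack:

function tileBounds(x, y, zoom) {
  var n = Math.pow(2, zoom); // tiles per side at this zoom level
  function lng(xt) { return xt / n * 360 - 180; }
  function lat(yt) {
    var t = Math.PI * (1 - 2 * yt / n);
    var sinh = (Math.exp(t) - Math.exp(-t)) / 2;
    return 180 / Math.PI * Math.atan(sinh);
  }
  // y counts down from the north, so y gives the north edge, y+1 the south.
  return {west: lng(x), east: lng(x + 1), north: lat(y), south: lat(y + 1)};
}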
Regarding click handling:
The easy thing to do is to listen for clicks on the map, rather than on the objects, and send the click details to a server. The server then uses the click location to search the spatially aware database and returns the details of the clicked object if there is one. The client code does this:
// v2 API: listen for clicks on the map itself, then ask the server what was hit.
GEvent.addListener(map, "click", function(overlay, point) {
  var url = "clickserver.php?lat=" + point.lat() + "&lng=" + point.lng();
  GDownloadUrl(url, function(html) {
    if (html.length) {
      map.openInfoWindow(point, html); // v2 signature: (latlng, content)
    }
  });
});
The harder thing to do is to handle the changing of the cursor when the pointer is over the polylines. There's a known technique for doing cursor changes for small markers, which works like this:
Whenever a tile is fetched, the .getTileUrl() also makes a call to a server that returns a list of hotspot boxes for that tile. As the mouse moves, the client constantly calculates which tile the mouse is over, and then scans the corresponding list of hotspot boxes.
Google themselves, in their GLayer() code, add the sophistication of performing a quadtree search to speed up the search for hotspots within a tile, but other people who have implemented this strategy in their own code reckon that's not necessary, and a linear scan of the hotspot list is fast enough.
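A sketch of such a linear scan, given the mouse position in tile-pixel coordinates and that tile's hotspot list; the box format is an assumption:

function findHotspot(px, py, boxes) {
  for (var i = 0; i < boxes.length; i++) {
    var b = boxes[i]; // assumed shape: {left, top, right, bottom, id}
    if (px >= b.left && px <= b.right && py >= b.top && py <= b.bottom) {
      return b.id; // hotspot hit: switch the cursor, remember the id
    }
  }
  return null; // nothing under the pointer: restore the default cursor
}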
I've no idea how to extend that to handle cursor-over-polyline detection.