I have a list of cities (and some other locations) around the world, formatted like this:
America/Antigua
America/Anguilla
Europe/Tirane
Asia/Yerevan
America/Curacao
Africa/Luanda
Antarctica/McMurdo
And I need to get their corresponding coordinates formatted like this:
Europe/Stockholm 59.21N 18.04E
Since I have a rather large list (around 1k entries), I would like to be able to automate the retrieval of these coordinates. Is there a free resource (preferably downloadable, not search-only) from which it's easy to extract this data?
The alternatives I can think of at the moment are Google Maps (which would require an API key, if I understand correctly) or Wikipedia (which doesn't have that data easily available and isn't optimized for that kind of search). Both of these are also online-only, which is sub-optimal for me.
Look at geonames.org
I am sure you can query that service for free, but I don't know whether the data can be downloaded.
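For what it's worth, GeoNames has a search web service, so something like the following Python sketch could automate the lookups. The searchJSON endpoint, its parameters, and the response field names are my assumptions from the GeoNames docs, and you need to register a free account to get a username:

```python
# Hypothetical sketch: look up coordinates for "Area/City" style entries via the
# GeoNames search web service. Endpoint, parameters, and response fields are
# assumptions; register a free GeoNames account and pass your own username.
import requests

def lookup(tz_name, username="demo"):                    # "demo" is heavily rate-limited
    city = tz_name.split("/")[-1].replace("_", " ")      # "Europe/Stockholm" -> "Stockholm"
    resp = requests.get(
        "http://api.geonames.org/searchJSON",
        params={"q": city, "maxRows": 1, "username": username},
        timeout=10,
    )
    hits = resp.json().get("geonames", [])
    if not hits:
        return None
    return float(hits[0]["lat"]), float(hits[0]["lng"])

for name in ["Europe/Stockholm", "America/Antigua", "Asia/Yerevan"]:
    print(name, lookup(name))
```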
Check out http://www.realestate3d.com/gps/world-latlong.htm
I've been reading about linked data and I think I understand the basics of publishing it, but I'm trying to find real-world, practical (and best-practice) uses of linked data. Many books and online tutorials talk a lot about RDF and SPARQL, but not about dealing with other people's data.
My question is, if I have a project with a bunch of data that I output as RDF, what is the best way to enhance (or correctly use) other people's data?
If I create an application for animals and I want to use data from the BBC wildlife page (http://www.bbc.co.uk/nature/life/Snow_Leopard), what should I do? Crawl the BBC wildlife page for RDF and save the contents to my own triplestore, query the BBC with SPARQL (I'm not sure that this is actually possible with the BBC), or take the URI for my animal (owl:sameAs) and curl the content from the BBC website?
This also raises the question: can you programmatically add linked data? I imagine you would have to crawl the BBC wildlife page unless they provide an index of all the content.
If I wanted to add extra information, such as a location for these animals (http://www.geonames.org/2950159/berlin.html), what is considered the best approach? owl:habitat (a fake predicate) Brazil? And then curl the RDF for Brazil from the geonames site?
I imagine that linking to the original author is the best way, because your data can then be kept up to date; judging from these slides from a BBC presentation (http://www.slideshare.net/metade/building-linked-data-applications), that is what the BBC does. But what if the author's website goes down or is too slow? And if you were to index the author's RDF, I imagine your owl:sameAs would point to a local copy of that RDF.
Here's one potential way of creating and consuming linked data.
If you are looking for an entity (i.e., a 'Resource' in Linked Data terminology) online, see if there is a Linked Data description of it. One easy place to find one is DBpedia. For the snow leopard, one URI you can use is http://dbpedia.org/page/Snow_leopard. As you can see from that page, there are several object and property descriptions. You can use them to create a rich information platform.
You can use SPARQL in two ways. First, you can directly query a SPARQL endpoint on the web where there might be some data; the BBC had one for music, though I'm not sure whether they do for other information, and DBpedia can be queried using Snorql. Second, you can retrieve the data you need from these endpoints and load it into your triple store using the INSERT and INSERT DATA features of SPARQL 1.1. To access the SPARQL endpoints from your triple store, you will need to use the SERVICE feature of SPARQL. The second approach protects you from being unable to run your queries when a publicly available endpoint is down for maintenance.
To programmatically add the data to your triplestore, you can use one of the existing libraries. In Python, RDFLib is useful for such applications.
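As a rough illustration of the second approach above (retrieve from a public endpoint, then keep a local copy), here is a sketch using RDFLib together with SPARQLWrapper; the CONSTRUCT query, the DBpedia endpoint URL, and the output file name are just assumptions for the example:

```python
# Sketch of "fetch from a public endpoint, then load into your own store".
# For CONSTRUCT queries SPARQLWrapper can hand back an rdflib Graph, which you
# can merge into a local graph and persist however you like.
from SPARQLWrapper import SPARQLWrapper
from rdflib import Graph

endpoint = SPARQLWrapper("http://dbpedia.org/sparql")
endpoint.setQuery("""
    CONSTRUCT { <http://dbpedia.org/resource/Snow_leopard> ?p ?o }
    WHERE     { <http://dbpedia.org/resource/Snow_leopard> ?p ?o }
""")
remote_data = endpoint.queryAndConvert()   # an rdflib.Graph of the fetched triples

local_store = Graph()
local_store += remote_data                 # merge into your own triple store
local_store.serialize("snow_leopard.ttl", format="turtle")
```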
To enrich the data with data sourced from elsewhere, there are again two approaches. The standard way is to use existing vocabularies: look for a habitat predicate in an existing ontology and just insert this statement:
dbpedia:Snow_leopard prefix:habitat geonames:Berlin .
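Asserted programmatically with RDFLib, that statement might look like the following sketch; the ex: namespace is a placeholder, and you would substitute whatever habitat property you actually find in an existing vocabulary:

```python
# Minimal RDFLib sketch of asserting the (hypothetical) habitat statement above.
from rdflib import Graph, Namespace, URIRef

DBPEDIA = Namespace("http://dbpedia.org/resource/")
EX = Namespace("http://example.org/vocab#")   # placeholder until a real vocabulary is chosen

g = Graph()
g.add((
    DBPEDIA.Snow_leopard,
    EX.habitat,                               # "fake" predicate, as in the prose above
    URIRef("http://www.geonames.org/2950159/berlin.html"),   # GeoNames page from the question
))
print(g.serialize(format="turtle"))
```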
If no appropriate ontologies are found to contain the property (which is unlikely in this case), one needs to create a new ontology.
If you want to keep your information current, then it makes sense to periodically re-run your queries. Using something such as DBpedia Live is useful in this regard.
I'm writing a simple WordPress plugin for work and am wondering whether using the Transients API is practical in this case, or if I should seek out another way.
The plugin's purpose is simple. I'm making a call to USZip Web Service (http://www.webservicex.net/uszip.asmx?op=GetInfoByZIP) to retrieve data. Our sales team is using a Lead Intake sheet that the plugin will run on.
I wanted to reduce the number of API calls, so I thought of creating a transient for each zip code, using the zip as the key and storing the incoming data (city and zip). If the corresponding data for a given zip code already exists, there's no need to make an API call.
Here are my concerns:
1. After a quick search, I realized that transient data is stored in the wp_options table, and storing this data would balloon that table in no time. Would this cause a significant performance issue if the DB became huge?
2. Is it horrible practice to create this many transient keys? It could easily become thousands in a few months' time.
If using transients is not the best way, could you please point me in the right direction? Thanks!
P.S. I opted for the Transients API over the Options API. I know zip codes don't change often, but they sometimes do, so I set an expiration time of 3 months.
A less-inflated solution would be:
1. Store a single option called uszip with a serialized array inside it.
2. Grab the entire array each time and simply check whether the zip code exists in it.
3. If it doesn't, grab the data from the API, add it to the array, and save the whole option again (sketched below).
You should make sure you don't hit the upper bound of a serialized array in this table (9,000 elements), considering that around 43,000 zip codes exist in the US. However, you will most likely be dealing with a very localized subset of zip codes.
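For illustration only, here is that look-up-then-fetch flow as a Python-style sketch; in a real plugin you would back it with get_option()/update_option() (or one transient) rather than a dict persisted to a JSON file, and all names here are made up:

```python
# Illustration of the "single cached array keyed by zip" flow described above.
import json

CACHE_FILE = "uszip_cache.json"            # stand-in for the single wp_options row

def load_cache():
    try:
        with open(CACHE_FILE) as fh:
            return json.load(fh)
    except FileNotFoundError:
        return {}

def lookup_zip(zip_code, fetch_from_api):
    cache = load_cache()
    if zip_code in cache:                  # cache hit: no web-service call
        return cache[zip_code]
    data = fetch_from_api(zip_code)        # cache miss: one call to the zip service
    cache[zip_code] = data
    with open(CACHE_FILE, "w") as fh:      # re-save the whole array
        json.dump(cache, fh)
    return data
```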
I'm trying to set up a portion of my app where the user's mobile location (which I have working) is used to find the nearest locations (let's say within 15 miles) against a JSON store of locations (let's say 40 or so, and it will increase).
I've been racking my brain over how to go about this. I believe the Distance Matrix (looking at the Google API, though I have no idea how to implement it from the docs) is something I need, but I can't figure out how to load the JSON list and use it against the location (still a n00b).
A tutorial or some details on how to go about it would be great. I just need a pointer in the right direction.
Any info is great, thanks ahead of time.
I'm using Sencha Touch 2.0.1, a JSON store, and the Google Maps JavaScript API v3.
Matt
You'd be better off doing that processing on the backend and sending only the close records back to the client. This will keep your client app from getting bogged down.
You can use PostgreSQL/PostGIS for this if you store the lat/long points as spatial data. You can also do this with the MySQL spatial extensions: if you build the haversine formula into a MySQL function, you can draw 15-mile radii around all of your points and use MySQL's built-in 'within' function. If you'd rather use a flavor of NoSQL, CouchDB has a nice GeoCouch extension which can perform the function you desire.
Yeah, I figured out a way. It's more than likely not the best, but it'll work for now.
I'm loading all the latitudes and longitudes with the locations into the store file.
Then, on initialize, it takes the current location, matches it against the stored locations, and runs the haversine formula for each one. Once it has all of the distances, it sorts them and it's done.
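For reference, here is a sketch of that calculation in Python; the field names, the 15-mile cutoff, and the earth radius in miles are assumptions, and the same formula ports directly to JavaScript for a Sencha Touch store:

```python
# Sketch of the approach: haversine distance from the current position to every
# stored location, then filter to the radius and sort closest-first.
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_MILES = 3959.0

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/long points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_MILES * asin(sqrt(a))

def nearest(current, locations, limit_miles=15.0):
    """Return stored locations within limit_miles of `current`, closest first."""
    with_distance = [
        (haversine_miles(current["lat"], current["lng"], loc["lat"], loc["lng"]), loc)
        for loc in locations
    ]
    in_range = [pair for pair in with_distance if pair[0] <= limit_miles]
    return sorted(in_range, key=lambda pair: pair[0])
```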
I am trying to build a map-based query interface for my website and I am having difficulty finding a starting point besides http://developer.google.com. I assume this is a rather simple task, but I feel as though I am on a wild goose chase. Anyway, the problem is that the existing site places people into a category based on their address (primarily the zip code); this is not working out because of odd shapes and user density, so I would like to solve the problem by creating custom zones.
I am not looking for a proprietary solution, because I would really like to accomplish this on my own; I just need some better places to start or better suggestions for searches.
I understand that I will need to create a map with my predetermined polygons.
I understand how to create a map with polygons via js.
I do not understand how the data will determine which zone it is within, and how it will be returned as a hash I can store, e.g. user=>####, zone=>####, section=>#####
http://blog.appdelegateinc.com./point-in-polygon-checking-with-google-maps.html has some JS you can add to test whether a point is within a polygon (sample: http://blog.appdelegateinc.com./static/samples/point_in_polygon.html), using this approach: http://en.wikipedia.org/wiki/Point_in_polygon#Ray_casting_algorithm
I think as you place the markers, you'll hold them in an array (of objects), then loop through, doing some sort of reduction of which polygons to test; for each one that remains, if inPoly, set marker.zone and marker.section to whatever suits your needs.
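If it helps, here is a language-neutral sketch (in Python) of the ray-casting test that the linked article implements in JS, plus the kind of loop described above; the marker and zone field names are made up:

```python
# Ray-casting point-in-polygon test (see the Wikipedia article linked above).
# Polygons are lists of (x, y) vertices; field names are illustrative only.
def point_in_polygon(point, polygon):
    """Return True if point (x, y) falls inside the polygon."""
    x, y = point
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        xi, yi = polygon[i]
        xj, yj = polygon[j]
        # Does a ray cast to the right from the point cross edge (j, i)?
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

def assign_zone(marker, zones):
    """zones: {zone_id: polygon}; tag the marker with the first containing zone."""
    for zone_id, polygon in zones.items():
        if point_in_polygon((marker["lat"], marker["lng"]), polygon):
            marker["zone"] = zone_id
            return marker
    marker["zone"] = None
    return marker
```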
I am currently looking into using Lucene.NET for powering the search functionality on a web application I am working on. However, the search functionality I am implementing not only needs to do full text searches, but also needs to rank the results by proximity to a specified address.
Can Lucene.NET handle this requirement? Or do I need to implement some way of grouping hits into different distance bands (e.g. less than 5 miles, less than 10 miles, etc.) first, and then use Lucene.NET to rank the items within those groups? Or is there a completely different way that I am overlooking?
You can implement a custom scorer to rank the results in order of distance, but you must filter the results first to be efficient. You can use the bounding-box method: filter the results to a square of 20 miles around your address, and after that apply the ranking.
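As a sketch of that prefilter idea (in Python rather than C#, and with approximate miles-per-degree figures and made-up field names), you could compute the bounding box and discard hits outside it before scoring:

```python
# Cheaply discard documents outside a ~20-mile square before running the
# (more expensive) distance ranking. 69 miles per degree is an approximation.
from math import cos, radians

def bounding_box(lat, lon, half_side_miles=20.0):
    """Return (min_lat, max_lat, min_lon, max_lon) for a square around the point."""
    lat_delta = half_side_miles / 69.0                        # ~69 miles per degree of latitude
    lon_delta = half_side_miles / (69.0 * cos(radians(lat)))  # longitude degrees shrink with latitude
    return lat - lat_delta, lat + lat_delta, lon - lon_delta, lon + lon_delta

def prefilter(hits, lat, lon):
    """Keep only hits whose coordinates fall inside the bounding box."""
    min_lat, max_lat, min_lon, max_lon = bounding_box(lat, lon)
    return [
        h for h in hits
        if min_lat <= h["lat"] <= max_lat and min_lon <= h["lng"] <= max_lon
    ]
```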
If I remember correctly, the Lucene in Action book has an example of a distance-relevance algorithm. It's for Java Lucene, but the API is the same and you can easily translate it to C# or VB.NET.
What you are looking for is called spatial search. I'm not sure if there are extensions to Lucene.Net that do this, but you could take a look at NHibernate Spatial. Other than that, these queries are often done within the database; at least PostgreSQL, MySQL, and SQL Server 2008 have spatial query capabilities.
After some additional research, I think I may have found my answer. I will use Lucene.NET to filter the search results down by other factors, then use the geocoded information from Google or Yahoo to sort the results by distance.