I have a static .csv file that lists 4000+ addresses, each with a unique id and its corresponding latitude and longitude. I want to query a database that stores events for each venue id and then display on a Google map only those addresses that have events matching the query.
This would be one thing if it were not for Google's query limit (once this goes live, there is the potential for hundreds of thousands to millions of queries daily). The limit for KML files is sufficient, however (I believe only file size is counted; am I wrong?), and I would simply convert the .csv file to a .kml file were it not for the fact that I don't want all 4000+ addresses loaded on the map every time, only those that correspond to the search query.
There has to be a way of selectively loading certain placemarks from a single .kml file, right? I would prefer not to use a server-side approach (ASP.NET), but will if absolutely necessary.
~~~~~
I think I'll use the server-side approach. I would still like to use KML, as I was running into the query limit with pure JavaScript (although I may have been doing something wrong then, as I was still learning the Google Maps API). The KML consists of venues and their locations for events one might buy tickets to. A search term might be 'wicked New York' or 'concerts FL'. The database returns venue ids, which correlate to placemark ids in the KML file. What I would like to do is take the array of venue ids returned by the search query, scan through the KML file, and return only those placemarks whose ids match. I would then load those placemarks into a div tag on the same page and have that be what Google uses to put the pointers on the map. Is there a way of using # named anchors instead of complete URLs to load the KML into Google Maps? (var kmlVar = new google.maps.KmlLayer('#kmlDivTagOnSamePage'); doesn't work.) The server-side ASP part would then be easy to write.
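The scanning step I have in mind would look roughly like this (a sketch only, and it assumes the venue id is stored as the id attribute of each Placemark element, which is an assumption about the file's structure):

```javascript
// Sketch: given a KML document as a string and an array of venue ids,
// keep only the <Placemark> elements whose id attribute matches.
function filterPlacemarks(kml, venueIds) {
  const wanted = new Set(venueIds.map(String));
  const out = [];
  // Lazily match each <Placemark id="..."> ... </Placemark> element.
  const re = /<Placemark\b[^>]*\bid="([^"]+)"[\s\S]*?<\/Placemark>/g;
  let m;
  while ((m = re.exec(kml)) !== null) {
    if (wanted.has(m[1])) out.push(m[0]);
  }
  return out.join("\n");
}
```

A real implementation server-side would use a proper XML parser rather than a regex, but the filtering logic is the same.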
If you are just using 2D maps, there is probably no need for KML. And since you don't need to display all 4k locations at once, you probably don't need Google's server-side rendering (you can just render the markers in JavaScript). So although Fusion Tables could work, I'd instead recommend one of these alternatives:
- Load all the data up front as a JSON dictionary with the ids as keys; when a search is run, find the matches and display just those
- Load nothing on the client initially: query your server and have it return all the data you need to display for each search
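The first option can be sketched like this (the ids, coordinates, and field names are made-up placeholders standing in for your CSV columns):

```javascript
// Sketch of option one: all venue data loaded once, keyed by venue id.
// The data below is a made-up placeholder for your 4000+ rows.
const venues = {
  v1: { lat: 40.71, lng: -74.0, name: "Venue A" },
  v2: { lat: 28.54, lng: -81.38, name: "Venue B" },
};

// `idsFromSearch` is whatever list of venue ids your events query returns.
function matchingVenues(idsFromSearch) {
  return idsFromSearch
    .filter((id) => id in venues)
    .map((id) => ({ id, ...venues[id] }));
}

// Each match then becomes a marker, e.g.:
//   new google.maps.Marker({ position: { lat: v.lat, lng: v.lng }, map: map });
```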
If these suggestions don't seem to make sense, perhaps try providing the exact type of query someone might run, what type of response you expect from the database, whether you control that database, etc.
I'm working with a classmate to build a database of politically related memes, using Meteor, where users will have the ability to tag images with hashtags. The purpose of this, beyond data collection, is to provide a powerful search engine where one can find memes by keywords that match the hashtags (for example, with the keywords "ukraine" and/or "poutine", you'll find memes related to those topics).
We have to build everything from scratch, and I'm wondering if someone here has an idea where to start. In other words:
What is the easiest way to host images with Meteor? Is it through MongoDB?
Is it possible to change the metadata of the images on the client side? Do we need to grant this ability using JavaScript only (or is there also JSON involved)?
If we can manage the first two parts, is there a way to link the metadata (the hashtags, in this case) with the search engine in order to retrieve the images?
Thank you for your input!
It's not the easiest option, but I would store the images in Google Cloud Storage or Amazon S3.
I would store the image metadata in a MongoDB database. You can update the database from the client side by calling Meteor methods.
When users search for images by entering keywords, you can query the database and return the related images.
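The matching step could look like this. The document shape (an `Images` collection whose metadata records carry a `hashtags` array) is an assumption; the logic is shown as a plain function so the shape is clear, with the equivalent Mongo selector noted at the end:

```javascript
// Sketch: find image metadata documents whose hashtags overlap the search
// keywords ("or" semantics: at least one keyword must match).
// Document shape is assumed: { url: "...", hashtags: ["ukraine", ...] }.
function searchImages(docs, keywords) {
  const wanted = new Set(keywords.map((k) => k.toLowerCase()));
  return docs.filter((d) =>
    d.hashtags.some((h) => wanted.has(h.toLowerCase()))
  );
}

// In Meteor/Mongo the same query would be roughly:
//   Images.find({ hashtags: { $in: keywords } })
```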
The title is probably poorly worded, but I'm trying my hand at creating a REST API with Symfony. I've studied a few public APIs to get a feel for it, and a common principle seems to be dealing with a single resource path at a time. However, the data I'm working with has a lot of levels (7-8), and each level is only guaranteed to be unique under its parent (the whole path makes a composite key).
In this structure, I'd like to get all child resources from all or several parents. I know about filtering data using query parameters at the end of a URI, but it seems like specifying the parent id(s) as an array is better.
As an example, let's say I have companies in my database, which own routers, which delegate traffic for some number of devices. The REST URI to get all devices for a router might look like this:
/devices/company/:c_id/routers/:r_id/getdevices
but then the user has to crawl through all :r_id's to get all the devices for a company. Some suggestions I've seen involve moving the :r_id out of the path and into the query string:
/devices/company/:c_id/getdevices?router_id[]=1&router_id[]=2
I get it, but I wouldn't want to use it at that point.
Instead, what seems functionally better, yet philosophically questionable, is doing this:
/devices/company/:c_id/routers/:[r_ids]/getdevices
Where [r_ids] is a stringified array of ids that can be decoded into an array of integers/strings server-side. This also frees up the query string to focus on filtering devices by attributes (age, price, total traffic, status).
However, I am new to all of this and having trouble finding what is "standard". Is this a reasonable solution?
I'll add that I've tested the array string out in Symfony and it works great. But I can't tell if it could become a vehicle for malicious queries, since I intend to use Doctrine's DBAL; I'll take tips on that too (although it seems like a problem regardless for string ids).
However, I am new to all of this and having trouble finding what is "standard". Is this a reasonable solution?
TL;DR: yes, it's fine.
You would probably see an identifier like that described using a Level 4 URI Template, with your list of identifiers encoded via path segment expansion.
Your example template might look something like:
/devices/company{/c_id}/routers{/r_ids}/devices
And you would need to communicate to the template consumer that c_id is a company id, and r_ids is a list of router identifiers, or whatever.
You've seen simplified versions of this on the web: URI templates are generalizations of web forms that read information from input controls and encode the inputs into the query string.
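Both ends of that template can be sketched as follows. This is a toy illustration, not an RFC 6570 implementation (a real client would use a URI Template library, and your Symfony side would be PHP rather than JavaScript); the validation half also speaks to your injection worry, since only integers survive parsing and they should still be bound as query parameters rather than concatenated:

```javascript
// Client: expand {/r_ids} with a list of ids. In the non-exploded list
// form, the values share one path segment, joined by commas.
function expandDevicesTemplate(cId, rIds) {
  return (
    "/devices/company/" + encodeURIComponent(cId) +
    "/routers/" + rIds.map(String).map(encodeURIComponent).join(",") +
    "/devices"
  );
}

// Server: decode the segment back into validated integer ids, rejecting
// anything that is not purely numeric before it gets near the database.
function parseIdSegment(segment) {
  return segment.split(",").map((p) => {
    if (!/^\d+$/.test(p)) throw new Error("invalid id: " + p);
    return parseInt(p, 10);
  });
}
```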
I'm planning my migration away from Fusion Tables. My current implementation (on GAE) has a Fusion Table with hundreds of locations and displays them on a Fusion Tables layer via the Google Maps API. Filtering of locations is done by modifying the Fusion Tables query in JavaScript on the client.
My plan is to migrate to the Google App Engine datastore combined with a Google Maps data layer. But I'm completely puzzled about how to implement retrieving and displaying the data, also while the user is browsing the map (zooming, panning) or applying filters, as this was all taken care of by the Fusion Tables layer.
Should I query only for data that is visible in the current map view, and query again when the user moves to a different view? (When panning a map, this will mean lots of queries.) Or should I query for all data, even if it is out of view? (That looks like less of a hassle to implement, but is not scalable as datasets grow.)
What about filtering? When the user applies a filter, should I query for data again, or is it better to implement the filter on the client side and hide items on the map via map styles?
Would it make sense to use the GeoJSON format to transfer data from the server to the client, so it can be used to populate the data layer without further processing?
What will happen if a user zooms out all the way? Must I then transfer the complete dataset to the client and from there on to the Google Maps API for rendering? That doesn't seem scalable either.
With Fusion Tables, all of that was taken care of, but now there are so many choices to be made! There should be some kind of common approach to this kind of use case, shouldn't there?
Here's a screenshot of my app, to show the amount of data that's involved (can grow!)
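On the GeoJSON question, for reference: the Maps data layer accepts a GeoJSON FeatureCollection directly via map.data.addGeoJson(...), so the server-side conversion is small. A sketch (the row field names are assumptions about your datastore entities):

```javascript
// Sketch: convert datastore rows into a GeoJSON FeatureCollection that
// map.data.addGeoJson() can consume without further processing.
// Row shape { id, name, lat, lng } is an assumed placeholder.
function toFeatureCollection(rows) {
  return {
    type: "FeatureCollection",
    features: rows.map((r) => ({
      type: "Feature",
      // Note: GeoJSON uses [longitude, latitude] order.
      geometry: { type: "Point", coordinates: [r.lng, r.lat] },
      properties: { id: r.id, name: r.name },
    })),
  };
}
```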
I have something similar going on. For me, a single datastore query and fetch caps out at a couple hundred records before the HTTP request times out. So that won't handle your 'zoom out' scenario very well.
Another thing to note is that the datastore cannot apply an inequality filter on both latitude and longitude:
https://cloud.google.com/appengine/docs/standard/python/datastore/query-restrictions#inequality_filters_are_limited_to_at_most_one_property
Currently I publish my entire list of ~10k points as a JSON file to Google Cloud Storage, and my endpoint then serves that cached file instead of actually doing the fetch.
https://cloud.google.com/appengine/docs/standard/python/tools/webapp/blobstorehandlers#Using_with_GCS
On the client side I just feed the whole thing into the Google Maps SDK. The map performs fine if you enable things like marker clustering.
https://developers.google.com/maps/documentation/javascript/reference/marker#MarkerOptions.optimized
https://developers.google.com/maps/documentation/javascript/marker-clustering
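Clustering itself is handled by the MarkerClusterer library linked above; conceptually it does something like the grid grouping below. This is only a toy sketch of the idea (the real library works in screen pixels and is zoom-aware), included to show why feeding the whole dataset in stays fast:

```javascript
// Toy sketch of grid-based clustering: bucket points into cells of a given
// size in degrees and emit one cluster per non-empty cell, placed at the
// centroid of its points. Not the MarkerClusterer algorithm, just the idea.
function gridClusters(points, cellDeg) {
  const cells = new Map();
  for (const p of points) {
    const key =
      Math.floor(p.lat / cellDeg) + ":" + Math.floor(p.lng / cellDeg);
    if (!cells.has(key)) cells.set(key, []);
    cells.get(key).push(p);
  }
  return [...cells.values()].map((pts) => ({
    count: pts.length,
    lat: pts.reduce((s, p) => s + p.lat, 0) / pts.length,
    lng: pts.reduce((s, p) => s + p.lng, 0) / pts.length,
  }));
}
```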
I'm writing a simple WordPress plugin for work and am wondering if using the Transients API is practical in this case, or if I should seek out another way.
The plugin's purpose is simple: I'm making a call to the USZip web service (http://www.webservicex.net/uszip.asmx?op=GetInfoByZIP) to retrieve data. Our sales team uses a Lead Intake sheet that the plugin will run on.
I wanted to reduce the number of API calls, so I thought of setting a transient for each zip code as the key and storing the incoming data (city and zip). If the data for a given zip code already exists, there is no need to make an API call.
Here are my concerns:
1. After a quick search, I realized that transient data is stored in the wp_options table, and storing this data would balloon that table in no time. Would this cause a significant performance issue if the db becomes huge?
2. Is it horrible practice to create this many transient keys? It could easily become thousands in a few months' time.
If using Transient is not the best way, could you please help point me in the right direction? Thanks!
P.S. I opted for the Transients API over the Options API. I know zip codes don't change often, but they sometimes do, so I set an expiration time of 3 months.
A less-inflated solution would be:
- Store a single option called uszip with a serialized array inside it
- Grab the entire array each time and simply check whether the zip code exists
- If it doesn't, fetch the data and save the whole array back again
You should make sure you don't hit the upper bounds of a serialized array in this table (9,000 elements) considering 43,000 zip codes exist in the US. However, you will most likely have a very localized subset of zip codes.
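The lookup logic looks like this, sketched in JavaScript for brevity (the plugin itself would use get_option/update_option, or get_transient/set_transient if you want the 3-month expiry, in PHP):

```javascript
// Sketch of the single-blob cache. `store` stands in for the wp_options
// row (get_option/update_option in the real plugin) and `fetchZip` stands
// in for the USZip web-service call.
function lookupZip(zip, store, fetchZip) {
  const cache = store.get("uszip") || {};
  if (!(zip in cache)) {
    cache[zip] = fetchZip(zip); // only hit the API on a cache miss
    store.set("uszip", cache); // write the whole blob back
  }
  return cache[zip];
}
```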
I have a task to extend my web application to give users the ability to segment their own data (i.e., choose their own fields and add their own criteria using And/Or, etc.), so I'm creating something similar to a query-builder tool, but lighter. I'm not worrying about the front end for the moment; I am just trying to focus on how to do this on the back end.
My only thought so far is to store their "Segment" as an XML document (serialized in the DB) which contains all of their columns and criteria and how they map to the database. When the segment is called, a mapping class deserializes this XML document, maps the fields, builds a SQL query, and returns the query results. The problem I see with this is that if the database setup changes (likely), I have a serialized XML document that knows nothing about those changes.
Has anyone tackled a similar situation?
I had a similar problem and posted a question on here with what could be a potential solution to your own issue.
Dynamic linq query with multiple/unknown criteria
See how you get on with that.
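For a flavor of the general approach, here is the idea of storing the segment as structured criteria rather than raw SQL and building a parameterized query from it, sketched in JavaScript (the column map, field names, and operator list are all hypothetical). The column whitelist is the single place to update when the schema changes, and because values only ever become bound parameters, user input never reaches the SQL text:

```javascript
// Hypothetical mapping from user-facing field names to database columns.
const columnMap = { name: "c.name", city: "a.city" };

// Sketch: turn stored criteria into a parameterized WHERE clause.
// Unknown fields and operators are rejected outright.
function buildWhere(criteria, joiner = "AND") {
  const clauses = [];
  const params = [];
  for (const c of criteria) {
    const col = columnMap[c.field];
    if (!col) throw new Error("unknown field: " + c.field);
    if (!["=", "<>", "LIKE"].includes(c.op)) throw new Error("bad op: " + c.op);
    clauses.push(col + " " + c.op + " ?");
    params.push(c.value);
  }
  return { sql: clauses.join(" " + joiner + " "), params };
}
```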