I have trawled the web looking for an answer to my question, but unfortunately I didn't find a solution. I want to read a JSON file to get the points to plot on a map. I also want to add three checkboxes to act as a filter.
I came across two examples, but they don't seem to read from JSON; instead they read from an XML file. I hope someone can provide an example.
You will need a JSON parser. See http://www.json.org/js.html for more.
Apart from the difference in how you parse the data (JSON vs. XML), the rest of the code should be the same.
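For illustration, here is a minimal sketch of the idea (shown in Python; in the browser you would do the same thing with JSON.parse and then add markers to the map). The points.json layout with lat, lng and category fields, and the idea that each checkbox maps to one category, are just assumptions for the example:

import json

# Assumed layout: [{"lat": 51.5, "lng": -0.12, "category": "school"}, ...]
with open("points.json") as f:
    points = json.load(f)

# The three checkboxes effectively become a set of categories the user ticked.
selected = {"school", "park"}

for p in points:
    if p["category"] in selected:
        # In a real page you would place a marker on the map here.
        print(p["lat"], p["lng"])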
Good luck!
Guys, I am working on getting data as tables from QuickBase using the Requests library (Python). I found somebody doing it using the URL of the report, but he added two parameters to the URL, like this:
&dlta=xs%xx&ridlist=xxxx.
Can anybody please tell me what those two parameters are? I searched for them on the internet but found nothing related to them.
I've been using Quickbase for over ten years and haven't seen documentation for either of these parameters. I have noticed that ridList seems to be used by Quickbase's grid edit view of reports (I suspect it's an ID for a server-side cached list of record IDs to display especially when using the type-ahead search of a report before choosing to grid edit) and dlta is used in the "Download report as CSV" button.
The example you're following may simply have copied and pasted a link generated by Quickbase as a hack to get a CSV instead of an XML response. I recommend following the Quickbase HTTP API Reference instead. If you don't want an XML response, Quickbase also has a JSON RESTful API, which may be easier to work with.
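If you go the JSON route, a minimal sketch with Requests might look roughly like this; the realm hostname, user token and table ID are placeholders, and you should double-check the request shape against Quickbase's current API reference:

import requests

headers = {
    "QB-Realm-Hostname": "yourrealm.quickbase.com",          # placeholder
    "Authorization": "QB-USER-TOKEN your_user_token_here",   # placeholder
}
body = {
    "from": "your_table_id",   # the dbid of the table you want records from
    "select": [3, 6, 7],       # field IDs to return
}

resp = requests.post("https://api.quickbase.com/v1/records/query",
                     headers=headers, json=body)
resp.raise_for_status()

for record in resp.json()["data"]:
    print(record)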
I have a ton of saved places that appear on my Google Maps, but there is no way to manage, filter or search them. Is there a way to access these locations via an API?
I scanned the Maps API but couldn't find any reference. Is there another Google API that makes this available?
There is a REST API that can retrieve the saved places:
http://www.google.com/bookmarks/?output=xml
Visit this link to get more information.
https://www.google.com/bookmarks/
There are also APIs like:
https://www.google.com/bookmarks/find?q=conf&output=xml&num=10000
https://www.google.com/bookmarks/lookup?
But it seems like they have been deprecated and most of the documentation is not available anymore. Use them at your own risk.
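If you still want to experiment with the find endpoint anyway, a rough Python sketch might look like the following. You have to be signed in, so the session cookie is a placeholder you would need to copy from an authenticated browser session, and the XML element names are guesses at the old output format; since the endpoint is deprecated, it may simply stop working:

import requests
import xml.etree.ElementTree as ET

cookies = {"SID": "your_google_session_cookie"}   # placeholder, copied from the browser

resp = requests.get(
    "https://www.google.com/bookmarks/find",
    params={"q": "conf", "output": "xml", "num": 10000},
    cookies=cookies,
)
resp.raise_for_status()

# Element names below are assumptions about the old XML schema; dump resp.text to check.
root = ET.fromstring(resp.content)
for bookmark in root.iter("bookmark"):
    print(bookmark.findtext("title"), bookmark.findtext("url"))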
Currently the list of saved places in My Maps is not available via an API. There is a feature request tracking this that you can use to follow along: https://code.google.com/p/gmaps-api-issues/issues/detail?id=2953.
2022: I created a gist for parsing saved places from a shared list via Python. It is really unstable because it's a quick-and-dirty solution, but maybe it will help someone: https://gist.github.com/ByteSizedMarius/8c9df821ebb69b07f2d82de01e68387d
Edit: The above answer did not yet take pagination into consideration. Please see my answer here.
I followed this simple tutorial and created a nested repeater.
The tutorial is simple enough that I could easily create something like it.
But I have a different XML structure in my organisation, which I can't change. My XML is a repeated structure of this:
<pupil>
<academicYear>2011/2010</academicYear>
<grade>Kindergarten 1</grade>
<class>class 1</class>
<name>emma</name>
<admissionDate>01/05/2010</admissionDate>
<language>English</language>
<CountryofBirth>United Kingdom</CountryofBirth>
<fullName>emma watson</fullName>
</pupil>
I would like to see academicYear, grade, class, name, admissionDate, etc. as titles.
And below each title, there should be the corresponding data for it.
E.g.
*Academic Year
-2011/2010
-2010/2009
*Grade
-Kindergarten 1
-Kindergarten 2
-Kindergarten 3
I'm not posting all my code again because it's the same as in the tutorial. Please don't tell me to go and ask the person who made that tutorial; I've found that people here are very nice and always helpful.
Thanks so much.
Having looked at the tutorial and your XML, the big difference between your XML and the example given in the tutorial is that yours isn't nested XML.
I'd also dispute your assertion that you cannot change the XML structure. Sure, you might not be able to change what you get from the service that is providing you with the XML, but there is no reason why you couldn't reorganise the XML you are receiving into a nested XML document that better matches what you want to display.
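As a sketch of what that reorganisation could look like (Python here purely for brevity; the same reshaping could be done in C# or with XSLT before binding to the repeater), this pivots the repeated <pupil> records into one group per field, matching the title-plus-values layout you describe. The pupils.xml filename and the <report>/<group>/<value> element names are just choices made for the example:

import xml.etree.ElementTree as ET

# Assumes the repeated <pupil> elements live under some root element, e.g. <pupils>.
tree = ET.parse("pupils.xml")
fields = ["academicYear", "grade", "class", "name",
          "admissionDate", "language", "CountryofBirth", "fullName"]

report = ET.Element("report")
for field in fields:
    group = ET.SubElement(report, "group", title=field)   # one "title" per field
    seen = set()
    for pupil in tree.iter("pupil"):
        value = pupil.findtext(field)
        if value and value not in seen:                    # list each value once
            seen.add(value)
            ET.SubElement(group, "value").text = value

ET.ElementTree(report).write("nested.xml")

The outer repeater can then bind to the group elements and the inner repeater to their value children, much as in the tutorial's nested example.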
It could be a project well beyond my skills right now, but I've got around one full month to spend on it, so I think I can do it. What I want to build is this: gather news about a specific subject from various sources. Easy, right? Just get the RSS feeds and display them on a page. Well, I want something more advanced: duplicates removed and customized presentation (that is, being able to define/change the format in which the news headlines are displayed).
I've played a bit with Yahoo Pipes and some other tools and I am facing two big problems:
Some sources don't provide RSS feeds. How do I create one?
What's the best method to find and remove duplicates? I thought about comparing the headlines and checking whether there is a match bigger than, say, 50%. Is that good practice, though?
Please add any other things (problems, suggestions, whatever) I might not have considered.
Duplication is a nasty issue. What I eventually ended up doing:
1. Strip out all HTML tags except for links (I started with regex but got burned, and eventually moved to custom parsing to remove tags)
2. Strip out all whitespace
3. Case-desensitize
4. Hash all that with MD5.
Here's why you leave the link in:
A comment might be as simple as "Yes, this sucks". "Yes, this sucks" could be a common comment. BUT if the text "this sucks" is linked to different things, then it is not a duplicate comment.
Additionally, you will find that HTML escaping is weird with RSS feeds. You would think that a stray < would be double-encoded as &amp;lt; (I think). But it is not; it is encoded as &lt;. And so too are HTML tags: a <p> comes through as &lt;p&gt;.
I eventually copied all the known HTML tags as parsed by Mozilla Firefox and manually recognized those tags.
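A minimal sketch of steps 1-4 (Python for brevity); the regex that keeps <a> tags is only a rough stand-in for the link-preserving tag stripping described above, since (as noted) regex-based stripping has sharp edges and custom parsing is more robust:

import hashlib
import re

def fingerprint(comment_html: str) -> str:
    # 1. Strip all HTML tags except links (<a ...> and </a>),
    #    so the same text linked to different targets hashes differently.
    text = re.sub(r"<(?!/?a\b)[^>]*>", "", comment_html)
    # 2. Strip out all whitespace.
    text = re.sub(r"\s+", "", text)
    # 3. Case-desensitize.
    text = text.lower()
    # 4. Hash the result with MD5.
    return hashlib.md5(text.encode("utf-8")).hexdigest()

a = fingerprint('Yes, <a href="http://a.example">this sucks</a>')
b = fingerprint('Yes, <a href="http://b.example">this sucks</a>')
assert a != b   # same words, different link targets -> not duplicates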
Creating an RSS feed from HTML is quite nasty and I can only point you to services such as Spinn3r, which are fantastic at de-duplication and content extraction. These services typically use probability-based algorithms that are above me. I know of one provider that got away with regexing pages (They had to know that a certain page was MySpace-based or Blogger-based) but they did not perform admirably.
You might want to try to use the YQL module to scrape a webpage that doesn't provide RSS. Here's a sample of a YQL statement to scrape HTML.
About duplicates, take a look at this pipe.
Customized presentation: if you want it truly customized, you'll have to manipulate the pipe results yourself, e.g. get them as JSON and manipulate them with JavaScript, or process them server-side.
I have no idea what I am doing, but I keep trying. I have been trying to find a way to add a dictionary search box to my school website for my 3rd grade class (7-8 year olds). Most of the dictionary sites are too complex and riddled with inappropriate advertisements. I found out about Google Dictionary the other day and have been trying to figure out how to create a custom search with it.
I asked for help here before and was able to get a script that passed a word to the dictionary and displayed the results in an iframe. That works OK, but it is a full page and I can't change the size of the page in the iframe.
I came across this
http://www.google.com/dictionary/json?callback=dict_api.callbacks.id100&q=school&sl=en&tl=en&restrict=pr%2Cde&client=te
Where "school" is the word that is looked up.
However, I can't figure out how to style the results. Any ideas?
I suggest you don't use this URL (API), as it is against Google policy. It violates the contract Google has with its providers; Google asked a developer who made a dictionary extension for Chrome to stop using the API.
The result is coming back in JSON. You'll probably want something that can parse JSON, and then you can output the result in whatever form you like, based on the data from the result.
You need to be a little conversant with JavaScript, since the results are sent back as a JavaScript object; i.e. the result is sent back as JSON text, which you need to parse to retrieve the contents. To parse the contents you can use the JavaScript eval() function.
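For completeness, the response is JSON wrapped in that dict_api.callbacks... callback (JSONP), so you have to strip the wrapper before parsing; in the browser, JSON.parse is a safer alternative to eval(). Here is a rough server-side sketch in Python, with the caveats above that the endpoint is undocumented and against Google's policy; the unwrapping assumes the JSON object is the only {...} in the response:

import json
import requests

url = ("http://www.google.com/dictionary/json"
       "?callback=dict_api.callbacks.id100&q=school&sl=en&tl=en"
       "&restrict=pr%2Cde&client=te")

raw = requests.get(url).text

# Keep only the {...} payload inside the callback wrapper.
payload = raw[raw.index("{"): raw.rindex("}") + 1]
# The old responses reportedly contained non-standard \x escapes, which a strict
# JSON parser may reject, so some cleanup could be needed here.
data = json.loads(payload)

print(json.dumps(data, indent=2)[:500])   # inspect the structure, then style it yourself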