StackOverflow RSS Feed only returns 30 Items - rss

I use this code to get RSS from stackoverflow.com
// Requires System.ServiceModel.Syndication and System.Xml.
SyndicationFeed feed = SyndicationFeed.Load(XmlReader.Create("http://stackoverflow.com/feeds"));
Console.WriteLine(feed.Items.Count());
foreach (SyndicationItem item in feed.Items)
{
    Console.WriteLine(item.Title.Text);
    Console.WriteLine(item.Title.Type);
    Debug.Print(item.Title.Text);
}
I get just 30 items, but when I check in Google Reader I see more than that.
Is there a limitation here?

30 items is what stackoverflow.com returns; it is not a limitation of the SyndicationFeed class.

Google Reader stores old articles from RSS feeds. So we are limited to what the RSS feed contains, but Google has an archive that'll let you keep scrolling.
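To confirm the cap comes from the server rather than the parser, you can count the item elements in the raw XML directly. A minimal sketch (using Python's standard library instead of SyndicationFeed, and an inline sample document instead of a live fetch — the titles are invented):

```python
import xml.etree.ElementTree as ET

# Inline sample standing in for the XML a feed URL would return.
rss = """<rss version="2.0"><channel>
  <title>Sample feed</title>
  <item><title>First question</title></item>
  <item><title>Second question</title></item>
</channel></rss>"""

root = ET.fromstring(rss)
items = root.findall("./channel/item")

# The parser reports exactly as many items as the server put in the XML:
# 2 for this sample, 30 for stackoverflow.com/feeds.
print(len(items))
for item in items:
    print(item.findtext("title"))
```

Whatever client you use, the item count is fixed at the moment the server renders the feed document.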

Related

How to get latest news posted on twitter by a website

I am using R and I need to retrieve the few most recent posts from a Twitter user (#ExpressNewsPK) using the twitteR API. I have created an account and have an access token, etc. I used the following commands to extract the tweets:
setup_twitter_oauth(consumerkey, consumersecret, accesstoken, accesssecret)
express_news_tweets <- searchTwitter("#ExpressNewsPK", n = 10, lang = "en")
However, the posts that are returned aren't the most recent ones from this user. Where have I made a mistake?
I think searchTwitter searches for the search string provided (here #ExpressNewsPK). So instead of giving tweets by #ExpressNewsPK, it gives tweets which are directed at #ExpressNewsPK.
To get tweets from #ExpressNewsPK, there is a function named userTimeline which gives the tweets from a particular user.
So after you are done with setup_twitter_oauth, you can try:
userTimeline("ExpressNewsPK")
Read more about it with ?userTimeline.
When you use searchTwitter(), you call the Twitter Search API. The Search API only returns a sampled history of tweets.
What you really need to do is call the Twitter Streaming API. Using it, you'll be able to download tweets in near real time. You can read more about the Streaming API here: https://dev.twitter.com/streaming/overview
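The difference between the two calls can be sketched with plain data, no Twitter access needed (the tweet records, authors, and texts below are all invented for illustration):

```python
# Toy tweet records standing in for API results (all values invented).
tweets = [
    {"author": "ExpressNewsPK", "text": "Breaking: budget announced"},
    {"author": "some_reader",   "text": "Interesting thread by @ExpressNewsPK"},
    {"author": "ExpressNewsPK", "text": "Weather update for Karachi"},
]

# searchTwitter-style: matches any tweet whose text mentions the name,
# including tweets *directed at* the account.
mentions = [t for t in tweets if "ExpressNewsPK" in t["text"]]

# userTimeline-style: only tweets *authored by* the account.
timeline = [t for t in tweets if t["author"] == "ExpressNewsPK"]

print(len(mentions))  # 1
print(len(timeline))  # 2
```

The asker's searchTwitter call was selecting by mention, which is why the account's own recent posts were missing.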

Feedburner RSS url variable for number of items in feed?

I'm parsing another blog's content by reading their Feedburner RSS feed...
Example: http://feeds.feedburner.com/WebsiteNameHere?fmt=xml
...but the feed only returns 10 items. Is there a URL variable to override the number of items returned in the feed?
I have already tried the following to return 25 items: n=25, no=25, num=25, q=25, max=25, max_results=25, and items=25.
...any suggestions?
There is no such parameter available to override or limit the number of items in a Feedburner-delivered RSS feed.
The count of items that appears is set by the site's original feed. If that raw version (before it is wrapped by Feedburner) lets you add parameters to adjust the number of items, that may work.
But no, you can't do it after it's been burned.
Also note that going through the Feedburner API is off the table, since Google has shut that down.
You can use nItems as the parameter. For example:
nItems=40
It works in our case, but I don't know the maximum limit.
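When experimenting with candidate parameters like these, it helps to build the query string programmatically so values are escaped correctly. A small sketch with Python's urllib (the nItems value simply echoes the answer above and is not guaranteed Feedburner behavior):

```python
from urllib.parse import urlencode

base = "http://feeds.feedburner.com/WebsiteNameHere"

# Build the query string from a dict so each value is URL-escaped.
params = {"fmt": "xml", "nItems": 40}
url = base + "?" + urlencode(params)
print(url)  # http://feeds.feedburner.com/WebsiteNameHere?fmt=xml&nItems=40
```

Swapping different parameter names into the dict makes it easy to test each candidate in turn.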

Javascript and Wordpress

Two related questions:
Is there any good documentation on the Fusion Tables Javascript API? I've found a list of methods, but with little info on return values, semantics, or usage idioms.
Is there any guidance (or suggested plugins or idioms) for integrating the FT Javascript API into a locally hosted Wordpress site?
There is some documentation here:
https://developers.google.com/fusiontables/docs/v1/getting_started#JS
but I didn't find it very useful.
But I found this example, in the context of the Google Maps API, very useful for the new API 1.0:
https://googledrive.com/host/0B5KVZ6J1ohN_Q3ZqVkFGSGZ2cEE/custom%20markers%20code/customicons_viaApi.html
You'll need to view and save the source. Also, if you search the FT tag for JSONP you will find many examples using the old pre-1.0 API; the concepts are the same, only the AJAX endpoint has changed, plus the need for an API key.
The basic idea is that any FT query will return a JSON object with both cols and rows members, very much like a CSV response.
As the example above shows:
function onDataFetched(data) {
    var rows = data.rows;
    var cols = data.cols;
    ...
}
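Once decoded, that cols/rows shape is easy to turn into per-row records by zipping each row against the column names. A language-neutral sketch in Python (the column names and values are invented):

```python
# A response shaped like the FT JSON: column names plus parallel row arrays.
data = {
    "cols": ["name", "lat", "lng"],
    "rows": [
        ["Depot A", 51.5, -0.12],
        ["Depot B", 48.8, 2.35],
    ],
}

# Zip each row against the column names to get dict-like records.
records = [dict(zip(data["cols"], row)) for row in data["rows"]]
print(records[0]["name"])  # Depot A
print(records[1]["lng"])   # 2.35
```

The JavaScript above does the equivalent by indexing data.rows with positions taken from data.cols.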

How to get all the posts from rss feed rather than the latest posts?

RSS feeds seem to only contain the latest n posts. I just wonder, is there any way to get all the posts, including the historical ones? Thanks
Jeff Zhang
This isn't generally possible, since an RSS reader only shows what is currently in the feed: you can only pull as much as is published at that time. Finding the dataset which backs the RSS feed items and downloading directly from there is something else entirely, though, and is sometimes possible.
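If you need history going forward, the usual workaround is to poll the feed yourself and archive items by their unique id, so entries that later fall out of the feed's window are kept. A minimal sketch (the item dicts stand in for parsed feed entries; a real archive would be a database, not an in-memory dict):

```python
# Archive keyed by item id; in practice this would be persistent storage.
archive = {}

def ingest(feed_items):
    """Merge one poll of the feed into the archive, deduplicating by id."""
    for item in feed_items:
        archive.setdefault(item["id"], item)

# First poll: the feed window holds items 1 and 2.
ingest([{"id": "1", "title": "oldest"}, {"id": "2", "title": "middle"}])
# Later poll: item 1 has dropped out of the feed, item 3 is new.
ingest([{"id": "2", "title": "middle"}, {"id": "3", "title": "newest"}])

print(sorted(archive))  # ['1', '2', '3'] -- item 1 survives in the archive
```

This only captures posts published after you start polling; older history still has to come from wherever the site archives it.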
Using C# you can do it, but it will return JSON, not RSS format.
You have to do something like:
// Requires the Facebook C# SDK.
string AccessToken = "Your Access Token e.g KIMJSLIFJEILMFSLJFSDIIIIFLDFJSLFJLSFSLFJSLJF";
var client = new FacebookClient(AccessToken);
dynamic allFeeds = client.Get("me/feed"); // or "me/feeds"
foreach (var uniquefeed in (JsonArray)allFeeds["data"])
{
    string feedid = (string)(((JsonObject)uniquefeed)["id"]);
    // Write more stuff here, whatever you want.
}

Bulk edit-tag for Google Reader

How can I bulk-edit tags on Google Reader items?
Right now I'm using /reader/api/0/edit-tag to edit tags, but it's very slow to update tags for all items (in a loop).
Do you know of any way to send tags for many items at once?
A possible solution looks like using several threads to send these requests to the Google Reader server.
You can include multiple i= and s= parameters in the same POST. Just make sure that as you add a new i= you add the corresponding s= for that item, even if you've already included the s= for that exact same stream previously (this is really important, or you'll get a 400 error when making the call). I was doing batches of 10 in my code; I'm sure you can do more, but I don't know the limit.
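Building a body with repeated i= and s= pairs is easiest with a sequence of key/value tuples rather than a dict. A sketch with Python's urlencode (the stream URL and item ids below are placeholders, not real Reader ids):

```python
from urllib.parse import urlencode

# Each item carries its own i= AND its matching s=, even when the stream
# repeats -- omitting a duplicate s= is what triggers the 400 error.
items = [
    ("feed/http://example.com/a.xml", "tag:google.com,2005:reader/item/0001"),
    ("feed/http://example.com/a.xml", "tag:google.com,2005:reader/item/0002"),
]

pairs = []
for stream, item_id in items:
    pairs.append(("i", item_id))
    pairs.append(("s", stream))
pairs.append(("a", "user/-/label/read"))

body = urlencode(pairs)
print(body.count("s=feed"))  # 2 -- one s= per item, even for the same stream
```

A dict would silently collapse the repeated s= keys, which is exactly the mistake to avoid here.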
Could a working URL for marking all items as read be posted?
I have tried:
<?php
echo 'http://www.google.com/reader/api/0/edit-tag?'
   . 's=feed%2F' . urlencode('http://feeds.feedburner.com/filehippo')
   . '&i=' . urlencode('tag:google.com,2005:reader/item/c7701cf414f3539e')
   . '&a=user%2F-%2Flabel%2Fread'
   . '&T=bbi44C5CQzjzM43yKUPwnA';
?>
but I just keep getting a 400 error back.
Many thanks.