Is it possible to get an RSS archive? - rss

I know that RSS feeds carry current news... Is it possible to get RSS feed items from yesterday or the day before yesterday (more exactly, an archive of RSS feeds)?

No, the server decides which posts to feed you.
Your RSS server might be configured to let you have more posts by supplying arguments to the feed URL - but that's unlikely.
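That said, some feed software does accept paging arguments. WordPress feeds, for example, often honour a paged query parameter that walks back through older posts, though this is server-specific and not guaranteed. A minimal sketch, assuming a WordPress-style feed URL and the third-party feedparser library (both are assumptions, not something every server supports):

import feedparser  # third-party: pip install feedparser

# Hypothetical WordPress-style feed; many (not all) WordPress sites
# accept ?paged=N to page back through older entries.
feed_url = 'http://example.com/feed/?paged=2'

feed = feedparser.parse(feed_url)
for entry in feed.entries:
    # These are older entries only if the server honours the paging argument.
    print('%s - %s' % (entry.get('published', '?'), entry.get('title', '')))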

Related

Google Webmaster Tools rejects submitted RSS feed

In Google Webmaster Tools I try to test and submit an RSS feed; this can be done in the section 'Crawl' > 'Sitemaps'. Now, when Google Webmaster Tools tests the submitted URL of the RSS feed, it repeatedly gives an error:
"Network unreachable: Network unreachable
We encountered an error while trying to access your Sitemap. Please ensure your Sitemap follows our guidelines and can be accessed at the location you provided and then resubmit."
The RSS feed I submitted, http://www.harrieboerhof.nl/nl/over-harrieboerhof-hovenier-tuincentrum-modeltuinen-drenthe/blog/?format=feed&type=rss,
seems all right to me. What could I have done wrong?
Without doing anything else, Google Webmaster Tools suddenly accepted my RSS feed...
So, just try the same thing again after half an hour!

Bad requests for WordPress RSS and author URLs

On a popular WordPress site, I'm getting a constant stream of requests for these paths (where author-name is the first and last name of one of the WordPress users):
GET /author/index.php?author=author-name HTTP/1.1
GET /index.rdf HTTP/1.0
GET /rss HTTP/1.1
The first two URLs don't exist, so the server is constantly returning 404 pages. The third is a redirect to /feed.
I suspect the requests are coming from RSS readers or search engine crawlers, but I don't know why they keep using these specific, nonexistent URLs. I don't link to them anywhere, as far as I can tell.
Does anybody know (1) where this traffic is coming from and (2) how I can stop it?
Check the Apache logs to get the "where" part; see the sketch below for one way to tally them.
Stopping random internet traffic is hard. Maybe serve them some other error codes and it will stop. It probably won't, though.
Most of my sites get these requests; most of the time I track them to Asia or the Americas. Blocking the IP works, but if the requests are few and far between, that would just be wasting resources.
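As a rough illustration of the "check the logs" step, here is a small Python sketch that tallies which IPs and user agents keep requesting those paths. The log location and the Apache "combined" log format are assumptions; adjust them to your server.

import re
from collections import Counter

# Assumed location and Apache "combined" log format; adjust for your setup.
LOG_FILE = '/var/log/apache2/access.log'
PATHS = ('/author/index.php', '/index.rdf', '/rss')
LINE_RE = re.compile(r'(\S+) \S+ \S+ \[[^\]]*\] "\S+ (\S+)[^"]*" \d+ \S+ "[^"]*" "([^"]*)"')

hits = Counter()
with open(LOG_FILE) as log:
    for line in log:
        match = LINE_RE.match(line)
        if not match:
            continue
        ip, path, agent = match.groups()
        if path.startswith(PATHS):
            hits[(ip, agent)] += 1

# Show the ten worst offenders.
for (ip, agent), count in hits.most_common(10):
    print('%6d  %-15s  %s' % (count, ip, agent))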

How to find HTTP POST Data sent to a CGI Page?

I searched Google for a good number of hours. Maybe I searched for the wrong keywords.
Here is what I want to do.
I'm posting data to a website which then makes an HTTP POST request and returns a .cgi web page. I want to know the parameters the web page uses to send that HTTP POST request, so that I can link directly from my own web page to the final .cgi page by having the user enter the data on my page.
How do I achieve it?
Usually the POST body is piped into STDIN; just read it as a normal file.
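To make the "read it from STDIN" part concrete: in CGI, the server hands the script the POST body on standard input and its size in the CONTENT_LENGTH environment variable. A minimal Python 2 sketch of the receiving side (the form fields are whatever the original page sends; nothing here is specific to that site):

#!/usr/bin/env python
import os
import sys
import urlparse  # urllib.parse on Python 3

# CGI passes the POST body on stdin; CONTENT_LENGTH says how many bytes to read.
length = int(os.environ.get('CONTENT_LENGTH') or 0)
body = sys.stdin.read(length)

# A normal HTML form posts application/x-www-form-urlencoded data,
# e.g. "name=foo&age=42".
params = urlparse.parse_qs(body)

print('Content-Type: text/plain')
print('')
for key, values in params.items():
    print('%s = %s' % (key, ', '.join(values)))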

How is WordPress redirecting/rewriting my URL?

I'm running a free WordPress classified website at http://www.gosell.co.uk. What I need is: if a user from London opens this URL, the city should be appended to it, i.e. http://www.gosell.co.uk/london. How is that possible? Should I write a new function, or what? Thanks in advance.
Hook into the init action (http://codex.wordpress.org/Function_Reference/add_action), detect where your visitor is from using one of the available GeoIP solutions, and then use wp_redirect to redirect to the right URL...
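The WordPress glue itself (the add_action('init', ...) callback and wp_redirect) is PHP and not shown here; as an illustration of just the GeoIP-lookup step, here is a Python sketch using MaxMind's geoip2 library. The database path and the idea of lowercasing the city name into a URL slug are assumptions for illustration.

import geoip2.database  # third-party: pip install geoip2
import geoip2.errors

# Assumed: a local copy of MaxMind's GeoLite2 City database.
reader = geoip2.database.Reader('/path/to/GeoLite2-City.mmdb')

def city_slug(ip_address):
    """Return a lowercase city name for an IP, or None if it can't be resolved."""
    try:
        city = reader.city(ip_address).city.name
    except geoip2.errors.AddressNotFoundError:
        return None
    return city.lower() if city else None

# e.g. a visitor resolved to London would be sent to
# 'http://www.gosell.co.uk/' + (city_slug(visitor_ip) or '')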

Google Feed API

So, basically this is what I want to do: I'm building a mobile news applet that will connect to a server running Java/Python to get news items. The server connects to a news site which has RSS feeds. Note: the site is not mine!
Questions
1. Can I use the Google Feed API to read the site's feeds?
2. If so, I need a short, abstract description of how to go about it (I can't understand what Google has written about it).
N.B.: I'm a newbie, so please keep explanations basic.
Yes, you can.
import urllib2
import simplejson

url = ('https://ajax.googleapis.com/ajax/services/feed/find?'
       'v=1.0&q=Official%20Google%20Blog&key=INSERT-YOUR-KEY&userip=INSERT-USER-IP')
# The Referer header should be the URL of your own site.
request = urllib2.Request(url, None, {'Referer': 'http://www.example.com/'})
response = urllib2.urlopen(request)
# Process the JSON string.
results = simplejson.load(response)
# Now have some fun with the results...
http://code.google.com/apis/feed/v1/jsondevguide.html#json_snippets_python
Once we have "results", it's just a matter of processing a JSON-format response. It can't get simpler than this.
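For instance, if I remember that dev guide correctly, the find-feed response lists the matching feeds under responseData['entries'], each with a title and url; treat the exact keys as an assumption and check the linked guide:

# Continuing from the snippet above: list the feeds that matched the query.
for entry in results['responseData']['entries']:
    print('%s -> %s' % (entry['title'], entry['url']))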
