Is there any Google Reader API that I can plug in to? I'm building a clean RSS/Atom reader in PHP and would love to get all the goodies from Google Reader, like the history of a feed, the ability to add comments to each feed item, etc.
I've built some Google Reader integration in Python, so I can share some of the API knowledge to get you started. output=json is also available for all of these calls.
Login: https://www.google.com/accounts/ClientLogin
POST Email=$email&Passwd=$password&service=reader&source=$appname&continue=http://www.google.com
From the response, grab the Auth= value.
Next, hit: http://www.google.com/reader/api/0/token
HEADER: Authorization=GoogleLogin auth=$Auth
That response becomes $token for the session.
From there it's just hitting some URLs, always passing that auth header and including the token in the query string or POST data.
Get a list of your subscriptions: http://www.google.com/reader/api/0/subscription/list?output=xml
To modify subscriptions, use this base URL plus some POST data for the action to perform:
http://www.google.com/reader/api/0/subscription/edit?pos=0&client=$source
POST to add: s=$stream&t=$title&T=$token&ac=subscribe
POST to remove: s=$stream&T=$token&ac=unsubscribe
The $stream is generally feed/$feedurl, e.g. for TechCrunch: feed/http://feeds.feedburner.com/Techcrunch
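Putting those pieces together, here is a rough sketch of the login-and-subscribe flow in Python 3 with the requests library (illustrative only: ClientLogin and the Reader API have since been shut down, and the credentials and feed are placeholders):

import requests

EMAIL, PASSWORD, SOURCE = 'you@example.com', 'password', 'appname'  # placeholders

# 1. ClientLogin: grab the Auth= value from the plain-text response body.
login = requests.post(
    'https://www.google.com/accounts/ClientLogin',
    data={'Email': EMAIL, 'Passwd': PASSWORD, 'service': 'reader',
          'source': SOURCE, 'continue': 'http://www.google.com'},
)
auth = dict(line.split('=', 1) for line in login.text.splitlines())['Auth']
headers = {'Authorization': 'GoogleLogin auth=' + auth}

# 2. Fetch the short-lived edit token for the session.
token = requests.get('http://www.google.com/reader/api/0/token', headers=headers).text

# 3. Subscribe to a feed via subscription/edit.
requests.post(
    'http://www.google.com/reader/api/0/subscription/edit?pos=0&client=' + SOURCE,
    headers=headers,
    data={'s': 'feed/http://feeds.feedburner.com/Techcrunch',
          't': 'TechCrunch', 'T': token, 'ac': 'subscribe'},
)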
This is a working example in Python 2:
import urllib, urllib2
import json, pprint
email, password = 'jose@gmail.com', 'nowayjose'
clientapp, service = 'reader', 'reader'
params = urllib.urlencode({'Email': email, 'Passwd': password, 'source': clientapp, 'service': service})
req = urllib2.Request(url='https://www.google.com/accounts/ClientLogin', data=params)
f = urllib2.urlopen(req)
for line in f.readlines():
    if line[0:5] == 'Auth=':
        auth = line[5:]
root = "http://www.google.com/reader/api/0/"
req = urllib2.Request(root + "token")
req.add_header('Authorization', 'GoogleLogin auth=' + auth)
f = urllib2.urlopen(req)
token = f.readlines()[0]
# get user id
req = urllib2.Request(root + "user-info?output=json&token="+token)
req.add_header('Authorization', 'GoogleLogin auth=' + auth)
f = urllib2.urlopen(req)
dictUser = json.loads(f.read())
user_id = dictUser["userId"]
print "user_id",user_id
req = urllib2.Request(root + "subscription/list?output=json&token="+token)
req.add_header('Authorization', 'GoogleLogin auth=' + auth)
f = urllib2.urlopen(req)
# for line in f.readlines():
# print line
dictSubscriptions = json.loads(f.read())
# pprint.pprint(dictSubscriptions)
# print the first 3 subscription titles
for i in dictSubscriptions["subscriptions"][0:3]:
    print i["title"]
req = urllib2.Request("http://www.google.com/reader/api/0/unread-count?output=json&token="+token)
req.add_header('Authorization', 'GoogleLogin auth=' + auth)
f = urllib2.urlopen(req)
dictUnread = json.loads(f.read())
# pprint.pprint(dictUnread)
# print the first 3 unread folders
for i in dictUnread["unreadcounts"][0:3]:
    print i["count"], i["id"]
# this returns all starred items as xml
req = urllib2.Request("http://www.google.com/reader/atom/user/"+user_id+"/state/com.google/starred?token="+token)
req.add_header('Authorization', 'GoogleLogin auth=' + auth)
f = urllib2.urlopen(req)
starredItems = f.read()
Google Reader has feeds for users; I guess you could use those. Also, they're PubSubHubbub-ready, so you will get comments/likes... as soon as they happen.
Also, as of July 1st, 2013, Google Reader is no more. Options for replacements include Superfeedr.
I'm trying to authenticate to the Letterboxd API using R and the httr package. The Letterboxd docs give instructions, but I am not sure how to put everything together into a URL.
I know the URL is:
url <- "https://api.letterboxd.com/api/v0/auth/token"
And then they want my username and password, presumably as JSON, which I'll write as a named list since I'm doing this in R:
login_info <- list(
grant_type = "password",
username = "myemail#gmail.com",
password = "extremelysecurepassword"
)
I've tried various calls using the GET(), oauth2.0_token(), and oauth_endpoint() functions from the httr package.
I feel like I have all the necessary information and am circling around a solution, but I can't quite nail it.
The docs contain this information:
When generating or refreshing an access token, make a form request to the /auth/token endpoint with Content-Type: application/x-www-form-urlencoded and Accept: application/json headers
(Full text is linked to above)
And I'm not sure how to add that information; in working with APIs through R, I'm used to just sending URLs with query parameters, but the inputs they want here don't work by appending them with ? and &.
I'm aware of this related post, but it looks like it relies on having a secret token already, and I don't seem to be able to generate a secret token inside the GUI of Letterboxd.com, which is what I'm used to doing for authentication. I think I need to feed the information in login_info above into the URL, but I don't quite know how to connect the dots.
How do I authenticate to the Letterboxd API using R?
This runs for me, but I get a 401 Unauthorized since you (correctly) did not supply valid credentials. It looks like there is a Python library for this API, https://github.com/swizzlevixen/letterboxd, if you need hints on how to make subsequent requests.
sign_request() mimics the Python library's api.py#L295-L304:
sign_request <- function(apisecret, url, method, body = "") {
  signing_bytes <- as.raw(c(charToRaw(method), 0, charToRaw(url), 0, charToRaw(body)))
  # https://stackoverflow.com/a/31209556/8996878
  # https://stackoverflow.com/q/54606193/8996878
  digest::hmac(key = apisecret, object = signing_bytes, algo = "sha256", serialize = FALSE)
}
url <- "https://api.letterboxd.com/api/v0/auth/token"
login_info <- list(
grant_type = "password",
username = "myemail#gmail.com",
password = "extremelysecurepassword"
)
apikey <- "mytopsecretapikey"
apisecret <- "YOUR_API_SECRET"
method <- "POST"
params <- list(
  apikey = apikey,
  nonce = uuid::UUIDgenerate(),
  timestamp = round(as.numeric(Sys.time()))
)
# now we need to sign the request
body <- paste(names(login_info), login_info, sep = "=", collapse = "&")
body <- URLencode(body)
body <- gsub("#","%40", body) # something URLencode doesn't do but post does
destination <- httr::parse_url(url)
destination$query <- params
post_url_with_params <- httr::build_url(destination)
signature <- sign_request(apisecret, post_url_with_params, method, body)
token_request <- httr::POST(
  url,
  httr::add_headers(
    "Accept" = "application/json",
    "Authorization" = paste0("Signature ", signature)
  ),
  query = params,
  body = login_info, encode = "form", httr::verbose()
)
token_body <- httr::content(token_request, type = "application/json")
# look for the value of "access_token"
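For comparison, here is a rough Python sketch of the same token request, mirroring the R code above (same HMAC-SHA256 signature over METHOD, URL, and BODY separated by null bytes; the key, secret, and credentials are placeholders, and the details should be double-checked against the Letterboxd docs):

import hashlib, hmac, time, uuid
from urllib.parse import urlencode

import requests

API_KEY, API_SECRET = 'mytopsecretapikey', 'YOUR_API_SECRET'  # placeholders
url = 'https://api.letterboxd.com/api/v0/auth/token'

# Form-encoded body, exactly as it will be sent (so the signature matches).
body = urlencode({'grant_type': 'password',
                  'username': 'myemail@gmail.com',
                  'password': 'extremelysecurepassword'})
params = {'apikey': API_KEY, 'nonce': str(uuid.uuid4()),
          'timestamp': int(time.time())}
signed_url = url + '?' + urlencode(params)

# Sign METHOD \x00 URL \x00 BODY with the API secret (HMAC-SHA256, hex digest).
msg = b'\x00'.join([b'POST', signed_url.encode(), body.encode()])
signature = hmac.new(API_SECRET.encode(), msg, hashlib.sha256).hexdigest()

resp = requests.post(
    signed_url,
    data=body,
    headers={'Accept': 'application/json',
             'Content-Type': 'application/x-www-form-urlencoded',
             'Authorization': 'Signature ' + signature},
)
print(resp.json().get('access_token'))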
import re
import requests

def login_with_requests():
    url = "https://url/login/"
    login_data = {'csrfmiddlewaretoken': '', 'username': 'username', 'password': 'password'}
    response = requests.get(url)
    # print(response.headers)
    response_cookies = response.cookies
    print(response_cookies)
    csrfmiddlewarepattern = re.compile(r'csrfmiddlewaretoken\W\svalue\W{2}([a-zA-Z0-9]+)\W')
    matches = csrfmiddlewarepattern.finditer(response.text)
    for match in matches:
        csrfmiddlewaretoken = match.group(1)
        # print(csrfmiddlewaretoken)
    login_data['csrfmiddlewaretoken'] = csrfmiddlewaretoken
    login_response = requests.post(url, cookies=response_cookies, data=login_data)
    print(login_response.headers)
    print(login_response.history)
I'm able to successfully log in to a site using this code. The problem I have is that when I make a POST request to the login URL with the necessary parameters, although it is successful, the site redirects to the home page. I therefore receive two responses: the first is the actual POST response (status 302) containing a redirect header pointing to the home page, and the second is the response for the home page itself.
My problem is that the first response from the site contains a session-id token that I need before I can keep interacting with the website, but login_response.headers returns the headers of the final response, i.e. the one for the redirected home page request.
How can I extract the original response headers received from the site before the redirection, since they contain the session-id token that I need for further interaction with the website?
I checked the login_response.history data; it seems to only return the status code for the previous request.
I found a solution, so I thought I should share it.
import re
import requests

def login_with_requests_PaymentSite():
    url = "<site.com/login/>"
    login_data = {'csrfmiddlewaretoken': '', 'username': '<username>', 'password': '<password>'}
    response = requests.get(url)
    csrf_token = response.cookies  # Cookies returned from the site for a non-authenticated user.
    # Extract the csrfmiddlewaretoken that will be used to make the login POST request.
    csrfmiddlewarepattern = re.compile(r'csrfmiddlewaretoken\W\svalue\W{2}([a-zA-Z0-9]+)\W')
    matches = csrfmiddlewarepattern.finditer(response.text)
    for match in matches:
        csrfmiddlewaretoken = match.group(1)
    login_data['csrfmiddlewaretoken'] = csrfmiddlewaretoken  # Save the csrfmiddlewaretoken in the POST login data.
    # Start a session
    session = requests.Session()
    login_session = session.post(url, cookies=csrf_token, data=login_data)  # Log in to the site
    sessionid_cookies = login_session.cookies  # sessionid cookies that will be used for consecutive requests.
    login_response_file = open('Login_Response_Paymentsite.html', 'w')
    login_response_file.write(login_session.text)
    login_response_file.close()
    transaction_history_url = "<site.com/transactions/>"
    transaction_history = requests.get(transaction_history_url, cookies=sessionid_cookies)
    print("\n Result returned for the transactions page: ")
    print(transaction_history.text)
    userinfomation_url = "<site.com/userinformation/>"
    userinformation = requests.get(userinfomation_url, cookies=sessionid_cookies)
    print('\n Result returned for userinformation page: ')
    print(userinformation.text)
To be able to make consecutive requests to a site after a successful login with the requests module, you have to make use of requests.Session(). A Session stores the session_id returned by the web application after a successful login. If you use a bare requests.post() call instead, you won't be able to retrieve the session_id; the Session object stores it automatically.
After making the POST request with login_session = session.post(url, cookies=csrf_token, data=login_data), you extract the session_id cookies that will be used for consecutive requests with sessionid_cookies = login_session.cookies.
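Since the Session keeps its cookies automatically, the follow-up requests can also be issued through the same session instead of passing the cookie jar around by hand. Here is a minimal sketch of that variant (the URLs and credentials are the same placeholders as above, and the CSRF regex is the one from the answer):

import re
import requests

LOGIN_URL = "<site.com/login/>"  # placeholder, as above

session = requests.Session()

# GET the login page through the session; the cookies it sets (including the
# CSRF cookie) are stored on the session automatically.
page = session.get(LOGIN_URL)
match = re.search(r'csrfmiddlewaretoken\W\svalue\W{2}([a-zA-Z0-9]+)\W', page.text)
login_data = {
    'csrfmiddlewaretoken': match.group(1),
    'username': '<username>',
    'password': '<password>',
}

# POST the login; the sessionid cookie set in the (redirected) response is
# kept on the session as well.
session.post(LOGIN_URL, data=login_data)

# Subsequent requests reuse the stored cookies without passing them explicitly.
transactions = session.get("<site.com/transactions/>")
userinformation = session.get("<site.com/userinformation/>")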
I have been banging my head over this the whole day. I am trying to access the StockTwits API (https://api.stocktwits.com/developers) from an R session. I have previously accessed the Twitter API (via rtweet) without hassle.
I have created an app and got the client id and key (the below are just examples).
app_name = "some.name";
consumer_key = "my_client_id";
consumer_secret = "my_client_key";
uri = "http://iimb.ac.in" # this is my institute's homepage. It doesn't allow locahost OR 127.0.0.1
scope = "read,watch_lists,publish_messages,publish_watch_lists,direct_messages,follow_users,follow_stocks";
base_url = "https://api.stocktwits.com/api/2/oauth"; # see https://api.stocktwits.com/developers/docs/api
The procedure is to create an OAuth 2.0 app and endpoint, then call oauth2.0_token().
oa = httr::oauth_app(app_name, key = consumer_key, secret = consumer_secret, redirect_uri = uri);
oe = httr::oauth_endpoint("stocktwits", "authorize", "token", base_url = base_url);
mytoken = httr::oauth2.0_token(oe, oa, user_params = list(resource = base_url), use_oob = F); # use_oob = T doesn't work.
After firing the above, it takes me to the browser to sign in. I sign in and it asks me to connect. After that, I am taken back to my URI plus a code, i.e. https://www.iimb.ac.in/?code=295ea3114c3d8680a0ed295d52313d7092dd90ae&state=j9jXzEqri1
Is the code my access token or something else? The oauth2.0_token() call keeps waiting for the code since the callback is not localhost. I can't seem to get the hang of that.
I then try to access the API using the above code as the access token, but I am thrown an "invalid access token" error. The format is described at https://api.stocktwits.com/developers/docs/api#search-index-docs
Can someone tell me what I have missed? If required I can share my app_name, consumer_key and consumer_secret for replication.
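For what it's worth, in the standard OAuth 2.0 authorization-code flow the code appended to the redirect URI is not the access token; it still has to be exchanged at the token endpoint. A rough Python sketch of that exchange, using the values from the question and the generic OAuth 2.0 parameter names (treat the exact endpoint and field names as assumptions and verify them against the StockTwits docs):

import requests

base_url = 'https://api.stocktwits.com/api/2/oauth'

# Exchange the authorization code for an access token (generic OAuth 2.0 shape;
# parameter names are assumptions to be checked against the StockTwits docs).
resp = requests.post(base_url + '/token', data={
    'client_id': 'my_client_id',
    'client_secret': 'my_client_key',
    'code': '295ea3114c3d8680a0ed295d52313d7092dd90ae',  # from the redirect URL
    'grant_type': 'authorization_code',
    'redirect_uri': 'http://iimb.ac.in',
})
print(resp.json())  # should contain access_token on success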
I am looking to do a simple GET request (against the Aplos API) in R using the httr package. I'm able to obtain a temporary token by authenticating with an API key, but then I get a 401 "Token could not be located" error when trying to use the token to make an actual GET request. Would appreciate any help! Thank you in advance.
AplosURL <- "https://www.aplos.com/hermes/api/v1/auth/"
AplosAPIkey <- "XYZ"
AplosAuth <- GET(paste0(AplosURL,AplosAPIkey))
AplosAuthContent <- content(AplosAuth, "parsed")
AplosAuthToken <- AplosAuthContent$data$token
#This is where the error occurs
GET("https://www.aplos.com/hermes/api/v1/accounts",
    add_headers(Authorization = paste("Bearer:", AplosAuthToken)))
This is a Python snippet provided by the API documentation:
def api_accounts_get(api_base_url, api_id, api_access_token):
    # This should print a contact from Aplos.
    # Lets show what we're doing.
    headers = {'Authorization': 'Bearer: {}'.format(api_access_token)}
    print 'geting URL: {}accounts'.format(api_base_url)
    print 'With headers: {}'.format(headers)
    # Actual request goes here.
    r = requests.get('{}accounts'.format(api_base_url), headers=headers)
    api_error_handling(r.status_code)
    response = r.json()
    print 'JSON response: {}'.format(response)
    return (response)
In the Python example, the return value of the auth code block is the api_bearer_token, which is base64-decoded and RSA-decrypted (using your key) before it can be used:
...
api_token_encrypted = data['data']['token']
api_bearer_token = rsa.decrypt(base64.decodestring(api_token_encrypted), api_user_key)
return(api_bearer_token)
That decoded token is then used in the api call to get the accounts.
The second thing I would check is that your Authorization header matches the example's header exactly, in particular the space after "Bearer:" (R's paste() with its default separator should already insert one):
headers = {'Authorization': 'Bearer: {}'.format(api_access_token)}
vs
add_headers(Authorization = paste("Bearer:", AplosAuthToken))
After addressing both of these, you should likely be able to proceed.
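Putting the two points together, here is a rough Python 3 consolidation of the documentation snippets above (the private-key file name and the use of the rsa and requests packages are illustrative assumptions; the endpoint paths and the "Bearer: " header format come from the question and the docs snippet):

import base64

import requests
import rsa

api_base_url = 'https://www.aplos.com/hermes/api/v1/'
api_id = 'XYZ'  # the API key/id from the question (placeholder)

# Assumption: the RSA private key registered with Aplos, in PKCS#1 PEM format.
with open('aplos_private.pem', 'rb') as f:
    api_user_key = rsa.PrivateKey.load_pkcs1(f.read())

# 1. The auth call returns an encrypted token.
auth = requests.get(api_base_url + 'auth/' + api_id).json()
api_token_encrypted = auth['data']['token']

# 2. Base64-decode and RSA-decrypt it before use.
api_bearer_token = rsa.decrypt(base64.b64decode(api_token_encrypted), api_user_key).decode()

# 3. Use it with the header format shown in the docs snippet ('Bearer: <token>').
headers = {'Authorization': 'Bearer: {}'.format(api_bearer_token)}
accounts = requests.get(api_base_url + 'accounts', headers=headers).json()
print(accounts)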
I'm trying to use the Groovy HTTPBuilder library to delete some data from Firebase via an HTTP DELETE request. If I use curl, the following works:
curl -X DELETE https://my.firebase.io/users/bob.json?auth=my-secret
Using the RESTClient class from HTTPBuilder works if I use it like this:
def client = new RESTClient('https://my.firebase.io/users/bob.json?auth=my-secret')
def response = client.delete(requestContentType: ContentType.ANY)
However, when I try breaking down the URL into its constituent parts, it doesn't work:
def client = new RESTClient('https://my.firebase.io')
def response = client.delete(
requestContentType: ContentType.ANY,
path: '/users/bob.json',
query: [auth: 'my-secret']
)
I also tried using the HTTPBuilder class instead of RESTClient
def http = new HTTPBuilder('https://my.firebase.io')
// perform a DELETE request
http.request(Method.DELETE, ContentType.ANY) {
    uri.path = '/users/bob.json'
    uri.query = [auth: 'my-secret']
    // response handler for a success response code
    response.success = { resp, reader ->
        println "response status: ${resp.statusLine}"
    }
}
But this also didn't work. Surely there's a more elegant approach than stuffing everything into a single string?
There's an example of using HttpURLClient in the tests to do a delete, which in its simplest form looks like:
def http = new HttpURLClient(url:'https://some/path/')
resp = http.request(method:DELETE, contentType:JSON, path: "destroy/somewhere.json")
def json = resp.data
assert json.id != null
assert resp.statusLine.statusCode == 200
Your example is very close to the test for delete in HTTPBuilder.
A few differences I see are:
Your path is absolute and not relative.
Your HTTPBuilder URL doesn't end with a trailing slash.
You're using content type ANY where the test uses JSON. Does the target need the content type to be correct? (Probably not, as you're not setting it in the curl example, unless curl is doing some voodoo on your behalf.)
Alternatively, you could use Apache's HttpDelete, but it requires more boilerplate. For an HTTP connection, this is some code I've got that works. You'll have to adapt it for HTTPS, though.
def createClient() {
    HttpParams params = new BasicHttpParams()
    HttpProtocolParams.setVersion(params, HttpVersion.HTTP_1_1)
    HttpProtocolParams.setContentCharset(params, "UTF-8")
    params.setBooleanParameter(ClientPNames.HANDLE_REDIRECTS, true)
    SchemeRegistry registry = new SchemeRegistry()
    registry.register(new Scheme("http", PlainSocketFactory.getSocketFactory(), 80))
    ClientConnectionManager ccm = new PoolingClientConnectionManager(registry)
    HttpConnectionParams.setConnectionTimeout(params, 8000)
    HttpConnectionParams.setSoTimeout(params, 5400000)
    HttpClient client = new DefaultHttpClient(ccm, params)
    return client
}
HttpClient client = createClient()
def url = new URL("http", host, Integer.parseInt(port), "/dyn/admin/nucleus$component/")
HttpDelete delete = new HttpDelete(url.toURI())
// if you have any basic auth, you can plug it in here
def auth="USER:PASS"
delete.setHeader("Authorization", "Basic ${auth.getBytes().encodeBase64().toString()}")
// convert a data map to NVPs
def data = [:]
List<NameValuePair> nvps = new ArrayList<NameValuePair>(data.size())
data.each { name, value ->
    nvps.add(new BasicNameValuePair(name, value))
}
delete.setEntity(new UrlEncodedFormEntity(nvps))
HttpResponse response = client.execute(delete)
def status = response.statusLine.statusCode
def content = response.entity.content
I adapted the code above from a POST version, but the principle is the same.
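As a point of comparison only (outside HTTPBuilder), the same "path plus separate query parameter" DELETE is a one-liner with Python's requests library, using the URL and secret from the curl example:

import requests

# DELETE with the path and the auth query parameter kept separate.
resp = requests.delete('https://my.firebase.io/users/bob.json',
                       params={'auth': 'my-secret'})
print(resp.status_code)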