I'm still struggling a bit with how to use and organize Paw files (*.paw documents), particularly as an API consumer. Is it smarter to:
Organize by project (i.e. project A consumes these particular API calls, so create a my-project.paw document for that project/client); or
Organize by API service (i.e. a MailChimp.paw document defining the various MailChimp endpoints, then add new environments for each project that consumes the MailChimp API)?
(As a side note, it would be great if there were a public repository for sharing .paw files for popular APIs!)
There's no rule; you may organize things however fits your use case, but I'd recommend organizing files around APIs, so everything is ordered around a given API, following its standard and grouping requests by the resources the API serves. One example tree would be:
My-Blog-API.paw
Users (a group)
Get one user (GET /users/:id)
Create user (POST /users)
Articles (a group)
Get list of articles (GET /articles)
Get one article (GET /articles/:id)
Create article (POST /articles)
This way your document closely follows the semantics of the API. It also allows a good organization of your Environments, for example to store the Base URL of your API or the user credentials/access token(s).
That said, I'd probably keep a "Related" group for calls to 3rd-party services needed alongside the API calls. For example, you may want a call to the Twitter API to get an access token, which you'd then pass to your own API for a "login with Twitter" feature.
Related (a group)
Twitter (a group)
Login with Twitter (GET https://api.twitter.com/login – not the exact URL, just an illustration)
(Also, thanks for the idea of a public repo; some people have shared Paw collections on their websites, but a central repository would be nice!)
We're designing a new service orchestrator which will provide a multi-staged flow for the user and will be communicating with several foreign services internally.
I'm preparing API for it and I have a few questions:
The main flow will consist of several web pages, each containing unique information extracted from several sources. Should we create a separate API endpoint for each page, or is it better to create a single endpoint that internally decides which logic to execute based on the stage/page number?
Is it OK to use POST requests for endpoints where we could use GET, but where the data is ever-changing? I'm concerned about responses being cached when I don't want them to be, though I can always explicitly disable caching.
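For example, if we stay with GET, I assume we can disable caching explicitly with response headers, something like this (a sketch assuming a Node.js/Express server; buildStageData stands in for our real per-stage logic):
var express = require('express');
var app = express();
// stand-in for whatever logic assembles a stage's data
function buildStageData(stage) { return { stage: stage, generatedAt: new Date() }; }
app.get('/flow/:stage', function (req, res) {
  // tell browsers and intermediaries not to cache this ever-changing response
  res.set('Cache-Control', 'no-store, no-cache, must-revalidate');
  res.json(buildStageData(req.params.stage));
});
app.listen(3000);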
Background: This is my first standalone web development project, and my only experience in Meteor is building the Discover Meteor app over the last summer. I come from about a year of CS experience as a side interest in school, and I am most comfortable with C and C++. I have experience in python and java.
Project so far: I'm creating a calendar management system (for fun). Using accounts-google, I have created user accounts that are authenticated through google. I have requested the necessary permissions that I need for my app, including 'identity' and 'calendar read/write access'. I've spent the last week or so trying to get over this next hurdle, which is actually getting data from google.
Goal: I'd like to be able to make an API call to Calendar.list using a GET request. I've already called meteor add http to add the GET request functionality, my issue comes with the actual implementation.
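For reference, once I have a token, I expect the call itself to look something like this (a sketch based on the Meteor HTTP docs and the Calendar v3 reference; accessToken is exactly the piece I'm missing):
// server side: list the authenticated user's calendars
var result = HTTP.get(
  'https://www.googleapis.com/calendar/v3/users/me/calendarList',
  { headers: { Authorization: 'Bearer ' + accessToken } }
);
// result.data.items should then hold the calendar list entries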
Problem: I have registered my app on the developer console and set up Accounts using the client ID and secret, but I have not been able to find/generate my 'API key' for use in the request. Here is the Google guide for creating the access token by using my (already) downloaded private key. I'm having a hard time wrapping my head around a server-side implementation in JS, because I don't have much experience with what is mentioned in the HTTP/REST portion of the implementation examples. I would appreciate some help on how to implement the handshake and receive an access token for use in my app. If there is a call I can make, or some package that will handle the token generation for me, that would be even better than implementation help. I believe an answer to this would also benefit this other question.
The SO answer I've been referring to so far: https://stackoverflow.com/a/14543159/4259653. Some of it is in Spanish, but the code is pretty understandable. He has an API key for his request, which is what I asked this question to help me with. The accounts-google documentation isn't really enough to explain all this to me.
Also, an unrelated small question: what is the easiest way to deal with 'time' parameters in requests? I'm assuming JS has some sort of built-in functionality that I'm just not aware of yet.
Thanks for your research. I have also asked a very similar question, and right now I am looking into the package you recommend. I have considered this meteor-google-api package, but it looks abandoned.
Regarding your question about time manipulation, I recommend MomentJS. There are many packages out there; I am using meteor add mrt:moment.
EDIT: MomentJS now has an official package for Meteor, so use meteor add momentjs:moment instead of the mrt command above
Below is a snippet of what moment can do. More documentation here.
var startTimeUTC = moment.utc(event.startTime, "YYYY-MM-DD HH:mm:ss").format();
// Parses event.startTime as UTC and formats it as ISO 8601,
// e.g. "2014-09-08T08:02:17+00:00", an acceptable time format for the Google API
So I started trying to implement all of this myself on the server side, but I was wary of how much hard-coding I was doing and how many assumptions I was making to fill gaps. My security prof used to say "never implement encryption yourself", so I decided to take another look for a helpful package. Revising my search criteria to "JWT", I found jagi's meteor-google-oauth-jwt on Atmosphere. The readme is comprehensive and provides everything I need. Following the process used in the Google OAuth guide, an authorization request can be made and a token generated for making an API call.
Link to Atmosphere: https://atmospherejs.com/jagi/google-oauth-jwt
Link to Repo: https://github.com/jagi/meteor-google-oauth-jwt/
I will update this answer with any additional roadblocks I hit in the Google API process and how I solved them:
Recently, I've been running into problems with the API request result: I get an empty calendarList back from the API call. I suspect this is because I'm making the API call as my developer account rather than as the subject user. I will investigate the problem and either create a new question or update this solution with the fix I find.
Fix: I wasn't including the 'sub' qualifier in the JWT claim set. Fixed by modifying the JWT package's token generation code to include delegationEmail: user.services.google.email after scope. I don't know why the author used such a long name for the option instead of sub: as it is in the Google API, but I appreciate his package nonetheless.
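For anyone following along, the token request ends up looking roughly like this. This is a sketch against the underlying google-oauth-jwt authenticate API as I understand it; the service account email, key file path, and scope are placeholders, and how you reach the library (Npm.require here) may differ by package version:
var googleJwt = Npm.require('google-oauth-jwt');
googleJwt.authenticate({
  email: '<service-account-id>@developer.gserviceaccount.com', // placeholder
  keyFile: 'private-key.pem', // the key downloaded from the developer console
  scopes: ['https://www.googleapis.com/auth/calendar'],
  delegationEmail: user.services.google.email // becomes the 'sub' claim
}, function (err, token) {
  if (err) throw err;
  // use token as a Bearer token in subsequent Calendar API calls
});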
I'm quickly becoming proficient in this, so if people have meteor-related google auth questions, let me know.
DO NOT USE SERVICE ACCOUNTS AS POSTED ABOVE!
The correct approach is to use standard web access plus requesting offline access. The documentation on the API page specifically states this:
Typically, an application uses a service account when the application uses Google APIs to work with its own data rather than a user's data.
The only exception to this is when you are using Google Apps domain accounts and want to delegate access to your service account for the entire domain:
Authorizing a service account to access data on behalf of users in a domain is sometimes referred to as "delegating domain-wide authority"
This makes logical sense as a user must be allowed to "authorise" your application.
Back to the poster's original question, the flow is simple:
1) The Meteor accounts-google package already does most of the work for you to get tokens. You can include the scope for offline access if required.
2) If you are building your own flow, you will go through the stock-standard process and calls as explained in the auth documentation.
This will require you to:
1) Make an HTTP call for the original request, or piggyback off some of Meteor's internal calls: Package.oauth.OAuth.showPopup() – go look at the source, there are more nifty functions around there.
2) Then you need to create an Iron Router server-side route to accept the OAuth response, which will contain a code parameter that you will exchange for tokens.
3) Next, use this code to make a final call to exchange the "code" for the token + refresh_token (see the sketch after this list).
4) Store these wherever you want – my requirement was to store them not at the user level but multiple per user.
5) Use a package like GoogleAPI; it wraps up Google API calls and refreshes tokens when required. It only works when tokens are stored in user accounts, so you will need to rip it apart a bit if your tokens are stored somewhere else (as in my case).
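Steps 2 and 3 together might look roughly like this. A sketch only: the route path is arbitrary, and the client ID, secret, and redirect URI are placeholders for the values registered in your developer console:
// server-side Iron Router route that receives Google's OAuth redirect
Router.route('/oauth/google/callback', { where: 'server' }).get(function () {
  var code = this.params.query.code; // the authorization code from Google
  // exchange the code for access_token + refresh_token
  var result = HTTP.post('https://accounts.google.com/o/oauth2/token', {
    params: {
      code: code,
      client_id: 'YOUR_CLIENT_ID',
      client_secret: 'YOUR_CLIENT_SECRET',
      redirect_uri: 'https://yourapp.example.com/oauth/google/callback',
      grant_type: 'authorization_code'
    }
  });
  // result.data contains access_token, refresh_token and expires_in;
  // store them wherever fits your model (step 4)
  this.response.end('ok');
});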
I'm using this link https://www.google.com/reader/api/0/stream/contents/feed/FEEDHERE?output=json&n=20
to fetch feeds using Google's algorithm. As you can see, I'm not adding any other parameters, just fetching the returned data in JSON format. My app will hopefully be heavily used, and if I send a lot of requests to this link, will Google block my access or something?
Is there anything I can include, like userip or a URL for my app (so if they have a problem they can contact me), or something else?
The most basic answer to your question is that Google will change its Terms of Service whenever it likes, and you've got no say in the matter. So if it's allowed today, it might not be allowed tomorrow, at Google's whim.
On this issue, though, you seem fairly safe. From the Terms of Service (this is the general document, since Reader doesn't seem to have a specific one):
Don’t misuse our Services. For example, don’t interfere with our Services or try to access them using a method other than the interface and the instructions that we provide.
Google provides RSS and Atom. They provide these feeds, so I assume they expect that they'll be used. They don't say that it's a misuse to point someone else at those feeds, so it looks OK for now, but they could add such a clause at any time.
All online services are subject to the terms and conditions of their providers. So, as others have said, they may be OK with your use today, but they can change their minds at any time down the line. I doubt including a URL or email or contact info will help anything, because when these services change, the providers don't notify every user of the service; they just announce the change publicly, and usually give several months' notice so users have a chance to adapt their applications. But this is not standardized or enforced, so there is no guarantee. One example would be the fairly recent discontinuation of the Google Finance API (for which no replacement has been announced).
The safest approach is to design your app so that the feature using Google's functionality is decoupled as much as possible from the rest of the app. That way, when or if the availability of the service changes (i.e. it's no longer available at all), you can adapt the app to use another source for the feeds with minimal impact on everything else. Design for change and plan for the worst.
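One way to express that decoupling (a sketch; the FeedSource name and module layout are made up for illustration, and fetchJson stands in for your HTTP client of choice):
// the rest of the app depends only on this narrow interface
function FeedSource(fetchImpl) {
  this.fetch = fetchImpl; // fetch(feedUrl, count) -> array of feed entries
}
// all the Google-specific details live in one swappable module
var googleFeedSource = new FeedSource(function (feedUrl, count) {
  var url = 'https://www.google.com/reader/api/0/stream/contents/feed/' +
            encodeURIComponent(feedUrl) + '?output=json&n=' + count;
  var body = fetchJson(url); // issue the request, parse the JSON
  return body.items; // map Google's shape to the app's own entry format here
});
// if Google drops the service, only the module above gets rewritten:
// var fallbackFeedSource = new FeedSource(directRssFetch);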
I came across this page on the Alfresco wiki discussing publish/subscribe content notifications within Alfresco, and I was wondering whether there has been any progress on it or whether someone has created an add-on:
http://wiki.alfresco.com/wiki/Publish_Subscribe_Content_Notifications
The only types of notification I've read about so far in the wiki or forums are email and RSS feeds. The CMIS specification does not cover this, and the Alfresco web services do not include any such methods.
We have several web applications that need to download content once a document has been uploaded and transformed in Alfresco. I could develop an action to push the documents to the appropriate app, but that would require me to know every endpoint. At this point there are only 3 applications, but there are requirements to add more in the future. Having a publisher/subscriber model would make the solution more scalable and easier to maintain.
What if you wrote a custom action that adds a message to a queue? You could make the queue/topic name configurable, so that when someone configures a rule on a folder, they can specify which queue to put the message on. Your apps can then subscribe to the queue and act appropriately.
You could also do something similar as a step in a workflow.
Maybe the message would be something simple like the nodeRef or the CMIS object ID.
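On the consuming side, each app's subscriber can then be quite small. A sketch assuming a RabbitMQ broker and the amqplib npm package (the broker host and queue name are placeholders for whatever the rule was configured with):
var amqp = require('amqplib');
amqp.connect('amqp://broker-host').then(function (conn) {
  return conn.createChannel();
}).then(function (ch) {
  var queue = 'alfresco.documents.transformed';
  return ch.assertQueue(queue).then(function () {
    return ch.consume(queue, function (msg) {
      var id = msg.content.toString(); // e.g. the nodeRef or CMIS object ID
      // fetch the transformed content for this node, then acknowledge
      ch.ack(msg);
    });
  });
});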
I'm going to build an API for a web app and I'm interested in what people can suggest as good practices.
I'm already planning to make it versioned (version 1 can only control certain aspects of the system; version 2 could control more, but this may need a change in the way authentication is performed that would be incompatible with version 1), and the authentication will be distinct from the standard username/password people use to log in (so if someone does use a malicious tool, it won't open them up to full impersonation, just whatever the API allows).
Does anyone have further ideas, or examples of sites with particularly good APIs you have used?
Read the RESTful Web Services book, which gives you a good overview of how to use REST in practice and gets you up to speed quickly enough to start now with some confidence. This is more useful than just looking at an existing API, because it also discusses design choices and trade-offs.
1) Bake the version number directly into the URL rather than passing it as a parameter, since that gives you complete freedom to change the organization of your API namespace with each version bump.
2) Keep your URL rewriting rules (if any) as simple/lean as possible (but no simpler), while making your URLs as beautiful as possible (but no more).
3) Always look for the best HTTP status code you can find for each response (and don't forget about 202 and 207, for example).
4) Implement fascist parameter validation logic, and informative error messages.
5) Use HTTP request headers where appropriate instead of parameters (like Accept, for example, to allow clients to specify the desired data format of the response).
6) Organize your "nouns" in such a way that the URLs used by different client audiences are separated near the "root" of your URL tree (this makes it easier to enforce different authentication mechanisms for those different audiences if needed, or even map different portions of your URL tree to different servers).
7) If you're serving regular web pages off the same domain as your APIs and use the same authentication credentials, require an X-Requested-With header in your API requests to avoid XSRF vulnerabilities. (Points 1, 5, 6 and 7 are sketched below.)
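To make points 1, 5, 6 and 7 concrete, here is a minimal sketch (Express-style code of my own; the routes and the no-op auth middleware are illustrative, not from any real API):
var express = require('express');
var app = express();
// (7) reject API calls that lack X-Requested-With, to blunt XSRF when the
// API shares cookies with regular web pages
app.use('/v1', function (req, res, next) {
  if (!req.get('X-Requested-With')) return res.status(403).send('Forbidden');
  next();
});
// no-op stand-ins for real auth middleware
function requireAdminAuth(req, res, next) { next(); }
function requireApiKey(req, res, next) { next(); }
// (1) version baked into the URL; (6) audiences split near the root, so each
// subtree can carry its own auth mechanism or move to its own server
app.get('/v1/admin/users/:id', requireAdminAuth, function (req, res) {
  res.json({ id: req.params.id, view: 'admin' });
});
app.get('/v1/public/users/:id', requireApiKey, function (req, res) {
  res.json({ id: req.params.id });
});
// (5) honour the Accept header rather than a ?format= parameter
app.get('/v1/public/articles/:id', requireApiKey, function (req, res) {
  res.format({
    'application/json': function () { res.json({ id: req.params.id }); },
    'application/xml': function () {
      res.type('xml').send('<article id="' + req.params.id + '"/>');
    }
  });
});
app.listen(3000);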
I would take a look at proven APIs:
YouTube API
Twitter API
There's a lot of argument about whether these APIs are "good", but I think their success is demonstrated, and they're all easy to use.
Use REST.
RESTful web services architecture is easy to implement and uses the strengths and semantics of HTTP for what they were intended. It's resource-oriented, just like the web itself.
Amazon Web Services, Google and many others offer REST APIs to interact with their products.
Use REST.
Read up on standards for APIs, or copy the ideas from one of the popular ones.
Be careful when authenticating users.
Start very very simple.
Build a site that uses your API (even if it's not useful) to check that things work. Perhaps you could build a mobile version of the site, or something else that forces you to use the API in depth.