How to avoid exposing LRS credentials when launching an xAPI package from an LMS

I'm building an xAPI-compliant LMS, using https://learninglocker.net/ as our LRS. Admins can upload a zip file containing an xAPI package. The LMS will unzip it, find the launch file, and allow users to launch that URL, passing in credentials for our LRS as query parameters. The package can then report whatever it wants, directly to our LRS, without our LMS having any control over it.
Additionally, since the LRS credentials are in plain view in the url, tech-savvy users could use them to write any records they wanted to the LRS.
What's the standard approach to avoiding this? Currently the only solution I can think of is to not give packages access to our LRS, and instead proxy all requests to our LRS via our LMS, and give packages access to that proxy endpoint.
Is there a better approach?

The proxy approach should work and is the least burden on the LRS.
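As a rough illustration of that proxy idea, here is a sketch in Python using Flask and requests; the URLs, credentials and the session check are placeholders, and you would substitute your LMS's own authentication and your Learning Locker endpoint:

```python
# Minimal sketch of an LMS-side proxy for xAPI traffic (Flask + requests).
# LRS_ENDPOINT, LRS_AUTH and the session check are placeholders for your setup.
import requests
from flask import Flask, request, Response, abort

app = Flask(__name__)

LRS_ENDPOINT = "https://lrs.example.com/data/xAPI"   # real LRS, never shown to content
LRS_AUTH = ("lrs_key", "lrs_secret")                 # kept server-side only

@app.route("/xapi/<path:resource>", methods=["GET", "POST", "PUT"])
def xapi_proxy(resource):
    # Authenticate the launching user against the LMS session, not the LRS.
    if "user_id" not in request.cookies and "Authorization" not in request.headers:
        abort(401)

    # Forward the request to the real LRS, injecting the credentials server-side.
    upstream = requests.request(
        method=request.method,
        url=f"{LRS_ENDPOINT}/{resource}",
        params=request.args,
        data=request.get_data(),
        headers={
            "Content-Type": request.headers.get("Content-Type", "application/json"),
            "X-Experience-API-Version": request.headers.get("X-Experience-API-Version", "1.0.3"),
        },
        auth=LRS_AUTH,
        timeout=10,
    )
    return Response(upstream.content, status=upstream.status_code,
                    content_type=upstream.headers.get("Content-Type"))
```

The package is then launched with the proxy's URL as its xAPI endpoint and a per-user LMS token as its auth value, so the real LRS key never leaves the server and the LMS can validate or filter statements before forwarding them.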
In our implementation we use an auto-generated, short-lived (configurable) token that has limited permissions to the LRS. This naturally requires the LRS to implement a permissions model that allows for such things; I don't know if Learning Locker does (or will). In this setup the user still has direct access to the LRS, but the risk is low because of the limitations on access.
My other suggestion would be to look into cmi5 rather than implementing the Tin Can launch guidelines (which are presumably what you've found). It won't help with the LRS permission model, but it is on more of a standards path and is specifically for xAPI-based content in the LMS model.

Related

DocuSign integration tests

I am writing an application that should interact with DocuSign to create envelopes and then download the signed document when all the signatories have signed.
There are several other use cases, but that does not matter for this question.
I am wondering what is the best way to write automated integration tests.
Do I need to automate the interaction of the signatories with DocuSign? This would mean that I have to receive the email, click the link, etc...
Even if it seems possible, it does not seem ideal. Is there a way to "simulate" in a dev environment the actions of the signatories?
There has been a lot of discussion about whether a document can be signed without viewing it, and the conclusion was that NO, a user cannot sign a document without viewing it. The user has to view/see what is to be signed. So, that part needs to be automated using Selenium or one of its "flavors", or pretty much any UI automation you are comfortable with. And yes, that involves receiving the email, clicking the link, opening the document and signing it. You can use Mailinator or any other email service whose API you can leverage to facilitate things for you.
As for the other parts of the DocuSign integration automation, it is encouraged to use the API (it makes things more stable).
So, a very simple workflow would look like this (a rough sketch of the API calls follows the list):
Use the API to prepare the environment, and set variables and values (in your product and in DocuSign)
Send envelope for signing using DocuSign API
Get the link to the document
Sign using UI automation
Do verification (of envelope status and more) using DocuSign API
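For steps 2 and 5, a rough sketch against the DocuSign eSignature REST API's demo environment could look like the following; the access token, account ID, document and recipient details are all placeholders for your own test configuration, and in practice you may prefer one of DocuSign's official SDKs:

```python
# Rough sketch of "send envelope" and "check status" against the DocuSign
# eSignature REST demo API. ACCESS_TOKEN, ACCOUNT_ID, and the recipient
# details are placeholders for values from your own test configuration.
import base64
import requests

BASE = "https://demo.docusign.net/restapi/v2.1/accounts/{account_id}"
ACCESS_TOKEN = "..."        # obtained via OAuth beforehand
ACCOUNT_ID = "..."
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

def send_envelope(pdf_path, signer_name, signer_email):
    """Step 2: create and send an envelope with one signer."""
    with open(pdf_path, "rb") as f:
        doc_b64 = base64.b64encode(f.read()).decode("ascii")
    envelope = {
        "emailSubject": "Integration test - please sign",
        "documents": [{
            "documentBase64": doc_b64,
            "name": "contract.pdf",
            "fileExtension": "pdf",
            "documentId": "1",
        }],
        "recipients": {"signers": [{
            "email": signer_email,
            "name": signer_name,
            "recipientId": "1",
            "routingOrder": "1",
        }]},
        "status": "sent",   # "sent" dispatches the signing email immediately
    }
    resp = requests.post(f"{BASE.format(account_id=ACCOUNT_ID)}/envelopes",
                         json=envelope, headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()["envelopeId"]

def envelope_status(envelope_id):
    """Step 5: poll the envelope until it reports 'completed'."""
    resp = requests.get(f"{BASE.format(account_id=ACCOUNT_ID)}/envelopes/{envelope_id}",
                        headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()["status"]   # e.g. "sent", "delivered", "completed"
```

Step 4 (the actual signing) stays in UI automation, as described above; the API calls on either side keep the test deterministic.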

Integration with third party system using flat file vs Web Service in BizTalk 2013

I have to integrate with a third-party system through BizTalk 2013. The third party provides the two options below for integration:
Flat files (File Adapter)
Web Service API (WCF-BasicHttp Adapter)
What is the best way to integrate and what are the pros and cons of these? I am a beginner and want an expert opinion.
It is usually preferable to use a web service over a flat file.
With a web service you can:
Retain the order the messages were received in (you cannot guarantee that with the File adapter).
Get better security (especially if you use HTTPS), rather than having files sitting in a folder.
Give a synchronous response to the caller, either just a receipt or even the results from the downstream system.
Advantages of the File adapter
If you know a downstream system is going to be down for a while, you can disable the File adapter and stop picking up the incoming files until you are ready to resume processing. If there is a large number of files, you may wish to move some of them out and submit them in batches to prevent BizTalk throttling.
If you create an archive of files, you can easily re-submit messages when needed.
You can debatch large files into individual messages easily.
So, in conclusion, there is no single best way; it depends on the capabilities of the system you are integrating with and the nature of the messaging.
Consider the following factors to decide between a web service and the File adapter.
How much data do you want to send, and is it batched or real-time? If it's batched and the data size could be large, consider using the File adapter, unless the third-party system can support SOAP with attachments (MTOM).
Do you want to receive a response/acknowledgement for the message(s) sent? This you can only achieve using web services.
Is the third-party system hosted within your network or outside your company network? It's easier to call a public web service than to configure the File adapter.
Security: you generally get better security with web services, such as SSL and much more.

AWS Alexa - perform basic auth

I am trying to create a skill that will reach out to an application that uses Basic authentication for its APIs (although I know this is bad practice). I wanted to go down a route similar to account linking; however, it seems they enforce the use of OAuth 2.0.
Is there an alternative to this, or am I forced to use OAuth 2.0 in order to make API requests to a third-party application?
My desired workflow:
Customer enables the skill
Skill card requests a username/password combo
After setup, the skill can be used fully
Not sure if it's helpful, but I'm using Lambda to run my skill's source code.
That is a terrible practice.
First of all, what if your user's password includes case-sensitive letters, numbers, and possibly other characters?
You can use literal slots, but they are not case sensitive and probably won't return a number-word combination either. For example, if your user's password is Word123, literal slots may return "word one two three".
https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/alexa-skills-kit-interaction-model-reference#literal-slot-type-reference
I am not sure if you could force the user to spell out the password's characters and then try to reconstruct the password from that... Again, this sounds like a terrible practice.
So, as mentioned: "Users link their accounts using the Amazon Alexa app. Note that users must use the app. There is no support for establishing the link solely by voice."
I guess you have to do the linking the way Amazon requires:
https://developer.amazon.com/blogs/post/Tx3CX1ETRZZ2NPC/alexa-account-linking-5-steps-to-seamlessly-link-your-alexa-skill-with-login-with-amazon
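Once the linking is set up, the skill-side code is small: Alexa includes the linked account's access token with each request, and the Lambda handler just passes it along. A minimal Python sketch, where the third-party API URL is a placeholder:

```python
# Minimal sketch of a Python Lambda handler that uses the account-linked
# access token Alexa sends with each request. THIRD_PARTY_API is a placeholder.
import json
import urllib.request

THIRD_PARTY_API = "https://api.example.com/v1/me"

def lambda_handler(event, context):
    # With account linking configured, the token shows up here.
    access_token = event.get("session", {}).get("user", {}).get("accessToken")
    if not access_token:
        # Prompt the user to link their account via the Alexa app.
        return {
            "version": "1.0",
            "response": {
                "card": {"type": "LinkAccount"},
                "outputSpeech": {"type": "PlainText",
                                 "text": "Please link your account in the Alexa app."},
                "shouldEndSession": True,
            },
        }

    req = urllib.request.Request(THIRD_PARTY_API,
                                 headers={"Authorization": f"Bearer {access_token}"})
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)

    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText",
                             "text": f"Hello {data.get('name', 'there')}."},
            "shouldEndSession": True,
        },
    }
```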

SAML 2.0 configuration

I'm totally new to SAML. I want to implement SSO for my ASP.NET website. I got the SAML assertion from my client. I would like to know what other requirements I need to get from my client and what setup I need to implement at my end.
Can anybody help me out with this?
Thanks in advance.
The first thing that I would do is avoid writing the SAML code yourself. There's plenty out there. @Woloski (above) has some. My company has some (I work for the company that makes PingFederate). There's some open source stuff, too. I've seen good connections from Kentor.AuthServices. If this is your first foray into SAML, then my bet is that ADFS is way overboard. I'll be honest, what we see most commonly at Ping is groups that decide to go "all in" with SSO. The first one or two connections are easy. Then it becomes a management nightmare rapidly thereafter. The reason I say to avoid writing your own is because there are a LOT of nuances to SAML, with massive pitfalls and headaches you just don't need.
As the service provider (SP), you need to tell your client (Identity Provider, or IdP) what "attributes" you need from them to properly connect their users to their account in your application (maybe a username?). In addition, you can ask for additional attributes to ensure their profile is up to date - phone number, email, etc. It's up to the two of you to determine what you need (and what they'll give you). Obviously, they shouldn't send social security number, if you have no need for it.
You also need to decide if you will do SP-initiated SSO (will the users get links to documents deep inside your app?), or if just IdP-initiated SSO (will users always just come in through the front door?) will suffice. What about Single Logout? Do you (or they) want to do that? [Personally, I suggest NO, but that's a different topic.]
What about signing the assertion? Your cert or theirs? If you're doing SP-init, do you need to use their cert or yours for signing the AuthnRequest? Do you need encryption of the assertion, or maybe just a few of the attributes?
Generally, you do all of this with a "metadata exchange". You give them your metadata that says "this is what we need". They import that metadata to build a new connection, fulfilling the attributes your app needs with calls to their LDAP or other user repository, as well as doing authentication (if required). They finish building their connection, and export THEIR metadata, which you import to build your connection (thereby making sure you all agree on certificates). You hook it to your app, and away you go.
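To make that concrete, here is roughly the information the two sides end up exchanging, sketched as a settings dict for Python's python3-saml toolkit purely for illustration (the .NET libraries mentioned above consume the same details); every URL and certificate below is a placeholder.

```python
# Illustrative only: the SP/IdP details you exchange, expressed as a
# python3-saml settings dict. All URLs and certificates are placeholders;
# the .NET toolkits mentioned above consume the same information.
from onelogin.saml2.settings import OneLogin_Saml2_Settings

settings = {
    "strict": True,
    "sp": {  # you: the service provider
        "entityId": "https://myapp.example.com/saml/metadata",
        "assertionConsumerService": {
            "url": "https://myapp.example.com/saml/acs",
            "binding": "urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST",
        },
        "x509cert": "",      # your signing cert, if you sign AuthnRequests
        "privateKey": "",
    },
    "idp": {  # your client: the identity provider
        "entityId": "https://idp.client.example.com/metadata",
        "singleSignOnService": {
            "url": "https://idp.client.example.com/sso",
            "binding": "urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect",
        },
        "x509cert": "MIIC...client signing certificate...",
    },
}

# Generate the SP metadata XML you hand to the client so they can
# build their side of the connection.
saml_settings = OneLogin_Saml2_Settings(settings)
metadata_xml = saml_settings.get_sp_metadata()
errors = saml_settings.validate_metadata(metadata_xml)
if errors:
    raise ValueError(f"Invalid SP metadata: {errors}")
print(metadata_xml)
```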
I make this sound easy. It is, and it isn't. Rolling your own can mean issues. Lots of them. With some being so minute that it takes pros hours (and days) to see it. When it works, it works, and well.
HTH -- Andy
You can use something like ADFS to accept SAML assertions. ADFS gets installed on Windows Server 2008 or 2012.
You would need to ask your customer for:
the signing certificate public key, and
the sign-in URL.
Then you would create a "Claims Provider Trust" in ADFS and enter those details, then a "Relying Party Trust" that represents your application. Finally you would have to configure your application with ADFS using WIF. This blog post has more details:
http://thedotnethub.blogspot.com.ar/2012/12/adfs-20-by-example-part1-adfs-as-ip-sts.html
Also you can use Auth0 to accomplish the same without setting up any software on your side (disclaimer: I work there).

How should I build a good (web) API

I'm going to build an API for a web app and I'm interested in what people can suggest as good practices.
I'm already planning to make it versioned (version 1 can only control certain aspects of the system; version 2 could control more, but this may need a change in the way authentication is performed that would be incompatible with version 1), and the authentication will be distinct from the standard username/password people use to log in (so if someone does use a malicious tool, it won't open them up to full impersonation, just whatever the API allows).
Does anyone have further ideas, or examples of sites with particularly good APIs you have used?
Read the RESTful Web Services book, which gives you a good overview of how to use REST in practice and gets you up to speed quickly enough to get started now, with some confidence. It is more useful than just looking at an existing API, because it also discusses design choices and trade-offs.
1) Bake the version number directly into the URL rather than passing it as a parameter, since that gives you complete freedom to change the organization of your API namespace with each version bump.
2) Keep your URL rewriting rules (if any) as simple/lean as possible (but no simpler), while making your URLs as beautiful as possible (but no more).
3) Always look for the best HTTP status code you can find for each response (and don't forget about 202 and 207, for example).
4) Implement fascist parameter validation logic, and informative error messages.
5) Use HTTP request headers where appropriate instead of parameters (like Accept, for example, to allow clients to specify the desired data format of the response).
6) Organize your "nouns" in such a way that the URLs used by different client audiences are separated near the "root" of your URL tree (this makes it easier to enforce different authentication mechanisms for those different audiences if needed, or even map different portions of your URL tree to different servers).
7) If you're serving regular web pages off the same domain as your APIs and use the same authentication credentials, require an X-Requested-With header in your API requests so as to avoid XSRF vulnerabilities. (A minimal sketch covering several of these points follows this list.)
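Here is a minimal sketch of what points 1, 3, 4, 5 and 7 can look like in practice, using Flask purely for illustration; the /v1/ namespace, the in-memory store and the field names are all hypothetical:

```python
# Minimal sketch of points 1, 3, 4, 5 and 7 above, using Flask for illustration.
# The /v1/ namespace, the in-memory store, and the field names are hypothetical.
from flask import Flask, jsonify, request, abort

app = Flask(__name__)
WIDGETS = {}          # stand-in for a real data store
NEXT_ID = 1

@app.before_request
def require_xhr_header():
    # Point 7: reject API calls without X-Requested-With to blunt XSRF
    # when cookies are shared with the regular web pages.
    if request.path.startswith("/v1/") and "X-Requested-With" not in request.headers:
        abort(403, description="Missing X-Requested-With header")

@app.route("/v1/widgets", methods=["POST"])
def create_widget():
    global NEXT_ID
    payload = request.get_json(silent=True) or {}
    # Point 4: strict validation with an informative error message.
    if not isinstance(payload.get("name"), str) or not payload["name"].strip():
        abort(400, description="'name' is required and must be a non-empty string")
    widget = {"id": NEXT_ID, "name": payload["name"].strip()}
    WIDGETS[NEXT_ID] = widget
    NEXT_ID += 1
    # Point 3: 201 Created plus a Location header for the new resource.
    return jsonify(widget), 201, {"Location": f"/v1/widgets/{widget['id']}"}

@app.route("/v1/widgets/<int:widget_id>", methods=["GET"])
def get_widget(widget_id):
    widget = WIDGETS.get(widget_id)
    if widget is None:
        abort(404, description="No such widget")
    # Point 5: honour the Accept header instead of a ?format= parameter.
    if request.accept_mimetypes.best_match(["application/json", "text/plain"]) == "text/plain":
        return f"widget {widget['id']}: {widget['name']}", 200, {"Content-Type": "text/plain"}
    return jsonify(widget)
```

Version 2 would then live under /v2/ with its own routes, so the namespace can be reorganized freely without breaking v1 clients (point 1).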
I would take a look at proven APIs:
YouTube API
Twitter API
There's a lot of argument about whether these APIs are "good" but I think their success is demonstrated, and they're all easy to use.
Use REST.
RESTful web services architecture is easy to implement and uses the strengths and semantics of HTTP for what they were intended. It's resource-oriented, just like the web itself.
Amazon Web Services, Google and many others offer REST APIs to interact with their products.
Use REST.
Read up on standards for APIs, or copy the ideas from one of the popular ones.
Be careful when authenticating users.
Start very very simple.
Build a site that uses your API (even if it's not useful) to check things work. Perhaps you could build a mobile version of the site or something that forces you to use the API in a lot of depth.
