Changing credentials on client-side for Basic Authentication on Flex

I want to let the user automatically re-login in my Flex app, which uses Basic Authentication.
By the way, I have noted this StackOverflow question, which is relevant but does not address the question of logging out client-side.
For example, after user A logs in, user B comes to the browser, goes to the login screen (perhaps in a new tab) and logs in.
This should mean that I send user B's credentials in the HTTP headers, and that since these are different from user A's, the server notes the fact and creates a new and separate session.
However, Flex's HTTP proxy catches the header and actually ignores these new credentials.
Flex does offer a way to tell the server to log out, and the Flex login code could invoke this every time before sending credentials, but that seems like an ugly workaround; I want to be able to do this client-side. I could also use a non-standard header for Basic Authentication (since I control the server-side authentication as well), but that also seems like an ugly workaround.
Is there some way to simply end the session on the client side from Flex code? This is possible from JavaScript, for example.
And is there a way to work directly with cookies on the client side, as I can in JavaScript?
I understand that some of the limitations may be caused by security considerations, but all my communication is to the "home" server, so it should be possible to avoid the restrictions.

You're sort of asking a couple of different questions here.
You can't actually end a basic-auth "session" manually per se (at least not to the best of my knowledge); at best, you can authenticate against a kind of variable basic-auth realm, which may or may not work for you, but otherwise, you're sort of stuck with the first-authenticated session for the duration of the browser instance. Generally not the best way to go, unless you're pretty sure the user owns the machine, or can be depended on to close the browser after each session.
That leaves at least two other options, then. The first is to send in your credentials with a URLRequest object (the post you cited, which I wrote, shows how to do that), and to have your HTTP response hand back something indicating the credentials were accepted -- e.g., a GUID, generated and stored in some session table (in the database sense) on the server. Then on successive HTTP requests, you'd send along that GUID in an HTTP header, or as a value in each GET or POST request (similar to the way Facebook handles their API clients, for instance); the server checks the timeliness of that value, and if all's well, carries on. To "log out," then, you'd simply send in a request to invalidate that GUID, perform the necessary cleanup on the server and inside your Flex app, and all should be fine: the next user can sit down, log in, authenticate, and the process continues.
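To make that concrete, here's a minimal sketch of the flow, written in browser JavaScript for brevity (a Flex URLRequest version has the same shape); the endpoint paths and the X-Session-Token header name are just placeholders:

// Log in once with the user's credentials; the server stores a GUID in
// its session table and hands it back.
async function logIn(username, password) {
    const response = await fetch('/service/login', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ username, password })
    });
    const data = await response.json();
    return data.sessionToken; // e.g. a GUID
}

// Every later request carries the GUID in a custom header; the server
// checks its timeliness and carries on if all's well.
function callService(token, url) {
    return fetch(url, { headers: { 'X-Session-Token': token } });
}

// "Logging out" is just asking the server to invalidate the GUID.
function logOut(token) {
    return fetch('/service/logout', {
        method: 'POST',
        headers: { 'X-Session-Token': token }
    });
}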
Another way would be to work with cookies directly. The cookie mechanics are actually handled mostly for you in Flex, though, since everything gets passed back and forth by the browser on your behalf. For example, if you send in a URLRequest with a username and password, and the server responds with a cookie of any kind, each request you make thereafter will package and send that same cookie. So in most cases, all you need to do is parse the initial response from the server (to set the state of your Flex app), assume the continued presence of the cookie, and when it's time to log out, send a URLRequest to log out, kill the cookie on the server, and on status 200 do your Flex-app cleanup, and so on. Accessing the cookie values directly isn't the easiest thing in the world, though; you can use ExternalInterface as a proxy to JavaScript (there are examples of this online and here on SO, I'm sure) and get at them that way, but there's a good chance you won't even have to do that.
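If you do end up needing the raw cookie values, the ExternalInterface route is only a few lines of JavaScript on the page (the function name here is just an example):

// Parse document.cookie ("k1=v1; k2=v2") and return one value by name.
function getCookie(name) {
    for (const pair of document.cookie.split('; ')) {
        const [key, value] = pair.split('=');
        if (key === name) return decodeURIComponent(value);
    }
    return null;
}
// From ActionScript: ExternalInterface.call("getCookie", "JSESSIONID");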
Hopefully that helps. Good luck!

Note also this post, which details some of the incredible distortion that Flex adds to HTTP requests.

Related

CSRF protection while making use of server side caching

Situation
There is a site at examp.le that costs a lot of CPU/RAM to generate, and a leaner examp.le/backend that performs various tasks to read, write, and serve user-specific data for authenticated requests. A lot of resources could be saved by utilizing a server-side cache on the examp.le site, but not on examp.le/backend, and just asynchronously grabbing all user-specific data from the backend once the page arrives at the client. (Total loading time may even be lower, despite the need for an additional request.)
Threat model
CSRF attacks. Assuming (maybe foolishly) that examp.le is reliably safeguarded against XSS code injection, we still need to consider scripts on a malicious site exploit.me that cause the victim's browser to run a request against examp.le/backend, with their authorization cookies included automagically, and cause the server to perform some kind of data mutation on behalf of the user.
Solution / problem with that
As far as I understand, the commonly used countermeasure is to include another token in the generated examp.le page. The server can verify this token is linked to the current user's session and will only accept requests that can provide it. But I assume caching won't work very well if we are baking a random token into every response to examp.le?
So then...
I see two possible solutions: one would be some sort of "hybrid caching" where each response to examp.le is still programmatically generated, but that program just merges small dynamic parts into some cached output. This wouldn't work with caching systems that operate on the higher layers of the server stack, let alone a CDN, but it still might have its merits. I don't know if there are standard ways or libraries to do this, or more specifically if there are solutions for WordPress (which happens to be the culprit in my case).
The other (preferred) solution would be to get an initial anti-CSRF token directly from examp.le/backend. But I'm not quite clear about the implications of that. If the script on exploit.me could somehow obtain that token, the whole mechanism would make no sense to begin with. The way I understand it, if we leave exploitable browser bugs and security holes out of the picture and consider only requests coming from a non-obscure browser visiting exploit.me, then the Origin header (HTTP_ORIGIN in PHP) can be absolutely trusted to be tamper-proof. Is that correct? But then that begs the question: wouldn't we get mostly the same amount of security in this scenario by only checking the authentication cookie and the Origin header, without throwing tokens back and forth?
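To make the idea concrete, here is roughly what I picture the initial token fetch looking like (the endpoint and header names are made up):

// The backend reads the session cookie and answers with a token bound
// to that session; a script on exploit.me could fire this request, but
// the same-origin policy keeps it from reading the response.
async function fetchCsrfToken() {
    const response = await fetch('https://examp.le/backend/csrf-token', {
        credentials: 'include' // send the session cookie along
    });
    const { token } = await response.json();
    return token;
}
// ...then send the token with every mutating request, e.g. in an
// 'X-CSRF-Token' header.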
I'm sorry if this question feels a bit all over the place, but I'm partly still in the process of getting the whole picture clear ;-)
First of all: Cross-Site Scripting (XSS) and Cross-Site Request Forgery (CSRF) are two different categories of attacks. I assume you meant to tackle the CSRF problem only.
Second of all: it's crucial to understand what CSRF is about. Consider the following:
A POST request to examp.le/backend changes some kind of crucial data.
The request to examp.le/backend is protected by authentication mechanisms, which generate valid session cookies.
I want to attack you. I do it by sending you a link to a page I have forged at cats.com/best_cats_evr.
If you are logged in to examp.le in one browser tab and you open cats.com/best_cats_evr in another, the code will be executed.
The code on the site cats.com/best_cats_evr will send a POST request to examp.le/backend. The cookies will be attached, as there is no reason why they should not be. You will perform a change on examp.le/backend without knowing it.
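Concretely, the code on the forged page can be as small as this (the target path and form field are invented for the example):

// Build a hidden form aimed at examp.le/backend and submit it as soon
// as the page loads; the victim's examp.le cookies ride along for free.
const form = document.createElement('form');
form.method = 'POST';
form.action = 'https://examp.le/backend/transfer';
const field = document.createElement('input');
field.type = 'hidden';
field.name = 'amount';
field.value = '1000';
form.appendChild(field);
document.body.appendChild(form);
form.submit();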
So, having said that, how can we prevent such attacks?
The CSRF case is very well known to the community, and it makes little sense for me to write everything down myself. Please check the OWASP CSRF Prevention Cheat Sheet, as it is one of the best pages you can find on this topic.
And yes, checking the origin would help in this scenario. But checking the origin will not help if I find an XSS vulnerability in examp.le/somewhere_else and use it against you.
What would also help would be not using POST requests (as they can be sent cross-site without triggering any CORS check), but using e.g. PUT, where CORS preflight should help... but this quickly turns out to be too much rocket science for the dev team to handle, and sticking to good old anti-CSRF tokens (supported by default in every framework) should help.
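For illustration only (this is a Node/Express sketch, not anything WordPress-specific, and the session-to-token lookup is a made-up stand-in), the two checks discussed above might look like:

const express = require('express');
const app = express();

// Illustration only: map from a crude session key to its CSRF token.
const sessionTokens = new Map();

function tokenForSession(req) {
    return sessionTokens.get(req.get('Cookie'));
}

app.post('/backend/update', (req, res) => {
    // Check 1: the Origin header the browser attached.
    if (req.get('Origin') !== 'https://examp.le') {
        return res.status(403).send('Bad origin');
    }
    // Check 2: the anti-CSRF token tied to this session.
    if (req.get('X-CSRF-Token') !== tokenForSession(req)) {
        return res.status(403).send('Bad token');
    }
    // ...perform the mutation...
    res.sendStatus(204);
});

app.listen(3000);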

HTTP session tracking through base URL "resource"?

A little background: We're currently trying to specify an HTTP API between a couple of vendors so that different products can easily interoperate. We're not writing any "server" software yet, nor any client; we're just laying out the basics of the API so that every party can start prototyping and then we can refine it. So the typical use case for this API would be being used by (thin) HTTP layers inside a given application, not from within the browser.
Communication doesn't really make sense without session state here, so we were looking into how sessions are typically tracked.
Thing is, we want to keep the implementation of the API as easy as possible, with as little burden as possible on whatever HTTP library is used.
Someone proposed to manage sessions basically through "URL rewriting", but a little more explicitly:
POST .../service/session { ... }
=> reply with 201 Created and session URL location .../service/session/{session-uuid}
subsequent requests use .../service/session/{session-uuid}/whatever
to end the session the client does DELETE .../service/session/{session-uuid}
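A client of that flow would then look something like this (plain JavaScript with fetch, just to illustrate the proposal):

// Open a session; the server replies 201 Created with the session URL
// in the Location header (readable here since this is same-origin).
async function openSession() {
    const response = await fetch('/service/session', { method: 'POST' });
    return response.headers.get('Location');
}

async function example() {
    const sessionUrl = await openSession();
    await fetch(sessionUrl + '/whatever');         // subsequent requests
    await fetch(sessionUrl, { method: 'DELETE' }); // end the session
}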
Looking around the web, initial searches indicate this is somewhat untypical.
Is this a valid approach? Specific drawbacks or pros?
The pros we identified: (please debunk where appropriate)
Simple to implement; no cookie or header tracking etc. required
Orthogonal to the client authentication mechanism - if authentication is appropriate, we could easily pass the URLs to a second app that could continue to use the session (a valid use case in our case)
Should be safe, as we're going HTTPS exclusively for this
Since PHPSESSID was mentioned, I stumbled upon this other question, where it is mentioned that the "session in URL" approach may be more vulnerable to session fixation attacks.
However, see the 2nd bullet above: we plan to implement/specify authentication/authorization orthogonally to this session concept, so passing around the session URL might even be a feature. We think we're quite fine with having the session appear in the URL.

Is it okay to set a cookie with an HTTP GET request?

This might be a bit of an ethical question, but I'm having quite a discussion in the office about the following issue:
Is it okay to set a cookie with an HTTP GET request? Because whenever an HTTP request changes something in the application, you should use a POST request. HTTP GET should only be used to retrieve data identified by the Request-URI.
In this case, the application doesn't change, but because the cookie is altered, the user might get a different experience when the page loads again, meaning that the HTTP GET request changed the application's behaviour (nothing changed server-side, though).
GET request reference
The discussion started because we want to use a normal anchor element to set a cookie.
The problem with GETs, especially if they are on an a tag, is when they get spidered by the likes of Google.
In your case, you'd needlessly be creating cookies that will, more than likely, never get used.
I'd also argue that the GET rule isn't really about changing the application; it's more about changing data. I appreciate the subtle distinction with the cookie (i.e., you are not changing data on YOUR system), but generally it's a good rule to have, and irrespective of where the data is stored, GET shouldn't really be used to change it.
The user can always have a different experience when he issues another GET request - you do not expect an (imagined) time service, "GET /time/current", to always return the same data.
Also, it is not said that you are not allowed to change server-side state in response to GET requests - it's perfectly 'legal' to increase a page hit counter, for example, even if you store it in the database.
Consider section 9.1.1, Safe Methods:
"Naturally, it is not possible to ensure that the server does not generate side-effects as a result of performing a GET request; in fact, some dynamic resources consider that a feature. The important distinction here is that the user did not request the side-effects, so therefore cannot be held accountable for them."
Also, I would say it is perfectly acceptable to change or set a cookie in response to a GET request, because you are just returning some data.
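On the server, setting a cookie from a GET handler is just a Set-Cookie response header; a bare-bones Node sketch (the endpoint and cookie are invented for the example):

const http = require('http');

http.createServer((req, res) => {
    if (req.method === 'GET' && req.url === '/choose-theme') {
        // The "change" lives entirely in the client's cookie jar.
        res.setHeader('Set-Cookie', 'theme=dark; Path=/');
        res.end('Theme saved');
    } else {
        res.statusCode = 404;
        res.end();
    }
}).listen(8080);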

Is it possible to send HTTP Request without cookies?

I have to send some private data once from server to browser. I want to store it in a cookie. This data will be used later in JavaScript code. But I never(!) want this private data sent back to the server when the browser makes an HTTP request (because of security).
I know that I can set the cookie's "path" value (to, e.g., some abstract path), but then I won't be able to read the cookie (I could read it from that abstract path, but then the browser would send the cookie to the server at least once - and as I said, this data can't be sent to the server).
So, my question is: is it somehow possible not to send a cookie with an HTTP request?
If you're sending this private data from server to browser then it is being exposed anyway. I don't think it matters much that it will be included in subsequent requests.
In general you should never place private data in cookies, at least not unless it's encrypted. So either a) do all of this over HTTPS or b) encrypt the data. But I'm guessing that b) will be a problem, as you'll have to decrypt on the client side.
To be honest it sounds like you need to rethink your strategy here.
I don't think you'll be able to force the browser not to resend the cookie if it's valid for that request.
Perhaps a better way would be to delete the cookie within the JS once you've read your data from it:
http://techpatterns.com/downloads/javascript_cookies.php
If you need to have it back in the JS again on the next response, just have the server re-send it, and delete it again on the client side.
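The read-then-delete idea fits in a few lines (a sketch; it assumes simple cookie names):

// Read a cookie's value, then expire it immediately so it never
// accompanies another request.
function readAndErase(name) {
    const match = document.cookie.match(
        new RegExp('(?:^|; )' + name + '=([^;]*)'));
    if (!match) return null;
    document.cookie = name + '=; Max-Age=0; path=/';
    return decodeURIComponent(match[1]);
}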
I should say that sending data which you would deem to be 'private' in this way does not seem entirely appropriate on the face of it, as this information could easily be viewed using a proxy of some sort sitting between the browser and the server.
As Richard H mentioned, data in cookies is visible to the user if they know where to look. So this is not a good place to store secrets.
That said, I had a different application which needed to store lots of data client-side and ran into this same problem. (In my application, I needed to make it able to run offline and keep the user's actions if the PC crashed or the website was down.) The solution is pretty simple:
1) Store the cookie data in a JavaScript variable. (Or maintain it in a variable already.)
2) Remove the cookies. Here's a function that can erase a cookie:
function cookieErase(name) {
    // Expire the cookie immediately by giving it a negative Max-Age
    document.cookie = name + '=; Max-Age=-99999999; path=/';
}
If you have more than one cookie (as was my case), you have to run the above function for every cookie you have stored. There is code to iterate each cookie (a sketch follows step 4), but in practice you probably know the names of the large cookies already, and you might not want to delete other cookies your website relies on.
3) Send the request as you would normally.
4) Restore the cookie data from the saved variables.
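Putting steps 1-4 together, a minimal sketch (note that each cookie's original attributes, such as expiry, are lost in this naive version):

// Save every cookie, erase them all, send the request bare, then
// restore the values from the saved variable.
async function sendWithoutCookies(url, options) {
    const saved = document.cookie;                          // step 1
    for (const pair of saved.split('; ').filter(Boolean)) { // step 2
        cookieErase(pair.split('=')[0]);
    }
    const response = await fetch(url, options);             // step 3
    for (const pair of saved.split('; ').filter(Boolean)) { // step 4
        document.cookie = pair + '; path=/';
    }
    return response;
}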
Here are some optimizations you can use:
1) Only trigger this code on a 400 Bad Request status (which is what you get back if the cookie data is too large). Sometimes your data isn't too big, so deleting is unnecessary.
2) Restore the cookie data after a timeout if it isn't needed immediately. In this way, you can make multiple requests and only restore the data if there is idle time. This means your users can have a fast experience when actively using your website.
3) The moment you can, try to get any data moved to the server-side so the burden on the client/communication is less. In my case, the moment that the connection is back up, all actions are synchronized as soon as possible.
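Optimizations 1 and 2 bolted onto the sketch above might look like this (the retry-on-400 and the idle delay are the parts described; the 2-second delay is an arbitrary placeholder):

async function sendLean(url, options) {
    let response = await fetch(url, options);
    if (response.status === 400) {    // 1) only react when cookies are too big
        const saved = document.cookie;
        for (const pair of saved.split('; ').filter(Boolean)) {
            cookieErase(pair.split('=')[0]);
        }
        response = await fetch(url, options); // retry without cookies
        setTimeout(() => {            // 2) restore once things go idle
            for (const pair of saved.split('; ').filter(Boolean)) {
                document.cookie = pair + '; path=/';
            }
        }, 2000);
    }
    return response;
}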

Stop Direct Page Calls to Ajax Pages

Is there a "clever" way of stopping direct page calls in ASP.NET? (Page functionality, not the page itself)
By clever, I mean not having to add in hashes between pages to stop AJAX pages being called directly. In a nutshell, this is about stopping users from accessing the Ajax pages without the request coming from one of your website's pages in a legitimate way. I understand that nothing is impossible to break; I am simply interested in seeing what other interesting methods there are.
If not, is there any way that one could do it without using sessions/cookies?
Have a look at this question: Differentiating Between an AJAX Call / Browser Request
The best answer from the above question is to check for a requested-by or custom header.
Ultimately, your web server receives requests (including headers) containing whatever the client sends you - all data that can be spoofed. If a user is determined, then any request can look like an AJAX request.
I can't think of an elegant method to prevent this (there are inelegant and probably imperfect methods whereby you provide a hash of some sort of request counter between ajax and non-ajax requests).
Can I ask why your application is so sensitive to "ajax" pages being called directly? Could you design around this?
You can check the request headers to see if the call was initiated by AJAX. Usually, you should find that x-requested-with has the value XMLHttpRequest. Or, in the case of ASP.NET AJAX, check whether ScriptManager.IsInAsyncPostBack == true. However, I'm not sure about preventing the request in the first place.
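The header check itself is one line wherever you handle the request; for instance, in Node (purely an illustration, not ASP.NET):

// Node lowercases incoming header names.
function isAjaxRequest(req) {
    return req.headers['x-requested-with'] === 'XMLHttpRequest';
}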
Have you looked into header authentication? If you only want your app to be able to make ajax calls to certain pages, you can require authentication for those pages...not sure if that helps you or not?
Basic Access Authentication
or the more secure
Digest Access Authentication
Another option would be to append some sort of identifier to your URL query string in your application before requesting the page, and have some sort of authentication method on the server side.
I don't think there is a way to do it without using a session. Even if you use an HTTP header, it is trivial for someone to create a request with the exact same headers.
Using session with ASP.NET Ajax requests is easy. You may run into some problems, like session expiration, but you should be able to find a solution.
With sessions, you will be able to guarantee that only logged-in users can access the Ajax services. When servicing an Ajax request, simply test that there is a valid session associated with it. Of course, a logged-in user will be able to access the service directly; there is nothing you can do to avoid this.
If you are concerned that a logged-in user may try to contact the service directly in order to steal data, you can add a rate limit to the service. For example, do not allow users to access the service more often than once a minute (or whatever rate is needed for the application to work properly).
See what Google and Amazon are doing for their web services. They allow you to contact them directly (even providing APIs to do this), but they impose limits on how many requests you can make.
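A crude in-memory version of that rate limit (keyed by user id; real storage and the exact interval are application-specific assumptions):

const lastAccess = new Map();

// Allow at most one request per user per interval.
function allowRequest(userId, minIntervalMs = 60 * 1000) {
    const now = Date.now();
    if (now - (lastAccess.get(userId) || 0) < minIntervalMs) {
        return false; // too soon; reject the request
    }
    lastAccess.set(userId, now);
    return true;
}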
I do this in PHP by declaring a variable in a file that's included everywhere, and then checking whether that variable is set in the AJAX call file.
This way, the file can never be called directly, because when it is, that variable will never have been defined.
This is the "non-trivial" way; hence it's not too elegant.
The only real idea I can think of is to keep track of every link (as in, everything does a postback and then a Response.Redirect). That way you could keep a static List<> or something of IP addresses (and possibly browser ID and such) that says which pages a given visitor is allowed to access at the moment, along with a timeout to keep them from going straight to a page 3 days from now.
I recommend rethinking your design to be sure this is really needed, though. And also note that IPs and such can be spoofed.
Also, if you follow this route, be sure to read up on when static variables get disposed. You wouldn't want one of those annoying "your session has expired" messages when they have been using the site for 10 minutes.
