Is it possible to send HTTP Request without cookies? - http

I have to send some private data once from server to browser. I want to store it in a cookie. This data will be used later in JavaScript code. But I never(!) want this private data to be sent to the server when the browser makes an HTTP request (for security reasons).
I know that I can set the "path" value of the cookie (e.g. to some abstract path), but then I won't be able to read the cookie (I would only be able to read it from that abstract path, and in that case the browser would still send the cookie to the server at least once - but as I said, this data must never be sent to the server).
So, my question is: is it somehow possible not to send a cookie with an HTTP request?

If you're sending this private data from server to browser then it is being exposed anyway. I don't think it matters much that it will be included in subsequent requests.
In general you should never place private data in cookies, at least not unless encrypted. So either a) do all of this over https or b) encrypt the data. But I'm guessing that b) will be a problem as you'll have to decrypt on the client side.
To be honest it sounds like you need to rethink your strategy here.

I don't think you'll be able to force the browser not to send the cookie back if it's valid for that request.
Perhaps a better way would be to delete the cookie within the JS once you've read your data from it:
http://techpatterns.com/downloads/javascript_cookies.php
If you need to have it back in the JS again on the next response, just have the server re-send it, and delete it again on the client side.
I should say that sending data which you would deem to be 'private' in this way does not seem entirely appropriate on the face of it - as this information could easily be viewed using a proxy of some kind sitting between the browser and the server.

As Richard H mentioned, data in cookies is visible to the user if they know where to look. So this is not a good place to store secrets.
That said, I had a different application which needed to store lots of data client-side and ran into this same problem. (In my application, I needed to make the application able to run offline and keep the user actions if the PC crashes or website is down.) The solution is pretty simple:
1) Store the cookie data in a JavaScript variable. (Or maintain it in a variable already.)
2) Remove the cookies. Here's a function that can erase a cookie:
// Expire the cookie immediately by giving it a negative Max-Age
function cookieErase(name) {
    document.cookie = name + '=; Max-Age=-99999999; path=/';
}
If you have more than one cookie (as was my case), you have to run the above function for every cookie you have stored. There is code to iterate each cookie, but in practice you probably know the names of the large cookies already and you might not want to delete other cookies your website is relying on.
3) Send the request as you would normally.
4) Restore the cookie data from the saved variables.
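For illustration, here is a minimal sketch of that save/erase/send/restore flow. The cookie names ('bigData1', 'bigData2') and the endpoint URL are made up, and fetch is just one way to fire the request:
var savedCookies = {};

// Steps 1 and 2: remember the cookie values in a variable, then erase them
function saveAndEraseCookies(names) {
    names.forEach(function (name) {
        var match = document.cookie.match(new RegExp('(?:^|; )' + name + '=([^;]*)'));
        if (match) {
            savedCookies[name] = match[1];
            cookieErase(name); // the erase function from step 2
        }
    });
}

// Step 4: write the saved values back after the request has gone out
function restoreCookies() {
    Object.keys(savedCookies).forEach(function (name) {
        document.cookie = name + '=' + savedCookies[name] + '; path=/';
    });
}

// Usage: save/erase, send the request (step 3), then restore
saveAndEraseCookies(['bigData1', 'bigData2']);
fetch('/api/endpoint').then(function () {
    setTimeout(restoreCookies, 0); // or defer until idle, see optimization 2 below
});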
Here are some optimizations you can use:
1) Only trigger this code on a status 400 bad request (which is what you get back if the cookie data is too large). Sometimes, your data isn't too big, so deleting is unnecessary.
2) Restore the cookie data after a timeout if it isn't needed immediately. In this way, you can make multiple requests and only restore the data if there is idle time. This means your users can have a fast experience when actively using your website.
3) The moment you can, try to get any data moved to the server-side so the burden on the client/communication is less. In my case, the moment that the connection is back up, all actions are synchronized as soon as possible.

Related

Database rollback on API response failure

A customer of ours is very insistent that they expect this "from any API" (meaning they don't want to pay for the changes). I seem to have trouble finding clear information on this though.
Say we have an API that creates an appointment for a calendar. Server-side everything was successful, data is committed to the database. API tries to send the HTTP 201 (Created) response, but something goes wrong there. Client ignores the response, or connection dropped, ...
They want our API to undo the database changes in that particular situation.
The question is not how to do this, but rather if this is something most APIs do? Is this standard behavior? Or something similar like refusing duplicate create requests?
The difficult part, of course, is actually knowing whether the API failed to deliver the response, and as far as the crux of the question is concerned, this is not a behavior that is usually implemented. If the user willingly inputs the data, you can go ahead and store it. If the response doesn't come back properly due to a timeout (you are not responsible for the user "ignoring" the response), then the client-side code can refresh on failure and load fresh data. And the user can delete the inputted data themselves (given you provide an endpoint for that).
Depending on the database, it is possible to make all database changes of an API reversible. For example, with SQL you use transactions with COMMIT, ROLLBACK and SAVEPOINT. There is most likely a similar mechanism available for NoSQL databases.
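As a rough sketch of the commit/rollback mechanics (shown here with node-postgres, not necessarily the asker's stack; the table and column names are made up for illustration):
const { Client } = require('pg'); // node-postgres

async function createAppointment(appointment) {
    const client = new Client(); // connection settings taken from environment variables
    await client.connect();
    try {
        await client.query('BEGIN');
        await client.query(
            'INSERT INTO appointments (title, starts_at) VALUES ($1, $2)',
            [appointment.title, appointment.startsAt]
        );
        await client.query('COMMIT');   // changes become permanent only here
    } catch (err) {
        await client.query('ROLLBACK'); // undoes everything since BEGIN
        throw err;
    } finally {
        await client.end();
    }
}
Note that once COMMIT has run, a later failure to deliver the HTTP response can no longer be undone by ROLLBACK; at that point only a compensating action (e.g. deleting the row) can reverse it.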

Is it okay to set a cookie with a HTTP GET request?

This might be a bit of an ethical question, but I'm having quite a discussion in the office about the following issue:
Is it okay to set a cookie with an HTTP GET request? Whenever an HTTP request changes something in the application, you should use a POST request; HTTP GET should only be used to retrieve the data identified by the Request-URI.
In this case, the application doesn't change, but because the cookie is altered, the user might get a different experience when the page loads again, meaning that the HTTP GET request changed the application behaviour (nothing changed server-side though).
Get request reference
The discussion started because we want to use a normal anchor element to set a cookie.
The problem with GETs, especially if they are on an a tag, is when they get spidered by the likes of Google.
In your case, you'd needlessly be creating cookies that will, more than likely, never get used.
I'd also argue that the GET rule isn't really about changing the application, more about changing data. I appreciate the subtle distinction with the cookie (i.e. you are not changing data on YOUR system), but generally it's a good rule to have, and irrespective of where the data is stored, GET shouldn't really be used to change it.
The user can always have a different experience when issuing another GET request - you don't expect an (imagined) time service like "GET /time/current" to always return the same data.
Also, nothing says you are not allowed to change server-side state in response to GET requests - it's perfectly 'legal' to increment a page hit counter, for example, even if you store it in the database.
Consider section 9.1.1 Safe Methods of the HTTP/1.1 spec (RFC 2616):
Naturally, it is not possible to ensure that the server does not generate side-effects as a result of performing a GET request; in fact, some dynamic resources consider that a feature. The important distinction here is that the user did not request the side-effects, so therefore cannot be held accountable for them.
Also, I would say it is perfectly acceptable to change or set a cookie in response to a GET request, because you are just returning some data.
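To make the mechanics concrete, here is a minimal sketch of a GET handler that sets a cookie, shown with Node's built-in http module rather than the asker's actual stack; the route, cookie name and values are made up:
const http = require('http');

http.createServer(function (req, res) {
    if (req.method === 'GET' && req.url === '/welcome') {
        // the response to a plain GET (e.g. an anchor click) sets a cookie
        res.setHeader('Set-Cookie', 'theme=dark; Path=/; Max-Age=86400');
        res.end('Hello');
    } else {
        res.statusCode = 404;
        res.end();
    }
}).listen(8080);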

Stop Direct Page Calls to Ajax Pages

Is there a "clever" way of stopping direct page calls in ASP.NET? (Page functionality, not the page itself)
By clever, I mean not having to add hashes between pages to stop AJAX pages being called directly. In a nutshell, this is about stopping users from accessing the Ajax pages unless the request comes from one of your website's pages in a legitimate way. I understand that nothing is impossible to break; I am simply interested in seeing what other interesting methods there are.
If not, is there any way that one could do it without using sessions/cookies?
Have a look at this question: Differentiating Between an AJAX Call / Browser Request
The best answer from the above question is to check for a requested-by or custom header.
Ultimately, your web server only receives the requests (including headers) that the client sends you - and all of that data can be spoofed. If a user is determined, then any request can look like an AJAX request.
I can't think of an elegant method to prevent this (there are inelegant and probably non-perfect methods whereby you provide a hash of some sort of request counter between ajax and non-ajax requests).
Can I ask why your application is so sensitive to "ajax" pages being called directly? Could you design around this?
You can check the request headers to see if the call was initiated by AJAX. Usually, you should find that x-requested-with has the value XMLHttpRequest. Or, in the case of ASP.NET AJAX, check whether ScriptManager.IsInAsyncPostBack == true. However, I'm not sure about preventing the request in the first place.
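As an illustration of that header check outside of ASP.NET, a small Node-style sketch (the helper name is made up); keep in mind the header is added by most AJAX libraries but can be spoofed by any client:
// Node lower-cases incoming header names, so look up 'x-requested-with'
function isAjaxRequest(req) {
    var header = req.headers['x-requested-with'] || '';
    return header.toLowerCase() === 'xmlhttprequest';
}

// usage inside a request handler:
// if (!isAjaxRequest(req)) { res.statusCode = 403; return res.end(); }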
Have you looked into header authentication? If you only want your app to be able to make ajax calls to certain pages, you can require authentication for those pages...not sure if that helps you or not?
Basic Access Authentication
or the more secure
Digest Access Authentication
Another option would be to append some sort of identifier to your URL query string in your application before requesting the page, and have some sort of authentication method on the server side.
I don't think there is a way to do it without using a session. Even if you use an Http header, it is trivial for someone to create a request with the exact same headers.
Using session with ASP.NET Ajax requests is easy. You may run into some problems, like session expiration, but you should be able to find a solution.
With sessions you will be able to guarantee that only logged-in users can access the Ajax services. When servicing an Ajax request simply test that there is a valid session associated with it. Of course a logged-in user will be able to access the service directly. There is nothing you can do to avoid this.
If you are concerned that a logged-in user may try to contact the service directly in order to steal data, you can add a rate limit to the service. For example, do not allow a user to access the service more often than once per minute (or whatever rate the application needs to work properly); a rough sketch of such a limit is shown after this answer.
See what Google and Amazon are doing for their web services. They allow you to contact them directly (even providing APIs to do this), but they impose limits on how many requests you can make.
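A minimal sketch of that kind of per-user rate limit (in-memory, single server process, function name made up):
var lastRequestAt = new Map();   // userId -> time of last allowed call
var MIN_INTERVAL_MS = 60 * 1000; // at most one call per minute, as suggested above

function allowRequest(userId) {
    var now = Date.now();
    var last = lastRequestAt.get(userId) || 0;
    if (now - last < MIN_INTERVAL_MS) {
        return false;            // too soon - reject or throttle the request
    }
    lastRequestAt.set(userId, now);
    return true;
}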
I do this in PHP by declaring a variable in a file that's included everywhere, and then check if that variable is set in the ajax call file.
This way, you can't directly call the file ever because that variable will never have been defined.
This is the "non-trivial" way, hence it's not too elegant.
The only real idea I can think of is to keep track of every link (as in, everything does a postback and then a Response.Redirect). That way you could keep a static List<> or something similar of IP addresses (and possibly browser IDs and such) that says which pages are currently allowed to be accessed by that visitor, along with a timeout for them, to keep them from going straight to a page 3 days from now.
I recommend rethinking your design to be sure that this is really needed though. And also note IPs and such can be spoofed.
Also if you follow this route be sure to read up about when static variables get disposed and such. You wouldn't want one of those annoying "your session has expired" messages when they have been using the site for 10 minutes.

Changing credentials on client-side for Basic Authentication on Flex

I want to let the user automatically re-login in my Flex app, which uses Basic Authentication.
By the way, I have noted this StackOverflow question, which is relevant, but does not address the question of logging out client-side.
For example, after user A logs in, user B comes to the browser, goes to the login screen (perhaps in a new tab) and logs in.
This should mean that I send user B's credentials in the HTTP headers, and that since these are different from user A's, the server notes the fact and creates a new and separate session.
However, Flex's HTTP proxy catches the header and actually ignores these new credentials.
Flex does offer a way to tell the server to logout, and the Flex login code could invoke this every time before sending credentials, but that seems like an ugly workaround. I want to be able to do this client-side. I could also use a non-standard header for Basic Authentication (since I control the server-side Authentication as well), but that also seems like an ugly workaround.
Is there some way to simply end the session on client-side from Flex code? This is possible from JavaScript, for example.
And is there a way to directly work with cookies at client-side, as I can in JavaScript?
I understand that some of the limitations may be caused by security considerations, but all my communication is to the "home" server, so it should be possible to avoid the restrictions.
You're sort of asking a couple of different questions here.
You can't actually end a basic-auth "session" manually per se (at least not to the best of my knowledge); at best, you can authenticate against a kind of variable basic-auth realm, which may or may not work for you, but otherwise, you're sort of stuck with the first-authenticated session for the duration of the browser instance. Generally not the best way to go, unless you're pretty sure the user owns the machine, or can be depended on to close the browser after each session.
That leaves at least two other options, then. The first is to send in your credentials with an URLRequest object (the post you cited, which I wrote, shows how to do that), and to have your HTTP response hand back something indicating the credentials were accepted -- e.g., a GUID, maybe, generated and stored in some session table (in the database sense) on the server, perhaps. Then on successive HTTP requests, you might send along that GUID in an HTTP header, or as a value in each GET or POST request (similarly to the way Facebook handles their API clients, for instance), check the timeliness of that value on the server, and if all's well, carry on. To "log out," then, you'd simply send in a request to invalidate that GUID, perform the necessary cleanup on the server and inside your Flex app, and all should be fine: the next user can sit down, log in, authenticate, and the process continues.
Another way would be to work with cookies directly. The cookie mechanisms are actually handled mostly for you in Flex, though, since everything gets passed back and forth by the browser on your behalf. For example, if you send in a URLRequest with a username and password, and the server responds with a cookie of any kind, each request you make thereafter will package and send the same cookie, so in most cases, all you need to do is parse the initial response from the server (to set the state of your Flex app), assume the continued presence of the cookie, and when it's time to log out, send a URLRequest to log out, kill the cookie on the server, on status=200 do your Flex-app cleanup, and so on. Accessing the cookie values directly isn't the easiest thing in the world, though; you can use ExternalInterface as a proxy to JavaScript (examples of this online and here on SO, I'm sure), and get at them that way, but there's a good chance you don't even have to do that.
Hopefully that helps. Good luck!
Note also this post, which details some of the incredible distortion that Flex adds to HTTP Requests.

How do I prevent replay attacks?

This is related to another question I asked. In summary, I have a special case of a URL where, when a form is POSTed to it, I can't rely on cookies for authentication or to maintain the user's session, but I somehow need to know who they are, and I need to know they're logged in!
I think I came up with a solution to my problem, but it needs fleshing out. Here's what I'm thinking. I create a hidden form field called "username", and place within it the user's username, encrypted. Then, when the form POSTs, even though I don't receive any cookies from the browser, I know they're logged in because I can decrypt the hidden form field and get the username.
The major security flaw I can see is replay attacks. How do I prevent someone from getting ahold of that encrypted string, and POSTing as that user? I know I can use SSL to make it harder to steal that string, and maybe I can rotate the encryption key on a regular basis to limit the amount of time that the string is good for, but I'd really like to find a bulletproof solution. Anybody have any ideas? Does the ASP.Net ViewState prevent replay? If so, how do they do it?
Edit: I'm hoping for a solution that doesn't require anything stored in a database. Application state would be okay, except that it won't survive an IIS restart or work at all in a web farm or garden scenario. I'm accepting Chris's answer, for now, because I'm not convinced it's even possible to secure this without a database. But if someone comes up with an answer that does not involve the database, I'll accept it!
If you hash in a time-stamp along with the user name and password, you can close the window for replay attacks to within a couple of seconds. I don't know if this meets your needs, but it is at least a partial solution.
There are several good answers here and putting them all together is where the answer ultimately lies:
1) Block-cipher encrypt (with AES-256+) and hash (with SHA-2+) all state/nonce related information that is sent to a client. Hackers will otherwise just manipulate the data, view it to learn the patterns and circumvent everything else. Remember ... it only takes one open window.
2) Generate a one-time random and unique nonce per request that is sent back with the POST request. This does two things: it ensures that the POST response goes with THAT request, and it allows tracking of one-time use of a given set of GET/POST pairs (preventing replay).
3) Use timestamps to make the nonce pool manageable. Store the timestamp in an encrypted cookie per 1) above. Throw out any requests older than the maximum response time or session for the application (e.g., an hour).
4) Store a "reasonably unique" digital fingerprint of the machine making the request with the encrypted timestamp data. This will prevent another trick wherein the attacker steals the client's cookies to perform session hijacking. This will ensure that the request is coming back not only once, but from the machine (or close enough proximity to make it virtually impossible for the attacker to copy) the form was sent to.
There are ASPNET and Java/J2EE security filter based applications that do all of the above with zero coding. Managing the nonce pool for large systems (like a stock trading company, bank or high volume secure site) is not a trivial undertaking if performance is critical. Would recommend looking at those products versus trying to program this for each web-application.
If you really don't want to store any state, I think the best you can do is limit replay attacks by using timestamps and a short expiration time. For example, server sends:
{Ts, U, HMAC({Ts, U}, Ks)}
Where Ts is the timestamp, U is the username, and Ks is the server's secret key. The user sends this back to the server, and the server validates it by recomputing the HMAC on the supplied values. If it's valid, you know when it was issued, and can choose to ignore it if it's older than, say, 5 minutes.
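As a sketch of that scheme (using Node's crypto module here rather than ASP.NET; the key handling and names are made up for illustration), the server issues the token and later verifies both the HMAC and the age:
const crypto = require('crypto');

const SERVER_KEY = process.env.TOKEN_KEY || 'dev-only-placeholder'; // Ks - keep secret on the server
const MAX_AGE_MS = 5 * 60 * 1000;                                   // reject tokens older than 5 minutes

function sign(ts, username) {
    return crypto.createHmac('sha256', SERVER_KEY)
                 .update(ts + '|' + username)
                 .digest('hex');
}

// embedded in the hidden form field when the page is rendered
function issueToken(username) {
    const ts = String(Date.now());
    return ts + '|' + username + '|' + sign(ts, username);
}

// called when the form is POSTed back; returns the username or null
function verifyToken(token) {
    const parts = token.split('|');
    if (parts.length !== 3) return null;
    const ts = parts[0], username = parts[1], mac = parts[2];
    const expected = sign(ts, username);
    if (mac.length !== expected.length ||
        !crypto.timingSafeEqual(Buffer.from(mac), Buffer.from(expected))) {
        return null;                     // HMAC does not match
    }
    if (Date.now() - Number(ts) > MAX_AGE_MS) {
        return null;                     // issued too long ago
    }
    return username;
}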
A good resource for this type of development is The Do's and Don'ts of Client Authentication on the Web
You could use some kind of random challenge string that's used along with the username to create the hash. If you store the challenge string on the server in a database you can then ensure that it's only used once, and only for one particular user.
In one of my apps, to stop 'replay' attacks I have inserted IP information into my session object. Every time I access the session object in code I make sure to pass the Request.UserHostAddress with it and then I compare to make sure the IPs match up. If they don't, then obviously someone other than the original person made this request, so I return null. It's not the best solution, but it is at least one more barrier to stop replay attacks.
Can you use memory or a database to maintain any information about the user or request at all?
If so, then on the request for the form, I would include a hidden form field whose contents are a randomly generated number. Save this token in application context or some sort of store (a database, flat file, etc.) when the request is rendered. When the form is submitted, check the application context or database to see if that randomly generated number is still valid (however you define valid - maybe it can expire after X minutes). If so, remove this token from the list of "allowed tokens".
Thus any replayed requests would include this same token which is no longer considered valid on the server.
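A minimal in-memory version of that token bookkeeping (a database table works the same way; names and the expiry window are illustrative):
const crypto = require('crypto');

const issuedTokens = new Map();      // token -> expiry timestamp
const TOKEN_TTL_MS = 10 * 60 * 1000; // "valid" here means younger than 10 minutes

// call when rendering the form; put the value in the hidden field
function issueFormToken() {
    const token = crypto.randomBytes(16).toString('hex');
    issuedTokens.set(token, Date.now() + TOKEN_TTL_MS);
    return token;
}

// call when the form is submitted; succeeds at most once per token
function consumeFormToken(token) {
    const expiry = issuedTokens.get(token);
    if (expiry === undefined) return false; // unknown token, or already used (replay)
    issuedTokens.delete(token);             // remove from the list of allowed tokens
    return Date.now() <= expiry;            // also reject if it has expired
}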
I am new to some aspects of web programming but I was reading up on this the other day. I believe you need to use a Nonce.
(Replay attacks can easily be all about IP/MAC spoofing, plus you're challenged on dynamic IPs.)
It is not just replay you are after here; in isolation it is meaningless. Just use SSL and avoid handcrafting anything.
ASP.Net ViewState is a mess, avoid it. While PKI is heavyweight and bloated, at least it works without inventing your own security 'schemes'. So if I could, I'd use it and always go for mutual authentication. Server-only authentication is quite useless.
The ViewState includes security functionality. See this article about some of the built-in security features in ASP.NET. It does validation against the server machineKey in the machine.config on the server, which ensures that each postback is valid.
Further down in the article, you also see that if you want to store values in your own hidden fields, you can use the LosFormatter class to encode the value in the same way that the ViewState encryption does.
private string EncodeText(string text) {
    // Serialize the value with LosFormatter, the same serializer ViewState uses
    StringWriter writer = new StringWriter();
    LosFormatter formatter = new LosFormatter();
    formatter.Serialize(writer, text);
    return writer.ToString();
}
Use https... it has replay protection built in.
If you only accept each key once (say, make the key a GUID, and then check when it comes back), that would prevent replays. Of course, if the attacker responds first, then you have a new problem...
Is this WebForms or MVC? If it's MVC you could utilize the AntiForgery token. This seems like it's similar to the approach you mention except it uses basically a GUID and sets a cookie with the guid value for that post. For more on that see Steve Sanderson's blog: http://blog.codeville.net/2008/09/01/prevent-cross-site-request-forgery-csrf-using-aspnet-mvcs-antiforgerytoken-helper/
Another thing, have you considered checking the referrer on the postback? This is not bulletproof but it may help.
