What are the potential security issues in this implementation of SSO? - asp.net

I'm currently researching cross-domain SSO implementations, and I may not be able to use a third party SSO provider.
I found a custom implementation online that involves a series of redirects and an encrypted querystring parameter.
MrUser logs into http://www.foo.com
MrUser clicks a link to http://www.bar.com/page.aspx
MrUser is not authenticated on bar.com, but bar.com has authentication code that redirects to http://www.foo.com/sso.aspx
The sso.aspx page checks for a valid ASP.NET authentication cookie in the cookies collection. If it exists, sso.aspx redirects to http://www.bar.com/page.aspx?ssoauth=[encryptedkey] (where [encryptedkey] is MrUser's encrypted id, using a key that foo.com and bar.com have agreed on). If there is no valid ASP.NET authentication cookie, it just redirects without the ssoauth parameter.
Bar.com does a check to avoid an infinite redirect loop and then decrypts the ssoauth value. If there is no ssoauth parameter, then MrUser is redirected to the login page, otherwise Bar.com uses the decrypted id to authenticate MrUser before it sends him on to page.aspx.
What are the potential security issues (or other types of issues) with this method?
(cite: http://blogs.neudesic.com/blogs/michael_morozov/archive/2006/03/17/72.aspx)
Edit: In response to the answers noting that the encrypted id is the same every time and that an attacker could use it to gain access: what if bar.com checks the referrer so that it only accepts ssoauth parameters from foo.com?

The first issue is that any encrypt/decrypt scheme can be figured out when it's plainly visible. You'd be better off implementing something more along the lines of public-key cryptography, where the encryption keys are public but the decryption keys are private. The encryption will need to be suitably strong to increase the "time to crack", and that will require resources to perform the encrypt/decrypt of the key.
The fact that you have a non-common domain creates the need to supply the encrypted piece in the request (either POST or GET) and pass it in plaintext. While querystring information is kept secure for the lifetime of the request (edit: assuming SSL), it is not secure from browser history (making it accessible on shared computers).
The worst security problem is the concept of "crack one, crack 'em all". If one of the servers is compromised and its encryption/decryption algorithms, salt, etc. are exposed, an attacker could compromise all systems by generating valid encrypted SSO keys at will and on demand.
None of these problems is terribly tragic. I wouldn't implement this scheme at a bank or a medical establishment, but for a low risk site like SO or Twitter it would be perfectly acceptable. It will all come down to managing resources, risk, and gain.

Anyone who captures encryptedKey can use it to gain access as MrUser. Rather than encryption, a signing or message authentication service is needed. The authenticated message should include a nonce with the user identifier to prevent replays.
Protocols like this are hard to devise. Even TLS, which has been widely used and reviewed for years, has had security flaws. Don't rely on an unproven authentication mechanism.
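As a sketch of what this answer's signed alternative could look like (the key name, token format, and 30-second lifetime below are illustrative assumptions, not from the original post), a MAC-authenticated token carrying a nonce and an issue time:

```python
import base64
import hashlib
import hmac
import json
import os
import time

SHARED_KEY = b"example-shared-secret"  # hypothetical key known to foo.com and bar.com

def issue_sso_token(user_id, key=SHARED_KEY):
    """foo.com side: build a signed token instead of a bare encrypted id."""
    payload = {
        "uid": user_id,
        "nonce": os.urandom(16).hex(),  # unique per token, lets bar.com reject replays
        "iat": int(time.time()),        # issue time, lets bar.com enforce a short expiry
    }
    body = base64.urlsafe_b64encode(json.dumps(payload).encode())
    sig = hmac.new(key, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig

def verify_sso_token(token, key=SHARED_KEY, max_age=30, seen_nonces=None):
    """bar.com side: return the user id only if signature, age, and nonce check out."""
    try:
        body, sig = token.rsplit(".", 1)
    except ValueError:
        return None
    expected = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):  # constant-time comparison
        return None
    payload = json.loads(base64.urlsafe_b64decode(body))
    if time.time() - payload["iat"] > max_age:  # token expired
        return None
    if seen_nonces is not None:
        if payload["nonce"] in seen_nonces:     # token replayed
            return None
        seen_nonces.add(payload["nonce"])
    return payload["uid"]
```

With this shape the token differs on every login, expires quickly, and cannot be forged or modified without the shared key, which addresses the reuse and tampering objections raised in this thread.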

A potential problem arises if the ssoauth encrypted key is only the user's encrypted ID. Such a setup produces the same key every time, so it can be reused, by the original user or, worse, by a third party.
One way to avoid this situation is to keep the time-of-day clocks of foo.com and bar.com servers relatively synchronized and to issue keys which include the date/time (modulo 5 minutes).
People are often tempted to use the web client's IP address as one of the elements of this encryption, but this is a bad idea, because many proxies and gateways use different public IPs within their pools to reach different domains/servers.
Another way to produce a distinct key each time is to have bar.com's redirect to
http://www.foo.com/sso.aspx
include a parameter such as
http://www.foo.com/sso.aspx?ParamForKey=some_random_number
and to use ParamForKey as part of the encryption process.
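The ParamForKey idea can be sketched roughly like this (the function names and shared key are hypothetical): bar.com issues a one-time random challenge, and foo.com mixes it into a keyed MAC so that the ssoauth value differs on every round trip and cannot be replayed:

```python
import hashlib
import hmac
import os

SHARED_KEY = b"example-shared-secret"  # hypothetical key shared by foo.com and bar.com

def bar_make_challenge():
    # bar.com generates a fresh random ParamForKey and remembers it for this redirect
    return os.urandom(16).hex()

def foo_make_ssoauth(user_id, param_for_key, key=SHARED_KEY):
    # foo.com binds the token to bar.com's one-time challenge,
    # so the ssoauth value is different on every round trip
    msg = (user_id + "|" + param_for_key).encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def bar_verify_ssoauth(user_id, param_for_key, ssoauth, outstanding, key=SHARED_KEY):
    # bar.com accepts the token only if the challenge is one it issued and hasn't used
    if param_for_key not in outstanding:
        return False
    outstanding.discard(param_for_key)  # one-time use: a replay of the same URL fails
    expected = hmac.new(key, (user_id + "|" + param_for_key).encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(ssoauth, expected)
```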

There are several issues:
1) How long is the encrypted token valid? It should be valid only for a couple of seconds, which is easy if all servers are on NTP. Having an expiry also protects the user if a link containing the encrypted token is left lying around. Validating a nonce is difficult if you have many bar.com servers; you could argue that an expiry of a couple of seconds mitigates replay on its own.
2) A problem with cross-domain SSO is single sign-off. What if the user signs off foo.com? The session on bar.com must be invalidated. You could XSRF bar.com's logout as a hack :). You could also have bar.com beacon foo.com every 15 minutes to see whether the user is still logged in.
3) What if the user does not sign off bar.com, it is a multi-user computer, and another user signs onto foo.com? You have to ensure that if the user ids do not match, the previous user's data is not shown.
4) As someone else mentioned, you probably want a signature or HMAC over the userid rather than encryption. If you do encrypt, remember to protect the integrity of the ciphertext; you do not want user A flipping bits in the ciphertext to see whether they can access user B's data on bar.com.
I would have the redirect over https.
Finally, as everyone said, referrers can be spoofed. Don't trust them.
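Point 4's bit-flipping concern can be demonstrated with a toy XOR stream cipher (illustration only, not a real cipher): without an integrity check, an attacker who knows the plaintext layout can turn one user id into another by flipping ciphertext bits, while an HMAC over the ciphertext catches the change:

```python
import hashlib
import hmac

def xor_crypt(key, data):
    # Toy stream cipher for illustration: XOR with a keystream derived from the key.
    # Encryption and decryption are the same operation.
    stream = hashlib.sha256(key).digest()
    keystream = (stream * (len(data) // len(stream) + 1))[:len(data)]
    return bytes(a ^ b for a, b in zip(data, keystream))

key = b"shared-key"  # hypothetical key, known only to foo.com/bar.com
ct = xor_crypt(key, b"uid=usera")
tag = hmac.new(key, ct, hashlib.sha256).hexdigest()  # integrity protection

# Without knowing the key, flip bits in the last ciphertext byte so that
# the decrypted plaintext reads "uid=userb" instead of "uid=usera":
forged = ct[:-1] + bytes([ct[-1] ^ (ord("a") ^ ord("b"))])
```

Decrypting `forged` yields `uid=userb`, but its HMAC no longer matches `tag`, so a server that verifies the tag before decrypting rejects the tampered token.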

Related

Does Forms Authentication protect from session hijacking?

I have an ASP.NET MVC app that uses Forms Authentication. After the user is authenticated, the response sets a forms cookie that contains the auth information. Now, regarding the forms cookie: it is encrypted with the machine key and protected from tampering by a signature. I also use HTTPS... However, what if I somehow obtain the cookie and make a request from another client (meaning the request will come from another IP address)?
It seems to me that this scenario will work. Are there any ways to defend from this kind of attack?
If you are using HTTPS everywhere on your site and set requireSSL="true" on your system.web/authentication/forms element in web.config, you are instructing the browser to only pass that cookie back over an HTTPS connection. This will protect against the vast majority of traffic sniffing-based session hijacking attacks and you should definitely use it if your site is HTTPS only.
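For reference, a minimal sketch of the web.config settings this answer describes (the login URL and timeout are placeholder values):

```xml
<configuration>
  <system.web>
    <authentication mode="Forms">
      <!-- requireSSL="true" marks the auth cookie Secure, so the browser
           only sends it over HTTPS connections -->
      <forms loginUrl="~/Account/Login" requireSSL="true" timeout="30" />
    </authentication>
    <!-- optionally force the Secure and HttpOnly flags on all cookies
         for an HTTPS-only site -->
    <httpCookies requireSSL="true" httpOnlyCookies="true" />
  </system.web>
</configuration>
```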
Forms Authentication is inherently stateless. The server is encrypting the following information and storing it client-side: CookiePath, Expiration, Expired, IsPersistent, IssueDate, Name, UserData, Version. Assuming your machineKey hasn't been compromised, the client will just see this as a blob of encrypted data. When it presents that blob to the server again, the server decrypts it and converts it back into a FormsAuthenticationTicket, validates the fields in the ticket against config, verifies that the ticket isn't expired, etc. and decides whether to treat the request as authenticated. It doesn't 'remember' anything about which tickets are outstanding. Also note that it doesn't include the IP address anywhere.
The only real attack vector I can think of, if you are HTTPS-only, take care to protect your machineKey, and set the forms auth cookie to requireSSL, would be for an attacker to target the client's browser and/or computer. Theoretically they could steal the cookie from memory or disk out of the browser's space. It might be possible for a virus/trojan to do this, or even a malicious browser extension. In short, if someone could get their hands on a valid, non-expired Forms Auth cookie, they could present it from any machine they wanted until it expired. You can reduce the risk here by not allowing persistent auth cookies and keeping your timeouts to a minimum.
If they had the machineKey, they could create FormsAuth cookies from scratch whenever they wanted to.
Oh.. Can't forget Heartbleed. If you had a load balancer or reverse proxy that was using an insecure version of OpenSSL, it's possible an attacker could compromise your private key and intercept traffic over HTTPS connections. ASP.NET doesn't use OpenSSL, so you're safe from this in a pure-MS stack. If you ever hear anything about a vulnerability in MS' SSL implementation, you'd want to patch it ASAP and get your passwords changed and certificates re-issued.
If you are concerned about browser/machine based hijacking, you might want to take a look at a project I started [and abandoned] called Sholo.Web.Security (https://github.com/scottt732/SholoWebSecurity). Its goal was to strengthen Forms Authentication by maintaining state on the server at the expense of some overhead on each request. You get the ability to do things like revoke tickets server-side (kick/logout a user) and prevent users from moving tickets between IP addresses. It can get annoying in the traveling mobile user scenario that Wiktor describes (it's optional). Feel free to fork it or submit pull requests.
The Anti-CSRF features that 0leg refers to apply to the UI/form mechanism that initiates the login process, but to my knowledge there is nothing in the Forms Authentication process itself that relates to CSRF. That is, once the cookie is issued to the client, the only thing protecting it from being bounced between servers is the fact that cookies are restricted to the domains/subdomains they were issued for. Your stackoverflow.com cookies won't be presented to serverfault.com. The browser takes care of that stuff for you.
Are there any ways to defend from this kind of attack?
You shouldn't. Years ago we implemented such a feature and soon abandoned it. It turned out that:
a single user making requests from the very same browser/machine but switching between HTTP/HTTPS can sometimes appear under different IP addresses
a single user traveling and using her mobile phone sometimes makes consecutive requests from different IP addresses as her phone switches between base stations (BTSes)
Just to clarify the terminology, session hijacking usually refers to the vulnerability where an unauthorized user accesses the session state on the server.
Authentication cookies are different from session cookies. ASP.NET takes a great deal more precautions in safeguarding authentication cookies. What you describe is better captured by the term CSRF (Cross-Site Request Forgery). As @Wiktor indicated in his response, restricting access by IP is not practical. Plus, if you read how CSRF works, the exploit can run in the user's browser from the original IP address.
The good news is that ASP.NET MVC has built in support for CSRF prevention that is pretty easy to implement. Read here.

How can ASP.NET or ASP.NET MVC be protected from related domain cookie attacks?

The related domain cookie attack (more info) allows machines in the same DNS domain to add additional cookies that will also be sent to other computers in the same domain.
This can cause issues with authentication, or at worst be a component in a confused deputy attack.
Question
How can I protect ASP.NET or ASP.NET MVC from this type of attack?
One possible attack scenario
I log into a "secure" web app
I get the credentials for my account
I trick the user into visiting my site on the same DNS domain
I insert the cookie (of my creds)
the user goes back to your app.
Both cookies (or an overwritten one) is sent to the server
User is doing things under my account
That is a simplified example, but the idea can be ported to other styles of attack; I'm just picking a scenario that doesn't seem "too bad".
One idea how it can "get bad" is if this was step 1 of a two-step attack. Suppose the user uploaded a bad file that was accessible only in his account; the other user then unwittingly downloads that file, running any executable code that is there.
There are a ton of other scenarios that are possible... rather than list them all here I'm trying to figure out how I can protect my server from this type of attack.
Channel Bound Cookies
The following proposed RFC comes from a Google employee and describes a way for clients to use a self-signed browser certificate (thus requiring no confusing "pop-up" for the end user), which can also address the cookie security issue known as "Related Domain Cookies".
What follows below is an extract from http://www.browserauth.net/, a section of the RFC, some interesting commentary, and some criticism of this extension.
Overview of Channel Bound Cookies
Once the underlying TLS channel uses TLS client authentication (with the TLS-OBC extension), the server can bind its cookies to the TLS channel by associating them with the client's public key, and ensuring that the cookies are only ever used over TLS channels authenticated with that public (client) key.
This means that if such a channel-bound cookie is ever stolen off a client's machine, that cookie won't be able to authenticate an HTTP session to the server from other machines. This includes man-in-the-middle attackers that inject themselves into the connection between client and server, perhaps by tricking users into clicking through certificate-mismatch warnings: such a man-in-the-middle will have to generate its own TLS session with the server, which won't match the channel that the cookie is bound to.
Channel Binding
It's up to the server to decide whether to bind cookies to TLS channels. If the client doesn't support TLS-OBC, or if the cookie it's about to set will be used across different origins, then the server will not channel-bind the cookie. If it does decide to channel-bind the cookie, it should associate the cookie with the client's public key. This is similar to RFC 5929, but instead of the client binding data to the server's public key, in this case the server would be binding data (the cookie) to the client's public key. The server can do this either by simply storing, in a backend database, the fact that a certain HTTP session is expected to be authenticated with a certain client public key, or it can use suitable cryptography to encode in the cookie itself which TLS client public key that cookie is bound to.
In the figure above, the server includes the client's public key into a cryptographically signed datastructure that also includes the authenticated user's id. When the server receives the cookie back from the client, it can verify that it indeed issued the cookie (by checking the signature on the cookie), and verify that the cookie was sent over the correct channel (by matching the TLS client key with the key mentioned in the cookie).
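A rough sketch of the server-side check just described (the signing key and fingerprint scheme here are illustrative; a real implementation would use the actual TLS client public key obtained from the handshake):

```python
import base64
import hashlib
import hmac
import json

SERVER_KEY = b"server-only-signing-key"  # hypothetical; never leaves the server

def issue_bound_cookie(user_id, client_pubkey, key=SERVER_KEY):
    # Bind the cookie to a fingerprint of the TLS client's public key,
    # then sign the whole structure so the server can verify it issued it.
    fp = hashlib.sha256(client_pubkey).hexdigest()
    body = base64.urlsafe_b64encode(json.dumps({"uid": user_id, "fp": fp}).encode())
    sig = hmac.new(key, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig

def verify_bound_cookie(cookie, client_pubkey, key=SERVER_KEY):
    body, sig = cookie.rsplit(".", 1)
    expected = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # not a cookie this server issued
    data = json.loads(base64.urlsafe_b64decode(body))
    if data["fp"] != hashlib.sha256(client_pubkey).hexdigest():
        return None  # presented over a channel with a different client key
    return data["uid"]
```

A stolen cookie presented over a connection authenticated with a different client key fails the fingerprint comparison, which is the property channel binding is after.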
To be continued here.... http://www.browserauth.net/channel-bound-cookies
RFC Snip
TLS Origin-Bound Certificates RFC Draft
(Excerpt)
4.3. Cookie Hardening
One way TLS-OBC can be used to strengthen cookie-based
authentication is by "binding" cookies to an origin-bound
certificate. The server, when issuing a cookie for an HTTP
session, would associate the client's origin-bound certificate with
the session (either by encoding information about the certificate
unforgeably in the cookie, or by associating the certificate with
the cookie's session through some other means). That way, if and
when a cookie gets stolen from a client, it cannot be used over a
TLS connection initiated by a different client - the cookie thief
would also have to steal the private key associated with the
client's origin-bound certificate, a task considerably harder
especially when we assume the existence of a Trusted Platform
Module or other Secure Element that can store the
origin-bound-certificate's private key.
Additional Commentary from public-web-security#w3.org
Also, note that somewhat counter-intuitively, channel-bound cookies protect against many related-domain attacks even if the client cert that they are bound to has broader scope than a web origin.
Imagine, for a moment, that a user-agent creates a single self-signed certificate that it uses as a TLS client cert for all connections to all servers (not a good idea in terms of privacy, but follow me along for this thought experiment). The servers then set cookies on their respective top-level domains, but channel-bind them to the user-agent's one-and-only client cert.
So, let's say that an app app.heroku.com sets a (channel-bound) cookie on my browser for domain .heroku.com, and that there is an attacker on attacker.heroku.com. One attack we might be concerned about is that the attacker simply harvests the .heroku.com cookie from my browser by luring me to attacker.heroku.com. They won't be able to actually use the cookie, however, because the cookie is channel-bound to my browser's client cert, not to the attacker's client cert.
Another attack we might be concerned about is that attacker.heroku.com sets an .heroku.com cookie on my user agent in order to make me log into app.heroku.com as himself. Again, assuming that the only way the attacker can obtain the cookies is by getting them from app.heroku.com, this means that the cookies he has at his disposal will be channel-bound to his client cert, not to my client cert - thus when my browser sends them to app.heroku.com they won't be valid.
The TLS-OBC proposal, of course, assumes more fine-grained "scopes" for the client certificates. The reason for that, however, is purely to prevent tracking across unrelated domains. Related-domain attacks are already mitigated even if we used coarse-grained client certificates and coarse-grained (i.e., domain) cookies. I, at least, found this a little counter-intuitive at first, since the other proposed defense is to forbid coarse-grained cookies altogether and use origin cookies instead.
Criticism from public-web-security#w3.org
There are a number of issues that need to be considered for TLS-OBC; I'll highlight a couple here that I'm aware of.
Some SSL handshake logic may need to be modified slightly; see https://bugzilla.mozilla.org/show_bug.cgi?id=681839 for technical discussion.
There are potential privacy considerations; in particular if the unique client certificate is sent in cleartext before the negotiation of the master secret, a passive network observer may be able to uniquely identify a client machine. The attacker would already have the client's IP address, so this isn't a huge problem if the certificate is regenerated on an IP address change, but that would nullify much of the authentication benefit. A proposal to allow a client certificate to be sent after the master secret negotiation has been made. (can't find the bug right now, sorry)
One proposal how #2 could be addressed is here: https://datatracker.ietf.org/doc/html/draft-agl-tls-encryptedclientcerts
There are tricky interactions with SPDY. There will be updates on browserauth.net for this.
Fix the RFCs
The core issue here seems to be that any host can write a cookie that can be overwritten by any other host in the same domain. What if there were a way to write a value to the client that absolutely cannot be overwritten by any other host? I haven't found anything like that which is also automatically included in the HTTP header (the way a cookie is).
Here are three solutions that might work, though I like Solution 2 or 3 best, if browsers implement them correctly.
Solution 1: Add more information when uploading cookies to the server
When the client sends cookies to the server, also include the domain of the cookie that was sent. The server then knows what domain and path to look for. This means the client adds a new HTTP header Cookie-Details on every request:
GET /spec.html HTTP/1.1
Host: www.example.org
Cookie: name=value; name2=value2
Cookie-Details: name="value":"domain":"path":"IsSecure":"IsHTTPOnly"; name2="value2":"domain2":"path2":"IsSecure":"IsHTTPOnly"
Accept: */*
The server can then ignore the details field, or prefer it over the plain Cookie header that doesn't provide the details. The reason I included the "value" field in the details section is that the server would otherwise not be able to tell the difference between two same-named cookies whose domains are example.com and secure.example.com. Most browsers will send the values in an arbitrary order.
Perhaps this can be optimized so that the server can tell the client if this new cookie format is supported or not, and the client can respond accordingly.
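A sketch of how a server might parse the hypothetical Cookie-Details header above (the format is the one proposed here, not an existing standard; quoting/escaping of ':' inside values is ignored for brevity):

```python
def parse_cookie_details(header):
    # Parse the hypothetical Cookie-Details header into structured records
    # so the server can reject cookies whose domain doesn't match expectations.
    cookies = []
    for part in header.split(";"):
        name, _, rest = part.strip().partition("=")
        # fields come as "value":"domain":"path":"IsSecure":"IsHTTPOnly"
        value, domain, path, secure, httponly = [f.strip('"') for f in rest.split(":")]
        cookies.append({"name": name, "value": value, "domain": domain,
                        "path": path, "secure": secure, "httponly": httponly})
    return cookies
```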
Solution 2: Extend HTML5 LocalStorage so that data is (optionally) automatically inserted into the HTTP header
If we could extend HTML5's LocalStorage to allow a Secure/HTTPOnly data, we can imitate what is done in Solution #1 above, but have the change driven by the HTML5 LocalStorage W3C Working Group
The benefit of this is that there is less overhead than Solution 1, since the more verbose cookie details are only sent to the server when they're needed. In other words, if a sensitive value is saved using the "new cookie details" format into LocalStorage, then there is a clear separation of what data needs to be sent, and in what format.
Solution 3 "Cookie validation"
A user visits a web app that has this "special" validation mode enabled.
On the HTTP response, some cookies are sent to the browser. (The cookies can be HTTP Only, Secure, ...anything.)
Alongside the cookies, another header is sent: Set-CookieValidationHash. It contains a JSON array of SHA-256 hashes of the cookie keys and values, and specifies the expiration of the value.
The browser then logically ties this header to the cookies with the following behavior
This header is placed into a "Cookie Validation Jar" that can only be written to by the same DNS Domain, and DNS Scope
This header is opaque to the client and is returned to the server on every HTTP request (like a cookie)
The server will use this field to validate the cookies that are sent, and issue an error (or whatever) if the checksum fails.
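A sketch of what the Set-CookieValidationHash computation could look like on the server (reading the proposal as a JSON array of SHA-256 hex digests; expiration handling is omitted):

```python
import hashlib
import json

def make_validation_hash(cookies):
    # Server computes a SHA-256 digest over each cookie's key and value and
    # emits the sorted list as JSON (the Set-CookieValidationHash body).
    digests = sorted(hashlib.sha256((k + "=" + v).encode()).hexdigest()
                     for k, v in cookies.items())
    return json.dumps(digests)

def cookies_match(cookies, validation_header):
    # On a later request, recompute the digests from the presented cookies and
    # compare against the stored validation header; a mismatch means some
    # cookie was added or overwritten (e.g. by a related-domain host).
    return make_validation_hash(cookies) == validation_header
```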
Sourced from: http://www.w2spconf.com/2011/papers/session-integrity.pdf
5.2. Integrity through Custom Headers
Instead of securing cookies, we can achieve session integrity
by choosing a new method of storing and transmitting session
state. While this could be done using special browser plugins
like Flash, we would rather choose a design with the fewest
dependencies, so we will focus only on basic HTTP tools.
The basic form of an HTTP request has very few places that
are suitable for sending data with integrity. Data in the URL
or entity body of HTTP requests has no integrity, because
those parts of the HTTP request are writable across origins
and thus spoofable by an attacker. Cookies are also weak in
this regard, as they can be overwritten by the attacker in our
threat model. However, through the use of a JavaScript API
called XMLHttpRequest (XHR), we can send data in a custom
header.
Background. XMLHttpRequest (XHR) allows HTTP
requests containing custom headers to be made, and the
responses read, but only to the origin of the executing
JavaScript.(See Note5) As a result, requests made via XHR can be
distinguished by a server as necessarily originating from the
site itself.
Design. We will not use cookies at all, and instead pass
a session identifying token in a custom HTTP header which is
only written via XMLHttpRequest. The server should treat all
requests lacking this custom header, or containing an invalid
token, as belonging to a new, anonymous session. In order to
persist this session identifying token across browser restarts
and between different pages of the same application, the token
can be stored in HTML5 localStorage by JavaScript upon
successful authentication.
Security. Observe that in this model, the session identifying token will only be sent to the origin server, and will
not be included in the URL or entity body. These properties
provide confidentiality and integrity, respectively. Unlike with
cookies, the token cannot be overwritten by the attacker, since
localStorage completely partitions data between origins in
most browsers (See Note 6). A site using HTTPS can ensure that the token
is only sent over HTTPS, thus ensuring the secrecy of the
token even in the presence of an active network attacker. In
addition, because this token is not sent automatically by the
browser, it also serves to protect against CSRF attacks.
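The server side of the paper's design reduces to a small check; a sketch, with a hypothetical header name and an in-memory token table standing in for real session storage:

```python
SESSION_HEADER = "X-Session-Token"  # hypothetical custom header, set only via XHR

def resolve_session(headers, valid_tokens):
    # Requests lacking the custom header, or carrying an unknown token,
    # are treated as a new anonymous session, as the design requires.
    token = headers.get(SESSION_HEADER)
    if token is None or token not in valid_tokens:
        return "anonymous"
    return valid_tokens[token]
```

Because cross-origin pages cannot set this header on requests to the site (absent a CORS grant), its mere presence distinguishes same-origin XHR traffic, which is also why it doubles as CSRF protection.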
Disadvantages. This approach, however, has several disadvantages. First, it requires all requests requiring access to
a user’s session to be made using XMLHttpRequest. Merely
adding a session identifying token explicitly to all requests,
much less doing them over XHR, would require major changes
to most existing websites, and would be cumbersome and
difficult to implement correctly without a framework. This
is even further complicated if requests for sub-resources like
images require access to session state, since it is not trivial
to load images via XHR. Third, since this design depends on
the presence and security of HTML5 localStorage, it will be
impossible to implement on some legacy browsers
(Note 5) Sites may make cross-site requests using XHR if supported by the
browser and authorized by the target server
(Note 6) True in Chrome, Firefox, Safari, and Opera on OS X, several Linux
distributions, and Windows 7. Internet Explorer 8 does not partition HTTP
and HTTPS, but Internet Explorer 9 does.
Main Points
Don't give execute permission in the upload directories (an attacker can come from inside).
Gather every piece of user information that connects to the server (the cookie is only one of them).
Monitor the server and raise alerts (find the attacker, stop him, close the door).
Answer to
Suppose the user uploaded a bad file that was accessible only in his
account; the other user then unwittingly downloads that file, running
any executable code that is there.
First of all, you must not allow anything to run in your upload directories, because even your regular users could upload an aspx page, run it, and browse your files. The first step is to add this web.config in your upload directories (but also set the permissions so nothing can execute).
<configuration>
  <system.web>
    <authorization>
      <deny users="*" />
    </authorization>
  </system.web>
</configuration>
Related: I've been hacked. Evil aspx file uploaded called AspxSpy. They're still trying. Help me trap them‼
Stealing the cookies
Let's see how we can identify the user.
Cookie
Browser ID (user agent)
IP, including proxy and forwarded IPs
Whether the browser has JavaScript enabled (or not)
Directly asking for the password
Other files stored on the client
Now, for every logged-in session, we can tie the first four pieces of information together, and if any of them changes we can log the user out and ask him to sign in again.
It is also critical to connect the cookie (e.g., with a row in the database) to the logged-in status of the user, so that if the user logs out, anyone who stole his cookie can no longer see the pages. What I mean is that the cookie that lets him log in must not be the only thing we rely on; we also need our own data to track the user's status.
In this case, even if someone steals the cookie, once the user logs out after some actions, the cookie is no longer useful.
So what we do is tie the cookie + IP + browser id + JavaScript information together per login, and if any of them changes, we no longer trust him until he logs in again.
Cookies
One cookie is not enough. We need at least two cookies: one that works only on secure pages and requires HTTPS, and one that works on all pages. These two cookies must be connected together, and the connection must also be held on the server.
So if one of these cookies does not exist, or the connection does not match, the user again has no permission and needs to log in again (with an alert to us).
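One way to realize the two-cookie link without a database lookup (a sketch; deriving the secure cookie from the plain one with a server-side key is my assumption, not something this answer specifies):

```python
import hashlib
import hmac
import os

SERVER_KEY = b"server-secret"  # hypothetical key held only on the server

def issue_cookie_pair():
    # One cookie for all pages, one marked Secure for HTTPS pages only.
    # The secure cookie is derived from the plain one with a server key,
    # so the server can verify they belong together without a lookup.
    plain = os.urandom(16).hex()
    secure = hmac.new(SERVER_KEY, plain.encode(), hashlib.sha256).hexdigest()
    return plain, secure

def pair_is_valid(plain, secure):
    expected = hmac.new(SERVER_KEY, plain.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(secure, expected)
```

An attacker who captures only the non-secure cookie (e.g. by sniffing plain HTTP) cannot forge its secure partner without the server key.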
Related: Can some hacker steal the cookie from a user and log in with that name on a web site?
An idea for connecting cookies together: with this idea I connect the session cookie with the authentication cookie.
If all fails
There are some steps (or actions) that a hacker follows when he gets into a site, to profit from this window of opportunity and leave a back door open:
Create a new administrator account.
Upload a file that can be browsed to and executed.
As I said, we never allow a file that can be executed to be uploaded, so the second one is easy.
For the first one, creating a new administrator account, we need an extra password that is not saved in any cookie and never exists on the client at all.
So for rare actions like creating a new user from the back office, changing user privileges, deleting a user, etc., we request a second password every time, one that only the administrator knows.
This second password is the final measure.
One final idea
One idea that I have not implemented yet is to store some information on the client other than the cookie, information that cannot be stolen like the cookies, or that, even if stolen, is hidden somewhere among all the data so it is practically impossible to find.
This information could be an extra id of the user, used together with the cookie, browser data, and IP.
I am thinking of some possible places, but I have not tested or tried them in real life yet. Here are some of them:
A custom extension or plugin, unfortunately different for every browser, that can save data which we then use to communicate with the server. This requires an action from the user (installing the plugin), and for regular users this can scare them away.
Hidden data inside a well-cached image, in its headers, e.g. in the ETag. This can easily fail to work because we cannot be sure the image will be requested again.
Other browser capabilities, e.g. reading and using a client certificate that can be used to exchange encrypted data with the server. This requires the user to install the certificate, and requires us to create a different one for every user. Good for bank accounts, but not for ordinary users.
So these are my ideas... any critique, or a solution (for how to store small pieces of information on the client other than the cookie), is welcome.
Implementation
To make it really secure, we need to keep track of the user on the server and in the database. A table that connects the user with the IP, the browser id, and other status, such as whether he is currently logged in, is a measure we can use to make decisions.
If you cannot use a database, or don't want the extra complexity, the connection can be made by hashing some data together and checking whether the hash is the same.
For example, we can set a cookie with the hash of (IP + browser id) and check whether it matches or not.
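The hash-of-(IP + browser id) check can be sketched like this; using a keyed HMAC rather than a bare hash (my addition) stops a client from computing the value for someone else's IP and user agent:

```python
import hashlib
import hmac

SERVER_KEY = b"server-secret"  # hypothetical key held only on the server

def fingerprint(ip, user_agent):
    # Keyed hash over the request attributes we want the session bound to.
    return hmac.new(SERVER_KEY, (ip + "|" + user_agent).encode(),
                    hashlib.sha256).hexdigest()

def request_matches(cookie_value, ip, user_agent):
    # Recompute on every request; a mismatch means the cookie moved to a
    # different IP/browser and the user should be asked to sign in again.
    return hmac.compare_digest(cookie_value, fingerprint(ip, user_agent))
```

Note the caveat raised earlier in this thread: legitimate users behind proxy pools or on mobile networks change IPs, so this check produces false logouts and should be applied with care.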
Alerts - Log
Everything above tries to be automatic. However, I also suggest showing the user and the administrator some information to diagnose and prevent an attack. This information can be:
To the user
The last 10 sign-ins (IP, date/time, success or not).
Email the user about certain actions (for high-security sites), such as log in, log out, and critical actions.
To the administrator
Log and show every mismatch in the four parameters (cookie + IP + browser id + JavaScript enabled). The more failures in a short time, the more attention is required.
Check for failed logins.
Check for pages read by a user with cookies disabled (or not saved) and JavaScript disabled, because in my experience this is how scanners identify themselves.
Source: Origin Cookies Proposal from Web 2.0 Security and Privacy Conference
6. Session Integrity in Future Browsers
Neither of the previous solutions, nor others considered using existing browser technologies,
provide sufficient security while remaining deployable for existing
sites. Therefore, we propose an extension to cookies called origin
cookies. Origin cookies allow existing web applications to secure
themselves against the described attacks, with very little complexity
of implementation on the part of either the web application or the
browser, with transparent backwards compatibility for browsers that do
not yet implement origin cookies, including legacy browsers that may
never support them, and imposing no burden on existing web sites that
have not enabled origin cookies. This is not a trivial problem to
solve, as evidenced by existing proposals that fail to meet one or
more of the above desired properties. For example, sending the origin
of every cookie on each request is one common idea. This is much more
complicated than necessary, and imposes a much larger burden on web
sites, including ones that don’t even know how to effectively use this
information.
6.1. Origin Cookies
The real problem with using cookies for session management is lack of integrity, specifically due to the
ability of other origins to clear and overwrite cookies. While we
cannot disable this functionality from cookies without breaking many
existing sites, we can introduce new cookie-like functionality that
does not allow such cross-site modification.
Design. Origin cookies are cookies that are only sent and only modifiable by requests to and responses from an exact origin. They are
set in HTTP responses in the same way as existing cookies (using the
Set-Cookie header), but with a new attribute named ‘Origin’. In order
to enable web applications to distinguish origin cookies from normal
cookies, origin cookies will be sent in an HTTP request in a new
header ‘OriginCookie’, while normal cookies will continue to be sent
in the existing header ‘Cookie’.
HTTP/1.1 200 OK
...
Set-Cookie: foo=bar; Origin
...
Fig. 2. An HTTP response setting an origin cookie.
GET / HTTP/1.1
Host: www.example.com
...
Origin-Cookie: foo=bar
...
Fig. 3. An HTTP request to a URI for which an origin
cookie has been set.
For example, if in response to a GET request for
http://www.example.com/, a response as in Figure 2 is received, then
an origin cookie would be set with the key ‘foo’ and the value ‘bar’
for the origin http://www.example.com, and would be sent on subsequent
requests to that origin. A subsequent GET request for
http://www.example.com/ would look like Figure 3. Requests made to any
other origin, even https://www.example.com and http://example.com
would be made exactly as if the origin cookie for
http://www.example.com was never set. The Origin attribute extending
the semantics of Set-Cookie itself is subtle and implies several
semantic changes to other settable attributes of cookies. If the
Origin attribute is set, the Domain attribute is no longer
appropriate, and therefore should be ignored. Similarly, the Secure
attribute is no longer appropriate, since it is implied by the scheme
of the origin for the cookie: if the scheme is https, then the origin
cookie effectively has the attribute – since it will only be sent over
a secure channel – and if the scheme is anything else, the cookie does
not have the attribute. Because the same-origin policy considers
different paths to be part of the same origin, the Path attribute of
cookies provides no security and should also be ignored. The semantics
of other attributes, such as HttpOnly, Max-Age, Expires, etc. remain
unchanged for origin cookies. Normal cookies are uniquely identified by
their key, the value of the Domain attribute, and the value of the
Path attribute: this means that setting a cookie with a key, Domain,
and Path that is already set does not add a new cookie, but instead
replaces that existing cookie. Origin cookies should occupy a separate
namespace, and be uniquely identified by their key and the full origin
that set it. This prevents sites from accidentally or maliciously
deleting origin cookies, in addition to the other protections against
reading and modifying, and makes server-side use of origin cookies
significantly easier.
Security. Because origin cookies are isolated between origins, the additional powers of the related-domain attacker and active network
attacker in overwriting cookies are no longer effective, since they
were specifically exploiting the lack of origin isolation with existing
cookies, whether the ‘confusion’ was due to the scheme or domain of
the origin. Absent these additional powers, the related-domain
attacker and active network attacker are equivalent to the web
attacker, who cannot break the security of existing session management
based on the combination of cookies and secret tokens.
Implementation. Integrating origin cookies into existing browsers will not involve significant modifications. As a proof of concept, we
implemented origin cookies in Chrome. The patch totals only 573 lines
Source: W3C Mailing Lists
IETF TLS working group has a proposal to bind cookies to TLS client certificates, so as long as the private key corresponding to the cert is only on one machine, the cookie can only be used on one machine.
If you want to emulate the TLS client cert approach, you could use localStorage to store a private key, and use JS crypto 1 to replace document.cookie with a signed version. It's a little clunky, but it might be made to work. (Obviously, would be better with web crypto 2)
1 for example: http://www.ohdave.com/rsa/
2 http://www.w3.org/community/webcryptoapi/
From http://www.codeproject.com/Articles/16645/ASP-NET-machineKey-Generator
Whenever you make use of ViewState, Session, Forms authentication, or other encrypted and/or secured values, ASP.NET uses a set of keys to do the encryption and decryption. Normally, these keys are hidden and automatically generated by ASP.NET every time your application recycles.
If the two websites are different web applications, by default they will have different keys so one will not be able to read encrypted tokens generated by the other. The exception to this is if they are using common values in a global web.config or machine.config.
From machineKey Element, decryptionKey: http://msdn.microsoft.com/en-us/library/w8h3skw9.aspx
AutoGenerate, IsolateApps Specifies that the key is automatically generated. This is the default value. The AutoGenerate modifier specifies that ASP.NET generates a random key and stores it in the Local Security Authority (LSA). The IsolateApps modifier specifies that ASP.NET generates a unique encrypted key for each application using the application ID of each application.
So unless the machineKey element is being used to set the decryptionKey at a global level, the method you described should not work. If it is set, you could override at application level using the web.config file.
You could set a unique machineKey in the web.config for your application. This way only authentication cookies emitted by that application can be decrypted. If the user visits a malicious site on the same domain, this other site might indeed add an authentication cookie with the same name but a different value, but it won't be able to encrypt and sign it with the same machine key used by your application, and when the user navigates back an exception will be thrown.
The answer is simple: Always bind sessions to a specific client IP (this should be done in any modern web application anyway) and do not use cookies for any other purpose.
Explanation: you always send a single SESSIONID cookie back to the client, which holds no information; it's just a very long random string. You store the SESSIONID along with the authenticated user's IP within your web app's scope, e.g. in the database. Although the related-cookie attack can be used to swap SESSIONID cookies around between different clients, no client can ever masquerade as another user or perform any actions, as the SESSIONID is only considered valid, and the privileges are only granted, if it is sent from the associated IP address.
As long as you do not store actual private data in the cookie itself, but in the session state on the server side (which is selected solely by the SESSIONID cookie), the related-cookie problem should be no problem for you.
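A minimal sketch of that scheme (the names and the use of a plain dict in place of a database are my own; `secrets.token_urlsafe` stands in for "a very long random string"):

```python
import secrets

# Hypothetical server-side store: session_id -> authenticated client IP.
sessions = {}

def create_session(client_ip: str) -> str:
    # The cookie value carries no information; it is just a long random
    # string that keys into server-side state.
    session_id = secrets.token_urlsafe(32)
    sessions[session_id] = client_ip
    return session_id

def is_valid(session_id: str, client_ip: str) -> bool:
    # A swapped or stolen SESSIONID is useless from any other address.
    return sessions.get(session_id) == client_ip
```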

Is basic access authentication secure?

Using Apache, it is quite simple to set up a page that uses basic access authentication to prompt a user for a name/password and use those credentials in some way to grant access to that user.
Is this secure, assuming the connection between the client and server is secure?
The worry about basic auth is that the credentials are sent as cleartext and are vulnerable to packet sniffing. If that connection is secured using TLS/SSL, then it is as secure as other methods that use encryption.
This is an old thread, and I do not believe the highest voted/chosen answer is correct.
As noted by @Nateowami, the security Stack Exchange thread outlines a number of issues with basic authentication.
I'd like to point out another one: if you are doing your password verification correctly, then basic authentication makes your server more vulnerable to denial of service. Why? In the old days, it was common belief that salted hash was sufficient for password verification. That is no longer the case. Nowadays, we say that you need to have slow functions to prevent brute forcing passwords in the event that the database becomes exposed (which happens all too often). If you are using basic auth, then you are forcing your server to do these slow computations on every API call, which adds a heavy burden to your server. You are making it more vulnerable to DoS simply by using this dated authentication mechanism.
More generally, passwords are higher value than sessions: compromise of a user password allows hijacking the user's account indefinitely, not to mention the possibility of hijacking other systems that the user accesses due to password reuse; whereas a user session is time-limited and confined to a single system. Therefore, as a matter of defense in depth, high-value data like passwords should not be used repeatedly if not necessary. Basic authentication is a dated technology and should be deprecated.
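The server-side cost described above can be made concrete with a sketch (PBKDF2 here as a stand-in for any deliberately slow password function; names and iteration count are my own):

```python
import hashlib
import hmac

def hash_password(password: str, salt: bytes, iterations: int = 100_000) -> bytes:
    # A deliberately slow KDF: the iteration count is what makes offline
    # brute force expensive -- and what the server pays on every check.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

def verify(password: str, salt: bytes, stored: bytes) -> bool:
    # Constant-time comparison of the recomputed hash.
    return hmac.compare_digest(hash_password(password, salt), stored)

# With Basic Auth, verify() must run on *every* API call; with sessions
# it runs once at login, and cheap random-token checks handle the rest.
```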
The reason why most sites prefer OAuth over Basic Auth is that Basic Auth requires users to enter their password in a 3rd party app. This 3rd party app has to store the password in cleartext. The only way to revoke access is for the user to change their password. This, however, would revoke access for all 3rd party apps. So you can see what's the problem here.
On the other hand, OAuth requires a web frame. A user enters their login information at the login page of this particular site itself. The site then generates an access token which the app can use to authenticate itself in the future. Pros:
an access token can be revoked
the 3rd-party app can not see the user's password
an access token can be granted particular permissions (whereas basic auth treats every consumer equally).
if a 3rd-party app turns out to be insecure, the service provider can decide to revoke all access tokens generated for that particular app.
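The token pros listed above can be sketched as a tiny provider-side store (all names invented; real OAuth adds expiry, refresh tokens, and persistent storage):

```python
import secrets

# Hypothetical store: token -> which app holds it and what it may do.
tokens = {}

def grant_token(app: str, scopes: set) -> str:
    # The user's password never reaches the 3rd-party app; it only ever
    # sees this opaque token, limited to the granted scopes.
    token = secrets.token_urlsafe(32)
    tokens[token] = {"app": app, "scopes": scopes}
    return token

def revoke_app(app: str) -> None:
    # Provider-side kill switch for a compromised 3rd-party app.
    for t in [t for t, info in tokens.items() if info["app"] == app]:
        del tokens[t]

def allowed(token: str, scope: str) -> bool:
    info = tokens.get(token)
    return info is not None and scope in info["scopes"]
```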
Basic auth over http in an environment that can be sniffed is like no auth, because the password can be easily reversed and then re-used. In response to the snarky comment above about credit cards over ssl being "a bit" more secure, the problem is that basic authentication is used over and over again over the same channel. If you compromise the password once, you compromise the security of every transaction over that channel, not just a single data attribute.
If you knew that you would be passing the same credit card number over a web session over and over, i'd hope that you'd come up with some other control besides just relying on SSL, because chances are that a credit card number used that frequently will be compromised... eventually.
If you are generating passwords with htpasswd consider switching to htdigest.
Digest authentication is secure even over unencrypted connections, and it's just as easy to set up. Sure, basic authentication is OK when you are going over SSL, but why take the chance when you could just as easily use digest authentication?
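For context, the digest response the client sends is a nested MD5, not the password itself. A sketch of the basic (non-qop) computation from RFC 2617, with illustrative parameter values:

```python
import hashlib

def md5_hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

def digest_response(user: str, realm: str, password: str,
                    method: str, uri: str, nonce: str) -> str:
    """RFC 2617 digest response (without qop): the password never
    crosses the wire, only this hash tied to the server's nonce."""
    ha1 = md5_hex(f"{user}:{realm}:{password}")   # credentials hash
    ha2 = md5_hex(f"{method}:{uri}")              # request hash
    return md5_hex(f"{ha1}:{nonce}:{ha2}")
```

Because the server-supplied nonce is folded in, a captured response cannot simply be replayed forever, unlike a captured Basic Auth header.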
As the name itself implies, 'Basic Authentication' is just basic security mechanism. Don't rely on it to provide you with worry free security.
Using SSL on top of it does make it a bit more secure, but there are better mechanisms.

MVC 2 AntiForgeryToken - Why symmetric encryption + IPrinciple?

We recently updated our solution to MVC 2, and this has updated the way that the AntiForgeryToken works. Unfortunately this does not fit with our AJAX framework any more.
The problem is that MVC 2 now uses symmetric encryption to encode some properties about the user, including the user's Name property (from IPrincipal). We are able to securely register a new user using AJAX, after which subsequent AJAX calls will be invalid as the anti forgery token will change when the user has been granted a new principal. There are also other cases when this may happen, such as a user updating their name etc.
My main question is why does MVC 2 even bother using symmetric encryption? And then why does it care about the user name property on the principal?
If my understanding is correct then any random shared secret will do. The basic principle is that the user will be sent a cookie with some specific data (HttpOnly!). This cookie is then required to match a form variable sent back with each request that may have side effects (POST's usually). Since this is only meant to protect from cross site attacks it is easy to craft up a response that would easily pass the test, but only if you had full access to the cookie. Since a cross site attacker is not going to have access to your user cookies you are protected.
By using symmetric encryption, what is the advantage in checking the contents of the cookie? That is, if I already have sent an HttpOnly cookie the attacker cannot override it (unless a browser has a major security issue), so why do I then need to check it again?
After having a think about it, it appears to be one of those 'added layer of security' cases - but if your first line of defence has fallen (HttpOnly) then the attacker is going to get past the second layer anyway, as they have full access to the user's cookie collection and could just impersonate them directly, instead of using an indirect XSS/CSRF attack.
Of course I could be missing a major issue, but I haven't found it yet. If there are some obvious or subtle issues at play here then I would like to be aware of them.
It was added to offer greater protection in the case where you have one subdomain trying to attack another - bad.example.com trying to attack good.example.com. Adding the username makes it more difficult for bad.example.com to contact good.example.com behind the scenes and try to get it to generate a token on your behalf.
Going forward, it's possible that the cookie will be removed as it's not strictly necessary for the proper functioning of the system. (For example, if you're using Forms Authentication, that cookie could serve as the anti-XSRF cookie instead of requiring the system to generate a second cookie.) The cookie might only be issued in the case of anonymous users, for example.
Besides the "evil subdomain"-scenario outlined by Levi, consider an attacker that has an account on the targeted site. If the CSRF-token does not encode user-specific information, the server can not verify that the token has been generated exclusively for the logged-in user. The attacker could then use one of his own legitimately acquired CSRF-tokens when building a forged request.
That being said, anonymous tokens are in certain circumstances accepted by ASP.NET MVC. See Why does ValidateAntiForgeryTokenAttribute allow anonymous tokens?
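The user-binding idea from the answer above can be sketched generically (an HMAC over the username with a server secret; names are invented, and ASP.NET MVC's actual token also carries a per-session nonce and other fields):

```python
import hashlib
import hmac
import secrets

SERVER_KEY = secrets.token_bytes(32)  # kept secret on the server

def csrf_token(username: str) -> str:
    # Binding the token to the username means a token the attacker
    # legitimately obtained for *his own* account fails validation
    # when submitted in a request forged against the victim.
    return hmac.new(SERVER_KEY, username.encode(), hashlib.sha256).hexdigest()

def validate(token: str, username: str) -> bool:
    return hmac.compare_digest(token, csrf_token(username))
```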

Can some hacker steal a web browser cookie from a user and login with that name on a web site?

Reading this question,
Different users get the same cookie - value in .ASPXANONYMOUS
and searching for a solution, I started thinking: is it possible for someone to actually steal the cookie in some way, then place it in his browser and log in, let's say, as administrator?
Do you know how form authentication can ensure that even if the cookie is stolen, the hacker does not get to use it in an actual login?
Is there any other alternative automatic defense mechanism?
Is it possible to steal a cookie and
authenticate as an administrator?
Yes, it is possible. If the Forms Auth cookie is not encrypted, someone could hack their cookie to give themselves elevated privileges, or if SSL is not required, copy another person's cookie. However, there are steps you can take to mitigate these risks:
On the system.web/authentication/forms element:
requireSSL=true. This requires that the cookie only be transmitted over SSL
slidingExpiration=false. When true, an expired ticket can be reactivated.
cookieless=false. Do not use cookieless sessions in an environment where you are trying to enforce security.
enableCrossAppRedirects=false. When false, processing of cookies across apps is not allowed.
protection=all. Encrypts and hashes the Forms Auth cookie using the machine key specified in the machine.config or web.config. This feature would stop someone from hacking their own cookie as this setting tells the system to generate a signature of the cookie and on each authentication request, compare the signature with the passed cookie.
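Taken together, those settings correspond to a web.config fragment along these lines (a sketch only; note that the cookieless attribute actually takes an enumeration value such as UseCookies rather than a boolean, and your loginUrl will differ):

```xml
<system.web>
  <authentication mode="Forms">
    <forms loginUrl="~/Login.aspx"
           requireSSL="true"
           slidingExpiration="false"
           cookieless="UseCookies"
           enableCrossAppRedirects="false"
           protection="All" />
  </authentication>
</system.web>
```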
If you so wanted, you could add a small bit of protection by putting some sort of authentication information in Session such as a hash of the user's username (Never the username in plain text nor their password). This would require the attacker to steal both the Session cookie and the Forms Auth cookie.
The scenario where a cookie can be stolen happens in a public wireless environment. While you or I would never operate in such a setup, it may be impossible to prevent your customers from doing so.
If the attacker knows what secure site you're connected to, the idea is that your browser can be tricked into posting to a non-secure version of the same url. At that point your cookie is compromised.
That's why in addition to httpOnlyCookies you'll want to specify requireSSL="true"
<httpCookies httpOnlyCookies="true" requireSSL="true" />
I disagree with The Rook's comment, in that I find it unfair;
#Aristos i updated my answer. But to be honest, if your using a Microsoft development platform your application will be inherently insecure. – The Rook 22 mins ago
Security doesn't happen by accident and it doesn't happen "right out of the box", at least not in my experience. Nothing is secure until it's designed to be so, regardless of the platform or the tools.
There are many ways that a session ID can be leaked to an attacker. XSS is the most commonly used attack to hijack a session ID, and you should test for XSS vulnerabilities in your application. A common method of improving the strength of a session is to check the IP address. When the user logs in, record the IP address. Check the IP address for every request; if the IP changes, then it's probably a hijacked session. This security measure could block legitimate requests, but that is very unlikely.
Do not check the X-Forwarded-For or User-Agent; it's trivial for an attacker to modify these values.
I also recommend enabling httpOnlyCookies in your web.config file:
<httpCookies httpOnlyCookies="true"/>
This makes it more difficult for an attacker to hijack a session with JavaScript, but it's still possible.
I don't know the specifics of the cookie in question but it's generally bad practice to store both the username and password in a user cookie. You generally want to only store the username in the cookie along with other non sensitive information. That way the user is prompted to provide their password only when logging in.
I am working on this, and I have come up with an idea that I am not sure is 100% safe, but it is an idea.
My idea is that every user must pass through the login page.
If someone steals the cookie, he does not pass through the login page but goes directly to the other pages. He cannot pass the login page because he does not know the real password, so even if he tries, he fails anyway.
So I set an extra session value indicating that the user has successfully passed through the login page.
Inside every critical page, I check that extra session value, and if it is null, I log the user off and ask for the password again.
I do not know yet; maybe all of that is already done by Microsoft. I need to check it more.
To test this idea, I use this function, which directly makes a user logged in:
FormsAuthentication.SetAuthCookie("UserName", false);
My second security measure, which I have already implemented and use, is that I check for a different IP and/or a different cookie from the same logged-in user. I have put a lot of thought into that, with many checks (whether he is behind a proxy, whether requests come from different countries, what he is looking for, how many times I have seen him, etc.), but this is the general idea.
This video shows exactly what I am trying to prevent. By using the trick I have described here, an attacker cannot just set the login cookie alone.
Just sharing my ideas...

Resources