I'm looking to prevent session hijacking in my ASP.NET application and came across this great post by Jeff Prosise. However, it's from 2004 and I was wondering if there have been any updates that either accomplish the same thing or introduce complications? Also, has anyone used this on a production server and, if so, have there been any issues caused by it? The only problem that could affect my applications is if someone's IP network changes in a short period of time, but I can't imagine this being very likely.
Thanks
This is an interesting approach to session hardening, but it does not stop session hijacking. The scheme has the same weakness as HttpOnly cookies: an attacker can issue requests from the victim's browser using XSS, and therefore doesn't need to know the value of the session ID at all.
This quote is taken from the article you linked to:
SecureSessionModule raises the bar for hackers who hijack sessions using stolen session IDs
This raises the bar, but you still need to patch your XSS and CSRF vulnerabilities.
This is long dead, but I've noticed a problem with it that will probably affect more and more servers in the coming years: part of the MAC that's generated uses the IP address, splitting on ".", but IPv6 addresses use ":".
I don't have a production server on IPv6, but I recently upgraded my development machine, which now connects to Cassini via IPv6, and I very quickly run into a non-stop string of session errors.
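The fix is to stop parsing the address textually and derive the network portion in a family-aware way. The original module is C#, but the idea can be sketched in Python with the stdlib `ipaddress` module. The secret key, prefix lengths, and function names below are illustrative assumptions, not the article's actual code:

```python
import hashlib
import hmac
import ipaddress

SERVER_SECRET = b"change-me"  # hypothetical server-side key


def network_prefix(ip_string):
    """Return a family-appropriate network prefix instead of splitting on '.'.

    Uses /24 for IPv4 and /64 for IPv6 so the MAC survives address
    churn inside the client's own network; both families work.
    """
    ip = ipaddress.ip_address(ip_string)
    bits = 24 if ip.version == 4 else 64
    return str(ipaddress.ip_network(f"{ip_string}/{bits}", strict=False))


def mac_for_session(session_id, client_ip):
    """HMAC binding the session ID to the client's network prefix."""
    msg = (session_id + "|" + network_prefix(client_ip)).encode()
    return hmac.new(SERVER_SECRET, msg, hashlib.sha256).hexdigest()


def session_is_valid(session_id, client_ip, presented_mac):
    """Reject a session ID presented from a different network."""
    expected = mac_for_session(session_id, client_ip)
    return hmac.compare_digest(expected, presented_mac)
```

A session minted from `2001:db8::1` still validates from `2001:db8::2` (same /64) but not from an unrelated IPv4 address, which is the behavior the dot-splitting version silently loses under IPv6.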
Can't find this issue documented anywhere...
Using ASP.NET 3.5, I have 3 web servers in a web farm, using ASP.NET State Server (on a separate server).
All pages use session state (they both read and update the session).
Issue: my pages are prone to denial-of-service attacks, and it's easy to attack: just go to any page and hold down the F5 key for 30-60 seconds, and the requests pile up on all the web servers.
I've read that each request that uses session state takes a LOCK on the session, so other requests for the same user's session have to wait; this waiting is ultimately what causes the pile-up.
Our solution has been pretty primitive, from preventing master pages and custom controls from calling session (only the page may call it) to adding JavaScript that disables the F5 key.
I just realized that ASP.NET with session state is prone to this kind of DoS attack!
Has anyone faced a similar issue? Any global/elegant solution? Please do share.
Thanks
Check this:
Dynamic IP Restrictions:
The Dynamic IP Restrictions Extension for IIS provides IT Professionals and Hosters a configurable module that helps mitigate or block Denial of Service Attacks or cracking of passwords through Brute-force by temporarily blocking Internet Protocol (IP) addresses of HTTP clients who follow a pattern that could be conducive to one of such attacks. This module can be configured such that the analysis and blocking could be done at the Web Server or the Web Site level.
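The gist of what Dynamic IP Restrictions does can be sketched as a sliding-window limiter keyed by client IP. The thresholds and class name below are made-up defaults for illustration, not the module's actual configuration:

```python
import time
from collections import defaultdict, deque


class DynamicIpLimiter:
    """Temporarily block an IP that exceeds max_requests within `window`
    seconds. Hypothetical knobs, loosely modeled on what Dynamic IP
    Restrictions exposes (request rate per time interval, block duration)."""

    def __init__(self, max_requests=20, window=1.0, block_seconds=60.0):
        self.max_requests = max_requests
        self.window = window
        self.block_seconds = block_seconds
        self.history = defaultdict(deque)  # ip -> recent request timestamps
        self.blocked_until = {}            # ip -> time the block expires

    def allow(self, ip, now=None):
        """Return True if the request should be served, False if blocked."""
        now = time.monotonic() if now is None else now
        if self.blocked_until.get(ip, 0.0) > now:
            return False
        q = self.history[ip]
        q.append(now)
        while q and q[0] <= now - self.window:  # drop timestamps outside window
            q.popleft()
        if len(q) > self.max_requests:
            self.blocked_until[ip] = now + self.block_seconds
            return False
        return True
```

The F5-holding client trips the threshold within a second and gets blocked for a minute, while other clients (and the session lock behind them) are left alone.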
Also, Check this:
DoS Attack:
Most sites/datacenters control (D)DoS attacks via hardware, not software: firewalls, routers, load balancers, etc. It is not efficient or desirable to have this at the application level of IIS; I don't want bloat like this slowing down IIS.
DDoS prevention is also a complex setup, with dedicated hardware boxes just to deal with it, running different rules and analyses that take a lot of processing power.
Look at your web environment infrastructure, see what protection your hardware already provides, and if it's a problem, look at dedicated hardware solutions. You should block DDoS attacks as early in the chain as possible, not at the end at the web server level.
Well, the most elegant solution has to be done at the network level.
Since it is nearly impossible to differentiate a DDoS attack from valid session traffic, you need a learning algorithm running on the network traffic; most enterprise-level web applications need a DDoS defender at the network level. Those are quite expensive but more stable solutions for DDoS. Ask your datacenter whether they have a DDoS defender appliance; if they do, they can put your server's traffic behind the device.
Two of the main competitors in this market:
http://www.arbornetworks.com/
http://www.riorey.com/
We had the same issue at work. It's not solved yet, but two workarounds we were looking at were:
Changing the session state provider so that it doesn't lock the session, if your application logic allows it...
Upgrading the session state server so that it's faster (SQL Server 2016 in-memory session state, for example). This makes it a little harder for users to cause issues and means your app should recover faster.
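The effect of the per-session lock, and of the read-only workaround, can be demonstrated with a toy store. This is a Python model of the behavior, not ASP.NET's actual provider; the read-only path mimics what the real `EnableSessionState="ReadOnly"` page directive does (no exclusive lock is taken):

```python
import threading
import time


class SessionStore:
    """Toy store mimicking ASP.NET's exclusive per-session lock."""

    def __init__(self):
        self._locks = {}
        self._guard = threading.Lock()

    def _lock_for(self, sid):
        with self._guard:
            return self._locks.setdefault(sid, threading.Lock())

    def handle_request(self, sid, work_seconds, read_only=False):
        if read_only:
            time.sleep(work_seconds)   # no session lock: requests overlap
            return
        with self._lock_for(sid):      # writers serialize per session
            time.sleep(work_seconds)


def hammer(store, sid, n, read_only):
    """Fire n concurrent requests for one session; return wall-clock time."""
    threads = [threading.Thread(target=store.handle_request,
                                args=(sid, 0.05, read_only))
               for _ in range(n)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start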
Suppose a user's session is load-balanced to server #8 and some state is maintained on server #8. The user's next request needs to be routed to server #8 again, because that's the only place with their session state. Is there a standard solution for maintaining this mapping from user session to server for long-lived sessions? Mapping a user session to a specific server among many seems like a common problem that should have a standard, CPU- and memory-efficient "textbook" solution.
An easy solution is to configure your load balancer to use sticky sessions. The load balancer associates a user session with server #8, and subsequent requests from the same session are then automatically forwarded to the same server.
The best solution is not to rely on server affinity - it makes your system fragile. I wouldn't expect a textbook answer in the same way I would not expect a textbook answer on how to play with a toaster in the bath nor how to perform brain surgery with a screwdriver.
If you must have sticky routing then how you implement it depends a lot on how you propose to deal with a server not being available - do you failover the requests? Or just stop processing requests which would have been directed to that server?
I initially thought this was a very dumb question: what's the relevance unless you're writing your own proxy/load balancer (in which case you should already know the answers)? But there are proxies available that let you implement your own director.
So ultimately it boils down to what characteristics of the session are visible in the HTTP request. Since an IP address can change mid-stream, the only practical characteristic you can use is the session identifier, usually implemented as a cookie.
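If you do end up writing your own director, one well-known textbook technique is rendezvous (highest-random-weight) hashing over the session cookie. The sketch below is illustrative, not taken from any particular load balancer:

```python
import hashlib


def route(session_id, servers):
    """Rendezvous hashing: every (session, server) pair gets a score and
    the session goes to the highest-scoring server. The mapping is
    deterministic, needs no shared lookup table, and removing a server
    only remaps the sessions that were living on it."""
    def score(server):
        digest = hashlib.sha256(f"{session_id}|{server}".encode()).hexdigest()
        return int(digest, 16)
    return max(servers, key=score)
```

This also answers the failover question: when a server drops out, you simply route against the surviving list, and sessions on healthy servers keep their affinity.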
I hope every reason gets mentioned. I think performance is the main one, but I'd like everyone to share what they know about this.
It would help if you explained everything; I'm still a beginner.
Thanks in advance :)
It makes pages load slower, at least historically. Nowadays this may not be so relevant.
It's more complex for the server admin to setup and maintain, and perhaps too difficult for the non-professional.
It's costly for small sites to get and regularly renew a valid SSL certificate from the SSL certificate authorities.
It's unnecessary for most of your web browsing.
It disables the HTTP_REFERER field, so sites can't tell where you've come from. Good for privacy, bad for web statistics analysis, advertisers and marketing.
Edit: Forgot that you also need a separate IP address for each domain using SSL. This is incompatible with name-based virtual hosting, which is widely used for cheap shared web hosting. It might become a non-issue if/when IPv6 takes off, but for now it makes it impossible for every domain to have SSL over IPv4.
HTTPS is more expensive than plain HTTP:
Certificates issued by a trusted issuer are not free
TLS/SSL handshake costs time
TLS/SSL encryption and compression take time and additional resources (the same goes for decryption and decompression)
But I guess the first point is the main reason.
Essentially it's as Gumbo posts. But given the advances in power of modern hardware, there's an argument that there's no reason to not use HTTPS any more.
The biggest barrier is the trusted certificate. You can go self-signed, but then all visitors to your site get an "untrusted certificate" warning. The traffic will still be encrypted, and it is no less secure, but big certificate warnings can put potential visitors off.
I may be stating the obvious, but not all content needs transport-layer security.
I'm new to web security.
Why would I want to use HTTP and then switch to HTTPS for some connections?
Why not stick with HTTPS all the way?
There are interesting configuration improvements that can make SSL/TLS less expensive, as described in this document (apparently based on work from a team from Google: Adam Langley, Nagendra Modadugu and Wan-Teh Chang): http://www.imperialviolet.org/2010/06/25/overclocking-ssl.html
If there's one point that we want to communicate to the world, it's that SSL/TLS is not computationally expensive any more. Ten years ago it might have been true, but it's just not the case any more. You too can afford to enable HTTPS for your users.
In January this year (2010), Gmail switched to using HTTPS for everything by default. Previously it had been introduced as an option, but now all of our users use HTTPS to secure their email between their browsers and Google, all the time. In order to do this we had to deploy no additional machines and no special hardware. On our production frontend machines, SSL/TLS accounts for less than 1% of the CPU load, less than 10KB of memory per connection and less than 2% of network overhead. Many people believe that SSL takes a lot of CPU time and we hope the above numbers (public for the first time) will help to dispel that.
If you stop reading now you only need to remember one thing: SSL/TLS is not computationally expensive any more.
One false sense of security when using HTTPS only for login pages is that you leave the door open to session hijacking (though it's admittedly better than sending the username/password in the clear); this has recently been made easier (or more popular) by Firesheep, for example, although the problem itself has been around much longer.
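The practical mitigation for the Firesheep-style attack is to serve the whole session over HTTPS and mark the session cookie so it never travels over plain HTTP. A sketch using Python's stdlib cookie support (the cookie name is an arbitrary example):

```python
from http.cookies import SimpleCookie


def session_cookie(session_id):
    """Build a Set-Cookie value for a session cookie.

    Secure keeps the cookie off plain-HTTP requests, which is exactly
    what Firesheep-style tools sniff on the wire; HttpOnly keeps
    client-side script from reading it, which blunts XSS cookie theft.
    """
    c = SimpleCookie()
    c["SESSIONID"] = session_id
    c["SESSIONID"]["secure"] = True
    c["SESSIONID"]["httponly"] = True
    c["SESSIONID"]["path"] = "/"
    return c["SESSIONID"].OutputString()
```

Neither flag helps if the login page sets the cookie and the rest of the site falls back to HTTP, which is the scenario the paragraph above warns about.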
Another thing that can slow down HTTPS is that some browsers might not cache the content they retrieve over HTTPS, so they'd have to download it again (e.g. background images for sites you visit frequently).
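That caching concern is largely addressable with explicit headers: a `Cache-Control: public` response tells browsers the HTTPS response is safe to cache. A hypothetical helper, where the content-type list and max-age are arbitrary choices for illustration:

```python
# Content types we consider static, shareable assets (illustrative set).
STATIC_TYPES = {"image/png", "image/jpeg", "text/css", "application/javascript"}


def cache_headers(content_type, max_age=86400):
    """Pick Cache-Control for a response served over HTTPS.

    'public' was historically needed because some browsers refused to
    cache HTTPS responses without it; dynamic per-user pages stay
    uncached with 'no-store'.
    """
    if content_type in STATIC_TYPES:
        return {"Cache-Control": f"public, max-age={max_age}"}
    return {"Cache-Control": "no-store"}
```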
That being said, if you don't need the transport security (preventing attackers from seeing or altering the data that's exchanged, in either direction), plain HTTP is fine.
If you're not transmitting data that needs to be secure, the overhead of HTTPS isn't necessary.
Check this SO thread for a very detailed discussion of the differences.
HTTP vs HTTPS performance
Mostly performance reasons. SSL requires extra (server) CPU time.
Edit: However, this overhead is becoming less of a problem these days, some big sites already switched to HTTPS-per-default (e.g. GMail - see Bruno's answer).
And one more important thing: the firewall. Don't forget that HTTPS is usually served on port 443.
In some organizations, such ports are not opened in the firewall or transparent proxies.
HTTPS can be very slow, and unnecessary for things like images.
Even on big-time sites such as Google, I sometimes make a request and the browser just sits there. The hourglass will turn indefinitely until I click again, after which I get a response instantly. So, the response or request is simply getting lost on the internet.
As a developer of ASP.NET web applications, is there any way for me to mitigate this problem, so that users of the sites I develop do not experience this issue? If there is, it seems like Google would do it. Still, I'm hopeful there is a solution.
Edit: I can verify, for our web applications, that every request actually reaching the server is served in a few seconds even in the absolute worst case (e.g. a complex report). I have an email notification sent out if a server ever takes more than 4 seconds to process a request, or if it fails to process a request, and have not received that email in 30 days.
It's possible that a request from the client took a particular path that happened not to work at that particular moment. These failures are unavoidable; they're simply a result of the internet, which is built on unreliable components that TCP manages to layer a certain kind of guarantee on top of.
Like someone else said - make sure when a request hits your server, you'll be ready to reply. Everything else is out of your hands.
They get lost because the internet is a big place and sometimes packets get dropped or servers get overloaded. To give your users the best experience make sure you have plenty of hardware, robust software, and a very good network connection.
You cannot control the pipe from the client all the way to your server. There could be network connectivity issues anywhere along the pipeline, including from your PC to your ISP's router which is a likely place to look first.
The bottom line is if you are having issues bringing Google.com up in your browser then you are guaranteed to have the same issue with your own web application at least as often.
That's not to say an ASP application cannot generate the same sort of downtime experience completely on its own... "Test often" and "code defensively" are the key phrases to keep in mind.
Let's not forget browser bugs. They aren't nearly perfect applications themselves...
This problem/situation isn't only ASP-related; it covers the whole concept of keeping your apps up, and it's informally called the "five nines" or "99.999% availability".
The wikipedia article is here
If you look up the five nines you'll find tons of useful information that you can apply to your apps as needed.