Why would you not use HTTPS on your public-facing website?

Why would you not use HTTPS on your public-facing website?
For SEO purposes? For performance reasons? Why don't more companies use HTTPS on their public-facing sites?
Even the founder of mint.com mentions not using HTTPS on his public-facing site:
http://cnettv.cnet.com/rr03-mint-ceo-aaron-patzer/9742-1_53-50076867.html
About 19 minutes into the interview, the founder of Mint says it is "for SEO purposes".

I suppose one example would be that you don't need it (no authentication, for example) and you don't want to shell out the cash for an SSL Certificate?

Performance is the only reason to not force HTTPS (aside from simply not needing it). You shouldn't ever make security decisions based on "SEO".

For login pages, hopefully more will. See The Fundamentally Broken Browser Model.

Not all browsers support HTTPS. Think cell phones and other lightweight devices.

There is a performance hit incurred when visiting sites behind SSL. It's usually not a lot, but under certain combinations of conditions it can be noticeably slower.

There is a performance hit when first negotiating a connection with the website. This comes from the SSL handshake, which sends several messages back and forth before any application data flows. Try sniffing your browser traffic (for example with the Live HTTP Headers extension) when making an SSL connection to see how much goes on behind the scenes.
There is also a computational hit on the server to establish the SSL connection (it's CPU-intensive, as crypto key-related operations generally are).
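As a rough illustration (not from the original answer), the extra round trips and crypto work can be made visible by timing a bare TCP connect against a TCP connect plus TLS handshake to the same host; the host name below is just a placeholder:

```
# Sketch: compare a plain TCP connect with a TCP connect followed by a TLS
# handshake. The difference is roughly the handshake cost described above.
import socket
import ssl
import time

HOST = "example.com"  # placeholder host

def tcp_connect_time():
    start = time.perf_counter()
    sock = socket.create_connection((HOST, 80))
    elapsed = time.perf_counter() - start
    sock.close()
    return elapsed

def tls_connect_time():
    ctx = ssl.create_default_context()
    start = time.perf_counter()
    sock = socket.create_connection((HOST, 443))
    tls = ctx.wrap_socket(sock, server_hostname=HOST)  # TLS handshake happens here
    elapsed = time.perf_counter() - start
    tls.close()
    return elapsed

print(f"TCP connect only:    {tcp_connect_time() * 1000:.1f} ms")
print(f"TCP + TLS handshake: {tls_connect_time() * 1000:.1f} ms")
```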

Let me turn it around and ask why you would not use plain HTTP on your public-facing website. If all the information is publicly available, and there is no reason anyone would care whether it's known that they are visiting your site, then there's no reason to go to the trouble of HTTPS.

Related

Why are all HTTPS communications visible to other apps on a device? (HTTP Toolkit)

I noticed that, using HTTP Toolkit, you can sniff all HTTPS communications in unencrypted form from browsers on Windows and Android, plus all applications on a rooted Android device or an emulator, or via some workaround on a PC. All fields and data from headers, request bodies, and responses are intercepted without encryption.
I find this to be a significant security flaw: an attacker can easily analyze how an app communicates, learning how the server communicates and seeing API keys in the headers.
An attacker could also install spyware to record entered credentials on a victim's PC or a public PC, the same way HTTP Toolkit does.
Is there a reason this is allowed to happen in the first place? Is there a way to prevent this from happening?
It's explicitly allowed because it's extremely useful. It's how all kinds of debugging, testing, and profiling tools are implemented, as well as some kinds of ad blockers and other traffic modifiers.
It's possible because it cannot be prevented in the most general way. A user who fully controls a device can inspect all behavior and traffic on that device. That is what it means to control a device. Traffic is encrypted to protect the user, not to protect apps from their user. If seeing the API would significantly impact the security of the system, the system is already insecure.
Your concern that an attacker may take over a user's machine and observe them is valid, but it goes far deeper than this. An attacker who has administrative access to the system can observe all kinds of things, most commonly by installing a keylogger to watch what they type. There is no way to secure a device that an attacker has complete physical access to.
You can limit TLS sniffing using certificate pinning. Google does not recommend this because it's hard to manage. However, for some situations, it's worth the trouble. See also HTTP Toolkit's discussion on the topic.
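As a rough sketch of the idea (not taken from HTTP Toolkit's docs), pinning typically means comparing the server's leaf certificate against a fingerprint shipped with the app; the host and fingerprint below are placeholders:

```
# Sketch of certificate pinning: refuse the connection unless the server's
# leaf certificate matches a SHA-256 fingerprint bundled with the app.
import hashlib
import socket
import ssl

HOST = "api.example.com"          # hypothetical API host
EXPECTED_FINGERPRINT = "ab" * 32  # placeholder SHA-256 hex digest, not a real pin

def leaf_fingerprint(host, port=443):
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der_cert = tls.getpeercert(binary_form=True)  # DER-encoded leaf cert
    return hashlib.sha256(der_cert).hexdigest()

if leaf_fingerprint(HOST) != EXPECTED_FINGERPRINT:
    raise ssl.SSLError("certificate fingerprint mismatch - possible interception")
```

An interception proxy like HTTP Toolkit presents its own certificate, so a check like this fails; the flip side is that every legitimate certificate rotation requires shipping an update, which is part of why pinning is considered hard to manage.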
You've found a good thing to study. I recommend digging into how HTTP Toolkit works. It will give you a much better understanding of what TLS does and doesn't provide.
I don't think this is too serious.
HTTP Toolkit cannot intercept your normal browser.
It only creates a guest browser profile, opens it, and intercepts that.
That guest browser is unrelated to your own browser and shares nothing with it.
The same thing happens with Selenium.
Selenium is widely used for automated testing and can be driven from Python, C#, and so on.
It also opens its own browser with a separate profile, which your test code communicates with.
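For illustration only (not part of the original answer), driving that separate browser from Python test code looks roughly like this; the URL is just an example:

```
# Sketch: Selenium starts its own browser instance with a throwaway profile,
# completely separate from your everyday browser and its cookies.
from selenium import webdriver

driver = webdriver.Chrome()          # launches a new Chrome with its own profile
driver.get("https://example.com")    # example URL
print(driver.title)
driver.quit()                        # the throwaway profile is discarded
```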
Either way, they cannot intercept your normal browser.
If you are serious about security, do not browse sites that handle sensitive data in a browser opened by HTTP Toolkit or Selenium.
Just use your normal browser.

Why is HTTP far more used than HTTPS?

I hope every reason gets mentioned. I think performance is the main one, but I'd like everyone to share what they know about this.
Please explain everything in detail; I'm still a beginner.
Thanks in advance :)
It makes pages load slower, at least historically. Nowadays this may not be so relevant.
It's more complex for the server admin to set up and maintain, and perhaps too difficult for the non-professional.
It's costly for small sites to get and regularly renew a valid SSL certificate from the SSL certificate authorities.
It's unnecessary for most of your web browsing.
It suppresses the Referer (HTTP_REFERER) field when moving from an HTTPS page to an HTTP one, so sites can't tell where you've come from. Good for privacy, bad for web statistics analysis, advertisers and marketing.
Edit: I forgot that you also need a separate IP address for each domain using SSL. This is incompatible with name-based virtual hosting, which is widely used for cheap shared web hosting. It might become a non-issue if/when IPv6 takes off, but with IPv4 it makes it impossible for every domain to have SSL.
HTTPS is more expensive than plain HTTP:
Certificates issued by a trusted issuer are not free
TLS/SSL handshake costs time
TLS/SSL encryption and compression takes time and additional resources (the same for decryption and decompression)
But I guess the first point is the main reason.
Essentially it's as Gumbo posts. But given the advances in power of modern hardware, there's an argument that there's no reason to not use HTTPS any more.
The biggest barrier is the trusted certificate. You can go self-signed, but that means all visitors to your site get an "untrusted certificate" warning. The traffic will still be encrypted, and it is no less secure, but big certificate warnings can put potential visitors off.
I may be stating the obvious, but not all content needs transport-layer security.

HTTPS instead of HTTP?

I'm new to web security.
Why would I want to use HTTP and then switch to HTTPS for some connections?
Why not stick with HTTPS all the way?
There are interesting configuration improvements that can make SSL/TLS less expensive, as described in this document (apparently based on work by a team at Google: Adam Langley, Nagendra Modadugu and Wan-Teh Chang): http://www.imperialviolet.org/2010/06/25/overclocking-ssl.html
If there's one point that we want to communicate to the world, it's that SSL/TLS is not computationally expensive any more. Ten years ago it might have been true, but it's just not the case any more. You too can afford to enable HTTPS for your users.
In January this year (2010), Gmail switched to using HTTPS for everything by default. Previously it had been introduced as an option, but now all of our users use HTTPS to secure their email between their browsers and Google, all the time. In order to do this we had to deploy no additional machines and no special hardware. On our production frontend machines, SSL/TLS accounts for less than 1% of the CPU load, less than 10KB of memory per connection and less than 2% of network overhead. Many people believe that SSL takes a lot of CPU time and we hope the above numbers (public for the first time) will help to dispel that.
If you stop reading now you only need to remember one thing: SSL/TLS is not computationally expensive any more.
One false sense of security when using HTTPS only for login pages is that it leaves the door open to session hijacking (admittedly, it's still better than sending the username/password in the clear); this has recently been made easier (or at least more popular) by tools like Firesheep, although the problem itself has been around much longer.
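A minimal sketch of the usual mitigation (assuming Python on the server; the cookie name and value are made up): mark the session cookie Secure, so it is never sent over plain HTTP where a Firesheep-style sniffer could grab it, and HttpOnly so scripts cannot read it.

```
# Sketch: emit a Set-Cookie header whose session cookie is Secure and HttpOnly.
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["sessionid"] = "opaque-random-value"   # placeholder session ID
cookie["sessionid"]["secure"] = True          # only sent over HTTPS
cookie["sessionid"]["httponly"] = True        # not readable from JavaScript
print(cookie.output())                        # the header a framework would send
```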
Another problem that can slow down HTTPS is that some browsers might not cache the content they retrieve over HTTPS, so they have to download it again (e.g. background images for the sites you visit frequently).
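If that matters for your static assets, the usual workaround (a sketch, assuming a WSGI-style Python app; the content type and max-age are illustrative) was to mark HTTPS responses as explicitly cacheable:

```
# Sketch: a minimal WSGI response marking HTTPS content as cacheable, for
# browsers that historically only disk-cached "public" HTTPS responses.
def application(environ, start_response):
    headers = [
        ("Content-Type", "image/png"),
        ("Cache-Control", "public, max-age=86400"),  # allow caching for a day
    ]
    start_response("200 OK", headers)
    return [b""]  # image bytes would go here
```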
This being said, if you don't need the transport security (preventing attackers from seeing or altering the data that's exchanged, in either direction), plain HTTP is fine.
If you're not transmitting data that needs to be secure, the overhead of HTTPS isn't necessary.
Check this SO thread for a very detailed discussion of the differences.
HTTP vs HTTPS performance
Mostly performance reasons. SSL requires extra (server) CPU time.
Edit: However, this overhead is becoming less of a problem these days; some big sites have already switched to HTTPS by default (e.g. Gmail - see Bruno's answer).
And one more thing that is no less important: the firewall. Don't forget that HTTPS is usually served on port 443.
In some organizations that port is not opened in the firewall or transparent proxies.
HTTPS can be very slow, and unnecessary for things like images.

Jeff Prosise's session hijack blog - any updates?

I'm looking to prevent session hijacking in my ASP.NET application and came across this great post by Jeff Prosise. However, it's from 2004 and I was wondering if there have been any updates that either perform the same thing, or result in any complications? Also, has anyone used this on a production server and, if so, have there been any issues caused by this? The only problem that could affect my applications is if someone's IP network changes in a short period of time, but I can't imagine this being very likely.
Thanks
This is an interesting approach to session hardening, but it does not stop session hijacking. It has the same problem as HttpOnly cookies: an attacker can create requests from the victim's browser using XSS, and therefore doesn't need to know the value of the session ID.
This quote is taken from the article you linked to:
SecureSessionModule raises the bar for hackers who hijack sessions using stolen session IDs
This raises the bar, but you still need to patch your XSS and CSRF vulnerabilities.
This thread is long dead, but I have noticed a problem with the approach that may start affecting more and more servers in the coming years. Part of the MAC that's generated uses the IP address, splitting on ".", but IPv6 addresses use ":".
I don't have a production server on IPv6, but I recently upgraded my development machine, which now connects to Cassini over IPv6, and I very quickly run into a non-stop string of session errors.
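A rough sketch of an address-family-agnostic version of the idea (not Prosise's actual code; the secret and prefix lengths are made-up illustration values) would derive the network part with proper address parsing instead of splitting on ".":

```
# Sketch: MAC a session ID together with the client's network prefix, handling
# both IPv4 and IPv6 instead of splitting the address on ".".
import hashlib
import hmac
import ipaddress

SECRET = b"server-side-secret"  # placeholder key

def session_mac(session_id: str, client_ip: str) -> str:
    addr = ipaddress.ip_address(client_ip)
    prefix = 24 if addr.version == 4 else 64  # illustrative prefix lengths
    network = ipaddress.ip_network(f"{client_ip}/{prefix}", strict=False)
    msg = session_id.encode() + network.network_address.packed
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

# Works for both address families:
print(session_mac("abc123", "203.0.113.42"))
print(session_mac("abc123", "2001:db8::1"))
```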

Why do requests and responses get lost?

Even on big-time sites such as Google, I sometimes make a request and the browser just sits there. The hourglass will turn indefinitely until I click again, after which I get a response instantly. So, the response or request is simply getting lost on the internet.
As a developer of ASP.NET web applications, is there any way for me to mitigate this problem, so that users of the sites I develop do not experience this issue? If there is, it seems like Google would do it. Still, I'm hopeful there is a solution.
Edit: I can verify, for our web applications, that every request actually reaching the server is served in a few seconds even in the absolute worst case (e.g. a complex report). I have an email notification sent out if a server ever takes more than 4 seconds to process a request, or if it fails to process a request, and have not received that email in 30 days.
It's possible that a request made from the client took a particular path which happened not to work at that particular moment. These things are unavoidable; they're simply a result of the internet, which is built on unreliable components and for which TCP can only provide a certain kind of guarantee.
Like someone else said - make sure when a request hits your server, you'll be ready to reply. Everything else is out of your hands.
They get lost because the internet is a big place and sometimes packets get dropped or servers get overloaded. To give your users the best experience make sure you have plenty of hardware, robust software, and a very good network connection.
You cannot control the pipe from the client all the way to your server. There could be network connectivity issues anywhere along the pipeline, including from your PC to your ISP's router which is a likely place to look first.
The bottom line is if you are having issues bringing Google.com up in your browser then you are guaranteed to have the same issue with your own web application at least as often.
That's not to say an ASP application cannot generate the same sort of downtime experience completely on its own... test often and code defensively are the key phrases to keep in mind.
Let's not forget browser bugs. They aren't nearly perfect applications themselves...
This problem/situation isn't only ASP-related; it covers the whole concept of keeping your apps up, informally called the "5 nines" or "99.999% availability".
The Wikipedia article is here.
If you look up the 5 nines you'll find tons of useful information, which you can apply as needed to your apps.
