From what I understand, if a user uses plain HTTP, without the TLS encryption layer, then anyone listening "on the wire" can see the user's session cookie and steal it. So does this mean that it is impossible to guard against session hijacking if the website does not implement HTTP over TLS? Does it mean no website before HTTPS could guard against session hijacking?
The scenario might look like this.
1. A good guy logs into their account
GET / HTTP/1.1
Host: onlinecommunity.com
Cookie: PHPSESSID=f5avra; _=AKMEHO; _ga=GA1.2.93f54422f2ac010
2. A bad guy listening "on the wire" sees the plain HTTP request
3. A bad guy sends the same request
GET / HTTP/1.1
Host: onlinecommunity.com
Cookie: PHPSESSID=f5avra; _=AKMEHO; _ga=GA1.2.93f54422f2ac010
4. Now the bad guy sees the good guy's profile!
How did people prevent SESSION hijacking before HTTPS?
I think I found the answer (in part) to my own question, so I'll share it here for others.
1. If the website doesn't use encryption, you can use a VPN to encrypt the requests; this way the session cookie is hidden (encrypted) on the local wire, though only between you and the VPN server, not beyond it.
2. You can re-issue the session cookie with every request, although from what I read this can be difficult to implement.
3. You can check other header fields and compare them to those of the last request; if they are too different, you can block access. (Ideas 2 and 3 are sketched just after this list.)
4. You can check the IP address. But since the person might be on a mobile network, the IP can change for legitimate reasons, so you could at least check against an IP range to make sure the request comes from the same country as the last one.
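A minimal sketch of ideas 2 and 3 in Python (the store layout and function names are mine, not any particular framework's API):

import secrets

# In-memory session store: token -> session data (hypothetical layout:
# each entry holds a "fingerprint" recorded at login).
sessions = {}

def rotate_token(old_token):
    # Idea 2: issue a fresh token on every request, so a sniffed token
    # is only valid until the legitimate user's next request.
    data = sessions.pop(old_token)           # the old token is now dead
    new_token = secrets.token_urlsafe(32)    # unguessable replacement
    sessions[new_token] = data
    return new_token                         # sent back in Set-Cookie

def fingerprint_matches(session_data, request_headers):
    # Idea 3: compare stable headers against those recorded at login;
    # a mismatch suggests the token was replayed from another machine.
    current = (request_headers.get("User-Agent"),
               request_headers.get("Accept-Language"))
    return session_data["fingerprint"] == current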
I suppose in general, if the website uses plain-text HTTP without encryption, an eavesdropper can also steal the username and password by listening on the wire. So you should at least use a VPN (a VPN encrypts everything between you and the VPN server, though not from the VPN server onward to the site). I guess: never use plain-text HTTP requests for any sensitive information.
Feel free to add or correct me.
Related
I'm developing an API that receives HTTP requests from the Internet. One of the problems I'm facing is how to authenticate those requests in order to know that they are actually coming from a specific IP address.
I have read that the X-Forwarded-For header is not safe at all, and that RSA does not prevent a man in the middle.
What I'm doing now is: given an id and a password, both parameters are sent with each request so the system can validate it, but I don't think that's a good option at all.
Any ideas?
Thank you!
Given that it's truly the IP address that you're needing to authenticate, and not any sort of user, then HTTP is too high on the networking stack. You need something like IPSec or WireGuard to authenticate IP addresses.
I suggest redirecting the user to authenticate on an HTTPS server that generates a token for them, and then requesting the info from the HTTP server with that token. That way, even if a man in the middle obtains the current session, it becomes useless once the user logs out.
You can also record the IP address at the HTTPS login and validate it on each HTTP request.
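A rough sketch of that flow in Python (check_credentials and the 5-minute lifetime are placeholders of mine): the HTTPS side issues a short-lived token bound to the login IP, and the HTTP side checks both before serving anything.

import secrets
import time

tokens = {}  # token -> (client_ip, expiry), shared by both endpoints

def check_credentials(user, password):
    # Stub: replace with a real credential check.
    return True

def https_login(user, password, client_ip):
    # Runs on the HTTPS server, so credentials never cross plain HTTP.
    if not check_credentials(user, password):
        return None
    token = secrets.token_urlsafe(32)
    tokens[token] = (client_ip, time.time() + 300)  # valid 5 minutes
    return token

def http_request_allowed(token, client_ip):
    # Runs on the HTTP server: a sniffed token replayed from another
    # IP, or used after expiry (e.g. logout), is rejected.
    entry = tokens.get(token)
    if entry is None:
        return False
    ip, expiry = entry
    return ip == client_ip and time.time() < expiry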
Edit: I think the whole process is explained really well here
I am trying to understand how to bypass HSTS protection. I've read about tools by LeonardoNve ( https://github.com/LeonardoNve/sslstrip2 and https://github.com/LeonardoNve/dns2proxy ), but I don't quite get it.
If the client is requesting the server for the first time, it will work every time, because sslstrip will simply strip the Strict-Transport-Security: header field. So we're back in the old case with the original sslstrip.
If not...? What happens then? The client knows it should only interact with the server over HTTPS, so it will automatically try to connect to the server with HTTPS, no? In that case, the MITM is useless... ><
Looking at the code, I sort of get that sslstrip2 changes the domain names of the resources the client needs, so the client will not have to use HSTS, since these resources are not on the same domain (is that true?). The client sends a DNS request that the dns2proxy tool intercepts, and it sends back the IP address of the real domain name. In the end, the client fetches over HTTP the resources it should have fetched over HTTPS.
Example: from the server response, the client will have to download mail.google.com. The attacker changes that to gmail.google.com, so it's not the same (sub)domain. The client then makes a DNS request for this domain, and dns2proxy answers with the real IP of mail.google.com. The client then simply requests this resource over HTTP.
What I don't get is the step before that... How can the attacker strip the HTML when the connection from client to server should be HTTPS?
A piece is missing ... :s
Thank you
OK, after watching the video, I have a better understanding of the dns2proxy tool's scope of action.
From what I understood:
Most users land on an HTTPS page either by clicking a link or via a redirect. If the user directly fetches the HTTPS version, the attack fails, because we are unable to decrypt the traffic without the server's certificate.
In the case of a redirect or a link, with sslstrip+ and dns2proxy enabled and us in the middle of the connection... MITM! ==>
1. The user goes to google.com.
2. The attacker intercepts the traffic from the server to the client and changes the sign-in link from "https://account.google.com" to "http://compte.google.com".
3. The user's browser makes a DNS request for "compte.google.com".
4. The attacker intercepts the request, makes a real DNS request for the real name "account.google.com", and sends the response "fake domain name + real IP" back to the user.
5. When the browser receives the DNS answer, it checks whether this domain must be accessed over HTTPS, against the preloaded HSTS list and against domains already visited in the cache or this session. Since the fake domain is in neither, the browser simply makes an HTTP connection to the REAL IP address.
6. ==> HTTP traffic at the end ;)
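To make step 2 (the rewrite) concrete, here's a toy Python illustration (not the actual sslstrip+ code) of what the MITM does to a plaintext server response before forwarding it on:

import re

def strip_response(raw_response: bytes) -> bytes:
    # Toy sslstrip+-style rewrite: drop the HSTS header and downgrade
    # links so the victim keeps talking plain HTTP.
    head, _, body = raw_response.partition(b"\r\n\r\n")
    head = b"\r\n".join(
        line for line in head.split(b"\r\n")
        if not line.lower().startswith(b"strict-transport-security:")
    )
    # Swap the hostname for a lookalike the browser has no HSTS entry
    # for (hardcoded example pair), then downgrade remaining links.
    body = body.replace(b"https://account.google.com",
                        b"http://compte.google.com")
    body = re.sub(b"https://", b"http://", body)
    return head + b"\r\n\r\n" + body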
So the real limitation is still the need for an indirect HTTPS link (a redirect or a link) for this to work. Sometimes the browser directly "re-types" the entered URL into an HTTPS link.
Cheers!
We need to protect our webservices with SSL (HTTPS) or some other security mechanism. Our problem is that the current clients (Delphi exes) have references to our HTTP webservices hard-coded, and we cannot change that code.
I've tried to implement a URL redirection rule from HTTP to HTTPS, but that didn't work because of the handshake... Changing the client to use an HTTPS reference did work, but sadly we cannot do that for every client.
I know this question contradicts encryption theory, but I'll fire it off anyway in case anyone has any kind of suggestion/idea for making the connection or data transfer more secure (either with or without the SSL protocol) without changing the client side.
Thanks,
Luke
You need some kind of transparent TCP tunneling software/hardware on the clients, so that the encryption happens without the Delphi clients noticing it.
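For example, stunnel is one open-source tool of that kind. A client-side configuration along these lines (service name and host are placeholders of mine) lets the unchanged Delphi exe keep speaking plain HTTP to a local port while TLS runs on the wire; you'd still need a hosts-file entry or similar trick to point the hard-coded hostname at 127.0.0.1:

; stunnel client-mode sketch: the Delphi exe connects to the local
; accept port in plain HTTP; stunnel speaks TLS to the real server.
client = yes

[webservice]
accept = 127.0.0.1:80
connect = ws.example.com:443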
My Google search using the keywords "transparent encrypted tunneling" turned up one vendor of such solutions; there must be other vendors with similar offerings.
This is really a networking question.
PS: hard-coding the URL is the real problem here. Once the tunneling palliative is in place, fix that, because it really will cause more headaches in the future.
The client will be connecting over a port (non-SSL) that will need to remain open. What you could possibly do, if you allow access over both HTTP and HTTPS, is permit HTTP only from specific IP addresses, if you know them. It's still not secure, but at least you know where the calls are coming from and can do something about that.
I see Facebook sends cookies over HTTP. How are they secure from hijacking? If I were to copy the cookie onto another computer, would I be logged in?
You've just described Session Hijacking, and it is a real security issue. It can be avoided in a number of ways. The simplest way to secure the cookies, though, is to ensure they're encrypted over the wire by using HTTPS rather than HTTP.
Cookies sent over HTTP (port 80) are not secure as the HTTP protocol is not encrypted.
Cookies sent over HTTPS (port 443) are secure as HTTPS is encrypted.
So, if Facebook sends/receives cookies via HTTP, they can be stolen and used nefariously.
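One standard mitigation is the cookie's Secure attribute, which tells the browser never to send that cookie over plain HTTP (HttpOnly additionally hides it from scripts). A small illustration with Python's standard http.cookies module, using a made-up session value:

from http import cookies

c = cookies.SimpleCookie()
c["session_id"] = "f5avra"           # placeholder session value
c["session_id"]["secure"] = True     # never sent over plain HTTP
c["session_id"]["httponly"] = True   # not readable from JavaScript
print(c.output())
# -> Set-Cookie: session_id=f5avra; Secure; HttpOnly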
Cookies sent over HTTP are insecure; those sent over HTTPS are rather more secure, though they can still be stolen, since a few methods of attacking SSL have been discovered lately. A complete write-up on session hijacking and the known session-hijacking attacks can be found here: http://cleverlogic.net/tutorials/session-hijacking-0. There is also a bit on preventing session hijacking.
How should I understand stateless and stateful protocols? HTTP is a stateless protocol and FTP is a stateful protocol. For web applications requiring a lot of interaction, should the underlying protocol be a stateful one? Is my understanding right?
HTTP is a stateless protocol; in other words, the server forgets everything related to the client/browser state. Web applications have nonetheless made it look virtually stateful.
A stateless protocol can be forced to behave as if it were stateful. This can be accomplished by having the server send the state to the client, and having the client send it back to the server, every time.
There are three ways this may be accomplished in HTTP (illustrated just after this list):
a) One is cookies, in which case the state is sent and returned in HTTP headers.
b) The second is URL extension, in which case the state is sent as part of the request URL.
c) The third is "hidden form fields", in which the state is sent to the client as part of the response and returned to the server as part of a form's hidden data.
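A bare-bones illustration of the three options, with made-up values:

# The same piece of state ("theme=dark") carried three ways:

# (a) Cookie: set once by the server, echoed back by the browser.
set_cookie_header = ("Set-Cookie", "theme=dark")   # server response
cookie_header = ("Cookie", "theme=dark")           # every later request

# (b) URL extension: the state rides along in the request URL.
url = "/profile?theme=dark"

# (c) Hidden form field: the state is embedded in the returned HTML
#     and comes back with the next form submission.
hidden_field = '<input type="hidden" name="theme" value="dark">'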
SCALABILITY AND HIGH AVAILABILITY
One of the major reasons why HTTP scales so well is its statelessness. A stateless protocol eases replication concerns, as the state itself doesn't need to be stored on the server.
Stateful protocols are hard to implement reliably across the Internet. Stateless servers are also easily scalable, while for stateful servers scalability is problematic: a stateless request can be sent to any node at any time, while with a stateful protocol that is not the case.
HTTP, being a stateless protocol, increases availability for stateless web applications, which otherwise would be difficult or impossible to implement. If the connection is lost, no state is lost; simply resending the request resolves the problem. Stateless requests are also cacheable.
see more here
Since you're asking about a Web application, the protocol will always be stateless -- the protocol for the Web is http (or https), and that's all she wrote.
I think what you're thinking of is providing a state mechanism in your Web application itself. The typical approach is that you create a unique identifier for the user's session in your Web application (a sessionID of one form or another is the common practice), which is handed back and forth between browser and server. That's typically done in a cookie, though it can be done on the URL as well, with a bit more hassle for you depending on your platform/framework.
Your server-side code stores stateful information (again, typically called the user's session) however it wants to using the sessionID to look it up. The http traffic simply hands back the sessionID. As long as that identifier is there, each http transaction is completely independent of all others, hence the protocol traffic itself is stateless.
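A sketch of that arrangement (a hypothetical handler, not any specific framework): the only thing that crosses the wire is the sessionID; all the stateful data lives server-side.

import secrets

session_store = {}  # sessionID -> per-user state, kept on the server

def handle_request(request_cookies):
    # Each HTTP transaction is independent; the sessionID cookie is
    # the only thread connecting it to earlier ones.
    sid = request_cookies.get("sessionID")
    if sid not in session_store:
        sid = secrets.token_urlsafe(16)      # new visitor, new session
        session_store[sid] = {"cart": []}
    state = session_store[sid]               # stateful lookup, server-side
    state["cart"].append("widget")           # state persists across requests
    return {"Set-Cookie": "sessionID=" + sid}, state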
HTTP is a stateless protocol, and web applications built directly on it are stateless by default.
When a request is sent to the server, a connection is established between client and server. The server receives the request, processes it, sends back the response, and then the connection is closed.
Any request sent after that is treated as a new request, and a new connection is established.
In order to make HTTP stateful, we use session management techniques,
so that data coming from previous requests can be used while processing the present request; that is, a series of client-server interactions appears as one continuous conversation even though each runs over its own connection.
The session management techniques are:
hidden form field
cookie
session
URL-rewriting
Anything that forgets whatever it did in the past is stateless, such as HTTP.
Anything that can keep the history is stateful, such as a database.
HTTP is a stateless protocol; that's why it forgets the user's information.
We can make HTTP behave statefully using JSON Web Tokens (JWT): on each request going to the server, the server first verifies the user using the JWT.
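A minimal sketch with the PyJWT library (the secret and the claims are placeholders):

import jwt  # pip install PyJWT

SECRET = "server-side-secret"   # placeholder; keep it out of source control

# At login the server signs the user's identity into a token...
token = jwt.encode({"user_id": 42}, SECRET, algorithm="HS256")

# ...and verifies it on every later request. No server-side session
# store is needed: the state travels with the request, and the
# signature prevents tampering.
claims = jwt.decode(token, SECRET, algorithms=["HS256"])
assert claims["user_id"] == 42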
Your question is spot on, and yes, it would be great if your web transactions with your bank were done over a stateful connection. Alas, HTTP is stateless due to a quirky bug in FTP and a 12 socket limit in the partial socket table in BSD of 1989. Marcus Ranum explained it all here.
So HTTP throws away the state it inherits from TCP and has to recreate state at the application layer in the form of cookies. Crappy internet security is the result.
The Seif project proposes to fix all that using "secure JSON over TCP". DNS and certificate authorities are not required. The protocol and seifnode.js are finished and on github with an MIT license.
HTTP doesn't 'inherit' from TCP, but rather uses it as a transport. HTTP uses TCP for a stateful connection, but then disconnects. Later it will connect again, if needed. So, while you browse through a web site, you create many different connections. Each one of those connections is stateful, but the conversation as a whole is not, since you drop the connection with every exchange.
From this link
Basically yes, but you have no choice but to use HTTP, which is what websites are served over. So you have to deal with compromises that make HTTP stateful, a.k.a. session management. The possibilities are basically passing a session id through each call in the URL, so you know when you're talking to someone you've talked to before, or via cookies, which achieve the same goal without cluttering the URL. However, most modern web development languages take care of that for you; if you google for the language of your choice + "session management" you should get some ideas of how it's done.