I'm developing an API that receives HTTP requests from the Internet. One of the problems I'm facing is how to authenticate those requests in order to know that they are actually coming from a specific IP address.
I have read that the X-Forwarded-For header is not safe at all, and that RSA does not prevent a man-in-the-middle attack.
What I'm doing now is: given an id and a password, both parameters are sent with each request so the system can validate it, but I don't think this is the best option at all.
Any ideas?
Thank you!
Given that it's truly the IP address you need to authenticate, and not any sort of user, HTTP is too high in the networking stack. You need something like IPsec or WireGuard to authenticate IP addresses.
I suggest redirecting the user to authenticate against an HTTPS server that generates a token for them, and then using that token to request the information from the HTTP server. That way, even if a man in the middle captures the current session token, it becomes useless as soon as the user logs out.
You can also record the IP address used at login over HTTPS and validate it on each HTTP request.
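For illustration, a minimal sketch of that flow in Python, assuming an in-memory token store; the names issue_token, check_request and logout are hypothetical:

import secrets, time

tokens = {}  # hypothetical in-memory store: token -> (user_id, login_ip, expiry)

def issue_token(user_id, login_ip, ttl=3600):
    """Called by the HTTPS login endpoint after the password check succeeds."""
    token = secrets.token_urlsafe(32)
    tokens[token] = (user_id, login_ip, time.time() + ttl)
    return token

def check_request(token, request_ip):
    """Called by the plain-HTTP API for every request carrying the token."""
    entry = tokens.get(token)
    if entry is None:
        return None
    user_id, login_ip, expiry = entry
    if time.time() > expiry or request_ip != login_ip:
        tokens.pop(token, None)  # expired or coming from a different IP: invalidate
        return None
    return user_id

def logout(token):
    tokens.pop(token, None)  # once the user logs out, a captured token is useless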
Edit: I think the whole process is explained really well here
I'm curious how I can track the HTTP requests I send to a game, as well as the responses. Typically I'd use something like Fiddler, but the requests are SSL-protected. But since I'm the client, shouldn't there be some way to see what requests I send out and the responses the game gives me back? Thanks in advance!
Set up an HTTPS proxy to man-in-the-middle (MITM) yourself.
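For example, one way to do that is with mitmproxy: point the game (or the system proxy settings) at it, install mitmproxy's CA certificate so the interception is trusted, and log the decrypted traffic. A small addon sketch, with the Logger name and output format chosen just for illustration:

# logger.py -- run with: mitmdump -s logger.py
from mitmproxy import http

class Logger:
    def request(self, flow: http.HTTPFlow) -> None:
        # Called for every client request passing through the proxy.
        print(">>", flow.request.method, flow.request.pretty_url)

    def response(self, flow: http.HTTPFlow) -> None:
        # Called for every server response.
        print("<<", flow.response.status_code, flow.request.pretty_url)

addons = [Logger()]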
From what I understand, if a user uses plain HTTP without TLS encryption layer then anyone listening "on the wire" can see the user's session cookie and steal it. So does this mean that it is impossible to guard against session hijacking if the website does not implement HTTP over TLS? Does it mean all websites before https could not guard against session hijacking?
The scenario might look like this.
1. A good guy logs into their account
GET / HTTP/1.1
Host: onlinecommunity.com
Cookie: PHPSESSID=f5avra_AKMEHO; _ga=GA1.2.93f54422f2ac010
2. A bad guy listening "on the wire" sees the plain HTTP request
3. A bad guy sends the same request
GET / HTTP/1.1
Host: onlinecommunity.com
Cookie: PHPSESSID=f5avra_AKMEHO; _ga=GA1.2.93f54422f2ac010
4. Now the bad guy sees the good guy's profile!
How did people prevent SESSION hijacking before HTTPS?
I think I found the answer (in part) to my own question, so I'll share it here for others.
If the website doesn't use encryption, you can use a VPN to encrypt the requests; this way the session cookie is hidden (encrypted), at least between you and the VPN endpoint.
You can regenerate the session ID with every request, although from what I've read this can be difficult to implement.
You can check other header fields (for example the User-Agent) against the previous request and block access if they differ too much.
You can check the IP address. But since the person might be on a mobile network, the IP can change for legitimate reasons, so you can at least check against an IP range to make sure the request comes from the same network or country as the last one (see the sketch below).
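To make the last two checks concrete, here is a rough Python sketch assuming an in-memory session store; the names create_session and validate_and_rotate are made up for illustration:

import secrets
from ipaddress import ip_address, ip_network

sessions = {}  # hypothetical in-memory store: session_id -> metadata captured at login

def create_session(user, client_ip, user_agent):
    sid = secrets.token_urlsafe(32)
    sessions[sid] = {"user": user, "ip": client_ip, "ua": user_agent}
    return sid

def validate_and_rotate(sid, client_ip, user_agent):
    """Return (user, new_session_id), or (None, None) if the request looks hijacked."""
    meta = sessions.get(sid)
    if meta is None:
        return None, None
    # Block if the User-Agent changed between requests.
    if meta["ua"] != user_agent:
        sessions.pop(sid, None)
        return None, None
    # Block if the client left the original /16 network (a coarse "same network" check).
    if ip_address(client_ip) not in ip_network(meta["ip"] + "/16", strict=False):
        sessions.pop(sid, None)
        return None, None
    # Rotate the session ID on every request so a sniffed cookie goes stale quickly.
    del sessions[sid]
    new_sid = secrets.token_urlsafe(32)
    sessions[new_sid] = dict(meta, ip=client_ip)
    return meta["user"], new_sid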
In general, if the website uses plain-text HTTP without encryption, an attacker listening on the wire can also steal the username and password, not just the cookie. So at the very least use a VPN (which encrypts all traffic between you and the VPN endpoint), and never send sensitive information over plain-text HTTP.
Feel free to add or correct me.
When a web server receives an HTTP(S) GET request from a client, it has access to some information, such as:
The client IP
The request itself:
the headers (including the cookies)
the content
and... that's all?
I am wondering if there is something else.
Indeed, I am trying to make a server that can access a page where it can collect some information to update its database. The site denies access to my server but not to web browsers, even when I replicate the IP, the headers and the content.
Thanks for your help.
Yes, it's only what is contained in the request itself. The server cannot reach back to the client to "pull" information; it only has the information contained in the HTTP request and the underlying TCP/IP packets. That is:
the requesting IP address
the HTTP request line (requested URL and HTTP method) and the HTTP headers
the HTTP request body, if any
if it's HTTPS, any data exchanged during the TLS handshake, which is usually not very relevant for identifying anything significant
All of that information is voluntarily provided by the requesting client.
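As an illustration, a bare-bones Python server that prints everything it can see about an incoming request; this is just a standard-library sketch, not specific to any particular site:

from http.server import BaseHTTPRequestHandler, HTTPServer

class InspectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Everything the server knows arrives with the request itself.
        print("Client IP:", self.client_address[0])    # from the TCP connection
        print("Request line:", self.command, self.path)
        print("Headers:", dict(self.headers))          # includes Cookie, User-Agent, ...
        length = int(self.headers.get("Content-Length", 0))
        print("Body:", self.rfile.read(length))        # request body, if any
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), InspectHandler).serve_forever()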
I am trying to understand how to bypass HSTS protection. I've read about tools by LeonardoNve ( https://github.com/LeonardoNve/sslstrip2 and https://github.com/LeonardoNve/dns2proxy ), but I don't quite get it.
If the client is contacting the server for the first time, the attack always works, because sslstrip will simply strip the Strict-Transport-Security: header field. So we're back to the old case with the original sslstrip.
But if not...? What happens then? The client knows it should only interact with the server over HTTPS, so it will automatically try to connect via HTTPS, won't it? In that case, the MITM is useless...
Looking at the code, I gather that sslstrip2 changes the domain names of the resources the client needs, so the client doesn't apply HSTS to them since they're not on the same domain (is that true?). The client then sends a DNS request that the dns2proxy tool intercepts, answering with the IP address of the real domain name. In the end, the client fetches over plain HTTP the resources it should have fetched over HTTPS.
Example: from the server's response, the client has to download something from mail.google.com. The attacker changes that to gmail.google.com, so it's not the same (sub)domain. The client then makes a DNS request for this domain, and dns2proxy answers with the real IP of mail.google.com. The client will then simply request this resource over HTTP.
What I don't get is the step before that: how can the attacker strip the HTML when the connection from client to server should be HTTPS?
A piece is missing ... :s
Thank you
OK, after watching the video I have a better understanding of what the dns2proxy tool can do.
From what I understood:
Most users land on an HTTPS page either by clicking a link or via a redirect. If the user directly fetches the HTTPS version, the attack fails, because we are unable to decrypt the traffic without the server's key.
In the case of a redirect or link, with sslstrip+ and dns2proxy enabled and us sitting in the middle of the connection (MITM):
The user goes to google.com.
The attacker intercepts the traffic from the server to the client and changes the sign-in link from "https://account.google.com" to "http://compte.google.com".
The user's browser makes a DNS request for "compte.google.com".
The attacker intercepts the request, makes a real DNS request for the real name "account.google.com", and sends the response "fake domain name + real IP" back to the user.
When the browser receives the DNS answer, it checks whether this domain must be accessed over HTTPS, by looking it up in the preloaded HSTS list and in any HSTS entries cached from earlier visits. Since the domain is not the real one, nothing matches, and the browser simply makes a plain HTTP connection to the REAL IP address (see the toy check below).
==> HTTP traffic at the end ;)
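A toy Python check of why the renamed host slips through: the browser decides whether to force HTTPS by looking up the exact hostname in its HSTS entries, and the fake subdomain simply isn't there. The host list below is made up for illustration:

from urllib.parse import urlparse, urlunparse

HSTS_HOSTS = {"account.google.com", "mail.google.com"}  # made-up, simplified HSTS entries

def upgrade_to_https(url):
    parts = urlparse(url)
    if parts.hostname in HSTS_HOSTS:
        return urlunparse(parts._replace(scheme="https"))
    return url

print(upgrade_to_https("http://account.google.com/"))  # upgraded: the host is on the list
print(upgrade_to_https("http://compte.google.com/"))   # stays HTTP: the fake host is not listed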
So the real limitation is still that this only works for indirect HTTPS links. Sometimes the browser directly rewrites the URL you typed into an HTTPS link.
Cheers !
My website is suffering from an apparent bot which GETs a particular URL 5 times within a second, waits exactly 2 minutes, then repeats. The request is coming from the same IP address each time, and I have not observed any malicious payload, so I'm undecided on whether it is some form of spam bot. The User-Agent claims to be IE6, which is always suspicious in such an obviously non-human request pattern.
Anyway, I have done a reverse lookup on the IP and have located a contact at that domain, but am I wasting my time trying to get in touch with them? If it's a spam bot, won't the IP address be spoofed? How common is IP address spoofing in HTTP spammers? Does the HTTP protocol make it difficult in any way?
If you spoof the source IP, you can't complete the TCP handshake, so you won't get any response to your HTTP request. Other than that, the HTTP protocol doesn't make spoofing any easier or harder.
However, the IP address will be that of the last proxy server or load balancer between the source and your server, so if it is malicious, I would expect they're going through some open proxy and you won't easily be able to trace them back.
If it's just accidental misconfiguration, you're in with more of a chance.
Does the URL they are requesting exist on your site?
Can you configure your web server to return an error (403 Forbidden, 500 Internal Server Error, or perhaps a 301 permanent redirect) only for GETs from that address? If the other end starts getting errors, maybe they'll investigate and fix things.
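For instance, if the site runs behind a Python WSGI application, a small middleware could do it; BLOCKED_IP and the wrapping are placeholders, and most web servers can do the same with a one-line access rule:

BLOCKED_IP = "203.0.113.45"  # placeholder for the bot's address

def block_bot(app):
    """Wrap a WSGI app so GETs from the offending address get a 403."""
    def middleware(environ, start_response):
        if environ.get("REMOTE_ADDR") == BLOCKED_IP and environ.get("REQUEST_METHOD") == "GET":
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Forbidden"]
        return app(environ, start_response)
    return middleware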
You should contact the people in charge of the domain. Usually, the IP address won't be spoofed (that's hard). Most probably, one of their computers got infected by malicious software, and they definitely want to know that. It's more about doing them a favour than about your own network security.