How reliable is the IP address logged in an Apache access log?

My website is suffering from an apparent bot which GETs a particular URL 5 times within a second, waits exactly 2 minutes, then repeats. The request is coming from the same IP address each time, and I have not observed any malicious payload, so I'm undecided on whether it is some form of spam bot. The User-Agent claims to be IE6, which is always suspicious in such an obviously non-human request pattern.
Anyway, I have done a reverse lookup on the IP and have located a contact at that domain, but am I wasting my time trying to get in touch with them? If it's a spam bot, won't the IP address be spoofed? How common is IP address spoofing in HTTP spammers? Does the HTTP protocol make it difficult in any way?

If you spoof the IP, you won't get any response to your HTTP request. Other than that, the HTTP protocol doesn't make spoofing any easier or harder.
However, the IP address will be that of the last proxy server or load balancer between the source and your server, so if it is malicious, I would expect they're going through some open proxy and you won't easily be able to trace them back.
If it's just accidental misconfiguration, you're in with more of a chance.
Does the URL they are returning exist on your site?
Can you configure your web server to return an error (403 Forbidden, 500 Internal Server Error, or perhaps a 301 permanent redirect) only to GETs from that address? If the other end starts getting errors, maybe they'll investigate and fix things.
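A minimal sketch of that idea for Apache 2.4, assuming the offending address is 203.0.113.45 and the targeted path is /targeted-url (both hypothetical; substitute what your access log shows). Everyone else gets the page as normal; that one address gets a 403:

```apache
# Hypothetical address and path -- replace with the values from your logs.
<Location "/targeted-url">
    <RequireAll>
        Require all granted
        Require not ip 203.0.113.45
    </RequireAll>
</Location>
```

If the bot is a misconfigured legitimate client, the sudden 403s in its own logs are exactly the kind of signal that might prompt its operator to investigate.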

You should contact the people in charge of the domain. Usually, the IP address won't be spoofed (that's hard). Most probably, one of their computers got infected by malicious software, and they definitely want to know that. It's more about doing them a favour than about your own network security.

Related

DDoS Attack Under HTTP

We have received a DDoS attack in which all requests share the same pattern:
Protocol: HTTP
Method: GET
Random source IP addresses
Requesting the home page (/)
Our server was returning a 301 to every request; performance suffered and the server went down.
We have blocked all requests coming in over HTTP, and that stopped the attack. We would like to know why we are receiving the attack on our servers over HTTP and not HTTPS from different sources, and whether the source IP could only be changed using HTTP requests.
What's the best way to prevent this kind of attack?
Our server right now is working over HTTPS only, without issues. The server runs on Azure Web Apps.
We have blocked all requests coming in over HTTP, and that stopped the attack.
Please note, when people type your URL into the browser manually, the first hit is usually over HTTP. If you turn off HTTP, people will not be able to access the site by simply typing in your domain name. (Redirecting HTTP to HTTPS instead of blocking it avoids this.)
we would like to know why we are receiving the attack on our servers over HTTP and not HTTPS from different sources
That is for the attacker to decide. Most probably it is only coincidence that the attack went over HTTP.
we would like to know if the source IP could only be changed using HTTP requests?
No. For an HTTP request to be performed, you need to complete a TCP handshake first. This means that you cannot easily fake the IP address, as you need to actively participate in the communication, and the routers must see you as a valid participant. You can fake the source IP of a single packet while on the same local network, but that would not let you complete a TCP handshake correctly.
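This can be demonstrated locally: an HTTP request only arrives after a completed TCP handshake, so the address the server logs is the one it actually exchanged packets with. A minimal sketch using a loopback socket pair (no external network needed):

```python
import socket
import threading

def serve_once(server_sock, result):
    conn, peer = server_sock.accept()   # handshake already completed here
    result.append(peer[0])              # the peer IP the server observed
    conn.recv(1024)                     # read the "request"
    conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n")
    conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))           # ephemeral port on loopback
server.listen(1)
seen = []
t = threading.Thread(target=serve_once, args=(server, seen))
t.start()

client = socket.create_connection(server.getsockname())
client.sendall(b"GET / HTTP/1.1\r\nHost: localhost\r\n\r\n")
client.recv(1024)
client.close()
t.join()
server.close()

print(seen[0])  # the address the server saw is the client's real address
```

The point of the sketch: `accept()` does not return until the three-way handshake finishes, and a host that spoofed its source address would never see the SYN-ACK needed to finish it.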
What's the best way to prevent this kind of attacks?
We're still struggling with DDoS in general, and there is no 100% solution. An attack of sufficient scale can take down large parts of the internet, as has already happened in the past. There are some things you can do:
Rate limiting - put some brakes on the incoming traffic so it does not kill your infrastructure completely. You will lose some valid traffic, but you will stay up and running.
Filtering - a pain when dealing with DDoS attacks. Analyse which IP addresses are attacking you constantly and filter them on your firewall. (Imagine the fun when you are being attacked by 100k IoT devices.) A WAF (Web Application Firewall) may allow you to filter not only on IP addresses but on other request parameters too.
Scaling up - more infrastructure can do more.
In most cases all you need to do is survive till the attack is over.
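The rate-limiting idea above can be sketched as a token bucket, the mechanism behind most `limit_req`-style directives. This is a simplified illustration, not a production limiter; a real deployment keeps one bucket per source IP, usually in the reverse proxy:

```python
import time

class TokenBucket:
    """Allow `rate` requests per second with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last check.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False   # over the limit: drop, delay, or return 429

bucket = TokenBucket(rate=5, capacity=10)   # 5 req/s, burst of 10
results = [bucket.allow() for _ in range(15)]
print(results.count(True))  # the burst is admitted, the excess rejected
```

The trade-off mentioned above is visible here: during a flood, legitimate clients sharing a NAT with attackers also lose tokens, which is why you "lose some valid traffic".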

How can I not just deny, but slow down requests from a crawler from a certain IP [nginx]

I have an overbearing and unauthorized crawler trying to crawl my website at very high request rates. I denied the IP address in my nginx.conf when I saw this, but then a few weeks later I saw the same crawler (it followed the same crawling pattern) coming from another IP address.
I would like to fight back not just by sending back an immediate 403, but also by slowing down the response time, or something equally inconvenient and frustrating for the programmer behind this crawler. Any suggestions?
I would like to manually target just this crawler. I've tried the nginx limit_req_zone, but it's too tricky to configure in a way that does not affect valid requests.
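One commonly suggested nginx pattern for targeting only specific clients is to key `limit_req_zone` on a variable produced by a `geo` block: nginx skips rate limiting entirely for requests whose key is empty, so ordinary visitors are unaffected. A sketch, assuming the crawler's addresses are in the hypothetical range 203.0.113.0/24:

```nginx
# Hypothetical range -- replace with the crawler's current addresses.
geo $bad_crawler {
    default        "";
    203.0.113.0/24 "slow";
}

# Empty keys are not rate limited, so only flagged addresses hit this zone.
limit_req_zone $bad_crawler zone=crawler:1m rate=2r/m;

server {
    location / {
        limit_req zone=crawler burst=5;
        limit_req_status 429;
        # ... your normal root / proxy configuration here ...
    }
}
```

Without `nodelay`, requests inside the burst are queued rather than rejected, which gives exactly the "slow it down" effect asked for: the crawler's connections sit waiting instead of getting an instant error.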

HSTS bypass with sslstrip+ & dns2proxy

I am trying to understand how to bypass HSTS protection. I've read about the tools by LeonardoNve ( https://github.com/LeonardoNve/sslstrip2 and https://github.com/LeonardoNve/dns2proxy ), but I don't quite get them.
If the client is contacting the server for the first time, it will always work, because sslstrip will simply strip the Strict-Transport-Security: header field. So we're back in the old case with the original sslstrip.
If not... what happens? The client knows it should only interact with the server over HTTPS, so it will automatically try to connect to the server over HTTPS, no? In that case, a MitM is useless... ><
Looking at the code, I kind of get that sslstrip2 will change the domain names of the resources needed by the client, so the client will not have to use HSTS, since these resources are not on the same domain (is that true?). The client will send a DNS request that the dns2proxy tool intercepts, sending back the IP address of the real domain name. In the end, the client fetches over HTTP the resources it should have fetched over HTTPS.
Example: from the server response, the client will have to download mail.google.com. The attacker changes that to gmail.google.com, so it's not the same (sub)domain. The client will then make a DNS request for this domain, and dns2proxy will answer with the real IP of mail.google.com. The client will then simply ask for this resource over HTTP.
What I don't get is the step before that... how can the attacker strip the HTML while the connection from client to server should be HTTPS?
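The rewriting step described in that example can be sketched as two pure functions. This is a conceptual simplification of what sslstrip+ does to intercepted responses, using the document's mail.google.com / gmail.google.com example; the real tool handles far more cases:

```python
import re

def strip_hsts(headers: dict) -> dict:
    """Drop the Strict-Transport-Security header so the browser never
    learns it should pin this site to HTTPS."""
    return {k: v for k, v in headers.items()
            if k.lower() != "strict-transport-security"}

def rewrite_links(html: str) -> str:
    """Downgrade https:// links and move them to a look-alike hostname
    that is NOT covered by the browser's HSTS state or preload list."""
    return re.sub(r"https://mail\.google\.com",
                  "http://gmail.google.com", html)

page = '<a href="https://mail.google.com/inbox">Mail</a>'
print(rewrite_links(page))
```

The fake hostname is the whole trick: HSTS state is keyed by domain, so a name the browser has never seen (and that is on no preload list) carries no HTTPS requirement, while dns2proxy makes it resolve to the real server.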
A piece is missing ... :s
Thank you
OK, after watching the video, I have a better understanding of the scope of action possible with the dns2proxy tool.
From what I understood :
Most users will get to an HTTPS page either by clicking a link or via a redirect. If the user directly fetches the HTTPS version, the attack fails, because we are unable to decrypt the traffic without the server's certificate.
In the case of a redirect or link, with sslstrip+ and dns2proxy enabled and us in the middle of the connection (MitM):
The user goes to google.com.
The attacker intercepts the traffic from the server to the client and changes the sign-in link from "https://account.google.com" to "http://compte.google.com".
The user's browser makes a DNS request for "compte.google.com".
The attacker intercepts the request, makes a real DNS request for the real name "account.google.com", and sends the response "fake domain name + real IP" back to the user.
When the browser receives the DNS answer, it checks whether this domain must be accessed over HTTPS, by consulting the preloaded HSTS list and any HSTS state cached from previous visits. Since the domain is not real, it appears in neither, and the browser simply makes an HTTP connection to the real IP address.
==> HTTP traffic at the end ;)
So the real limitation is still the need for indirect HTTPS links (redirects or links) for this to work. Sometimes the browser directly rewrites the typed URL to an HTTPS link.
Cheers !

Get domain the server was reached over?

In general, on any non-HTTP server, would there be a way to detect which domain was used to reach the IP?
I know HTTP servers get the domain in the request's Host header, but would this be possible with any other server that does not require this information to be sent by the client?
I'm especially looking for a way to do this with a Minecraft server (Bukkit), so my preferred language (if needed for your answer) would be Java. But I'd prefer answers that are not language-specific.
In general, no, which is why the HTTP protocol includes it in the headers.
In order to reach your server, first a DNS lookup is performed to resolve your IP, which is then followed by the connection itself. These two steps are separate, and hard to link together.
Logging what domain was last requested by a client is tricky, too, as DNS information is often cached, so the DNS request may not even reach your DNS server before being answered.
If it isn't cached, it also often isn't directly looked up by the end client, but rather by a caching DNS server operated, for instance, by the ISP.
No. The only way to get the DNS name used to connect to a server is to have the client provide it.
No. If there is no means for this in the protocol itself, like the Host header in HTTP, you cannot find out which hostname the client used to resolve your IP address.

DNS HTTP Requests

If I were to send a URL to a DNS server, let's say "dev.example.com/?username=daniel",
what exactly is sent to the DNS server? The whole URL (including any passed parameters), or just the hostname "dev.example.com"? I want to know which parameters I should be hiding in a URL.
The reason I am asking is that I don't want confidential information sent to DNS servers. I am using HTTPS for all URLs, but when someone navigates to a URL, I want all parameter information in the URL to be hidden from all DNS servers. I am just not sure what is sent to a DNS server when establishing an SSL connection. Since I have a site that needs just about every parameter encrypted, I am concerned about how to hide this information if DNS can read it.
dev.example.com may be resolved (if it is not already in the local cache) by sending it to your DNS server (which will almost certainly refer to another DNS server).
Only "dev.example.com" is sent; the rest is passed only to the resolved IP address as part of the HTTP request.
So you do not need to hide any parameters, except of course that these parameters could well end up on another website if a user follows a link from your page (via the Referer header). If these parameters are really sensitive, encode them or (ab)use POST instead.
The Domain Name System (DNS) resolves hostnames to IP addresses, so only the value of the hostname is sent.
DNS is agnostic of protocol. The value sent is just the hostname, so in this case dev.example.com.
I'm also not sure what this has to do with "parameter hiding" but if you could expand on that we might be able to provide more helpful advice.
Edit (based on your update): Ah. Well then you should be good to go, as only the domain name itself is sent.
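This split is easy to see from the client side: the resolver API only ever receives the hostname, and the path and query string never leave your machine until the HTTP(S) request itself. A small sketch:

```python
from urllib.parse import urlsplit

url = "https://dev.example.com/?username=daniel"
hostname = urlsplit(url).hostname   # the only part DNS ever sees
print(hostname)                     # dev.example.com

# A lookup would take just that hostname, e.g.:
# socket.getaddrinfo(hostname, 443)   # (needs network, so left commented)
```

Everything after the hostname, including ?username=daniel, travels only inside the TLS-encrypted HTTP request to the resolved address.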
If the DNS server happens to also be a web server whose root web application happens to answer the "username" query, then you might get something back. Other than that, DNS is a different kind of animal.