DDoS Attack Under HTTP

We received a DDoS attack in which every request followed the same pattern:
Protocol: HTTP
Method: GET
Random source IP addresses
Requesting the home page /
Our server was returning a 301 to every request; performance suffered and eventually the server went down.
We blocked all HTTP requests and the attack stopped. We would like to know why we are being attacked over HTTP rather than HTTPS from different sources, and whether the source IP can only be spoofed with HTTP requests.
What is the best way to prevent this kind of attack?
Our server is currently running HTTPS only, without issues. It is hosted on Azure Web Apps.

We blocked all HTTP requests and the attack stopped.
Please note that when people type your URL into the browser manually, the first hit is usually over HTTP. If you turn HTTP off, people will not be able to reach the site by simply typing in your domain name.
we would like to know why we are being attacked over HTTP and not HTTPS from different sources
That is for the attacker to decide. Most probably it is only a coincidence that the attack went over HTTP only.
we would like to know whether the source IP can only be spoofed with HTTP requests
No. Before an HTTP request can be made, a TCP handshake has to complete. That means you cannot easily fake the source IP address, because you have to actively participate in the communication and the routers must see you as a valid participant. You can forge the source address of individual packets (for example from within the same local network), but the replies go to the forged address, so you can never complete the TCP handshake correctly.
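A purely illustrative sketch of that point, using scapy and documentation (TEST-NET) addresses that are not from the original question; it requires root privileges and only shows why a forged SYN leads nowhere:

```python
# Hypothetical illustration: a SYN with a forged source address.
# The server's SYN-ACK goes to 203.0.113.99 (the forged source), not to us,
# so we never learn the server's sequence number and cannot send a valid ACK.
# Without a completed handshake, no HTTP request can be made from that address.
from scapy.all import IP, TCP, send

spoofed_syn = IP(src="203.0.113.99", dst="192.0.2.10") / TCP(dport=80, flags="S", seq=1000)
send(spoofed_syn)  # fire-and-forget: the handshake can never complete
```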
What is the best way to prevent this kind of attack?
We are still struggling with DDoS and there is no 100% solution. An attack of sufficient scale can take down large parts of the internet, as has already happened in the past. There are some things you can do, though:
Rate limiting - put some brakes on incoming traffic so it does not kill your infrastructure completely. You will lose some legitimate traffic, but you will stay up and running (see the sketch below).
Filtering - a pain when dealing with DDoS attacks. Analyse which IP addresses are attacking you constantly and filter them on your firewall. (Imagine the fun when you are being attacked by 100k IoT devices.) A WAF (Web Application Firewall) may let you filter not only on IP addresses but on other request parameters as well.
Scaling up - more infrastructure can absorb more.
In most cases all you need to do is survive until the attack is over.
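As a rough illustration of the rate-limiting idea, here is a toy, single-process sketch with arbitrary limits; in practice you would use whatever rate limiting your web server, CDN or Azure front end already provides:

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10           # arbitrary example window
MAX_REQUESTS_PER_WINDOW = 20  # arbitrary example limit
_hits = defaultdict(deque)    # client IP -> timestamps of its recent requests

def allow_request(client_ip):
    """Return True if this client is still under the per-IP limit."""
    now = time.monotonic()
    recent = _hits[client_ip]
    # drop timestamps that have fallen out of the sliding window
    while recent and now - recent[0] > WINDOW_SECONDS:
        recent.popleft()
    if len(recent) >= MAX_REQUESTS_PER_WINDOW:
        return False  # over the limit: reject, delay or tarpit this request
    recent.append(now)
    return True
```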

Related

Intercepting HTTP proxy - disadvantages compared to a normal proxy

I would like to know how realistic it is to implement an intercepting proxy (with cache support) for the purpose of web filtering. I would also like to support IPv6, client authentication and caching.
Reading the list of disadvantages on the Squid wiki (http://wiki.squid-cache.org/SquidFaq/InterceptionProxy), which implements an intercepting proxy, there are some points listed as disadvantages that I would like to clarify:
Requires IPv4 with NAT - why does interception not support IPv6?
It causes path-MTU discovery (PMTUD) to possibly fail - why?
Proxy authentication does not work - the client thinks it is talking directly to the originating server. Is there a way to do authentication in this case?
Interception caching only supports the HTTP protocol, not gopher, SSL, or FTP; you cannot set up a redirection rule to the proxy server for protocols other than HTTP, since it will not know how to deal with them. - This seems plausible, since traffic is redirected to the proxy by a firewall rewriting the destination address of the packet from the originating server to the proxy's own address (destination NAT). In that case, if I wanted to intercept protocols other than HTTP, how would I know where the connection was originally intended to go so I could relay it to that destination?
Traffic can be intercepted in many ways. It does not necessarily need NAT (which is not available for IPv6). A transparent interception, for example, will not use NAT at all (transparent in the sense that the proxy does not generate requests with its own address but with the client's address, spoofing the IP address).
PMTUD is used to detect the largest MTU available on the path between the client and the server (and vice versa); it helps avoid fragmentation of IP packets along that path. When you put a proxy in the middle, even if an MTU is detected, it is not necessarily the same as the MTU between the client and the proxy or between the proxy and the server. This is not always relevant, though; it depends on what traffic is being served and how the proxy behaves.
If the proxy is authenticating on the client's behalf, it needs to be aware of the authentication method and will probably need cookies that exist on the client. Think of it this way: if a proxy can authenticate access to a restricted resource on your behalf, then anyone can do it on your behalf, and the purpose of good authentication is to protect you from exactly that.
I guess that is a very old note from the Squid guys, but the technology exists to redirect anything you want to a specific server. One simple way is to make your server the default gateway for the network; all packets then pass through it and you can redirect the ones you want to your application (or to another server). You are not limited to HTTP, but you are limited by how the application protocol works.
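As a concrete illustration of how an intercepting proxy can recover the intended destination after such a firewall redirect, here is a minimal Linux-only, IPv4-only sketch (the function name is made up; SO_ORIGINAL_DST is the netfilter socket option that exposes the pre-NAT destination of a redirected connection):

```python
import socket
import struct

SO_ORIGINAL_DST = 80  # value from <linux/netfilter_ipv4.h>

def original_destination(client_sock):
    """Return the (ip, port) the client actually wanted before DNAT/REDIRECT."""
    # getsockopt returns the original sockaddr_in recorded by netfilter
    raw = client_sock.getsockopt(socket.SOL_IP, SO_ORIGINAL_DST, 16)
    port = struct.unpack("!H", raw[2:4])[0]
    ip = socket.inet_ntoa(raw[4:8])
    return ip, port
```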

How do you load balance TCP traffic?

I'm trying to determine how to load balance TCP traffic. I understand how HTTP load balancing works because it is a simple request/response architecture. However, I'm unsure how you load balance TCP traffic when your servers are trying to write data to other clients. I've attached an image of the workflow for a simple TCP chat server where we want to balance traffic across N application servers. Are there any load balancers out there that can do what I'm trying to do, or do I need to research a different topic? Thanks.
Firstly, your diagram assumes that the load balancer is acting as a (TCP) proxy, which is not always the case. Often Direct Routing (or Direct Server Return) is used, or Destination NAT is performed. In both cases the connection between backend server and the client is direct. So in this case it is essentially the TCP handshake that is distributed amongst backend servers. See the following for more info:
http://www.linuxvirtualserver.org/VS-DRouting.html
http://www.linuxvirtualserver.org/VS-NAT.html
Obviously TCP proxies do exist (HAProxy being one), in which case the proxy manages both sides of the connection, so your app would need to be able to identify the client by the incoming IP/port (which would happen to be the proxy's rather than the client's). The proxy will handle getting the messages back to the client.
Either way, it comes down to application design. I would imagine the tricky bit is having a common session store (a database of some kind, or a key/value store such as Redis), so that when your app server says "I need to send a message to Frank" it can determine which backend server Frank is connected to (from the store) and signal that server to send him the message. You can reduce the problem of connections from the same client moving around between backend servers by using persistent connections (all load balancers can do this) or by using something intrinsically persistent like a WebSocket.
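A rough sketch of that shared-store idea, with made-up key and function names and Redis used purely as an example store:

```python
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

def register_connection(user, backend):
    # record which backend server currently holds this user's TCP connection
    r.hset("connections", user, backend)

def backend_for(user):
    # look up where the user is connected so that server can be signalled
    backend = r.hget("connections", user)
    return backend.decode() if backend else None

register_connection("frank", "app-server-2")
print(backend_for("frank"))  # -> app-server-2
```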
This is probably a vast oversimplification, as I have no experience with chat software. Obviously the DB servers themselves can be distributed across several machines for fault tolerance and load balancing.

Are there any advantages to using HTTP over HTTPS?

I am managing a web application that dynamically flips between HTTP and HTTPS depending on the page. I want to get rid of a ton of extra code used to flip between HTTP and HTTPS, but I want to understand any implications before I continue.
Is there any advantage to serving part of a site over HTTP rather than HTTPS?
Of course there is some performance drop when using https, but it is not significant unless you have an extremely busy server. See
HTTP vs HTTPS performance
HTTP is not a secure protocol and anyone can intercept the transmitted data in cleartext (e.g. session cookies, passwords, credit card numbers, sexual fetishes). If you can, you should provide consistent HTTPS service throughout.
That said, because of the way the TLS handshake works, the client first looks up the IP address, then negotiates the secure connection, and only then makes the HTTP request. Historically that meant you could only use HTTPS on a server where you had complete and sole control over the IP address, so you could not deploy HTTPS on name-based virtual hosts (shared hosting); SNI has since largely removed this limitation.
(Since you already have a partial HTTPS solution, I imagine that's not a problem for you, though.)
The other downside is that the secure handshake and later encryption require computing resources, so that if you have bazillions of connections, you may feel quite a hit on your server performance. That's for you to consider, though.
Short form: If you have a dedicated IP address and enough computing resources, always and exclusively use HTTPS.
Using HTTP is obviously faster than HTTPS, since you do not have the SSL handshake overhead during connection establishment or the extra encryption/decryption delay.
If you only need parts of your web site to be secure, e.g. just encrypting the login credentials, then it makes sense to keep the redirection code so that the interaction after that is faster over plain-text HTTP.
If many areas of your site need to be secure, you could measure with HTTPS used throughout and see whether the performance is significantly affected.
If you see no significant performance issues (or the performance is acceptable), you can simplify your software design, remove the redirection logic between HTTP and HTTPS, and use HTTPS everywhere.
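If you do go HTTPS-everywhere, the flipping code collapses into a single blanket redirect. A minimal sketch, assuming a Flask app behind a proxy that sets X-Forwarded-Proto (your framework and hosting setup will differ):

```python
from flask import Flask, redirect, request

app = Flask(__name__)

@app.before_request
def force_https():
    # honour the scheme reported by the load balancer / reverse proxy, if any
    forwarded = request.headers.get("X-Forwarded-Proto", "")
    if not (request.is_secure or forwarded == "https"):
        # permanent redirect from http://... to the same URL over https://
        return redirect(request.url.replace("http://", "https://", 1), code=301)
```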
One of the differences between HTTP and HTTPS is that with HTTPS you lose the ability to have intermediaries (caches, proxies, etc.) between the client and server do anything useful with requests and responses, because the content is encrypted. From a security point of view this is a good thing, because it prevents intermediaries from snooping on or tampering with traffic. On the other hand, you reduce the opportunities for dealing with things like scalability, performance and evolvability.

How reliable is the IP address logged in an Apache access log?

My website is suffering from an apparent bot which GETs a particular URL 5 times within a second, waits exactly 2 minutes, then repeats. The request is coming from the same IP address each time, and I have not observed any malicious payload, so I'm undecided on whether it is some form of spam bot. The User-Agent claims to be IE6, which is always suspicious in such an obviously non-human request pattern.
Anyway, I have done a reverse lookup on the IP and have located a contact at that domain, but am I wasting my time trying to get in touch with them? If it's a spam bot, won't the IP address be spoofed? How common is IP address spoofing in HTTP spammers? Does the HTTP protocol make it difficult in any way?
If you spoof the IP, you won't get any response to your HTTP request. Other than that, the HTTP protocol doesn't make spoofing any easier or harder.
However, the logged IP address will be that of the last proxy server or load balancer between the source and your server, so if the traffic is malicious I would expect they're going through some open proxy and you won't easily be able to trace them back.
If it's just accidental misconfiguration, you're in with more of a chance.
Does the URL they are requesting exist on your site?
Can you configure your web server to return an error (403 Forbidden, 500 Internal Server Error, or perhaps a 301 permanent redirect) only to GETs from that address? If the other end starts getting errors, maybe they'll investigate and fix things.
You should contact the people in charge of the domain. Usually the IP address won't be spoofed (that's hard). Most probably one of their computers has been infected by malicious software, and they definitely want to know about that. It's more about doing them a favour than about your own network security.

Maintaining simultaneous connections in HTTP?

I need to maintain multiple active long-polling AJAX connections to the web server.
I know that most browsers don't allow more than 2 simultaneous connections to the same server. This is what the HTTP 1.1 specification states:
Clients that use persistent connections SHOULD limit the number of simultaneous connections that they maintain to a given server. A single-user client SHOULD NOT maintain more than 2 connections with any server or proxy. A proxy SHOULD use up to 2*N connections to another server or proxy, where N is the number of simultaneously active users. These guidelines are intended to improve HTTP response times and avoid congestion.
Supposing that I have two sub-domains, Server1.MyWebSite.Com and Server2.MyWebSite.Com, sharing the same IP address, will I be able to make 2x2 simultaneous connections?
It does appear that different hostnames on the same IP can be useful. You may run into issues when making the AJAX connections due to the Same Origin Policy.
Edit: As per your document.domain question (from Google's Browser Security Handbook):
Checks for XMLHttpRequest targets do not take document.domain into account...
It will be 100% browser dependent. Some might base the 2-connection limit on the domain name, some might base it on the IP address.
Others will let you make as many connections as you like.
No browser bases its connection limit on IP address. All browsers base the limit on the specified FQDN.
Hence, yes, it would be entirely fine to have a DNS alias to your server, although the earlier answer is correct that XHR will require you to use the page's own domain name for the XHR requests; use the alias to download the static content (images, etc.) in the page.
Incidentally, modern browsers typically raise the connection limit to 6 or 8 connections per host.
