Max HTTP concurrent connections per client

While reading this post, I realized there is a constraint on the browser side, following the HTTP 1.1 RFC, that limits the number of concurrent connections to one domain. So I wonder: is there any similar constraint or limitation implemented on the server side to enforce this rule? For example, is each IP only allowed a specific number of concurrent HTTP connections to the server?
I looked into the Tomcat documentation; there are maxConnections and maxThreads settings, but neither of them actually enforces the rule at the IP level. If there is no such control on the server side, does that mean some clients could establish thousands of HTTP connections concurrently by using a non-browser client (browsers have the limitation)? This seems quite unsafe, as some people will do this to attack the server. Can anyone clarify this?

Yes, Tomcat will allow thousands of connections from a single client. maxConnections and maxThreads do not care about the source of those connections.
You could implement a throttling Valve or Filter that would enforce QoS constraints, but no such component comes with Tomcat out of the box.
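For illustration only, here is a minimal sketch of such a Filter (it is not a component shipped with Tomcat, and the class name and the limit of 20 are arbitrary). Note that it bounds in-flight requests per remote IP rather than raw TCP connections:

    import java.io.IOException;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicInteger;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletResponse;

    // Hypothetical per-IP concurrency limiter; not part of Tomcat itself.
    public class PerIpConcurrencyLimitFilter implements Filter {
        private static final int MAX_CONCURRENT_PER_IP = 20; // arbitrary limit
        private final ConcurrentHashMap<String, AtomicInteger> inFlight = new ConcurrentHashMap<>();

        public void init(FilterConfig config) {}
        public void destroy() {}

        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            String ip = req.getRemoteAddr();
            AtomicInteger counter = inFlight.computeIfAbsent(ip, k -> new AtomicInteger());
            if (counter.incrementAndGet() > MAX_CONCURRENT_PER_IP) {
                counter.decrementAndGet();
                // Reject the excess with 503 so legitimate clients get a clear signal.
                ((HttpServletResponse) res).sendError(503, "Too many concurrent requests");
                return;
            }
            try {
                chain.doFilter(req, res);
            } finally {
                counter.decrementAndGet();
            }
        }
    }

The Filter would be mapped to the paths you want to protect in web.xml or via @WebFilter; a real implementation would also have to consider many clients sharing one NAT or proxy IP.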

Related

DDoS Attack Under HTTP

We have received a DDoS attack in which all requests share the same pattern:
Protocol HTTP
GET
Random IP Address
Loading the home page /
Our server was returning a 301 to all requests and we had performance problems; the server was down.
We have blocked all requests coming over HTTP and we have stopped the attack. We would like to know why we are receiving the attack on our servers over HTTP and not HTTPS from different sources, and we would like to know if the source IP could only be changed using HTTP requests?
What's the best way to prevent this kind of attack?
Our server is currently working over HTTPS only, without issues. The server is running on Azure Web Apps.
We have blocked all requests coming over HTTP and we have stopped the attack.
Please note: when people type your URL into the browser manually, the first hit is usually over HTTP. If you turn off HTTP, people will not be able to access the site by simply typing in your domain name.
we would like to know why we are receiving the attack on our servers over HTTP and not HTTPS from different sources
That is for the attacker to decide. Most probably it is only a coincidence that the attack went over HTTP only.
we would like to know if the source IP could only be changed using HTTP requests?
No. For an HTTP request to be performed you need to complete a TCP handshake first. This means you cannot easily fake the IP address, as you need to actively participate in the communication and the routers must see you as a valid participant. You can fake the IP while on the same local network, but that would only work for a single packet and would not allow you to complete a TCP handshake correctly.
What's the best way to prevent this kind of attack?
We are all still struggling with DDoS and there is no 100% solution. An attack of sufficient scale can take down the internet, as has already happened in the past. There are some things you can do, like:
Rate limiting - put some brakes on incoming traffic so it does not kill your infrastructure completely. You will lose some valid traffic, but you will stay up and running (see the sketch after this list).
Filtering - a pain when dealing with DDoS attacks. Analyse which IP addresses are attacking you constantly and filter them on your firewall. (Imagine the fun when you are being attacked by 100k IoT devices.) A WAF (Web Application Firewall) may allow you to filter not only on IP addresses but also on other request parameters.
Scaling up - more infrastructure can do more.
In most cases all you need to do is survive till the attack is over.
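To make the rate-limiting idea above concrete, here is a minimal per-IP token-bucket sketch (illustrative only; the capacity and refill rate are arbitrary). Unlike a concurrency limit, it bounds requests per second; a caller would typically reject the request, e.g. with a 429, when tryAcquire returns false:

    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical per-IP token bucket: allows bursts up to CAPACITY requests,
    // refilled at REFILL_PER_SECOND tokens per second. Both values are arbitrary.
    public class SimpleRateLimiter {
        private static final double CAPACITY = 10.0;
        private static final double REFILL_PER_SECOND = 5.0;

        private static final class Bucket {
            double tokens = CAPACITY;
            long lastRefillNanos = System.nanoTime();
        }

        private final ConcurrentHashMap<String, Bucket> buckets = new ConcurrentHashMap<>();

        /** Returns true if the request from this IP should be allowed. */
        public boolean tryAcquire(String ip) {
            Bucket b = buckets.computeIfAbsent(ip, k -> new Bucket());
            synchronized (b) {
                long now = System.nanoTime();
                double elapsedSeconds = (now - b.lastRefillNanos) / 1_000_000_000.0;
                b.tokens = Math.min(CAPACITY, b.tokens + elapsedSeconds * REFILL_PER_SECOND);
                b.lastRefillNanos = now;
                if (b.tokens >= 1.0) {
                    b.tokens -= 1.0;
                    return true;
                }
                return false;
            }
        }
    }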

How are pools of HTTPS connections managed?

Consider an application that accesses a remote HTTPS server, sending POSTs of JSON-formatted requests to a URL on the server and receiving JSON-formatted answers. The server does not support HTTP/2 multiplexing.
There are many requests, with a widely varying workload (from idle to hundreds of TPS). JSON messages are on the order of 1 kbyte. Client and server are authenticated by certificates + private keys. The requests can be considered independent (in particular, the server treats requests alike for all HTTPS channels opened with the same client certificate).
HTTP/1.1 does not allow* multiple concurrent POST requests over the same connection. Therefore the throughput can't exceed N/(Tr+Ts) TPS, where N is the number of open HTTPS/TLS channels in use, Tr is the network round-trip delay, and Ts is the processing time on the server side (on the order of 30 ms under low load, due to database access and other factors). Opening an HTTPS connection costs at least 4 Tr, plus sizable CPU time on both sides. It looks like something is needed to manage a pool of HTTPS connections on the client side.
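As an illustration (these numbers are assumed, not from the question): with Tr ≈ 20 ms and Ts ≈ 30 ms, each connection can carry at most 1/(0.020 + 0.030) = 20 TPS, so sustaining 200 TPS requires roughly N = 10 connections held open and reused.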
How is this issue usually handled?
What are common libraries or background daemons/services that automatically open new HTTPS connections as needed and reuse them when possible?
It would be nice if it detected when the server becomes unresponsive and handled fallback to a backup server at a different URL, with a return to the main server when it is up again.
Note: The next step would be load balancing, but then my load-balancing layer must somehow handle affinity between requests, since they are not fully independent (sending a dependent request to the wrong server is reliably detected by the server, though).
[*] Due to how RFC 2616 is interpreted, I'm told.
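One widely used client-side approach (offered here only as an illustration, not as the asker's chosen solution) is a pooling HTTP client such as Apache HttpClient 4.x, whose PoolingHttpClientConnectionManager keeps persistent TLS connections open and reuses them across requests. The host name, pool sizes and payload below are placeholders, and client-certificate authentication (which would be configured via an SSLContext) is omitted for brevity:

    import org.apache.http.client.methods.CloseableHttpResponse;
    import org.apache.http.client.methods.HttpPost;
    import org.apache.http.entity.ContentType;
    import org.apache.http.entity.StringEntity;
    import org.apache.http.impl.client.CloseableHttpClient;
    import org.apache.http.impl.client.HttpClients;
    import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;
    import org.apache.http.util.EntityUtils;

    public class PooledJsonClient {
        public static void main(String[] args) throws Exception {
            // Pool of persistent HTTPS connections, reused across requests.
            PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
            cm.setMaxTotal(20);            // total connections in the pool (placeholder value)
            cm.setDefaultMaxPerRoute(20);  // connections per target host (placeholder value)

            CloseableHttpClient client = HttpClients.custom()
                    .setConnectionManager(cm)
                    .build();

            // example.invalid is a placeholder URL, not from the question.
            HttpPost post = new HttpPost("https://example.invalid/api");
            post.setEntity(new StringEntity("{\"hello\":\"world\"}", ContentType.APPLICATION_JSON));

            try (CloseableHttpResponse response = client.execute(post)) {
                // Fully consuming the entity releases the connection back to the pool.
                System.out.println(EntityUtils.toString(response.getEntity()));
            }
            client.close();
        }
    }

Detecting an unresponsive server and failing over to a backup URL is not something the pool does by itself; it is usually layered on top, e.g. by retrying against a second base URL when requests to the primary time out.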

Are there performance advantages in HTTP/2 over HTTP/1.1 for service-to-service communication?

I'm just curious if I'm missing something in HTTP/2 that would make it more efficient for service-to-service communication, for example in a microservice architecture.
Are its improvements just related to end-users (browsers)?
If you are issuing many concurrent requests between microservices, then there is a benefit from connection multiplexing: you do not need to manage TCP connection pools on the client, or restrict the number of incoming TCP connections on the service side.
Some services might benefit from server push, though it really depends on what the service does.
Header compression can be useful if you have high traffic volumes to the service with repeated metadata. More information can be found here.
In summary, yes, it is designed more with end users in mind, but there's value for RESTful microservices as well, especially due to connection multiplexing.
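As a rough sketch of what this looks like in Java (the host and paths below are placeholders, not from the answer): the JDK's java.net.http.HttpClient negotiates HTTP/2 when the server supports it, so concurrent requests to the same service are multiplexed over one TCP connection instead of one connection per in-flight request.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.List;
    import java.util.concurrent.CompletableFuture;
    import java.util.stream.Collectors;

    public class Http2MultiplexDemo {
        public static void main(String[] args) {
            // Prefer HTTP/2; the client silently falls back to HTTP/1.1 if the server lacks support.
            HttpClient client = HttpClient.newBuilder()
                    .version(HttpClient.Version.HTTP_2)
                    .build();

            // service.internal.invalid is a placeholder host.
            List<CompletableFuture<HttpResponse<String>>> futures = List.of("/users", "/orders", "/stock")
                    .stream()
                    .map(path -> HttpRequest.newBuilder(
                            URI.create("https://service.internal.invalid" + path)).GET().build())
                    .map(req -> client.sendAsync(req, HttpResponse.BodyHandlers.ofString()))
                    .collect(Collectors.toList());

            // Over HTTP/2 these concurrent requests share a single multiplexed connection;
            // over HTTP/1.1 the client would open several connections from its internal pool.
            futures.forEach(f -> System.out.println(f.join().statusCode()));
        }
    }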
HTTP/2 adds an additional aspect to service-to-service communication that was not mandatory with HTTP/1.1, and that is security in the form of SSL/TLS.
Although not required by the RFC, almost all HTTP/2 servers and clients only support HTTP/2 over TLS, which makes encryption de facto mandatory.
So if you want to offer and consume microservices over HTTP/2, you have to think about ways to create, manage and distribute SSL certificates to servers and clients.
Consequently, moving to HTTP/2 means introducing a new stack of technology, e.g. a public key infrastructure, to your service ecosystem.
Another way to make your services HTTP/2-ready for your service consumers would be to place a reverse proxy between your HTTP/2-enabled consumers and your HTTP/1.1 services.
The proxy would terminate the HTTP/2 connections from the consumers and translate them into HTTP/1.1 requests for your servers (and vice versa).
This would implement a separation of concerns, where your services would only be responsible for their business logic, while the proxies would handle the certificates and encryption. But again, you would add more complexity to your system.
More Complexity, but also better use of network resources
More complexity is what you pay with, but you get smarter use of network resources in return. With HTTP/1.1 you can have multiple TCP connections between one client and a server, and opening multiple connections is almost always necessary to overcome HTTP/1.1's performance drawbacks.
Establishing TCP connections is an expensive task, though: creating one involves a DNS lookup, a TCP handshake and an SSL handshake.
HTTP/2 expects a client to keep exactly one open TCP connection to a given server (the spec says clients SHOULD NOT open more than one). At the same time, HTTP/2 brings connection multiplexing, i.e. you can have multiple HTTP conversations simultaneously over that same TCP connection, whereas with HTTP/1.1 each connection carries only one request/response at a time.

HTTP and Sessions

I just went through the HTTP 1.1 specification at http://www.w3.org/Protocols/rfc2616/rfc2616.html and came across a section about connections, http://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html#sec8, that says
" A significant difference between HTTP/1.1 and earlier versions of HTTP is that persistent connections are the default behavior of any HTTP connection. That is, unless otherwise indicated, the client SHOULD assume that the server will maintain a persistent connection, even after error responses from the server.
Persistent connections provide a mechanism by which a client and a server can signal the close of a TCP connection. This signaling takes place using the Connection header field (section 14.10). Once a close has been signaled, the client MUST NOT send any more requests on that connection. "
Then I also went through a section on HTTP state management at https://www.rfc-editor.org/rfc/rfc2965, which says in its section 2 that
"Currently, HTTP servers respond to each client request without relating that request to previous or subsequent requests;"
A section in RFC 2616 about the need for persistent connections also said that, prior to persistent connections, every time a client wished to fetch a URL it had to establish a new TCP connection for each and every request.
Now my question is: if we have persistent connections in HTTP/1.1, then as mentioned above a client does not need to make a new connection for every request; it can send multiple requests over the same connection. So if the server knows that every subsequent request is coming over the same connection, would it not be obvious that the requests are from the same client? Would that not suffice to maintain state, and be enough for the server to understand that the request came from the same client? In that case, why is a separate state management mechanism required at all?
Basically, yes, it would make sense, but HTTP persistent connections are used to eliminate the administrative TCP/IP overhead of connection handling (e.g. connect/disconnect/reconnect, etc.). They are not meant to say anything about the state of the data moving across the connection, which is what you're talking about.
No. For instance, there might be an intermediary (such as a proxy or a reverse proxy) in the request path that aggregates requests from multiple TCP connections.
See http://greenbytes.de/tech/webdav/draft-ietf-httpbis-p1-messaging-21.html#intermediaries.
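To make the distinction concrete, here is a minimal servlet sketch (illustrative only, not taken from the answers above): the state lives in an HttpSession keyed by a session cookie, so it survives regardless of which, or how many, TCP connections the requests arrive on.

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpSession;

    // Illustrative servlet: the visit counter is tied to a session cookie (JSESSIONID),
    // not to the TCP connection the request happened to arrive on.
    public class VisitCounterServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            HttpSession session = req.getSession(true); // creates the session cookie on the first visit
            Integer visits = (Integer) session.getAttribute("visits");
            visits = (visits == null) ? 1 : visits + 1;
            session.setAttribute("visits", visits);
            resp.setContentType("text/plain");
            resp.getWriter().println("Visits in this session: " + visits);
        }
    }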

Maintaining simultaneous connections in HTTP?

I need to maintain multiple active long-polling AJAX connections to the web server.
I know that most browsers don't allow more than 2 simultaneous connections to the same server. This is what the HTTP 1.1 protocol states:
Clients that use persistent connections SHOULD limit the number of simultaneous connections that they maintain to a given server. A single-user client SHOULD NOT maintain more than 2 connections with any server or proxy. A proxy SHOULD use up to 2*N connections to another server or proxy, where N is the number of simultaneously active users. These guidelines are intended to improve HTTP response times and avoid congestion.
Supposing that I have 2 sub-domains Server1.MyWebSite.Com and Server2.MyWebSite.Com sharing the same IP address, will I be able to make 2x2 simultaneous connections?
It does appear that different hostnames on the same IP can be useful. You may run into issues when making the AJAX connections due to the Same-Origin Policy.
Edit: As per your document.domain question (from Google's Browser Security Handbook):
Checks for XMLHttpRequest targets do not take document.domain into account...
It will be 100% browser dependent. Some might base the 2-connection limit on the domain name, some might base it on the IP address.
Others will let you do as many as you like.
No browser bases its connection limit on IP address. All browsers base the limit on the specified FQDN.
Hence, yes, it would be entirely fine to have a DNS alias to your server, although the earlier answer is correct that XHR will require you to use the page's domain name for the AJAX requests; use the alias to download the static content (images, etc.) in the page.
Incidentally, modern browsers typically raise the connection limit to 6 or 8 connections per host.
