HTTP/2: one TCP/IP connection vs 6 parallel connections

It's said that one of the advantages of HTTP/2 over HTTP/1.1 is that HTTP/2 has streams of data. There can be up to 256 different streams in ONE TCP/IP connection, whereas in HTTP/1.1 there can be up to 6 parallel connections. It's an improvement that HTTP/2 enables reading data from 256 resources, but I still think that 6 connections (in HTTP/1.1) should have better throughput than one TCP/IP connection (in HTTP/2). Yet HTTP/2 is considered faster than HTTP/1.1. So... what am I not understanding correctly?

6 physical connections would have more throughput than one physical connection, all else being equal.
However, the same doesn't apply to 6 different TCP/IP connections between the same two computers, as these are virtual connections (assuming you don't have two network cards). The limiting factor is usually the latency and bandwidth of your internet connection rather than the TCP/IP protocol itself.
In fact, due to the way TCP connections are created and handled, it's actually much more efficient to have one TCP/IP connection. This is partly because of the cost of setting up each connection: the three-way TCP handshake, the HTTPS handshake, and the fact that TCP uses a process called Slow Start to gradually build up to the maximum speed the network can handle. But it's also because of the ongoing upkeep of the connection: the Slow Start process happens again periodically unless the connection is fully utilised all the time, which is much more likely to be the case with one connection that is used for everything than when your requests are split across 6 connections.
Also, HTTP/1.1 only allows one request in flight at a time per connection, so a connection cannot be reused until its response is returned (ignoring pipelining, which is in the HTTP/1.1 spec but barely supported in practice). This not only limits the usefulness of the 6 connections, but also makes it more likely that the connections will be underused, which, given the issues with underused TCP connections mentioned above, means they are likely to be slower as they throttle back and have to go through the Slow Start process again to build up to maximum capacity. HTTP/2, by contrast, allows up to 256 of those streams to carry requests in flight at the same time, which is both better than just 6 connections and gives you true multiplexing. A rough illustration of what that looks like from the client side is sketched below.
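To make the difference concrete, here is a minimal sketch (my own, not from the answer above) of multiplexed requests over a single connection, using the third-party Python httpx library (installed with `pip install 'httpx[http2]'`); the URLs are placeholders:

```python
# Sketch: several concurrent requests multiplexed as streams on ONE
# HTTP/2 connection. The example.com URLs are placeholders.
import asyncio
import httpx

async def main():
    # One AsyncClient reuses one TCP connection; with http2=True the
    # concurrent requests below become streams multiplexed on it.
    async with httpx.AsyncClient(http2=True) as client:
        urls = [f"https://example.com/resource/{i}" for i in range(10)]
        responses = await asyncio.gather(*(client.get(u) for u in urls))
        for r in responses:
            print(r.http_version, r.status_code, r.url)

asyncio.run(main())
```

With HTTP/1.1, those ten requests would need to queue up on (at most) 6 connections; here they all travel over one warmed-up connection that never has to repeat Slow Start.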
If you want to know more, Ilya Grigorik has written an excellent book on the subject called High Performance Browser Networking, which is even available online for free.

Related

Why does HTTP/1.1 recommend to be conservative with opening connections?

With HTTP/1.0, there used to be a recommended limit of 2 connections per domain. More recent HTTP RFCs have relaxed this limitation but still warn to be conservative when opening multiple connections:
According to RFC 7230, section 6.4, "a client ought to limit the number of simultaneous open connections that it maintains to a given server".
More specifically, HTTP/2 aside, browsers these days impose a per-domain limit of 6-8 connections when using HTTP/1.1. From what I'm reading, these guidelines are intended to improve HTTP response times and avoid congestion.
Can someone help me understand what would happen to congestion and response times if many connections were opened per domain? It doesn't sound like an HTTP server problem, since the number of connections a server can handle seems like an implementation detail. The explanation above seems to say it's about TCP performance? I can't find any more precise explanation of why HTTP clients limit the number of connections per domain.
The primary reasoning for this is resources on the server side.
Imagine that you have a server running Apache with the default of 256 worker threads, hosting an index page that has 20 images on it. Now imagine that 20 clients simultaneously connect and download the index page; each client closes its connection after obtaining the page.
Since each of them will now establish connections to download the images, the number of connections grows multiplicatively. Consider what happens if every client is configured to establish up to ten simultaneous connections in parallel to optimize the display of the page with images. That takes us very quickly to 200 simultaneous connections, already approaching the 256 workers that Apache has available (again, by default, with prefork).
For the server, resources must be balanced to be able to serve the most likely load, but the clients help with this tremendously by throttling connections. If every client felt free to establish 100+ connections to a server in parallel, we would very quickly DoS lots of hosts. :)
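As a small illustration of that client-side throttling (my own sketch, not part of the original answer; the httpx library and the URLs are assumptions), a connection-pool cap lets a burst of requests share a few connections instead of opening one each:

```python
# Sketch: cap the client's connection pool so 20 concurrent requests
# to one host never open more than 6 TCP connections, mirroring the
# per-domain limit browsers apply to HTTP/1.1.
import asyncio
import httpx

async def main():
    # httpx's max_connections is a pool-wide cap; since every request
    # here targets one host, it acts as a per-host limit.
    limits = httpx.Limits(max_connections=6)
    async with httpx.AsyncClient(limits=limits) as client:
        urls = [f"https://example.com/image/{i}.png" for i in range(20)]
        responses = await asyncio.gather(*(client.get(u) for u in urls))
        print([r.status_code for r in responses])

asyncio.run(main())
```

Requests beyond the cap simply wait for a free connection, which is exactly the "be conservative" behaviour the RFC asks of clients.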

Can I have multiple open SSE channels when using HTTP/2?

So far I have only used HTTP/1.1, but recently I switched to HTTP/2. With HTTP/1.1 I ran into request-limit issues, but HTTP/2 uses one connection with multiplexing. Does that mean I can keep multiple SSE channels open with no problems, or should I still use only one with some internal message-routing solution?
If you want to be safe: Use just one channel or only a few of them and multiplex internally.
Longer answer: The reason that more channels caused problems with HTTP/1.1 is that each channel required a dedicated TCP connection, and browsers limit the number of concurrent TCP connections for each tab (I think to something around 10). With HTTP/2, concurrent HTTP requests are possible on a single connection, so opening multiple concurrent SSE streams is more likely to work. However, browsers (and also webservers) may still limit the number of concurrent HTTP/2 streams they support over a TCP connection. HTTP/2 even supports this explicitly, by allowing each peer in an HTTP/2 session to communicate the maximum number of concurrent streams it supports (SETTINGS_MAX_CONCURRENT_STREAMS). To be safe, you would need to figure out the limits your target browsers and your web server support, and use a lower number of SSE streams. I unfortunately don't know whether any HTML or browser specification guarantees a well-specified minimum number of concurrent requests over HTTP/2. If you keep the number of streams low, you avoid running into problems.
One other advantage of using only a few channels is that you can still support HTTP/1.1 clients well: not only those which connect directly to your server, but also those which connect through a proxy server (meaning the browser<->proxy connection uses HTTP/1.1 while proxy<->webserver uses HTTP/2). The single-channel, route-internally idea is sketched below.
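Here is a minimal, framework-agnostic sketch of that approach (my own illustration; the event names and payloads are hypothetical). Instead of one SSE stream per topic, each message carries an SSE `event:` field and the browser's EventSource listeners route on it:

```python
# Sketch: one SSE stream carrying many logical channels, distinguished
# by the standard 'event:' field of the Server-Sent Events wire format.
def sse_frame(event: str, data: str) -> bytes:
    """Encode one SSE message with a named event type."""
    return f"event: {event}\ndata: {data}\n\n".encode("utf-8")

# A single stream can interleave several logical channels:
frames = [
    sse_frame("chat", '{"user": "alice", "text": "hi"}'),
    sse_frame("presence", '{"user": "bob", "online": true}'),
]
for f in frames:
    print(f.decode("utf-8"), end="")

# On the client side, EventSource.addEventListener("chat", ...) and
# addEventListener("presence", ...) dispatch these without any
# additional connections or streams.
```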

In normal operation, how long will the TCP connection be kept open with websocket

WebSocket is often bootstrapped via a protocol upgrade from HTTP, but even then it is a layer directly on top of TCP.
Given that, how long can I expect a single TCP connection to be used underlying the websocket? How often will that connection have to be replaced (all without the client/server necessarily being aware)?
As Joakim Erdfelt commented, there is no way to know the exact timeout value of a TCP connection, because it is a value which can be configured individually on each system which takes part in the connection.
But note that TCP connections are usually only dropped when there was no communication for an extended period of time. You can prevent this from happening by sending small, meaningless keep-alive messages at regular intervals.
In my current application I am doing this by pinging the client every 10 seconds. This has the advantage that I also get a good measure of network latency.
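A minimal sketch of that 10-second keep-alive ping, using the third-party Python `websockets` library (`pip install websockets`); the URL is a placeholder, and note the library can also send pings automatically via its `ping_interval` option:

```python
# Sketch: ping the peer every 10 seconds to keep the underlying TCP
# connection alive, measuring round-trip latency as a side benefit.
import asyncio
import time
import websockets

async def keepalive(url: str, interval: float = 10.0) -> None:
    async with websockets.connect(url) as ws:
        while True:
            t0 = time.monotonic()
            pong_waiter = await ws.ping()  # send a protocol-level ping frame
            await pong_waiter              # resolves when the pong arrives
            print(f"round-trip latency: {(time.monotonic() - t0) * 1000:.1f} ms")
            await asyncio.sleep(interval)

asyncio.run(keepalive("wss://example.com/socket"))
```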

TCP Persistent Connections with HTTP?

So I thought that with HTTP/1.1 your TCP connection is sustained for as long as you are communicating with that server? How does it actually work? How does the TCP connection know when you are done writing into the socket? Any information would be awesome; I have done research but I can't find what I'm looking for, short of reading the RFC.
The typical implementation is that the HTTP server will have a timeout (typically called KeepAliveTimeout or such) after which it will close an idle connection.
On a server which reserves a thread or an entire process per connection (such as Apache with the usual mpm_prefork or mpm_worker), keepalives are usually disabled entirely or kept quite short (a few seconds). For an event-based server such as nginx, which uses much less memory per connection, the keepalive timeout can be left at a much higher value (typically a minute or so).
See section 8.1 of RFC 2616. Basically, HTTP 1.1 treats all connections as persistent, but the language of the RFC doesn't mandate this behaviour, since it uses the word "SHOULD". If it were mandated, it would use "MUST".
However, the RFC does not specify in detail how an implementation does this. As can be seen from the HTTP persistent connection page on Wikipedia, Apache's default timeout (beyond which it closes an idle persistent connection, freeing the worker for other uses) may be as low as five seconds (though this is almost certainly configurable, given all the other knobs and dials that Apache provides).
In other words, persistence is meant for numerous requests to the same address within a short time frame, so as not to waste time opening and closing a bucket-load of sessions where one will do. Increasing this timeout is not a "free ride", since resources are tied up while the connection is held open. In an environment where you expect lots of incoming clients, tying up these resources can be fatal to performance. A stdlib sketch of connection reuse follows.
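Here is a minimal sketch using only the Python standard library: several requests reusing one persistent HTTP/1.1 connection (the host is a placeholder):

```python
# Sketch: three requests over ONE persistent HTTP/1.1 connection,
# instead of a TCP + TLS handshake per request.
import http.client

conn = http.client.HTTPSConnection("example.com")
for path in ("/", "/a", "/b"):
    conn.request("GET", path)
    resp = conn.getresponse()
    resp.read()  # the body must be fully read before reusing the connection
    print(path, resp.status, resp.getheader("Connection"))
conn.close()  # the client signals it is done; otherwise the server's
              # keepalive timeout eventually closes the idle connection
```

This also answers "how does TCP know you are done": it doesn't. Either side closes the socket explicitly, or the server's keepalive timer does it after a period of idleness.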

Web Browser Parallel Downloads vs Pipelining

I always knew web browsers could do parallel downloads. But then the other day I heard about pipelining. I thought pipelining was just another name for parallel downloads, but then found out that even Firefox has pipelining disabled by default. What is the difference between these things, and how do they work together?
As I understand it, "parallel downloads" are requests going out on multiple sockets. They can be to totally unrelated servers, but they don't have to be.
Pipelining is an HTTP/1.1 feature that lets you make multiple requests on the same socket before receiving a response. When connecting to the same server, this reduces the number of sockets, conserving resources.
I think this MDC article explains HTTP pipelining pretty darn well.
What is HTTP pipelining?
Normally, HTTP requests are issued sequentially, with the next request being issued only after the response to the current request has been completely received. Depending on network latencies and bandwidth limitations, this can result in a significant delay before the next request is seen by the server.
HTTP/1.1 allows multiple HTTP requests to be written out to a socket together without waiting for the corresponding responses. The requestor then waits for the responses to arrive in the order in which they were requested. The act of pipelining the requests can result in a dramatic improvement in page loading times, especially over high latency connections.
Pipelining can also dramatically reduce the number of TCP/IP packets. With a typical MSS (maximum segment size) in the range of 536 to 1460 bytes, it is possible to pack several HTTP requests into one TCP/IP packet. Reducing the number of packets required to load a page benefits the internet as a whole, as fewer packets naturally reduces the burden on IP routers and networks.
HTTP/1.1 conforming servers are required to support pipelining. This does not mean that servers are required to pipeline responses, but that they are required to not fail if a client chooses to pipeline requests. This obviously has the potential to introduce a new category of evangelism bugs, since no other popular web browsers implement pipelining.
I recommend reading the whole article since there's more than what I copied into my answer.
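To show the mechanics concretely, here is a raw-socket sketch of pipelining (my own illustration, using only the Python standard library; the host is a placeholder). Many servers no longer honor pipelining, so treat this as a demonstration of the concept, not production code:

```python
# Sketch: pipeline two HTTP/1.1 requests on one socket, i.e. write
# both requests before reading either response.
import socket

HOST = "example.com"  # placeholder host
req = (
    "GET / HTTP/1.1\r\nHost: {h}\r\n\r\n"
    "GET / HTTP/1.1\r\nHost: {h}\r\nConnection: close\r\n\r\n"
).format(h=HOST)

with socket.create_connection((HOST, 80)) as s:
    s.sendall(req.encode("ascii"))  # both requests in one write, possibly
                                    # small enough to fit one TCP packet
    data = b""
    while chunk := s.recv(4096):    # responses arrive in request order;
        data += chunk               # 'Connection: close' ends the stream

# Crude check: count status lines to see both responses came back.
print(data.count(b"HTTP/1.1"), "occurrences of 'HTTP/1.1' received")
```

Contrast this with parallel downloads, which would open a second socket (and pay a second handshake) to achieve the same overlap.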
