Are large HTTP envelopes split across several partial http requests? - http

I'm reading a book that looks at different web service architectures, including an overview of how the SOAP protocol can be implemented over HTTP. This was interesting to me because I do a lot of WCF development and didn't realize how client/server communication was implemented.
Since protocols like TCP and whatever is lower than that have fixed maximum packet sizes and as such have to split messages into packets, I just assumed that HTTP was similar. Is that not the case?
I.e. If I make a GET/POST request with a 20MB body, will a single HTTP envelope be sent and reassembled on the server?
If this is the case, what is the largest practical size of an HTTP request? I have previously configured Nginx servers to allow 20 MB file transfers and I'm wondering if this is too high...

From the HTTP specification's point of view, there is no limit on the size of an HTTP payload. According to RFC 7230:
Any Content-Length field value greater than or equal to zero is valid. Since there is no predefined limit to the length of a payload, a recipient MUST anticipate potentially large decimal numerals and prevent parsing errors due to integer conversion overflows.
However, to prevent attacks via very long or very slow streams of data, a web server may reject such an HTTP request and return a 413 Payload Too Large response.
"Since protocols like TCP and whatever is lower than that have fixed maximum packet sizes and as such have to split messages into packets, I just assumed that HTTP was similar. Is that not the case?"
No. HTTP is an application-level protocol and is totally different. Since HTTP runs on top of TCP, the data is automatically split into packets at the TCP level as it is transferred. There is no need to split the request at the HTTP level.
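For illustration, here is a minimal Python sketch (using the requests library; the upload URL is a placeholder, since the original question is about WCF and Nginx rather than Python). A 20 MB body still goes out as a single HTTP POST - TCP does the packet-level splitting - and a server with a lower configured limit (for example Nginx's client_max_body_size directive) would simply answer 413:

```python
import requests

payload = b"x" * (20 * 1024 * 1024)  # one 20 MB request body

# Hypothetical endpoint, for illustration only.
resp = requests.post("https://example.com/upload", data=payload)

if resp.status_code == 413:
    # e.g. Nginx rejecting the body because client_max_body_size is lower
    print("413 Payload Too Large: the server enforces its own limit")
else:
    print("Sent as one HTTP request, status:", resp.status_code)
```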

An HTTP body can be as large as you want it to be; there is no download size limit. A size limit is usually set for uploads, to prevent someone from uploading massive files to your server.
You can ask for a section of a resource using the Range header, if you only want part of it.
IE had limits of 2 GB and 4 GB at times, but these have since been fixed.
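As a rough sketch of that Range mechanism (Python requests; the URL is a placeholder), a partial fetch looks like this:

```python
import requests

# Ask for only the first 1024 bytes of the resource.
resp = requests.get(
    "https://example.com/big-file.iso",          # placeholder URL
    headers={"Range": "bytes=0-1023"},
)

print(resp.status_code)                          # 206 Partial Content if supported
print(len(resp.content))                         # 1024 bytes
print(resp.headers.get("Content-Range"))         # e.g. "bytes 0-1023/1073741824"
```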

Related

HTTP Request parsing

I would like to understand the usage of 'Transfer-Encoding: chunked' in the case of HTTP requests.
Is it common for requests to be chunked?
My thinking is no: since requests need to be completely read before processing, it does not make sense to send chunked requests.
It is not that common, but it can be very useful for large request bodies.
My thinking is no: since requests need to be completely read before processing, it does not make sense to send chunked requests.
(1) No, they don't need to be read completely.
(2) ...and the main reason to compress it is to save bytes on the wire anyway.
For an HTTP agent acting as a reverse proxy or a forward proxy, taking a message from one side and sending it on the other, chunked transmission means you can forward the parts of the message you already have without storing the whole message locally. You avoid the buffering problems, slowdown and storage.
You also get optimizations based on each actor's preferred data block size: one actor might like sending chunks of 8000 bytes because that's the right number for its own kernel settings (TCP windows, internal HTTP server buffer size, etc.), while another actor on the transmission path uses smaller chunks of 2048 bytes.
Finally, you do not need to compute the size of the message in advance; the message simply ends with the end-of-stream marker. This is also useful if you are sending something that is compressed on the fly, where you may not know the final size until everything has been compressed.
Chunked transmission is used a lot. It is the default mode of most HTTP servers if you ask for HTTP/1.1 mode and not HTTP/1.0.
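As a small illustration (an assumption, since the question isn't tied to any language): with the Python requests library, passing a generator as the request body makes it send Transfer-Encoding: chunked instead of Content-Length, which is exactly the "size not known up front" case described above. The endpoint URL is a placeholder.

```python
import requests

def body_chunks():
    # Produce the body piece by piece, e.g. while compressing on the fly,
    # so the total size is not known when the request starts.
    for i in range(3):
        yield f"chunk {i}\n".encode()

# A generator body makes requests use Transfer-Encoding: chunked.
resp = requests.post("https://example.com/ingest", data=body_chunks())
print(resp.status_code)
```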

How is a browser able to resume downloads?

Do downloads use HTTP? How can they resume downloads after they have been suspended for several minutes? Can they request a certain part of the file?
Downloads are done over either HTTP or FTP.
For a single, small file, FTP is slightly faster (though you'll barely notice a difference). For downloading large files, HTTP is faster due to automatic compression. For multiple files, HTTP is always faster due to reusing existing connections and pipelining.
Parts of a file can indeed be requested independently of the whole file, and this is actually how downloads work. This is a process known as 'Chunked Encoding'. A browser requests individual parts of a file, downloads them independently, and assembles them in the correct order once all parts have been downloaded:
In chunked transfer encoding, the data stream is divided into a series of non-overlapping "chunks". The chunks are sent out and received independently of one another. No knowledge of the data stream outside the currently-being-processed chunk is necessary for both the sender and the receiver at any given time.
And according to FTP vs HTTP:
During a "chunked encoding" transfer, the sending party sends a stream of [size-of-data][data] blocks over the wire until there is no more data to send and then it sends a zero-size chunk to signal the end of it.
This is combined with a process called 'Byte Serving' to allow for resuming of downloads:
Byte serving begins when an HTTP server advertises its willingness to serve partial requests using the Accept-Ranges response header. A client then requests a specific part of a file from the server using the Range request header. If the range is valid, the server sends it to the client with a 206 Partial Content status code and a Content-Range header listing the range sent.
Do downloads use HTTP?
Yes, especially since major browsers have deprecated FTP.
How can they resume downloads after they have been suspended for several minutes?
Not all downloads can resume after this long. If the (TCP or SSL/TLS) connection has been closed, another one has to be established to resume the download. (If it's HTTP/3 over QUIC, then it's another story.)
Can they request a certain part of the file?
Yes. This can be done with Range Requests. But it requires server-side support (especially when the requested resource is provided by a dynamic script).
That other answer mentioning chunked transfer had mistaken it for the underlying mechanism of TCP. Chunked transfer is not designed for the purpose of resuming partial downloads. It's designed for delimiting the message boundary when the Content-Length header is not present, and when the communicating parties wish to reuse the connection. It is also used when the protocol version is HTTP/1.1 and there's a trailer fields section (which is similar to the header fields section, but comes after the message body). HTTP/2 and HTTP/3 have their own way to convey trailers.
Even if multiple non-overlapping "chunks" of the resource are requested, they are encapsulated in a multipart/* message.
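Putting the range-request mechanism together, a hedged sketch of a resumable download (Python requests; the URL and file name are placeholders) might look like this:

```python
import os
import requests

url = "https://example.com/big-file.iso"   # placeholder
path = "big-file.iso"

# Resume from however many bytes we already have on disk.
have = os.path.getsize(path) if os.path.exists(path) else 0
resp = requests.get(url, headers={"Range": f"bytes={have}-"}, stream=True)

if resp.status_code == 206:                # server honoured the range
    mode = "ab"                            # append the missing tail
else:
    resp.raise_for_status()                # any error status raises here
    mode = "wb"                            # 200: no range support, download it all again

with open(path, mode) as f:
    for block in resp.iter_content(64 * 1024):
        f.write(block)
```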

What is the recommended HTTP POST content length?

I have several clients that constantly post data to a REST service. REST service is put behind a network load balancer. Each client sends 100 - 500 MB a day and I need to support 500+ clients.
I can either POST very large packets; this will reduce the overhead of TCP/IP session set-up and HTTP headers. It will, however, firmly tie one client to a particular server and limit my scalability options. Alternatively, I can send small HTTP packets, which I can load balance well, but I will incur more overhead from TCP/IP session set-up and HTTP headers.
What is the recommended packet size for HTTP POST? Or how can I calculate one for my environment?
There is no recommended size.
While HTTP POST size is not constrained by the RFCs, since HTTP is a commodity protocol implementing request/response style messaging, most of the infrastructure is configured around the idea that TCP connections are not particularly long-lasting and do not carry significant amounts of data. In other words, there will be factors outside your control which may impact the service; although HTTP supports range requests for responses, there is no equivalent for requests.
You can get around a lot of these (although not all) by using HTTPS. However you still need to think about how you detect/manage outages - are you happy to wait for a TCP timeout?
With 500+ clients presumably using the system quite heavily, the congestion avoidance limits shouldn't be a problem - whether TCP window scaling is likely to be an issue depends on how the system is used. HTTP handshakes should not be an issue unless you restrict the request size to something silly.
If the service is highly dependent on clients pushing lots of data to your server, then I'd encourage you to look at parsing the data on the client (given the volume, presumably it's coming from files, implying a signed Java applet or JavaScript with the UniversalBrowserRead privilege) and then sending it over a bi-directional communication channel (e.g. a websocket).
Leaving that aside for now, the only way you can find out what the route between your clients and your server will support is to measure it - and monitor it. I would expect that a 2 MB upload size would work pretty much anywhere, while a 10 MB size would work most of the time within the US or Europe - and that you could probably increase this to 50 MB as long as there are no mobile clients.
But if you want to maintain the effectiveness of the service you'll need to monitor bandwidth, packet loss and lost connections.
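As a hedged sketch of the "many smaller POSTs" option discussed above (Python requests; the endpoint and the 2 MB batch size are assumptions based on the estimate above), each batch stays small enough to be routed independently by the load balancer:

```python
import requests

BATCH_BYTES = 2 * 1024 * 1024              # ~2 MB per POST, per the estimate above

def post_in_batches(records, url="https://example.com/ingest"):   # placeholder URL
    buf, size = [], 0
    for rec in records:
        line = rec.encode() + b"\n"
        if buf and size + len(line) > BATCH_BYTES:
            requests.post(url, data=b"".join(buf)).raise_for_status()
            buf, size = [], 0
        buf.append(line)
        size += len(line)
    if buf:                                 # flush the final partial batch
        requests.post(url, data=b"".join(buf)).raise_for_status()
```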

Maximum HTTP Packet Size

What is the maximum size of HTTP packets? I am interested in the size of the response to an HTTP GET request (not the request size, which is a separate question). Is there a size limit at all? If I download a 1 GiB file, does that end up as just 1 HTTP GET request? (Intuitively, I do not think this happens; also, partial downloads / multi-threaded downloaders would not work otherwise.)
I know that there is a maximum length for an IP packet, and TCP segments longer than that are fragmented across multiple IP packets. Does such a thing happen with HTTP as well? The reason I am looking for an answer to this question is to figure out the AWS S3 billing scheme, which charges 1c per 10K GET requests. So how many GET requests are being served for 1 GiB?
I got an answer on the AWS S3 Forum from Seungyoung Kim:
If you download a 5 GiB file using a GET request, S3's billing system will count it as 1 GET request and 5 GiB of data transfer. FYI, if you stop downloading the file at 2 GiB out of 5 GiB, then they will count only 2 GiB of data transfer.
A GET request is not normally fragmented unless you split up the call into several consecutive GET range calls.
So, if you issue 5 individual GET requests using a Range header for each 1 GiB data block, they will count it as 5 GET requests but still the same 5 GiB (1 GiB x 5) of data transfer. Technically a little more than 5 GiB due to the additional HTTP protocol headers.
If this is right, there is no fragmentation of the HTTP GET response at the HTTP level (although it arrives across several TCP packets, and a lost TCP packet is handled by TCP and does not elicit another GET request to resume).
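To make the "several consecutive GET range calls" scenario concrete, here is a hedged sketch (plain Python requests against a public or presigned S3 URL, which is a placeholder): each ranged GET would be billed as its own request, while the data transfer still totals roughly the object size.

```python
import requests

url = "https://my-bucket.s3.amazonaws.com/big-object"   # placeholder public/presigned URL
GIB = 1024 ** 3

with open("big-object", "wb") as f:
    for i in range(5):                                   # billed as 5 GET requests
        start, end = i * GIB, (i + 1) * GIB - 1
        r = requests.get(url, headers={"Range": f"bytes={start}-{end}"}, stream=True)
        r.raise_for_status()
        for block in r.iter_content(1024 * 1024):        # ~1 GiB of transfer each time
            f.write(block)
```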

counting HTTP packets

What is relation between number of HTTP packets and number of objects in a web page?
What is relation between number of HTTP packets and number of objects in a web page?
The short answer is there is obviously some relation, but there is no way you can accurately predict one from the other.
For a longer answer, we first need to correct some misconceptions in the question:
There is no such thing as an "HTTP packet". HTTP is a message-oriented application protocol, with one request message and one response message per "resource" fetched. This sits on top of a reliable byte-stream protocol (with flow control, etc.) called TCP. This in turn sits on top of a packet-switching protocol called IP. An HTTP request/response exchange takes an unpredictable number of IP packets ... depending on message sizes AND network conditions. Other HTTP features such as compression, keeping connections alive, caching and so on make things even more complicated.
The idea of an "object" is ill-defined. If an "object" has a one-to-one correspondence with an HTTP request/response pair (i.e. a "resource" in the above), then that part is simple. OTOH, a "resource" could be a rendering of multiple "objects" in the application domain of the webserver.
On top of that, you've also got to account for the fact that a typical HTML resource has references to other resources (Scripts, CSS, images, etc) and may even involve Ajax callbacks. Each of these is a "resource", that may or may not need to be fetched ... depending on caching, etc.
Finally, there is an implicit assumption that all "objects" are the same size. This might be true in some application domains, but it is not true in general.
So to summarize, there are far too many variables and unknowns for it to be feasible to predict the number of network packets required to fetch a certain number of "objects".
A more practical approach is to attach a packet-level network analyser to your network and get it to count the number of packets sent and received.
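A hedged sketch of that measuring approach, using the Python scapy library as an assumption (it typically needs root privileges, and it counts raw packets rather than HTTP messages):

```python
from scapy.all import sniff   # assumption: scapy is installed; capture usually needs root

# Capture 30 seconds of traffic to/from the web server port and count the packets.
packets = sniff(filter="tcp port 80", timeout=30)
print(f"{len(packets)} TCP/IP packets observed in 30 seconds")
```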
If you make the following assumptions:
"HTTP packets" are HTTP messages,
"objects" are resources,
a resource doesn't require other resources (Scripts, CSS, images, etc) to render,
there is no caching,
the server is not doing redirects.
then one "object" requires two "HTTP packets".
But frankly, you've simplified the problem to a point where the answer is next to useless for predicting actual performance of real web-servers. (For instance, any one of those "objects" could be tiny ... or huge. And if you allow for arbitrary javascript, or content such as links to video streams, then the number of "packets" of one kind or another is potentially unbounded.)
A GET request is issued for every file referred to in an HTML page, and each such request usually fits in one TCP segment. With HTTP/1.1, many requests/responses can be pipelined over one connection.
The number of packets sent in response varies with the size of the objects and with caching parameters. For example, if a file is already in the browser cache, the browser will make a conditional GET and will receive an HTTP/1.1 304 Not Modified response, which does not contain any data. Moreover, many 304 responses can fit in one segment, as this response is very small compared to the maximum segment size. As another example, if a file is bigger than the maximum segment size, the file may (and probably will) be divided across many segments.
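As a small sketch of that conditional GET (Python requests; the URL is a placeholder):

```python
import requests

first = requests.get("https://example.com/style.css")    # placeholder URL
etag = first.headers.get("ETag")

# Re-validate the "cached" copy; the server answers 304 with no body if unchanged.
second = requests.get(
    "https://example.com/style.css",
    headers={"If-None-Match": etag} if etag else {},
)
print(second.status_code)   # 304 Not Modified if the copy is still fresh
```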
Is this what you wish to know?
