How can I download a single file from multiple locations via HTTP?

I need to download a big file quickly, but all the sources I can find have throttled bandwidth. Each of them seems to support HTTP 1.1 Byte Serving (Range Requests), since I can pause and resume the downloads. How can I download it from multiple sources in parallel?

Assuming this is a programming question (given that this is StackOverflow), I am going to explain how to do it instead of just linking to a download accelerator that takes advantage of this.
What is needed in terms of the server to do this?
A server that supports Range HTTP header.
A server that allows concurrent connections. It is possible to support Range while not allowing multiple simultaneous connections by using either endpoint- or IP-based restrictions server-side. For this reason, I recommend you set up a simple test server instead of downloading from a file-sharing site while testing this.
What is the Range Header?
If the Range header is not set, data transmission over HTTP is sent in order starting from the beginning of the file. The first byte of the file on the server will be the first byte of the HTTP response and the last byte of the file on the server will be the last byte of the HTTP response. The Range header lets you specify which bytes the server should send, allowing you to "skip" the beginning of the response or request only a slice of it.
Actual Answer Example
Our Situation
The response is plain text. The response content is just one word, "StackOverflow!!", encoded in ASCII, meaning each character is one byte. Therefore, the Content-Length header's value is 15 octets (another term for bytes).
We are going to download this file using 3 requests. For the sake of this example, we are going to say it will be 3 times faster, but you should realize that this method will make downloads slower for very small files, because HTTP headers must be sent with each request, as must the TCP 3-way handshake for each connection. We will also assume that the server supports HEAD requests and that the Content-Length header is sent with the download response. Finally, the request will be performed using GET, since HEAD gives us the headers of the corresponding GET response. However, there are workarounds for POST.
Juicy Details
First, perform an HTTP HEAD request. Take the "Content-Length" header and divide that value by the number of parallel connections you wish to make. For this example, the Content-Length is 15 and we wish to make 3 connections, so each part will be 5 bytes long.
Now perform that number of requests in parallel. For each request, set the Range header to "Range: bytes=" followed by (the number of requests already made) times the part size found above, then "-", then that start value plus the part size minus 1 (byte ranges are inclusive at both ends).
For this example, each request should have the header set as follows.
Range: bytes=0-4
Range: bytes=5-9
Range: bytes=10-14
The response of each of these requests should be
Stack
Overf
low!!
In essence, we are just conforming to the Range Units specification (section 3.12 of RFC 2616) and the Range header specification (section 14.35 of RFC 2616).
Finally, append the bytes of each request to form the final response data.
Disclaimer: I've never actually tried this but it should work in theory
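For the curious, here is a minimal sketch of the approach in Python using the requests library. The URL is a placeholder and error handling is kept to a bare minimum; treat it as an illustration of the technique rather than a finished downloader.
import concurrent.futures
import requests

url = "http://example.com/bigfile.bin"  # hypothetical URL

# 1. HEAD request to learn the total size from Content-Length.
total = int(requests.head(url).headers["Content-Length"])
connections = 3
part = total // connections

def fetch(i):
    # Byte ranges are inclusive at both ends; the last part takes any remainder.
    start = i * part
    end = total - 1 if i == connections - 1 else start + part - 1
    r = requests.get(url, headers={"Range": f"bytes={start}-{end}"})
    assert r.status_code == 206  # 206 Partial Content means the range was honoured
    return i, r.content

# 2. Fetch all parts in parallel, then append them in order.
with concurrent.futures.ThreadPoolExecutor(max_workers=connections) as pool:
    parts = dict(pool.map(fetch, range(connections)))

data = b"".join(parts[i] for i in range(connections))
assert len(data) == total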

I can't say whether wget is able to put a file back together when it is fetched from multiple sources.
The following example shows how to do it with aria2c.
You would build a download description file and then pass that to aria2c, like so:
aria2c -i uri.txt --split=5 --min-split-size=1M --max-connection-per-server=5
where uri.txt might contain
http://a.com/file1.iso http://mirror-1.com/file1.iso http://mirror-2.com/file1.iso
dir=/downloads
out=file1.iso
This would fetch the same file from 3 different locations and place it into the downloads folder (dir) with the name file1.iso (out).

Related

How is a browser able to resume downloads?

Do downloads use HTTP? How can they resume downloads after they have been suspended for several minutes? Can they request a certain part of the file?
Downloads are done over either HTTP or FTP.
For a single, small file, FTP is slightly faster (though you'll barely notice a difference). For downloading large files, HTTP is faster due to automatic compression. For multiple files, HTTP is always faster due to reusing existing connections and pipelining.
Parts of a file can indeed be requested independently of the whole file, and this is actually how downloads work. This is a process known as 'Chunked Encoding'. A browser requests individual parts of a file, downloads them independently, and assembles them in the correct order once all parts have been downloaded:
In chunked transfer encoding, the data stream is divided into a series of non-overlapping "chunks". The chunks are sent out and received independently of one another. No knowledge of the data stream outside the currently-being-processed chunk is necessary for both the sender and the receiver at any given time.
And according to FTP vs HTTP:
During a "chunked encoding" transfer, the sending party sends a stream of [size-of-data][data] blocks over the wire until there is no more data to send and then it sends a zero-size chunk to signal the end of it.
This is combined with a process called 'Byte Serving' to allow for resuming of downloads:
Byte serving begins when an HTTP server advertises its willingness to serve partial requests using the Accept-Ranges response header. A client then requests a specific part of a file from the server using the Range request header. If the range is valid, the server sends it to the client with a 206 Partial Content status code and a Content-Range header listing the range sent.
Do downloads use HTTP?
Yes. Especially since major browsers have deprecated FTP support.
How can they resume downloads after they have been suspended for several minutes?
Not all downloads can resume after this long. If the (TCP or SSL/TLS) connection has been closed, a new one has to be established to resume the download. (If it's HTTP/3 over QUIC, then it's another story.)
Can they request a certain part of the file?
Yes. This can be done with Range Requests. But it requires server-side support (especially when the requested resource is provided by a dynamic script).
That other answer mentioning chunked transfer has mistaken it for the underlying mechanism of TCP. Chunked transfer is not designed for resuming partial downloads. It is designed for delimiting message boundaries when the Content-Length header is not present and the communicating parties wish to reuse the connection. It is also used when the protocol version is HTTP/1.1 and there is a trailer fields section (which is similar to the header fields section, but comes after the message body). HTTP/2 and HTTP/3 have their own ways to convey trailers.
Even if multiple non-overlapping "chunks" of the resource are requested, they are returned encapsulated in a multipart/* message.
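To make the resuming part concrete, here is a small sketch in Python with the requests library: it asks for everything from the first missing byte onwards and falls back to a full download if the server ignores the range. The URL and file name are placeholders.
import os
import requests

url = "http://example.com/file1.iso"  # hypothetical URL
path = "file1.iso.part"

have = os.path.getsize(path) if os.path.exists(path) else 0

# Request only the bytes we do not have yet.
r = requests.get(url, headers={"Range": f"bytes={have}-"}, stream=True)

if r.status_code == 206:      # 206 Partial Content: the server honoured the range
    mode = "ab"               # append to the partial file
elif r.status_code == 200:    # no range support: start over from the beginning
    mode, have = "wb", 0
else:
    r.raise_for_status()

with open(path, mode) as f:
    for block in r.iter_content(chunk_size=64 * 1024):
        f.write(block)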

Are large HTTP envelopes split across several partial http requests?

I'm reading a book that looks at different web service architectures, including an overview of how the SOAP protocol can be implemented via HTTP. This was interesting to me because I do a lot of WCF development and didn't realize how client/server communication was implemented.
Since protocols like TCP and whatever is lower than that have fixed maximum packet sizes and as such have to split messages into packets, I just assumed that HTTP was similar. Is that not the case?
I.e. If I make a GET/POST request with a 20MB body, will a single HTTP envelope be sent and reassembled on the server?
If this is the case, what is the largest practical size of an HTTP request? I have previously configured Nginx servers to allow 20 MB file transfers and I'm wondering if this is too high...
From the HTTP specification's point of view, there is no limit on the size of an HTTP payload. According to RFC 7230:
Any Content-Length field value greater than or equal to zero is valid. Since there is no predefined limit to the length of a payload, a recipient MUST anticipate potentially large decimal numerals and prevent parsing errors due to integer conversion overflows.
However, to prevent attacks via very long or very slow streams of data, a web server may reject such an HTTP request and return a 413 Payload Too Large response.
"Since protocols like TCP and whatever is lower than that have fixed maximum packet sizes and as such have to split messages into packets, I just assumed that HTTP was similar. Is that not the case?"
No. HTTP is an application-level protocol and is totally different. As HTTP is based on TCP, the data is automatically split into packets at the TCP level while it is being transferred. There is no need to split the request at the HTTP level.
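To illustrate that point, here is a sketch with Python's requests library: the application hands the body over as one HTTP request and the TCP stack does the packetising; the endpoint is made up.
import requests

url = "http://example.com/upload"  # hypothetical endpoint

with open("file1.iso", "rb") as f:
    # Passing a file object lets requests stream the body from disk instead of
    # loading it all into memory; it is still a single HTTP request, and the
    # split into packets happens inside the TCP/IP stack.
    r = requests.post(url, data=f)

print(r.status_code)  # e.g. 200/201, or 413 if the server caps the body size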
An HTTP body can be as large as you want it to be; there is no download size limit. A size limit is usually set for uploads, to prevent someone uploading massive files to your server.
You can ask for a section of a resource using the Range header, if you only want part of it.
IE had limits of 2 GB and 4 GB at times, but these have since been fixed (source).

How to tell a proxy a connection is still used using HTTP communication?

I have a client side GUI app for human usage that consumes some SOAP web services and uses cURL as the underlying HTTP communication lib. Depending on the input, processing a request can take some large amount of time, even one hour. Neither the client nor server time out for that reason on their own and that's tested and works. Most of the requests get processed in some minutes anyway, so this is an edge case.
One of my users is forced to use a proxy between my client app and my server and for various reasons has no control over it. That proxy has a timeout configured and closes the connection to my client after 4 minutes of no data transfer. So the user can (and did) upload data for e.g. 30 minutes, after which the server starts to process the data; after 4 minutes the proxy closes the connection, the server silently continues to process the request, but the user is left with an error message AND won't get the processing result. My app already uses TCP Keep-Alive, so that shouldn't be the problem; instead the timeout seems to apply to higher-level data. It behaves like the read_timeout option for Squid, which I used to reproduce the behaviour in our internal setup.
What I would like to do now is start a background thread in my web service which simply outputs some garbage data to my client the whole time the request is being processed; the client ignores it, and it tells the proxy that the connection is still active. I can recognize my client using the user agent and can configure server-side whether to output that data or not, so other clients consuming the web service wouldn't have a problem.
What I'm asking is whether there's any HTTP-compliant method to output such garbage data before the actual HTTP response. For example, would it be enough to simply output \r\n, without any additional content, over and over again to be HTTP compliant with all requesting libs? Or maybe even a binary 0? Or some full-fledged HTTP headers stating something like "real answer about to come, please be patient"? From my investigation this sounds a lot like chunked HTTP encoding, but I'm not sure yet if it's applicable.
I would like to have the following, where all those "Wait" stuff is simply ignored in the end and the real HTTP response at the end contains Content-Length and such.
Wait...\r\n
Wait...\r\n
Wait...\r\n
[...]
HTTP/1.1 200 OK\r\n
Server: Apache/2.4.23 (Win64) mod_jk/1.2.41\r\n
[...]
<?xml version="1.0" encoding="UTF-8"?><soap:Envelope[...]
Is that possible in some standard HTTP way and if so, what's the approach I need to take? Thanks!
HTTP Status 102
Isn't HTTP Status 102 exactly what I need? As I understand the spec, I can simply print that response line over and over again until the final response is available?
HTTP Status 102 was a dead end; two things might work, depending on the proxy used. An NPH script can be used to regularly print headers directly to the client. The important thing is that NPH scripts normally bypass the web server's header buffers and can therefore be transferred over the wire as needed. They "only" need to be correct HTTP headers, and depending on the web server, proxy and such, it might be a good idea to create incrementing, unique headers, simply by adding some counter to the header name.
The second thing is chunked transfer encoding, in which case small chunks of dummy data can be printed to the client in the response body. The good thing is that such a small amount of data can be transferred over the wire as needed, using server-side flushes and such; the bad thing is that the client receives this data and by default treats it as part of the expected response body. That might break the application, of course, but most HTTP libs provide callbacks for processing received data, and if you print some unique marker, the client should be able to filter the garbage out.
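As a rough sketch of the chunked variant, assuming a Python/Flask backend (the endpoint, padding string and payload are made up): returning a generator makes the response use chunked transfer encoding, so padding can be emitted while the real work runs. Note that the WSGI server or proxy may still buffer, so flushing behaviour needs to be verified in the concrete setup.
import threading
import time
from flask import Flask, Response

app = Flask(__name__)

def slow_job(result):
    time.sleep(600)  # stands in for the long-running processing
    result["body"] = '<?xml version="1.0" encoding="UTF-8"?><soap:Envelope>...</soap:Envelope>'

@app.route("/service")
def service():
    result = {}
    worker = threading.Thread(target=slow_job, args=(result,))
    worker.start()
    def generate():
        while worker.is_alive():
            yield "Wait...\r\n"       # harmless padding the client filters out
            worker.join(timeout=60)   # emit something at least once a minute
        yield result["body"]          # the real response body comes last
    # A generator body is sent with chunked Transfer-Encoding on HTTP/1.1,
    # so no Content-Length has to be known up front.
    return Response(generate(), mimetype="text/xml")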
In my case the web service spawns a background thread and, depending on the entry point of the service requested, either prints headers using NPH or prints chunks of data. In both cases the data can be the same, so an NPH header can be used for chunked transfer encoding as well.
My NPH solution doesn't work with Squid, but the chunked one does. The problem with Squid is that its read_timeout setting does not apply at the low level of receiving any data on the connection, but to a logical HTTP unit. This means that Squid does receive my headers, but it expects a complete HTTP header section within the period of time defined by read_timeout. With my NPH approach this isn't the case, simply because by design I only want to send some garbage headers to be ignored until the real headers arrive.
Additionally, one has to be careful about NPH in Apache httpd, but in my use case it works. I can see the individual headers in Squid's log, without any garbage after the response body or the like. Avoid the Action directive.
Apache2 sends two HTTP headers with a mapped "nph-" CGI

Why is the total amount of characters in a GET limited?

I want to ask some questions about the following quote, taken from the book Head First Servlets and JSP, Second Edition:
The total amount of characters in a GET is really limited (depending on the server). If the user types, say, a long passage into a "search" input box, the GET might not work.
Why is the total amount of characters in a GET limited?
How can I learn what the limit on the total amount of characters in a GET is?
When I type a long text into an input box and GET does not work, what solutions do I have to fix the problem?
Why is the GET method limited?
There is no specific limit to the length of a GET request. Different servers can have different limits. If you need to send more data to the server, use POST instead of GET. A recommended minimum to be supported by servers and browsers is 8,000 bytes, but this is not required.
RFC 7230's Section 3.1.1 "Request Line" says
HTTP does not place a predefined limit on the length of a request-line, as described in Section 2.5. A server that receives a method longer than any that it implements SHOULD respond with a 501 (Not Implemented) status code. A server that receives a request-target longer than any URI it wishes to parse MUST respond with a 414 (URI Too Long) status code (see Section 6.5.12 of RFC7231).
Various ad hoc limitations on request-line length are found in practice. It is RECOMMENDED that all HTTP senders and recipients support, at a minimum, request-line lengths of 8000 octets.
Section 2.5 "Conformance and Error Handling" says
HTTP does not have specific length limitations for many of its protocol elements because the lengths that might be appropriate will vary widely, depending on the deployment context and purpose of the implementation. Hence, interoperability between senders and recipients depends on shared expectations regarding what is a reasonable length for each protocol element. Furthermore, what is commonly understood to be a reasonable length for some protocol elements has changed over the course of the past two decades of HTTP use and is expected to continue changing in the future.
and RFC 7231's Section 6.5.12 "414 URI Too Long" says
The 414 (URI Too Long) status code indicates that the server is refusing to service the request because the request-target (Section 5.3 of [RFC7230]) is longer than the server is willing to interpret. This rare condition is only likely to occur when a client has improperly converted a POST request to a GET request with long query information, when the client has descended into a "black hole" of redirection (e.g., a redirected URI prefix that points to a suffix of itself), or when the server is under attack by a client attempting to exploit potential security holes.
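In practice, the usual fix is to move long user input out of the query string and into a POST body. A small sketch with Python's requests library (the URL is made up):
import requests

url = "http://example.com/search"  # hypothetical endpoint
query = "a long passage typed into the search box ... " * 200

# GET puts the data into the request-target; the server may answer 414 if it is too long.
r1 = requests.get(url, params={"q": query})

# POST puts the data into the message body, which has no comparable practical limit.
r2 = requests.post(url, data={"q": query})

print(r1.status_code, r2.status_code)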
The GET data is sent in the query string, which also has a maximum length.
You can do all kinds of things with the query string, e.g. bookmark it. Would you really like to bookmark a really huge text?
It is possible to configure most servers to accept longer query strings; some clients will accept them, some will throw errors.
"Note: Servers ought to be cautious about depending on URI lengths above 255 bytes, because some older client or proxy implementations might not properly support these lengths." (HTTP/1.1 specification, section 3.2.1)
There is also a status code, "414 Request-URI Too Long"; if you get this, you will know that you have put too many characters in the GET. (That is, if you hit the server limit; if the client limit is lower than the server limit, each browser will react in its own way.)
Generally it is wise to set a limit on every piece of data being sent to a server, in case someone tries to create a huge workload or slow down the server (e.g. send a huge file so one server connection is used, slow down the transmission, and make additional requests; at some point the server will have a lot of connections open. Use multiple clients and you have an attack scenario on a server).

counting HTTP packets

What is the relation between the number of HTTP packets and the number of objects in a web page?
What is the relation between the number of HTTP packets and the number of objects in a web page?
The short answer is there is obviously some relation, but there is no way you can accurately predict one from the other.
For a longer answer, we first need to correct some misconceptions in the question:
There is no such thing as an "HTTP packet". HTTP is a message-oriented application protocol with one request message and one response message per "resource" fetched. This sits on top of a reliable byte-stream protocol (with flow control, etc.) called TCP. This in turn sits on top of a packet-switching protocol called IP. An HTTP request/response exchange takes an unpredictable number of IP packets, depending on message sizes AND network conditions. Other HTTP features such as compression, keeping connections alive, caching and so on make things even more complicated.
The idea of an "object" is ill-defined. If an "object" has a one-to-one correspondence with an HTTP request/response pair (i.e. a "resource" in the above), then that part is simple. OTOH, a "resource" could be a rendering of multiple "objects" in the application domain of the webserver.
On top of that, you've also got to account for the fact that a typical HTML resource has references to other resources (Scripts, CSS, images, etc) and may even involve Ajax callbacks. Each of these is a "resource", that may or may not need to be fetched ... depending on caching, etc.
Finally, there is an implicit assumption that all "objects" are the same size. This might be true in some application domains, but it is not true in general.
So to summarize, there are far too many variables and unknowns for it to be feasible to predict the number of network packets required to fetch a certain number of "objects".
A more practical approach is to attach a packet-level network analyser to your network and get it to count the number of packets sent and received.
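For what it's worth, here is a rough sketch of that measurement in Python using scapy (which needs to be installed and run with sniffing privileges; the host is a placeholder).
import time
import requests
from scapy.all import AsyncSniffer

host = "example.com"  # hypothetical site to measure

sniffer = AsyncSniffer(filter=f"host {host} and tcp port 80")
sniffer.start()
time.sleep(0.5)  # give the sniffer a moment to come up

response = requests.get(f"http://{host}/")  # one HTTP request/response exchange

packets = sniffer.stop()  # stop() returns the captured packet list
print(f"{len(packets)} packets for {len(response.content)} bytes of body")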
If you make the following assumptions:
"HTTP packets" are HTTP messages,
"objects" are resources,
a resource doesn't require other resources (Scripts, CSS, images, etc) to render,
there is no caching,
the server is not doing redirects.
then one "object" requires two "HTTP packets".
But frankly, you've simplified the problem to a point where the answer is next to useless for predicting actual performance of real web-servers. (For instance, any one of those "objects" could be tiny ... or huge. And if you allow for arbitrary javascript, or content such as links to video streams, then the number of "packets" of one kind or another is potentially unbounded.)
A GET request is issued for every file referenced in an HTML page, and each request usually fits in one TCP segment. Many requests and responses can also be pipelined over a single connection.
The number of packets sent in response varies with the size of the objects and with caching parameters. For example, if a file is already in the browser cache, the browser will make a conditional GET and will receive an HTTP/1.1 304 Not Modified response, which does not contain any data. Moreover, many 304 responses can be carried in one segment, as this response is very small compared to the maximum segment size. As another example, if a file is bigger than the maximum segment size, it may (and probably will) be divided into many segments.
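The conditional GET described above can be sketched with Python's requests library, revalidating with the ETag from a previous response (the URL is a placeholder):
import requests

url = "http://example.com/style.css"  # hypothetical cached resource

first = requests.get(url)
etag = first.headers.get("ETag")

# Revalidate: ask the server to resend the body only if it has changed.
second = requests.get(url, headers={"If-None-Match": etag} if etag else {})

print(second.status_code)   # 304 Not Modified when the cached copy is still valid
print(len(second.content))  # 0 on a 304: no data in the body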
Is this what you wish to know?
