Transfer-Encoding: gzip vs. Content-Encoding: gzip

What is the current state of affairs when it comes to whether to do
Transfer-Encoding: gzip
or a
Content-Encoding: gzip
when I want to allow clients with e.g. limited bandwidth to signal their willingness to accept a compressed response, with the server having the final say on whether or not to compress.
The latter is what e.g. Apache's mod_deflate and IIS do if you let them take care of compression. Depending on the size of the content to be compressed, they will additionally apply Transfer-Encoding: chunked.
They will also include a Vary: Accept-Encoding header, which already hints at the problem: Content-Encoding seems to be part of the entity, so changing the Content-Encoding amounts to a change of the entity, i.e. a different Accept-Encoding header means e.g. a cache cannot use its cached version of the otherwise identical entity.
Is there a definite answer on this that I have missed (and that's not buried inside a message in a long thread in some apache newsgroup)?
My current impression is:
Transfer-Encoding would in fact be the right way to do what is mostly done with Content-Encoding by existing server and client implementations
Content-Encoding, because of its semantic implications, carries a couple of issues (what should the server do to the ETag when it transparently compresses a response?)
The reason is chicken'n'egg: Browsers don't support it because servers don't because browsers don't
So I am assuming the right way would be a Transfer-Encoding: gzip (or, if I additionally chunk the body, it would become Transfer-Encoding: gzip, chunked). And no reason to touch Vary or ETag or any other header in that case as it's a transport-level thing.
For now I don't care too much about the 'hop-by-hop'-ness of Transfer-Encoding, something that others seem to be concerned about first and foremost, because proxies might uncompress and forward uncompressed to the client. However, proxies might just as well forward it as-is (compressed), if the original request has the proper Accept-Encoding header, which in case of all browsers that I know is a given.
Btw, this issue is at least a decade old, see e.g.
https://bugzilla.mozilla.org/show_bug.cgi?id=68517 .
Any clarification on this will be appreciated. Both in terms of what is considered standards-compliant and what is considered practical. For example, HTTP client libraries only supporting transparent "Content-Encoding" would be an argument against practicality.

The correct usage, as defined in RFC 2616 and actually implemented in the wild, is for the client to send an Accept-Encoding request header (the client may specify multiple encodings). The server may then, and only then, encode the response according to the client's supported encodings (if the file data is not already stored in that encoding) and indicate in the Content-Encoding response header which encoding is being used. The client can then read data off of the socket based on the Transfer-Encoding (i.e., chunked) and then decode it based on the Content-Encoding (i.e., gzip).
So, in your case, the client would send an Accept-Encoding: gzip request header, and then the server may decide to compress (if not already) and send a Content-Encoding: gzip and optionally Transfer-Encoding: chunked response header.
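A minimal client-side sketch of that flow in Python (standard library only; the host name is just a placeholder): advertise gzip in Accept-Encoding, then decode the body according to whatever Content-Encoding the server actually chose. Any chunked Transfer-Encoding is already undone by the HTTP library.
import gzip
import http.client

conn = http.client.HTTPSConnection("www.example.com")  # placeholder host
conn.request("GET", "/", headers={"Accept-Encoding": "gzip"})
resp = conn.getresponse()

# http.client removes the chunked framing for us (a transport detail),
# so only the Content-Encoding is left to deal with.
body = resp.read()
if resp.getheader("Content-Encoding") == "gzip":
    body = gzip.decompress(body)

print(resp.status, len(body), "bytes after decoding")
conn.close()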
And yes, the Transfer-Encoding header can be used in requests, but only for HTTP 1.1, which requires that both client and server implementations support the chunked encoding in both directions.
ETag uniquely identifies the resource data on the server, not the data actually being transmitted. If a given URL resource changes its ETag value, it means the server-side data for that resource has changed.

Quoting Roy T. Fielding, one of the authors of RFC 2616:
changing content-encoding on the fly in an inconsistent manner
(neither "never" nor "always") makes it impossible for later requests
regarding that content (e.g., PUT or conditional GET) to be handled
correctly. This is, of course, why performing on-the-fly
content-encoding is a stupid idea, and why I added Transfer-Encoding
to HTTP as the proper way to do on-the-fly encoding without changing
the resource.
Source: https://issues.apache.org/bugzilla/show_bug.cgi?id=39727#c31
In other words: Don't do on-the-fly Content-Encoding, use Transfer-Encoding instead!
Edit: That is, unless you want to serve gzipped content to clients that only understand Content-Encoding. Which, unfortunately, seems to be most of them. But be aware that you leave the realms of the spec and might run into issues such as the one mentioned by Fielding as well as others, e.g. when caching proxies are involved.
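To make the spec-pure alternative concrete, here is a rough sketch (not taken from any real server) of how a Transfer-Encoding: gzip, chunked response could be assembled; as noted above, most clients will not actually accept it.
import gzip

def transfer_encoded_response(body):
    # Sketch only: gzip the payload, then apply chunked framing on top.
    # "chunked" must be the last transfer-coding applied, so the gzipped
    # bytes are what gets chunked. ETag and Vary are left untouched,
    # because the entity itself has not changed.
    compressed = gzip.compress(body)
    headers = (
        b"HTTP/1.1 200 OK\r\n"
        b"Content-Type: text/plain\r\n"
        b"Transfer-Encoding: gzip, chunked\r\n"
        b"\r\n"
    )
    # One data chunk plus the terminating zero-length chunk, for brevity.
    chunked = b"%x\r\n%s\r\n0\r\n\r\n" % (len(compressed), compressed)
    return headers + chunked

print(transfer_encoded_response(b"hello world" * 100)[:80])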

Related

Content-Encoding vs Transfer Encoding in HTTP

I have a question on usage of Content-Encoding and Transfer-Encoding:
Please let me know if my below understanding is right:
The client in its request can specify which encoding types it is willing to accept using the Accept-Encoding header. So, if the server wishes to encode the message before transmission, e.g. with gzip, it can compress the entity (content), add Content-Encoding: gzip, and send the HTTP response across. On reception, the client can decompress and parse the entity.
In the case of Transfer-Encoding, the client may specify what kind of encoding it is willing to accept and have applied on the fly, i.e. if the client sends TE: gzip; q=1, it means that if the server wishes, it can send a 200 OK with Transfer-Encoding: gzip, compressing the stream as it sends it, and the client, upon receiving the content, can decompress on the fly and perform its parsing.
Is my understanding right here? Please comment.
Also, what is the basic advantage of compressing the entity on the fly vs. compressing it first and then transmitting it? Is Transfer-Encoding valid only for chunked responses, as we do not know the size of the entity before transmission?
The difference really is not about on-the-fly or not -- Content-Encoding can be both pre-computed and on the fly.
The differences are:
Transfer Encoding is hop-by-hop, not end-to-end
Transfer Encodings other than "chunked" (sadly) aren't implemented in practice
Transfer Encoding is on the message layer, Content Encoding on the payload layer
Using Content Encoding affects entity tags etc.
See http://greenbytes.de/tech/webdav/rfc7230.html#transfer.codings and http://greenbytes.de/tech/webdav/rfc7231.html#data.encoding.
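A small illustration of the last two points (the ETag values below are made up for the example): the same resource served with Content-Encoding carries a different entity tag and a Vary header, whereas a transfer-coding leaves the representation metadata untouched.
# Payload-layer compression: part of the representation, so the entity
# tag changes and caches must key on Accept-Encoding.
content_encoded = {
    "Content-Encoding": "gzip",
    "ETag": '"v1-gzip"',   # hypothetical tag for the gzipped variant
    "Vary": "Accept-Encoding",
}

# Message-layer compression: a hop-by-hop transport detail, so the
# representation and its ETag stay exactly as they were.
transfer_encoded = {
    "Transfer-Encoding": "gzip, chunked",
    "ETag": '"v1"',        # same tag as the uncompressed entity
}

for name, headers in (("Content-Encoding variant", content_encoded),
                      ("Transfer-Encoding variant", transfer_encoded)):
    print(name)
    for k, v in headers.items():
        print(" ", k + ":", v)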

HEAD headers differ from GET, chunked transfer

A web application under test behaves in an odd way. A HEAD request returns the header Content-Length, but the subsequent GET returns Transfer-Encoding: chunked. I expected the headers to be equal, and the RFC says SHOULD, so my question is: how legit and how common is this behaviour?
UPDATE: It turns out that the root cause of the problem is HAProxy's behaviour. For a HEAD request, the response is propagated as-is from the application underneath, but for GET it applies compression and sets chunked transfer. I'll close this question as off-topic and perhaps ask on ServerFault instead.
If the server uses chunked encoding for GET but returns Content-Length for HEAD, this is IMHO an indication that the information returned for HEAD is unlikely to be correct.
The HEAD method does not return an entity-body, while GET responds with one. If the HTTP server has chunked transfer encoding enabled, it does not send Content-Length in the response because it is not used: the server does not need to know the length of the content before it starts transmitting a response to the client, and can begin transmitting dynamically generated content before knowing its total size. Perhaps this is the most likely explanation.
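A quick way to reproduce the observation is to compare the framing headers of a HEAD and a GET for the same URL; a small sketch (host and path are placeholders for the application under test):
import http.client

HOST, PATH = "www.example.com", "/"  # placeholders

def framing_headers(method):
    conn = http.client.HTTPConnection(HOST)
    conn.request(method, PATH, headers={"Accept-Encoding": "gzip"})
    resp = conn.getresponse()
    resp.read()  # drain the body (empty for HEAD)
    conn.close()
    return {
        "Content-Length": resp.getheader("Content-Length"),
        "Transfer-Encoding": resp.getheader("Transfer-Encoding"),
    }

print("HEAD:", framing_headers("HEAD"))
print("GET: ", framing_headers("GET"))
# Content-Length on HEAD but Transfer-Encoding: chunked on GET reproduces
# the behaviour described above (here caused by the proxy compressing
# GET responses only).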

gzip compression of chunked encoding response?

I'm trying to get my webserver to correctly gzip an HTTP response that uses chunked encoding.
My understanding of the non-gzip response is that it looks like this:
<the response headers>
and then for each chunk,
<chunk length in hex>\r\n<chunk>\r\n
and finally, a zero length chunk:
0\r\n\r\n
I've tried to get gzip compression working and I could use some help figuring out what should actually be returned. This documentation implies that the entire response should be gzipped, as opposed to gzipping each chunk:
HTTP servers sometimes use compression (gzip) or deflate methods to optimize transmission.
Chunked transfer encoding can be used to delimit parts of the compressed object.
In this case the chunks are not individually compressed. Instead, the complete payload
is compressed and the output of the compression process is chunk encoded.
I tried to gzip the entire thing and return the response even without chunked, and it didn't work. I tried setting the Content-Encoding header to "gzip". Can someone explain what changes must be made to the above scheme to support gzipping of chunks? Thanks.
In case the other answers weren't clear enough:
First you gzip the body with zlib (this can be done in a stream so you don't need the whole thing in memory at once, which is the whole point of chunking).
Then you send that compressed body in chunks (presumably the ones provided by the gzip stream, with the chunk header to declare how long it is), with the Content-Encoding: gzip and Transfer-Encoding: chunked headers (and no Content-Length header).
If you're using gzip or zcat or some such utility for the compression, it probably won't work. Needs to be zlib. If you're creating the chunks and then compressing them, that definitely won't work. If you think you're doing this right and it's not working, you might try taking a packet trace and asking questions based on that and any error messages you're getting.
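A rough Python sketch of that order of operations (compress as a stream first, apply chunked framing second); everything here, including the chunk framing, is illustrative rather than lifted from any particular server:
import zlib

def gzip_chunks(source, level=6):
    # Compress an iterable of byte strings as ONE gzip stream and yield
    # ready-to-send chunked-encoding frames. wbits=31 selects the gzip
    # container (header + trailer), which Content-Encoding: gzip requires.
    compressor = zlib.compressobj(level, zlib.DEFLATED, 31)
    for piece in source:
        data = compressor.compress(piece)
        if data:
            yield b"%x\r\n%s\r\n" % (len(data), data)
    data = compressor.flush()
    if data:
        yield b"%x\r\n%s\r\n" % (len(data), data)
    yield b"0\r\n\r\n"  # terminating zero-length chunk

# Send these headers first (and no Content-Length), then the frames:
#   Content-Encoding: gzip
#   Transfer-Encoding: chunked
for frame in gzip_chunks([b"hello " * 1000, b"world " * 1000]):
    pass  # write each frame to the socket here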
You gzip the content, and only then apply the chunked encoding:
"Since "chunked" is the only transfer-coding required to be understood by HTTP/1.1 recipients, it plays a crucial role in delimiting messages on a persistent connection. Whenever a transfer-coding is applied to a payload body in a request, the final transfer-coding applied MUST be "chunked". If a transfer-coding is applied to a response payload body, then either the final transfer-coding applied MUST be "chunked" or the message MUST be terminated by closing the connection. When the "chunked" transfer-coding is used, it MUST be the last transfer-coding applied to form the message-body. The "chunked" transfer-coding MUST NOT be applied more than once in a message-body."
(HTTPbis Part1, Section 6.2.1)
Likely you are not really sending an appropriately gzipped response.
Try setting the window bits to 31 in zlib. And use deflateInit2().
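In Python's zlib binding, the equivalent of deflateInit2() with window bits 31 is the wbits argument to compressobj(); a small standard-library check (nothing here is specific to any server) that 31 really produces a gzip container:
import gzip
import zlib

payload = b"example payload " * 64
GZIP_MAGIC = b"\x1f\x8b"

# wbits=31 -> gzip container (what Content-Encoding: gzip expects),
# wbits=15 -> zlib container, wbits=-15 -> raw deflate.
for wbits in (31, 15, -15):
    c = zlib.compressobj(9, zlib.DEFLATED, wbits)
    blob = c.compress(payload) + c.flush()
    print("wbits=%3d: %4d bytes, gzip magic: %s"
          % (wbits, len(blob), blob.startswith(GZIP_MAGIC)))

# Only the wbits=31 stream round-trips through a gzip-aware decoder.
c = zlib.compressobj(9, zlib.DEFLATED, 31)
assert gzip.decompress(c.compress(payload) + c.flush()) == payload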

Chunked encoding and content-length header

Is it possible to set the content-length header and also use chunked transfer encoding? And does doing so solve the problem of not knowing the length of the response at the client side when using chunked?
the scenario I'm thinking about is when you have a large file to transfer and there's no problem in determining its size, but it's too large to be buffered completely.
(If you're not using chunked, then the whole response must get buffered first? Right??)
thanks.
No:
"Messages MUST NOT include both a Content-Length header field and a non-identity transfer-coding. If the message does include a non-identity transfer-coding, the Content-Length MUST be ignored." (RFC 2616, Section 4.4)
And no, you can use Content-Length and stream; the protocol doesn't constrain how your implementation works.
Well, you can always send a header stating the size of the file.
Something like response.addHeader("File-Size","size of the file");
And ignore the Content-Length header.
The client implementation has to be tweaked to read this value, but hey you can achieve both the things you want :)
You have to use either Content-Length or chunking, but not both.
If you know the length in advance, you can use Content-Length instead of chunking even if you generate the content on the fly and never have it all at once in your buffer.
However, you should not do that if the data is really large because a proxy might not be able to handle it. For large data, chunking is safer.
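A sketch of the "known length, streamed body" case (the file path and the write target are placeholders): the Content-Length comes from the file size, and the body is produced piece by piece so it is never buffered in full.
import os

def content_length_stream(path, piece_size=64 * 1024):
    # The length is known up front from the filesystem, so no chunking is
    # needed, yet the body is still generated piece by piece.
    headers = {"Content-Length": str(os.path.getsize(path))}

    def body():
        with open(path, "rb") as f:
            while piece := f.read(piece_size):
                yield piece

    return headers, body()

# Usage sketch:
# headers, body = content_length_stream("/var/data/big.iso")
# Send the headers, then write each piece of `body` to the socket as it comes.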
These headers can be the cause of a Postman parse error:
"Content-Length" and "Transfer-Encoding" can't be present in the response headers together.
Using a parameterized ResponseEntity<?> instead of a raw ResponseEntity in the controller can fix the issue.
The question asks:
Is it possible to set the content-length header and also use chunked transfer encoding?
The RFC HTTP/1.1 spec, quoted in Julian's answer, says:
Messages MUST NOT include both a Content-Length header field and a non-identity transfer-coding.
There is an important difference between what's possible, and what's allowed by a protocol. It is certainly possible, for example, for you to write your own HTTP/1.1 client which sends malformed messages with both headers. You would be violating the HTTP/1.1 spec in doing so, and so you'd imagine some alarm bells would go off and a bunch of Internet police would burst into your house and say, "Stop, arrest that client!" But that doesn't happen, of course. Your request will get sent to wherever it's going.
OK, so you can send a malformed message. So what? Surely on the receiving end, the server will detect the HTTP/1.1 protocol client-side violation, vanquish your malformed request, and serve you back a stern 400 response telling you that you are due in court the following Monday for violating the protocol. But no, actually, that probably won't happen. Of course, it's beyond the scope of HTTP/1.1 to prescribe what happens to misbehaving clients; i.e. while the HTTP/1.1 protocol is analogous to the "law", there is nothing in HTTP/1.1 analogous to the judicial system.
The best that the HTTP/1.1 protocol can do is dictate how a server must act/respond in the case of receiving such a malformed request. However, it's quite lenient in this case. In particular, the server does not have to reject such malformed requests. In fact, in such a scenario, the rule is:
If the message does include a non-identity transfer-coding, the Content-Length MUST be ignored.
Unfortunately, though, some HTTP servers will violate that part of the HTTP/1.1 protocol and will actually give precedence to the Content-Length header, if both headers are present. This can cause a serious problem, if the message visits two servers in sequence in the same system and they disagree about where one HTTP message ends and the next one starts. It leaves the system vulnerable to HTTP Desync attacks a.k.a. Request Smuggling.
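For illustration only, here is a contrived request carrying both headers (a "CL.TE" disagreement in the usual terminology), showing how two parsers can end up with different ideas of where the message ends:
# Not an exploit recipe, just the ambiguity itself.
ambiguous = (
    b"POST / HTTP/1.1\r\n"
    b"Host: example.com\r\n"
    b"Content-Length: 6\r\n"
    b"Transfer-Encoding: chunked\r\n"
    b"\r\n"
    b"0\r\n"
    b"\r\n"
    b"X"
)
# A spec-compliant parser ignores Content-Length, reads the chunked body,
# and stops after the zero-length chunk -- the trailing "X" then looks like
# the start of a second request on the same connection. A parser that
# wrongly prefers Content-Length consumes 6 body bytes ("0\r\n\r\nX") and
# sees exactly one request. Chain one of each and they no longer agree
# about where messages begin and end.
print(ambiguous.decode("ascii"))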

How can I set Transfer-Encoding to chunked, explicitly or implicitly, in an ASP.NET response?

Can I simply set the Transfer-Encoding header?
Will calling Response.Flush() at some point cause this to occur implicitly?
EDIT
No, I cannot call Response.Headers.Add("Transfer-Encoding", "anything"); that throws.
any other suggestions?
Related:
Enable Chunked Transfer Encoding in ASP.NET
TL;DR: Specifying the content-length is the best way to achieve a fast first byte; you'll allow chunking at the TCP rather than the HTTP level. If you don't know the content-length, setting context.Response.BufferOutput to false will send output as it's written to the output stream, using chunked transfer-encoding.
Why do you want to set Transfer-Encoding: chunked? Chunked transfers are essentially a work-around to permit sending documents whose content-length is not known in advance. ASP.NET, however, by default buffers the entire output and hence does know the overall content length.
Of course, HTTP is layered over TCP, and behind the scenes TCP is "chunking" anyhow by splitting even a monolithic HTTP response into packets - meaning that if you specify the content-length up front and disable output buffering, you'll get the best latency without requiring HTTP-level chunking. Thus, you don't need HTTP-level chunking to provide a fast first byte when you know the content-length.
Although I'm not an expert on HTTP, I have implemented a simple streaming media server with seeking support, dynamic compression, caching etc. and I do have a reasonable grasp of the relevance of a fast first byte - and chunking is generally an inferior option if you know the content-length - which is almost certainly why ASP.NET won't let you set it manually - it's just not necessary.
However, if you don't know the HTTP content length before transmission and buffering is too expensive, you turn off output buffering and presumably the server will use a chunked transfer encoding by necessity.
When does the server use chunked transfer encoding? I just tested, and indeed if context.Response.BufferOutput is set to false, and when the content length is not set, the response is chunked; such a response is 1-2% larger in my entirely non-scientific quick test of a 1.7MB content-encoding: gzip xml document. Since gzip relies on context to reduce redundancy, I'd expected the compression ratio to suffer more, but it seems that chunking doesn't necessarily greatly reduce compression ratios.
If you look at the framework code in reflector, it seems that the transfer encoding is indeed set automatically as needed - i.e. if buffering is off AND no content length is known AND the response is to an HTTP/1.1 request, chunked transfer encoding is used. However, if the server is IIS7 and this is a worker request (?integrated mode?), the code branches to a native method - probably with the same behavior, but I can't verify that.
It looks like you need to set up IIS for this. IIS 6 has a property AspEnableChunkedEncoding in the metabase and you can see the IIS 7 mappings for this on MSDN at http://msdn.microsoft.com/en-us/library/aa965021(VS.90).aspx.
This will enable you to set TRANSFER-ENCODING: chunked in your header. I hope this helps.
Even if you set Buffer to false and leave the content length empty, you need to make sure that you have disabled the "Dynamic Content Compression" feature in IIS7 to make the chunked response work. Also, the client browser should support at least HTTP 1.1; chunked mode won't work for HTTP 1.0.
Response.Buffer = False
This will set the HTTP header "Transfer-Encoding: chunked" and send the response on each call to Response.Write.
