Windows 8 apparently removes content-encoding header from compressed HTTP responses - http

I'm not completely sure whether this belongs on SO, but I don't know where else to ask.
While I was checking the loading speed of a web app of mine, I noticed that apparently no HTTP response (no matter what type - html, css, js) is gzip/deflate compressed. That is, no response header like "Content-Encoding: gzip" is present in any response, and the browser reports that the resource is not compressed.
tested and confirmed in multiple browsers (IE10, FF 17, Chrome 23, Opera 12.10, Safari 5.x)
tested and confirmed on two machines running Windows 8 Pro
double checked with Fiddler - the response is not compressed and does not contain a content-encoding header
this doesn't only happen for my web apps, no other web site I tested appears to send compressed responses (according to the browser)
on Windows 7 the responses arrive compressed and with all headers
HTTPS responses are compressed
Here's an example of the response headers (note the lack of the content-encoding header):
I also checked the server side. The server is running Windows Server 2008 R2/IIS 7.5. I used Failed Request Tracing to find out what the server is sending. The resource appears to be compressed:
Also, the server seems to send the proper headers:
My conclusion: it must be Windows 8 that is intervening here. Apparently it modifies HTTP responses. I suppose that Windows 8 receives the compressed response, decompresses it, removes the content-encoding header and passes the modified response further down the pipeline.
Now my questions:
Can anybody confirm that Windows 8 modifies HTTP responses and that it works the way I described?
Is there a way to monitor or even disable this behavior?
Thanks in advance for your answers.
Regards,
Andre
Update:
I used Wireshark to see what arrives at the client. As I expected, the resources arrive compressed and the content-encoding header is still present. The image below shows the Wireshark capture and, in the bottom right, the response as received by Chrome.
This confirms my assumption that Windows 8 is intervening.
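For anyone who wants to reproduce the header check without a browser, a minimal sketch like the following (the host is a placeholder) prints the raw response headers as seen by an ordinary client on the same machine; if something local is decompressing the response and stripping Content-Encoding, it will be missing here as well:

import http.client

# Request a resource with Accept-Encoding and print the headers exactly as
# received; http.client does not decompress anything on its own.
conn = http.client.HTTPConnection("example.com")  # placeholder host
conn.request("GET", "/", headers={"Accept-Encoding": "gzip, deflate"})
resp = conn.getresponse()

print(resp.status, resp.reason)
for name, value in resp.getheaders():
    print(name + ": " + value)

print("body length:", len(resp.read()))
conn.close()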

It turned out that the culprit was my antivirus software, Avast, more specifically the integrated real-time network shield. Turning it off causes responses to appear compressed in the browsers again.
What remains interesting is that Avast was running on the Windows 7 machines as well, even though on those machines responses were compressed where applicable during my tests.

Related

HTTP1.1 to HTTP/2: what about headers?

In HTTP 1.1, the status line was
scheme/version code reason
HTTP/1.1 200 OK
I see :scheme and :status headers in the HPACK spec. However, I don't see anything for the version or the reason phrase. Is there not one?
In a request in HTTP 1.1, the request line was
method uri scheme/version
POST http://myhost.com HTTP/1.1
I see :method and I see :path, which I think is just a relative path, which is not the same as the full absolute path (and since Chrome and Firefox are pushing HTTPS for HTTP/2, this may make sense). I do not see version header though.
Is there a version header? Or is it seen that this will always be known before the protocol decision such that it is not really needed?
What about reason codes? Is it assumed these are pretty constant so that goes away (I am guessing here)?
In HTTP/1, the version token was needed to differentiate HTTP/1.0 from HTTP/1.1, since they had the same wire representation but supported different features.
For example, a client declaring HTTP/1.1 implicitly tells the server that it supports persistent connections and content chunking.
With HTTP/2, the protocol version is negotiated.
In clear-text HTTP/2, the Upgrade header reports h2c, where the 2 means version 2 of the protocol. I imagine that for HTTP/3 the token will change to h3c.
The same happens for encrypted HTTP/2, where the token h2 is negotiated via ALPN.
Reason phrases have been dropped as redundant, since the status code already conveys all the necessary information (not to mention that they could be an attack vector).
For these reasons, HTTP/2 has neither a version nor a reason pseudo-header.
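To make the mapping concrete, here is a purely illustrative sketch of how the HTTP/1.1 request and status lines from the question correspond to HTTP/2 pseudo-header fields (the path is made up, since the original example had none):

# Illustrative only: how HTTP/1.1 request/status lines map onto HTTP/2
# pseudo-header fields. There is no :version and no reason phrase.

# HTTP/1.1:  POST http://myhost.com/upload HTTP/1.1
http2_request_pseudo_headers = {
    ":method": "POST",
    ":scheme": "http",
    ":authority": "myhost.com",   # takes the place of the Host header
    ":path": "/upload",           # made-up path for the example
}

# HTTP/1.1:  HTTP/1.1 200 OK
http2_response_pseudo_headers = {
    ":status": "200",             # no version token, no reason phrase
}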

When a browser says that an http request is aborted what has actually happened?

On some occasions an HTTP request appears to be aborted by the browser. Using Firebug or similar tools, in the status column where it might normally say, for example, "200 OK", it instead says "aborted" (in red). When this occurs in Internet Explorer the user may see an IE-generated message "Internet Explorer cannot display this page".
What has happened here?
I don't think it is a timeout issue as this occurs in quite a short time frame and I believe that I can get a successful response (e.g. a 200) when the response takes longer.
And it isn't to do with the server; the request is aborted by the browser, and it isn't that we got a server error back (e.g. a 500).
Also; the same request (to the same URL with the same method) usually works. So it isn't something to do say with SSL being misconfigured.
I am assuming that this is something to do with internet connectivity. But I don't know enough about networking / the internet to know what that really means.
So. The specific question is; what cases could cause this error?
This can happen when the browser is using an outdated SSL/TLS version and requests a resource that requires a secure connection
The server, your browser or any machine (or operating system) in between can drop the underlying TCP connection for any reason (timeouts, digging machines, intrusion detection).
You won't get a server error from those situations, because the server either didn't receive your request, it did but it took too long to process, or the server sent its (proper) response but it wasn't fully transmitted.
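As a rough illustration (a sketch, not part of the original answer), a server that accepts the TCP connection and then drops it without ever writing a response produces exactly this kind of "aborted" / status 0 outcome on the client:

import socket
import threading
import urllib.request

# Bind a listening socket, then drop the first incoming connection without
# sending any HTTP response at all.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 8081))   # port chosen arbitrarily for the example
srv.listen(1)

def drop_first_connection():
    conn, _ = srv.accept()
    conn.close()                # connection dropped, no status line ever sent

threading.Thread(target=drop_first_connection, daemon=True).start()

try:
    urllib.request.urlopen("http://127.0.0.1:8081/")
except OSError as exc:
    # The script equivalent of the browser's "aborted" / status 0: there is
    # no HTTP status code at all, only a transport-level failure.
    print("no HTTP status, transport error:", exc)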
This can happen when a POST is fired during a GET (for example during the download of an image), or when an image tag has no src.

Is replying to client before receiving complete request allowed for HTTP 1.0 server?

I couldn't find an RFC that answers this question. Perhaps you guys can point me in the right direction.
I'm implementing a stripped-down HTTP server whose only function is to accept big multi-part encoded uploads.
In certain cases, such as when the file is too big or the client is not authorized to upload, I want the server to reply with an error and close the connection immediately.
It looks like Chrome doesn't like this because it thinks the server returned HTTP code zero:
Could not get any response
This seems to be like an error connecting to http://my_ubuntu:8080/api/upload. The response status was 0.
Check out the W3C XMLHttpRequest Level 2 spec for more details about when this happens.
Therefore the question:
Is replying to the client before receiving the complete request allowed for an HTTP server?
Update: just tested it with an iOS 6 client. Same thing, it thinks the server abruptly closed the connection :(
This is a great question and apparently it is very ambiguous. You will probably enjoy reading this article on the "Million Dollar Bug" - http://jacquesmattheij.com/the-several-million-dollar-bug
I think this is certificate trust issue. Try manually trusting the site and subsequent requests should work.
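For what it's worth, a common workaround is to send the error response early but keep draining the rest of the request before closing, so the client sees a complete reply instead of a connection reset. A rough socket-level sketch, with the parsing, limit and framing simplified and made up for illustration:

import socket

MAX_UPLOAD = 10 * 1024 * 1024   # hypothetical limit for the example

def handle_connection(conn):
    # Read until the request headers are complete (very simplified parsing).
    head = b""
    while b"\r\n\r\n" not in head:
        chunk = conn.recv(4096)
        if not chunk:
            conn.close()
            return
        head += chunk

    header_block = head.split(b"\r\n\r\n", 1)[0].decode("latin-1")
    length = 0
    for line in header_block.split("\r\n")[1:]:
        if line.lower().startswith("content-length:"):
            length = int(line.split(":", 1)[1].strip())

    if length > MAX_UPLOAD:
        # Send the error straight away...
        conn.sendall(
            b"HTTP/1.1 413 Request Entity Too Large\r\n"
            b"Connection: close\r\n"
            b"Content-Length: 0\r\n"
            b"\r\n"
        )
        # ...but keep reading and discarding the rest of the upload, so the
        # client receives the full response instead of a connection reset.
        already_received = len(head) - head.index(b"\r\n\r\n") - 4
        remaining = length - already_received
        while remaining > 0:
            chunk = conn.recv(65536)
            if not chunk:
                break
            remaining -= len(chunk)
    # Accepting and storing an allowed upload is not shown here.
    conn.close()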

Browser audio-playback fails when Accept-Ranges not set in http headers; WHY?

I recently discovered something (that surprises me) when opening an audio file in firefox or chrome. If I don't specify the HTTP response header "Accept-Ranges: bytes", firefox won't be able to determine the length of the ogg file (in seconds) before reaching the end in playback. Chrome will discover the length of the ogg file (in seconds), but the audio player appears to crash when it reaches the end, and refuses to re-play the file after the crash. Other browsers were not tested.
Working HTTP response headers:
HTTP/1.1 200 OK
Accept-Ranges: bytes
Content-Type: application/ogg
Content-Length: 245646
Failing HTTP response headers:
HTTP/1.1 200 OK
Content-Type: application/ogg
Content-Length: 245646
This is strange to me, because I'm not using any partial-content ranges. My server implementation doesn't even support them (so I think my server might be lying when it says "Accept-Ranges: bytes"). I certainly don't see why this header should be required for playback within the browser. Do both browsers just have bugs, which are exposed when I don't set the Accept-Ranges header? This seems unlikely to me. Can anyone explain?
Thanks!
Not sure if this is a bug or just a skewed interpretation of the standard (section 14.5). The standard states MAY and not MUST...
OTOH it may be that the audio playback modules need this header in order to stream and/or seek, etc. You can try out what happens if you send "Accept-Ranges: none"... if they are somewhat HTTP 1.1 conforming, it should just work...
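If you decide to actually support ranges instead of only advertising them, a minimal sketch of answering a single-range request (the content type matches the question's ogg file; parsing is deliberately simplified, suffix and multi-range requests are not handled) could look like this:

import os
import re

def build_response(path, range_header):
    # Return (status, headers, body) for a GET of the audio file at `path`,
    # honoring a simple "Range: bytes=start-end" header when present.
    size = os.path.getsize(path)
    with open(path, "rb") as f:
        match = re.match(r"bytes=(\d+)-(\d*)", range_header or "")
        if match:
            start = int(match.group(1))
            end = int(match.group(2)) if match.group(2) else size - 1
            f.seek(start)
            body = f.read(end - start + 1)
            return 206, {
                "Accept-Ranges": "bytes",
                "Content-Type": "application/ogg",
                "Content-Range": f"bytes {start}-{end}/{size}",
                "Content-Length": str(len(body)),
            }, body
        body = f.read()
        return 200, {
            "Accept-Ranges": "bytes",
            "Content-Type": "application/ogg",
            "Content-Length": str(len(body)),
        }, body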

http response message

I want to know: when the browser sends a request, does the server send back the contents explicitly? And how would I confirm it?
There are several toolbars for Firefox that show exactly what is coming and going when making an HTTP request.
For Firefox I use the following plugins:
Firebug
Web Developer
You could also install a utility called Wireshark. It will "sniff" all the network traffic on your computer and show you at a packet level how it all works.
Browser plugins such as Firebug (for Firefox) let you see exactly what the server is returning; that's quite instructive and recommended! You'll see a bunch of headers followed by the response body in any of several formats (could be chunked, etc.).
In a Windows environment you can use Fiddler.
Fiddler includes a fair amount of documentation and is easy to use.
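If you want to see it at the lowest level without installing anything, you can also speak HTTP by hand; a small sketch (the host is a placeholder) sends a raw request and prints exactly what the server returns, headers and body included:

import socket

host = "example.com"   # placeholder host
request = (
    "GET / HTTP/1.1\r\n"
    "Host: " + host + "\r\n"
    "Connection: close\r\n"
    "\r\n"
).encode("ascii")

# Send the raw request and collect everything the server sends back.
with socket.create_connection((host, 80)) as sock:
    sock.sendall(request)
    response = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break
        response += chunk

print(response.decode("utf-8", errors="replace"))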