IIS 7 compression

I know that IIS allows you to compress the files being served. Any idea what the compression ratio is?
Thanks!

Compression always depends heavily on what you're compressing. An HTML file will shrink considerably more than a JPEG file, for instance.
Furthermore, both the server and the web browser must support a compression method in order for that method to be used in an HTTP transfer. gzip and deflate are the most common (see this page). The compression ratio depends on which method is used and also on the actual data being compressed.
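For example, the client advertises what it can decode via Accept-Encoding, and the server answers with Content-Encoding if it chose to compress the body. A quick C# sketch to observe the negotiation (the URL is illustrative):

using System;
using System.Net.Http;

class AcceptEncodingDemo
{
    static void Main()
    {
        using (var client = new HttpClient())
        {
            // Advertise gzip support; the server may then compress the body.
            client.DefaultRequestHeaders.AcceptEncoding.ParseAdd("gzip");
            var response = client.GetAsync("https://example.com/").Result;
            // If the server chose to compress, Content-Encoding names the method.
            Console.WriteLine("Content-Encoding: " +
                string.Join(", ", response.Content.Headers.ContentEncoding));
        }
    }
}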

Related

Http Compression on Binary Data

I am serving binary data through HTTP.
For the time being I use Content-Disposition: attachment.
Can I use the built-in HTTP compression (having the client request data with the header Accept-Encoding) to compress the attachment?
Or should I compress the attachment manually?
What is the proper way of serving compressed byte arrays through HTTP?
Content-disposition is merely a header instructing the browser to either render the response, or to offer it as a download to the user. It doesn't change any HTTP semantics, and it doesn't change how the response body is transferred or interpreted.
So just use the built-in compression that compresses the response body according to the request header Accept-Encoding.
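To illustrate, the two headers are orthogonal; a response can be both compressed and offered as a download at the same time (the filename and path here are hypothetical):

GET /report.bin HTTP/1.1
Host: example.com
Accept-Encoding: gzip, deflate

HTTP/1.1 200 OK
Content-Type: application/octet-stream
Content-Disposition: attachment; filename="report.bin"
Content-Encoding: gzip

One caveat worth noting: if the binary payload is already compressed (ZIP archives, JPEGs), gzipping it again gains little and can even add overhead.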

What are the options for the gzip_proxied directive for?

The gzip_proxied directive allows for the following options (non-exhaustive):
expired: enables compression if a response header includes the “Expires” field with a value that disables caching;
no-cache: enables compression if a response header includes the “Cache-Control” field with the “no-cache” parameter;
no-store: enables compression if a response header includes the “Cache-Control” field with the “no-store” parameter;
private: enables compression if a response header includes the “Cache-Control” field with the “private” parameter;
no_last_modified: enables compression if a response header does not include the “Last-Modified” field;
no_etag: enables compression if a response header does not include the “ETag” field;
auth: enables compression if a request header includes the “Authorization” field;
I can't see any rational reason to use most of these options. For example, why would whether or not a proxied request contains the Authorization header, or Cache-Control: private, affect whether or not I want to gzip it?
Given that old versions of Nginx strip ETags from responses when gzipping them, I can see a use case for no_etag: if you don't have Nginx configured to generate ETags for your gzipped responses, you may prefer to pass on an uncompressed response with an ETag rather than generate a compressed one without an ETag.
I can't figure out the others, though.
What are the intended use cases of each of these options?
From the admin guide: (emphasis mine)
The directive has a number of parameters specifying which kinds of proxied requests NGINX should compress. For example, it is reasonable to compress responses only to requests that will not be cached on the proxy server. For this purpose the gzip_proxied directive has parameters that instruct NGINX to check the Cache-Control header field in a response and compress the response if the value is no-cache, no-store, or private. In addition, you must include the expired parameter to check the value of the Expires header field. These parameters are set in the following example, along with the auth parameter, which checks for the presence of the Authorization header field (an authorized response is specific to the end user and is not typically cached)
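The example the guide is referring to is along these lines (the gzip_types values here are illustrative):

# Compress proxied responses only when they will not be cached on this
# proxy, or when they are user-specific (Authorization header present).
gzip on;
gzip_types text/plain application/xml;
gzip_proxied no-cache no-store private expired auth;
gzip_min_length 1000;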
I'd agree that not compressing cacheable responses is reasonable. Consider that the primary savings of caching at a proxy are increased performance (response time) and a reduction in the time and bandwidth the proxy spends requesting the upstream resource. The tradeoff to gain these performance benefits is the cost of cache storage. Here are some use cases where not compressing cacheable responses makes sense:
In the normal web traffic of many sites, non-personalized responses (which constitute the majority of cacheable responses) have already been optimized through techniques like script minification, image size optimization, etc., in a web build process. While these static resources might shrink slightly from compression, the CPU cost of trying to gzip them smaller is probably not an efficient use of the proxy layer machine resources. But dynamically generated pages, served to logged-in users, containing tons of application-generated content would very likely benefit from compression (and would typically not be cacheable).
You are setting up a proxy in front of a costly upstream service, but you are serving responses to another proxy that will be responsible for compression for each user agent. For example, if you have a CDN that makes multiple requests to the same costly upstream resource (from separate geographical edge locations), you want to ensure that you can reuse the costly response. If the CDN caches uncompressed versions (to serve both compressed and uncompressed user agents), you may be compressing at your proxy only to have the CDN decompress again, which is simply a waste of hardware and electricity on both sides, all to reduce bandwidth in the highest-bandwidth part of the chain. (Response gzip compression is most beneficial at the last mile, to get the response data to your user's phone, which has dropped to one dot of signal as they enter the subway.)
For sizable response entities, requests may come in for byte ranges of the resource (from various user agents, often via downstream CDN intermediaries), including from user agents that don't support compression. The CDN is likely to serve byte-range requests from its own cache, provided that it has an uncompressed version already in its cache.

How can I set Transfer-Encoding to chunked, explicitly or implicitly, in an ASP.NET response?

Can I simply set the Transfer-Encoding header?
Will calling Response.Flush() at some point cause this to occur implicitly?
EDIT
No, I cannot call Response.Headers.Add("Transfer-Encoding", "anything"); that throws.
Any other suggestions?
Related:
Enable Chunked Transfer Encoding in ASP.NET
TL;DR: Specifying the content-length is the best way to achieve a fast first byte; you'll allow chunking at the TCP rather than the HTTP level. If you don't know the content-length, setting context.Response.BufferOutput to false will send output as it's written to the output stream, using chunked transfer encoding.
Why do you want to set Transfer-Encoding: chunked? Chunked transfers are essentially a work-around to permit sending documents whose content-length is not known in advance. ASP.NET, however, by default buffers the entire output and hence does know the overall content length.
Of course, HTTP is layered over TCP, and behind the scenes TCP is "chunking" anyhow by splitting even a monolithic HTTP response into packets - meaning that if you specify the content-length up front and disable output buffering, you'll get the best latency without requiring HTTP-level chunking. Thus, you don't need HTTP-level chunking to provide a fast first byte when you know the content-length.
Although I'm not an expert on HTTP, I have implemented a simple streaming media server with seeking support, dynamic compression, caching, etc., and I do have a reasonable grasp of the relevance of a fast first byte. Chunking is generally an inferior option if you know the content-length, which is almost certainly why ASP.NET won't let you set it manually - it's just not necessary.
However, if you don't know the HTTP content length before transmission and buffering is too expensive, you turn off output buffering and presumably the server will use a chunked transfer encoding by necessity.
When does the server use chunked transfer encoding? I just tested, and indeed if context.Response.BufferOutput is set to false, and the content length is not set, the response is chunked; such a response is 1-2% larger in my entirely non-scientific quick test of a 1.7 MB gzip-encoded XML document. Since gzip relies on context to reduce redundancy, I'd expected the compression ratio to suffer more, but it seems that chunking doesn't necessarily greatly reduce compression ratios.
If you look at the framework code in Reflector, it seems that the transfer encoding is indeed set automatically as needed - i.e. if buffering is off AND no content length is known AND the response is to an HTTP/1.1 request, chunked transfer encoding is used. However, if the server is IIS7 and this is a worker request (integrated mode?), the code branches to a native method - probably with the same behavior, but I can't verify that.
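To see that behavior in practice, here is a minimal sketch of a classic ASP.NET handler (the handler name and output are illustrative) that triggers chunked encoding by disabling buffering and never setting a Content-Length:

using System.Web;

// With buffering off and no Content-Length set, ASP.NET emits
// Transfer-Encoding: chunked for HTTP/1.1 clients.
public class StreamingHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        context.Response.BufferOutput = false; // disable output buffering
        context.Response.ContentType = "text/plain";
        for (int i = 0; i < 10; i++)
        {
            context.Response.Write("chunk " + i + "\n");
            context.Response.Flush(); // push each piece to the client as written
        }
    }
}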
It looks like you need to set up IIS for this. IIS 6 has a property AspEnableChunkedEncoding in the metabase, and you can see the IIS 7 mappings for this on MSDN at http://msdn.microsoft.com/en-us/library/aa965021(VS.90).aspx.
This will enable you to set Transfer-Encoding: chunked in your header. I hope this helps.
Even if you set Buffer to false and leave the content length empty, you need to make sure that the "Dynamic Content Compression" feature is disabled in IIS 7 to make chunked responses work. Also, the client must support at least HTTP/1.1; chunked mode won't work for HTTP/1.0.
Response.Buffer = False
This will set the HTTP header "Transfer-Encoding: chunked" and send the response on each call to Response.Write.

When a web application serves a video file, is it streamed automatically? What options are there?

Your question is vague. The behavior you get will depend on what Content-Type header your "web application" (or container) gives your file.
Different types will do different things depending on the browser.
http://en.wikipedia.org/wiki/Internet_media_type
If you want to check what headers your application is sending, use Firefox + Live HTTP Headers.
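As a sketch, in classic ASP.NET you control that header yourself (the handler name, path, and media type here are hypothetical):

using System.Web;

// The Content-Type you send is what decides whether the browser tries to
// play the file inline; Content-Disposition: attachment would instead
// force a download.
public class VideoHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "video/mp4";
        context.Response.TransmitFile(context.Server.MapPath("~/media/sample.mp4"));
    }
}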
By default, video files are sent from IIS to clients in burst mode, at the highest bandwidth that the connection can support.
There is an extension for IIS that provides support for streaming and bit rate throttling:
http://www.iis.net/extensions/BitRateThrottling
It supports many media types, but not all of them, although it is extensible.

Compress file before upload via http

Is it possible to compress data being sent from the client's browser (a file upload) to the server?
Flash, Silverlight, and other technologies are OK!
Browsers never compress uploaded data because they have no way of knowing whether the server supports it.
Downloaded content can be compressed because the Accept-Encoding request header allows the browser to indicate to the server that it supports compressed content. Unfortunately, there's no equivalent protocol that works the other way and allows the server to indicate to the browser that it supports compression.
If you have control over both the server and the client (e.g. using Silverlight or Flash), then you could make use of compressed request bodies.
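For instance, a minimal C# client-side sketch, assuming you also control a server endpoint (the URL here is hypothetical) that knows to decompress gzip request bodies:

using System;
using System.IO;
using System.IO.Compression;
using System.Net.Http;

class CompressedUpload
{
    static void Main()
    {
        byte[] raw = File.ReadAllBytes("upload.bin");

        // Gzip the payload before sending it.
        var compressed = new MemoryStream();
        using (var gz = new GZipStream(compressed, CompressionMode.Compress, leaveOpen: true))
        {
            gz.Write(raw, 0, raw.Length);
        }
        compressed.Position = 0;

        var content = new StreamContent(compressed);
        // Label the body so the cooperating server knows to decompress it.
        content.Headers.ContentEncoding.Add("gzip");

        using (var client = new HttpClient())
        {
            var response = client.PostAsync("https://example.com/upload", content).Result;
            Console.WriteLine(response.StatusCode);
        }
    }
}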
For Silverlight there is a library called Xceed which, amongst other things, "lets you compress data as it is being uploaded"; it is not free, though. I believe that this can only be done via a technology such as Flash or Silverlight, and not natively in the browser.
I disagree with the above poster about browsers doing this automatically; I believe it only happens with standard HTML/CSS/text files, and only if both the server and the browser have compression enabled (gzip, deflate).
