Is gzip compression useful for mobile devices?

I'm wondering if anyone has a clue as to whether gzip compression is as useful on mobile devices as it is on a desktop computer.
Will the phone use more battery?
Or will it save some because of the bandwidth saving?
Will the page load faster, or is the decompression process slow on those limited devices?
Does the compressed data actually reach the end-user or is it uncompressed somewhere by the 3G provider? (this may be a stupid question, sorry).
Thank you.

Not a stupid question at all.
The correct trade-off is in favor of gzip.
It turns out that Lempel-Ziv decompression is fairly cheap (quite unlike the compression), while bandwidth is usually quite expensive, especially for roaming consumers, and also costs battery power and transfer time.

It always depends on where your bottleneck is. If it is a very weak CPU, anything that puts a bigger burden on it is bad. If it is your network connection, compressed data transfer is a huge performance boost. The strain on the battery should be negligible in either case.
With today's mobile devices, CPU power is certainly weaker than that of a desktop PC, but usually strong enough for gzip compression and decompression. In most cases the bottleneck will be the network connection, so gzip compression is certainly useful. There will be rare cases, though, where the opposite is true.
You just need to use a little common sense to see if my answer applies to your special case ;-)
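The asymmetry behind this answer is easy to see with Python's standard library. A small, self-contained sketch comparing compressed size and the relative cost of compressing (the server's job) versus decompressing (the phone's job); the HTML payload is made up, and exact timings will vary by device:

```python
import gzip
import time

# A repetitive HTML payload, typical of what a page serves.
html = b"<html><body>" + b"<p>Hello, mobile world!</p>" * 500 + b"</body></html>"

compressed = gzip.compress(html, compresslevel=6)
print(f"original:   {len(html)} bytes")
print(f"compressed: {len(compressed)} bytes "
      f"({100 * len(compressed) / len(html):.1f}% of original)")

# Compare decompression (what the phone does) with compression (the server's side).
t0 = time.perf_counter()
for _ in range(1000):
    gzip.decompress(compressed)
t_dec = time.perf_counter() - t0

t0 = time.perf_counter()
for _ in range(1000):
    gzip.compress(html, compresslevel=6)
t_comp = time.perf_counter() - t0

print(f"decompression took {t_dec:.3f}s, compression {t_comp:.3f}s for 1000 rounds")
```

On typical hardware the compressed payload is a small fraction of the original, and inflating it is considerably cheaper than deflating it, which is exactly the trade-off that favors gzip on the receiving device.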

One question you may also want to investigate is whether or not the mobile browsers you are considering even support compression. For example, I just checked the request headers sent by my BlackBerry Storm and it does not send any "Accept-Encoding" headers -- which means the server should not send back a compressed response.
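The server-side half of that rule can be sketched as follows, assuming request headers arrive as a plain dict (`maybe_compress` is a hypothetical helper, not a real framework API; a production version would also honor q-values in Accept-Encoding):

```python
import gzip

def maybe_compress(body: bytes, request_headers: dict) -> tuple[bytes, dict]:
    """Gzip the response body only if the client advertised support.
    Returns (body, extra response headers)."""
    accept = request_headers.get("Accept-Encoding", "")
    if "gzip" in accept.lower():
        return gzip.compress(body), {"Content-Encoding": "gzip"}
    # Client (e.g. a BlackBerry that sends no Accept-Encoding header)
    # gets the uncompressed bytes, per the HTTP content negotiation rules.
    return body, {}

# A browser that supports gzip gets a compressed body:
body, hdrs = maybe_compress(b"<html>...</html>", {"Accept-Encoding": "gzip, deflate"})
# A client with no Accept-Encoding header gets the raw bytes back:
raw_body, no_hdrs = maybe_compress(b"<html>...</html>", {})
```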

Related

What makes HTTP/2 faster than HTTP/1 beyond multiplexing and server push?

I can understand why multiplexing and server push help speed up web page loading and reduce workload on the server side. But I have also learned that the binary protocol, header compression, and prioritization of requests contribute to the performance improvements of HTTP/2 over HTTP/1. How do these three features actually contribute to the improvements?
Binary protocol
This actually doesn’t help that much IMHO, other than allowing multiplexing (which DOES help a lot with performance). Yes, it’s easier for a program to parse binary packets than text, but I don’t think that’s going to make a massive performance boost on its own. The main reasons to go binary, as I say, are the other benefits (multiplexing and header compression) and easier parsing, rather than raw performance.
Header compression
This can have a big potential impact. Most requests (and responses) repeat a LOT of data. So compressing headers (which works by replacing repeated headers with references across requests, rather than by compressing within a request the way HTTP body compression works) can significantly reduce the size of requests (but less so for responses, where the headers are often not a significant portion of the total response).
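The cross-request referencing idea can be shown with a toy sketch (this is NOT the real HPACK wire format; `ToyHeaderTable` is invented purely for illustration): the first request sends every header as a literal, and later requests replace headers seen before with small table references.

```python
class ToyHeaderTable:
    """Toy illustration of HPACK-style indexing (not real HPACK):
    a (name, value) pair seen before is sent as a tiny index
    instead of being retransmitted in full."""

    def __init__(self):
        self.table = {}  # (name, value) -> index

    def encode(self, headers):
        out = []
        for pair in headers:
            if pair in self.table:
                out.append(("index", self.table[pair]))  # small reference
            else:
                self.table[pair] = len(self.table)
                out.append(("literal", pair))            # full header, sent once
        return out

table = ToyHeaderTable()
first = table.encode([
    ("user-agent", "Mozilla/5.0 (a very long UA string...)"),
    ("cookie", "session=abc123..."),
    (":path", "/index.html"),
])
# Second request on the same connection: only the changed header is a literal.
second = table.encode([
    ("user-agent", "Mozilla/5.0 (a very long UA string...)"),
    ("cookie", "session=abc123..."),
    (":path", "/style.css"),
])
```

In `second`, the bulky user-agent and cookie headers have collapsed into index references, which is where the big savings on real pages come from.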
Prioritisation of requests
This is one of the more interesting parts of HTTP/2, which has huge potential but has not been optimised for yet. Think of it like this: imagine you have 3 critical CSS files and 3 huge images to download. Under HTTP/1.1, 6 connections would be opened and all 6 items would download in parallel. This may seem fine, but it means the less critical image files are using up bandwidth that would be better spent on the critical CSS files. With HTTP/2 you can say “download the critical CSS first with high priority and only when they are done, look at those 3 image files”.
Unfortunately, despite the fact that HTTP/2 has a prioritisation model that allows prioritisation as complex as you want (too complex, some argue!), browsers and servers don’t currently use it well (and website owners and web developers currently have very little way to influence it at all). In fact, bad prioritisation decisions can actually make HTTP/2 slower than HTTP/1.1, as the 6-connection limit is lifted and hundreds of resources can all download in parallel, all fighting over the same bandwidth.
I suspect there will be a lot more research and change here in implementations, but there shouldn’t need to be much change in the spec, as it already allows for very complex prioritisation as I mentioned.
We’ve been optimising for HTTP/1.1 for decades and have squeezed a lot out of it. I suspect we’ve a lot more to get out of HTTP/2 (and HTTP/3 when it comes along too). Check out my upcoming book if interested in finding out more on this topic.

Why is HTTP/2 slower for me in FireFox?

There's a very interesting HTTP/2 demo that Akamai have on their site:
https://http2.akamai.com/demo
HTTP/2 (the future of HTTP) allows for concurrently downloaded assets over a single TCP connection reducing the need for spritesheets and concatenation... As I understand it, it should always be quicker on sites with lots of requests (like in the demo).
When I try the demo in Chrome or Safari it is indeed much faster, but when I've tested it in FireFox it's consistently SLOWER. Same computer, same connection.
Why is this?
HTTP/2 is apparently supported by all major browsers, including FireFox, so it should work fine, but in this real world demonstration it is slower 80% of the time. (In Chrome and Safari it's faster 100% of the time.)
I tried again on the following Monday after ensuring I'd cleared all my caches:
My OS: El Capitan Version 10.11.3 (15D21) with FireFox Version 44.0.2
UPDATE (APR 2016)
Now running Firefox 45.0.1:
Still slower!
You seem to have a pretty small latency and a very fast network.
My typical results for HTTP/1.1 are latency=40ms, load_time=3.5s, and HTTP/2 is consistently 3 times faster.
With a network such as yours, other effects may come into play.
In my experience one of the most important is the cipher that is actually negotiated.
HTTP/2 mandates the use of very strong ciphers, while HTTP/1.1 (over TLS) allows for far weaker, and therefore faster, ciphers.
In order to compare apples to apples, you would need to make sure that the same cipher is used. For me, for this Akamai demo, the same cipher was used.
The other thing that may be important is that the HTTP/1.1 sources are downloaded from http1.akamai.com, while for HTTP/2 they are downloaded from http2.akamai.com. For me they resolve to different addresses.
One should also analyze how precise the time reported in the demo is :)
The definitive answer can only come from a network trace with tools like Wireshark.
For networks worse than yours, probably the majority, HTTP/2 is typically a clear winner due to HTTP/2 optimizations related to latency (in particular, multiplexing).
Latency matters more than absolute load time if you're mixing small and big resources. E.g. if you're loading a very large image but also a small stylesheet, then HTTP/2's multiplexing over a single connection can let the stylesheet finish while the image is still loading. The page can be rendered with the final styles and, assuming that the image is progressive, will also display a low-res version of the image.
In other words, the tail end of a load might be much less important if it's caused by a few big resources.
That said, the demo page actually loads http2 faster for me on FF nightly most of the time, although there is some variance. You might need better measurements.

Good tools to understand / reverse engineer a top layer network protocol

There is an interesting problem at hand. I have a role-playing MMOG running through a client application (not a browser) which sends the actions of my player to a server which keeps all the players in sync by sending packets back.
Now, the game uses a top-layer protocol over TCP/IP to send the data. However, Wireshark does not know what protocol is being used and shows everything beyond the TCP header as a raw dump.
Further, this dump does not have any plain text strings. Although the game has a chat feature, the chat string being sent is not seen in this dump as plain text anywhere.
My task is to reverse engineer the protocol a little to find some very basic stuff about the data contained in the packets.
Does anybody know why the chat string is not visible as plain text, and whether it is likely that a standard top-layer protocol is being used?
Also, are there any tools which can help to get the data from the dump?
If it's encrypted you do have a chance (in fact, you have a 100% chance if you handle it right): the key must reside somewhere on your computer. Just pop open your favorite debugger, watch for a bit (err, a hundred bytes or so I'd hope) of data to come in from a socket, set a watchpoint on that data, and look at the stack traces of things that access it. If you're really lucky, you might even see it get decrypted in place.
If not, you'll probably pick up on the fact that they're using a standard encryption algorithm (they'd be fools not to, from a theoretical security standpoint) either by looking at stack traces (if you're lucky) or by using one of the IV / S-box profilers out there (avoid the academic ones, most of them don't work without a lot of trouble). Many encryption algorithms use blocks of "standard data" that can be detected (these are the IVs / S-boxes); these are what you look for in the absence of other information.
Whatever you find, google it, and try to override their encryption library to dump the data that's being encrypted/decrypted. From these dumps, it should be relatively easy to see what's going on.
REing an encrypted session can be a lot of fun, but it requires skill with your debugger and lots of reading. It can be frustrating but you won't be sorry if you spend the time to learn how to do it :)
Best guess: encryption, or compression.
Even telnet supports compression over the wire, even though the whole protocol is entirely text based (well, very nearly).
You could try running the data stream through some common compression utilities, but I doubt that'd do much for you, since in all likelihood they don't transmit compression headers; there are simply some predefined values enforced.
If it's in fact encryption, then you're pretty much screwed (without much, much more effort that I'm not even going to start to get into).
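The "no compression headers" guess is cheap to test: many protocols send raw DEFLATE with no zlib/gzip wrapper, so off-the-shelf decompression fails until you tell the decompressor not to expect a header. A small sketch with Python's zlib (the payload is made up; negative `wbits` means headerless raw deflate):

```python
import zlib

payload = b"say hello to the other players " * 20

# What a game might actually put on the wire: raw DEFLATE, no zlib/gzip header.
co = zlib.compressobj(level=6, wbits=-15)
raw = co.compress(payload) + co.flush()

# A naive attempt fails, because zlib.decompress expects a zlib header:
try:
    zlib.decompress(raw)
    print("unexpectedly succeeded")
except zlib.error:
    print("plain zlib.decompress failed - no header present")

# Telling zlib to expect headerless (raw) deflate succeeds:
recovered = zlib.decompress(raw, wbits=-15)
assert recovered == payload
```

So before concluding the stream is encrypted, it is worth trying decompression with header-less settings against captured packet payloads.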
It's most likely either compressed or encrypted.
If it's encrypted you won't have a chance.
If it's compressed you'll have to somehow figure out which parts of the data are compressed, where the compressed parts start, and what the compression algorithm is. If you're lucky there will be standard headers that you can identify, although they are probably stripped out to save space.
None of this is simple. Reverse engineering is hard. There aren't any standard tools to help you, you'll just have to investigate and try things until you figure it out. My advice would be to ask the developers for a protocol spec and see if they are willing to help support what you are trying to do.

Will HTTP Compression (GZip or deflate) on a low traffic site actually be beneficial?

I have a web application where the client will be running off a local server (i.e. requests will not be going out over the net). The site will be quite low traffic, so I am trying to figure out if the actual decompression is expensive in this type of system. Performance is an issue, so I will have caching set up, but I was considering compression as well. I will not have bandwidth issues as the site is very low traffic. So, I am just trying to figure out if compression will do more harm than good in this type of system.
Here's a good article on the subject.
On pretty much any modern system with a solid web stack, compression will not be expensive, but it seems to me that you won't be gaining any positive effects from it whatsoever, no matter how minor the overhead. I wouldn't bother.
When you measured the performance, how did the numbers compare? Was it faster when you had compression enabled, or not?
I have used compression but users were running over a wireless 3G network at various remote locations. Compression made a significant different to the bandwidth usage in this case.
For users running locally, and with bandwidth not an issue, I don't think it is worth it.
For cachable resources (.js, .html, .css files), I think it doesn't make sense after the browser has cached those resources.
But for non-cachable resources (e.g. a JSON response) I think it makes sense.

Is it possible to downsample an audio stream at runtime with Flash or FMS?

I'm no expert in audio, so if any of you folks are, I'd appreciate your insights on this.
My client has a handful of MP3 podcasts stored at a relatively high bit rate, and I'd like to be able to serve those files to her users at "different" bit rates depending on that user's credentials. (For example, if you're an authenticated user, you might get the full, unaltered stream, but if you're not, you'd get a lower-bit-rate version -- or at least a purposely tweaked lower-quality version than the original.)
Seems like there are two options: downsampling at the source and downsampling at the client. In this case, knowing of course that the source stream would arrive at the client at a high bit rate (and that there are considerations to be made about that, which I realize), I'd prefer to alter the stream at the client somehow, rather than on the server, for several reasons.
Is doing so possible with the Flash Player and ActionScript alone, at runtime (even with a third-party library), or does a scenario like this one require a server-based solution? If the latter, can Flash Media Server handle this requirement specifically? Again, I'd like to avoid using FMS if I can, since she doesn't really have the budget for it, but if that's the only option and it's really an option, I'm open to considering it.
Thanks in advance...
Note: Please don't question the sanity of the request -- I realize it might sound a bit strange, but the requirements are what they are. In that light, for purposes of answering the question, you can ignore the source and delivery path of the bits; all I'm really looking for is an explanation of whether (and ideally how) a Flash client can downsample an MP3 audio stream at runtime, irrespective of whether the audio's arriving over a network connection or being read directly from disk. Thanks much!
I'd prefer to alter the stream at the client somehow, rather than on the server, for several reasons.
Please elucidate the reasons, because resampling on the client end would normally be considered crazy: wasting bandwidth sending the higher-quality version to a user who cannot hear it, and risking a canny user ripping the higher-quality stream as it comes in through the network.
In any case the Flash Player doesn't give you the tools to process audio, only play it.
You shouldn't need FMS to process audio at the server end. You could have a server-side script that loaded the newly-uploaded podcasts and saved them back out as lower-bitrate files, which could be served to lowly users via a normal web server. For Python, see e.g. PyMedia or py-lame; even a shell script using lame or ffmpeg from the command line should be pretty easy to pull off.
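As a sketch of the command-line approach: a tiny helper that builds an ffmpeg re-encoding command (`build_transcode_cmd` is a hypothetical name, the filenames and bitrate are illustrative, and actually running it requires ffmpeg to be installed):

```python
def build_transcode_cmd(src: str, dst: str, kbps: int) -> list[str]:
    """Build an ffmpeg command that re-encodes an MP3 at a lower
    audio bitrate (-b:a). Run it with subprocess.run(cmd, check=True)
    once ffmpeg is available on the server."""
    return ["ffmpeg", "-y", "-i", src, "-b:a", f"{kbps}k", dst]

# E.g. re-encode the uploaded podcast down to 64 kbps for non-authenticated users:
cmd = build_transcode_cmd("podcast_hi.mp3", "podcast_lo.mp3", 64)
```

A cron job or upload hook could run this once per new podcast, so the low-bitrate files are plain static assets and no streaming server is needed.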
If storage is at a premium, have you looked into AAC audio? I believe Flash 9 and 10 on desktop browsers will play it. In my experience AAC takes only half the size of a comparable MP3 (i.e. an 80kbps AAC will sound the same as a 160kbps MP3).
As for playback quality, if I recall correctly there are audio playback settings in the Publish Settings section in the Flash editor. Whether or not the playback bitrate can be changed at runtime is something I'm not sure of.
