Why is HTTP/2 slower for me in Firefox?

There's a very interesting HTTP/2 demo that Akamai have on their site:
https://http2.akamai.com/demo
HTTP/2 (the future of HTTP) allows assets to be downloaded concurrently over a single TCP connection, reducing the need for spritesheets and concatenation... As I understand it, it should always be quicker on sites with lots of requests (like in the demo).
When I try the demo in Chrome or Safari it is indeed much faster, but when I've tested it in Firefox it's consistently SLOWER. Same computer, same connection.
Why is this?
HTTP/2 is apparently supported by all major browsers, including Firefox, so it should work fine, but in this real-world demonstration it is slower 80% of the time. (In Chrome and Safari it's faster 100% of the time.)
I tried again on the following Monday after ensuring I'd cleared all my caches:
My OS: El Capitan Version 10.11.3 (15D21) with Firefox Version 44.0.2
UPDATE (APR 2016)
Now running Firefox 45.0.1:
Still slower!

You seem to have pretty low latency and a very fast network.
My typical results for HTTP/1.1 are latency=40ms, load_time=3.5s, and HTTP/2 is consistently 3 times faster.
With a network such as yours, other effects may come into play.
In my experience one of the most important is the cipher that is actually negotiated.
HTTP/2 mandates the use of very strong ciphers, while HTTP/1.1 (over TLS) allows for far weaker, and therefore faster, ciphers.
In order to compare apples to apples, you would need to make sure that the same cipher is used. For me, for this Akamai demo, the same cipher was used.
The other thing that may be important is that the HTTP/1.1 sources are downloaded from http1.akamai.com, while for HTTP/2 they are downloaded from http2.akamai.com. For me they resolve to different addresses.
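If you want to check both of these points yourself without a full packet capture, a quick TLS probe is enough. Below is a minimal Python sketch (standard library only; the hostnames are the ones the demo uses) that prints the resolved address, the negotiated ALPN protocol, and the negotiated cipher for each origin:

    import socket
    import ssl

    def probe(host):
        # Offer both h2 and http/1.1 via ALPN and report what the server negotiates.
        ctx = ssl.create_default_context()
        ctx.set_alpn_protocols(["h2", "http/1.1"])
        with socket.create_connection((host, 443), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                addr = tls.getpeername()[0]
                print(f"{host} -> {addr} | ALPN: {tls.selected_alpn_protocol()}"
                      f" | cipher: {tls.cipher()[0]}")

    for host in ("http1.akamai.com", "http2.akamai.com"):
        probe(host)

If the two origins resolve to different addresses or negotiate different ciphers, the demo is not quite an apples-to-apples comparison.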
One should also consider how precise the time reported in the demo is :)
The definitive answer can only come from a network trace with tools like Wireshark.
For networks worse than yours, probably the majority, HTTP/2 is typically a clear winner due to HTTP/2 optimizations related to latency (in particular, multiplexing).

Latency matters more than absolute load time if you're mixing small and big resources. E.g. if you're loading a very large image but also a small stylesheet, then HTTP/2's multiplexing over a single connection can let the stylesheet finish while the image is still loading. The page can be rendered with the final styles and, assuming the image is progressive, will also display a low-res version of the image.
In other words, the tail end of a load might be much less important if it's caused by a few big resources.
That said, the demo page actually loads faster over HTTP/2 for me on FF Nightly most of the time, although there is some variance. You might need better measurements.
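If you want to watch that interleaving yourself rather than trust the demo's timer, a rough sketch like the one below works, assuming the httpx package is installed with its HTTP/2 extra (pip install "httpx[http2]"); the URLs are placeholders for a large and a small resource on the same HTTP/2 origin:

    import asyncio
    import time

    import httpx

    # Placeholder URLs: substitute a large and a small resource on the same h2 origin.
    BIG = "https://example.com/large-image.jpg"
    SMALL = "https://example.com/styles.css"

    async def timed_get(client, url):
        start = time.monotonic()
        resp = await client.get(url)
        print(f"{url}: {len(resp.content)} bytes in {time.monotonic() - start:.2f}s"
              f" over {resp.http_version}")

    async def main():
        # A single AsyncClient reuses one connection, so both requests are
        # multiplexed over the same TCP/TLS connection when the server speaks h2.
        async with httpx.AsyncClient(http2=True) as client:
            await asyncio.gather(timed_get(client, BIG), timed_get(client, SMALL))

    asyncio.run(main())

The small response should finish well before the large one even though both share a single connection.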


What makes HTTP/2 faster than HTTP/1 beyond multiplexing and server push?

I can understand why multiplexing and server push help speed up web page loading and reduce workload on the server side. But I have also learned that the binary protocol, header compression, and prioritization of requests also contribute to the performance improvements of HTTP/2 over HTTP/1. How do these three features actually contribute to the improvements?
Binary protocol
This actually doesn't help that much, IMHO, other than enabling multiplexing (which DOES help a lot with performance). Yes, it's easier for a program to parse binary packets than text, but I don't think that's going to give a massive performance boost. The main reasons to go binary, as I say, are the other benefits it enables (multiplexing and header compression) and easier parsing, rather than raw performance.
Header compression
This can have a big potential impact. Most requests (and responses) repeat a LOT of data. Compressing headers (which works by replacing repeated headers with references across requests, rather than by compressing within a single request the way HTTP body compression works) can significantly reduce the size of requests (but less so for responses, where the headers are often not a significant portion of the total response).
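To see the cross-request effect in isolation, here is a small sketch assuming the hpack package from the python-hyper project is installed; encoding the same header set twice shows the second request shrinking, because the repeated headers become short references into the shared dynamic table:

    from hpack import Encoder

    def request_headers(path):
        # Typical request headers; only :path differs between the two requests.
        return [
            (":method", "GET"),
            (":scheme", "https"),
            (":authority", "example.com"),
            (":path", path),
            ("user-agent", "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.11; rv:44.0) Gecko/20100101 Firefox/44.0"),
            ("accept", "text/html,application/xhtml+xml"),
            ("cookie", "session=abc123; prefs=dark"),
        ]

    encoder = Encoder()
    first = encoder.encode(request_headers("/index.html"))
    second = encoder.encode(request_headers("/styles.css"))

    # The second encode reuses the dynamic table built by the first, so only
    # the changed :path (plus table references) goes on the wire.
    print(f"first request:  {len(first)} bytes")
    print(f"second request: {len(second)} bytes")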
Prioritisation of requests
This is one of the more interesting parts of HTTP/2, which has huge potential but has not been optimised for yet. Think of it like this: imagine you have 3 critical CSS files and 3 huge images to download. Under HTTP/1.1, 6 connections would be opened and all 6 items would download in parallel. This may seem fine, but it means the less critical image files are using up bandwidth that would be better spent on the critical CSS files. With HTTP/2 you can say "download the critical CSS first with high priority and only when they are done, look at those 3 image files". Unfortunately, despite the fact that HTTP/2 has a prioritisation model that allows prioritisation to be as complex as you want (too complex, some argue!), browsers and servers don't currently use it well (and website owners and web developers currently have very little way to influence it at all). In fact bad prioritisation decisions can actually make HTTP/2 slower than HTTP/1.1, as the 6-connection limit is lifted and hundreds of resources can all download in parallel, all fighting over the same bandwidth. I suspect there will be a lot more research and change here in implementations, but there shouldn't need to be much change in the spec, as it already allows for very complex prioritisation as I mentioned.
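To put rough numbers on the bandwidth argument, here is a back-of-the-envelope sketch with made-up figures (30 KB of critical CSS, three huge images, a 256 KB/s link):

    CSS_KB = 30
    IMAGE_COUNT = 3     # the images far outlast the CSS, so their exact size doesn't matter here
    LINK_KBPS = 256     # assumed link speed

    # Fair-share multiplexing: four streams each get a quarter of the bandwidth,
    # so the CSS trickles in at ~64 KB/s until it completes.
    css_done_fair_share = CSS_KB / (LINK_KBPS / (IMAGE_COUNT + 1))

    # Prioritised: the CSS gets the whole link to itself first.
    css_done_prioritised = CSS_KB / LINK_KBPS

    print(f"CSS complete (fair share):  {css_done_fair_share:.2f}s")   # ~0.47s
    print(f"CSS complete (prioritised): {css_done_prioritised:.2f}s")  # ~0.12s

With everything fair-shared, the render-blocking CSS finishes four times later than when it gets the link to itself first, and the gap grows with the number of competing streams.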
We’ve been optimising for HTTP/1.1 for decades and have squeezed a lot out of it. I suspect we’ve a lot more to get out of HTTP/2 (and HTTP/3 when it comes along too). Check out my upcoming book if interested in finding out more on this topic.

Is there any change in browser paint if you use HTTP/2?

We are thinking about moving a server with many websites to HTTP/2. One concern was that if you use HTTP/2 and download all resources in parallel, it could take longer for the browser to begin painting/rendering the page than with plain HTTP/1, since it would be waiting for all resources to finish downloading instead of starting with what is already there and repainting as more arrives.
I think this is wrong, but I found no article or good explanation I could use to convince the people who think this could be the case.
The browser will paint when it has the resources needed to paint and this will mostly not change under HTTP/2.
I am not sure why you think a browser would wait to download all the resources under HTTP/2 but not under HTTP/1.1?
Certain resources (e.g. CSS, and JavaScript unless set with the async attribute) are render-blocking and must be downloaded before the initial paint happens. In theory HTTP/2 is faster for multiple downloads, so all that should happen if you move to HTTP/2 is that these download sooner and the page paints earlier.
Now, the limited number of connections that browsers used under HTTP/1.1 (typically 6-8) created a natural queuing mechanism, and the browser had to prioritize these critical resources over non-critical resources like images and send them first. With HTTP/2 there is a much higher limit (typically 100-120 parallel downloads, depending on the server), so the browser no longer queues in the same way, and there is a concern that if all the resources are downloaded in parallel they could slow each other down. For example, downloading 50 large print-quality images will use up a lot of bandwidth and might make a more critical CSS resource downloading at the same time take longer. In fact some early movers to HTTP/2 saw exactly this scenario.
This is addressed with prioritization and dependencies in HTTP/2, where the server can send some resource types (e.g. CSS, JavaScript) with a higher priority than others (e.g. images), rather than sending everything with the same priority. So even though all 51 resources are in flight at the same time, the CSS data should be sent first, with the images after. The client can also suggest a prioritization, but it's ultimately the server that decides. This does depend on the server implementation having a good prioritization strategy, so it is good to test before switching over.
The other thing worth bearing in mind is that how to measure this changes under HTTP/2. If a low-priority image is queued for 4 seconds under HTTP/1 waiting for one of the limited number of HTTP/1 connections to become free, and then downloads in 2 seconds, you may have previously measured that as a 2-second download time (which is technically not correct, as you weren't including the queuing time, so it was actually 6 seconds). So if it shows as 5 seconds under HTTP/2, because it is sent immediately, you may think it is 3 seconds slower when in fact it's a full second faster. Just something to be aware of when analysing the impact of any move to HTTP/2. It's much better to look at the overall key metrics (first paint, document complete, etc.) rather than individual requests when measuring the impact, because of this.
Incidentally this is a very interesting topic that goes beyond what can reasonably be expected to be covered in a StackOverflow answer. It's a shameless plug, but I cover a lot of this in a book I am writing on the topic if interested in finding out more on this.
What you mentioned should ideally not happen if the web server obeys the priorities the browser requests with. Over HTTP/2, the browser typically requests CSS with the highest priority, and async JS and images with lower priority. This should ensure that even if your images, JS, and CSS are requested at the same time, the server sends the CSS back first.
The only case where this should not happen is if the browser is not configured correctly.
You can watch the priority of various resources for any page in Chrome DevTools.

Since HTTP 2.0 is rolling out, are tricks like asset bundling still necessary?

How can we know how many browsers support HTTP 2.0?
How can we know how many browsers support HTTP 2.0?
A simple Wikipedia search will tell you. They cover at least 60% of the market, and probably more once you pick apart the browsers with less than 10% share. That's pretty good for something that's only been a standard for a month.
This is a standard people have been waiting for for a long time. It's based on an existing protocol, SPDY, that's had some real world vetting. It gives some immediate performance boosts, and performance in browsers is king. Rapid adoption by browsers and servers is likely. Everyone wants this. Nobody wants to allow their competitors such a significant performance edge.
Since HTTP 2.0 is rolling out, are tricks like asset bundling still necessary?
HTTP/2 is designed to solve many of the existing performance problems of HTTP/1.1. There should be less need for tricks to bundle multiple assets together into one HTTP request.
With HTTP/2, multiple requests can be performed over a single connection. An HTTP/2 server can also push extra content to the client before the client requests it, allowing page assets to be pre-loaded from a single request, even before the HTML is downloaded and parsed.
This article has more details.
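If you want to confirm that a given server actually negotiates HTTP/2 before you start unbundling assets, a quick check from Python works, assuming the httpx package with its HTTP/2 extra is installed; the URL is a placeholder:

    import httpx

    url = "https://example.com/"  # substitute the site you are about to migrate

    with httpx.Client(http2=True) as client:
        resp = client.get(url)
        # http_version is "HTTP/2" when the server negotiated h2 via ALPN,
        # otherwise it falls back to "HTTP/1.1".
        print(url, "served over", resp.http_version)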
When can we move on to the future of technologies and stop those dirty optimizations designed mainly for HTTP 1?
Three things have to happen.
Chrome has to turn on their support by default.
This will happen quickly. Then give a little time for the upgrade to trickle out to your users.
You have to use HTTPS everywhere.
Most browsers right now only support HTTP/2 over TLS. I think everyone was expecting HTTP/2 to only work encrypted to force everyone to secure their web sites. Sort of a carrot/stick, "you want better performance? Turn on basic security." I think the browser makers are going to stick with the "encrypted only" plan anyway. It's in their best interest to promote a secure web.
You have to decide what percentage of your users get degraded performance.
Unlike something like CSS support, HTTP/2 support does not affect your content. Its benefits are mostly performance. You don't need HTTP/1.1 hacks; your site will still look and act the same for HTTP/1.1 clients if you get rid of them. It's up to you when you want to stop putting in the extra work to maintain them.
Like any other hack, hopefully your web framework is doing it for you. If you're manually stitching together icons into a single image, you're doing it wrong. There are all sorts of frameworks which should make this all transparent to you.
It doesn't have to be an all-or-nothing thing either. As the percentage of HTTP/1.1 connections to your site drops, you can do a cost/benefit analysis and start removing the HTTP/1.1 optimizations which are the most hassle and the least benefit. The ones that are basically free, leave them in.
Like any other web protocol, the question is how fast will people upgrade? These days, most browsers update automatically. Mobile users, and desktop Firefox and Chrome users, will upgrade quickly. That's 60-80% of the market.
As always, IE is the problem. While the newest version of IE already supports HTTP/2, it's only available in Windows 10 which isn't even out yet. All those existing Windows users will likely never upgrade. It's not in Microsoft's best interest to backport support into old versions of Windows or IE. In fact, they just announced they're replacing IE. So that's probably 20% of the web population permanently left behind. The statistics for your site will vary.
Large institutional installations like governments, universities and corporations will also be slow to upgrade. Regardless of what browser they have standardized on, they often disable automatic updates in order to more tightly control their environment. If this is a large chunk of your users, you may not be willing to drop the HTTP/1.1 hacks for years.
It will be up to you to monitor how people are connecting to your web site, and how much effort you want to put into optimizing it for an increasingly shrinking portion of your users. The answer is "it depends on who your users are" and "whenever you decide you're ready".

Are there any browsers that support HTML5's Canvas that don't default to an 'Accept-Encoding' of gzip?

I'm creating a webapp where upon connecting to my server, you will have one simple HTML page downloaded with one Canvas element in said page. If your browser doesn't support Canvas, you'll get a message telling you to upgrade your browser in its place. If Canvas works, then there'll be some interactivity between my server and the canvas element.
Since I'm writing my own server, I don't really feel like properly adhering to the W3C standards for dealing with 'Accept-Encoding', since writing a function to properly check which compression is OK is something I'd rather avoid (there are a lot of other things I'd rather work on in my webapp). However, I feel like if a browser can support HTML5's Canvas, then I can assume that it'll deal just fine with gzipping, and I can have all the interactivity between the browser and my site be gzipped without worrying about failure.
Does anybody know of any browsers that have HTML5 capabilities (specifically Canvas in my case) but take issue with Gzipped HTTP responses?
NOTE - I have had 0 experience with non-desktop browsers. My app isn't targeting mobile devices (resolution isn't large enough for what I'm working on), but I would be curious to know whether or not this holds for mobile browsers as well.
Best, and thanks for any responses in advance, Sami
Note that while I cannot think of any browsers with this limitation, HTTP proxies might impose it. Since this is negotiated at the transport level, you can't guarantee support for optional features.
I would advise against making any such assumptions.
The browser in question may support Canvas, but it could still sit behind a proxy which for some unknown reason does not support gzipped responses.
You could instead put your custom web server behind a proxy that is widely used, such as Apache or Squid, and let that proxy negotiate with the client for you. This way your own web server would only have to deal with a single client, which could simplify its implementation significantly. This intermediate proxy could also take care of many security issues for you so that you won't have to worry quite as much about hackers pwning your web server.
Here's an article indicating that 10% of browsers did not support gzip as of 2009: http://www.stevesouders.com/blog/2009/11/11/whos-not-getting-gzip/
That being said, I would think any browser that has support for canvas would also support gzip (it is an easy piece of code to add).
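For what it's worth, honouring Accept-Encoding in a custom server is only a few lines, so you may not need to assume anything. A rough sketch (it ignores q-values, which is a simplification):

    import gzip

    def negotiate_gzip(body: bytes, accept_encoding: str):
        # Return (body, extra_headers), gzipping only if the client allows it.
        offered = {token.split(";")[0].strip().lower()
                   for token in accept_encoding.split(",")}
        if "gzip" in offered:
            return gzip.compress(body), {"Content-Encoding": "gzip"}
        return body, {}

    # A client that does not advertise gzip gets the identity body back.
    print(negotiate_gzip(b"<html>...</html>", "identity")[1])           # {}
    print(negotiate_gzip(b"<html>...</html>", "gzip, deflate, br")[1])  # {'Content-Encoding': 'gzip'}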

Is gzip compression useful for mobile devices?

I'm wondering if anyone has a clue as to whether gzip compression is as useful on mobile devices as it is on a desktop computer.
Will the phone use more battery?
Or will it save some because of the bandwidth saving?
Will the page load faster, or is the decompression process slow on those limited devices?
Does the compressed data actually reach the end-user or is it uncompressed somewhere by the 3G provider? (this may be a stupid question, sorry).
Thank you.
Not a stupid question at all.
The correct trade-off is in favor of gzip.
It turns out that Lempel-Ziv decompression is fairly cheap (much unlike the compression), while bandwidth is usually quite expensive, especially for roaming consumers, and also costs battery power and transfer time.
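You can get a rough feel for this trade-off with the standard library alone; the payload below is synthetic and very repetitive, so treat the numbers as illustrative:

    import gzip
    import time

    # A synthetic, repetitive HTML-like payload of roughly 1 MB.
    payload = b"<div class='row'><span>Hello, mobile world!</span></div>\n" * 20000

    compressed = gzip.compress(payload)
    print(f"original:   {len(payload) / 1024:.0f} KB")
    print(f"compressed: {len(compressed) / 1024:.0f} KB")

    # Time the decompression the phone would have to do.
    start = time.perf_counter()
    for _ in range(100):
        gzip.decompress(compressed)
    elapsed = (time.perf_counter() - start) / 100
    print(f"decompression: {elapsed * 1000:.2f} ms per page")

Real pages won't compress anywhere near this well, but the pattern holds: a few milliseconds of CPU to inflate versus whole seconds of transfer saved on a slow mobile link.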
It always depends on where your bottleneck is.
If it is a very weak CPU, anything that puts a bigger burden on it is bad.
If it is your network connection, compressed data transfer is a huge performance boost.
The strain on the battery should be negligible in any case.
With today's mobile devices, CPU power is certainly weaker than that of a desktop PC, but usually strong enough for gzip compression and decompression. In most cases, the bottleneck will be the network connection, so gzip compression is certainly useful. There will be rare cases, though, where the opposite is true.
You just need to use a little common sense to see if my answer applies to your special case ;-)
One question you may also want to investigate is whether or not the mobile browsers you are considering even support compression. For example, I just checked the request headers sent by my BlackBerry Storm and it does not send any "Accept-Encoding" headers -- which means the server should not send back a compressed response.
