We are thinking about moving a server with many websites to HTTP/2. One concern was that if you use HTTP/2 and download all resources in parallel, the browser might take longer to begin painting / rendering the page than with HTTP/1.1, because it would be waiting for all resources to be downloaded instead of just beginning with what is already there and repainting as more arrives.
I think this is wrong, but I found no article or good explanation with which I could prove it to the people who think this could be the case.
The browser will paint when it has the resources needed to paint and this will mostly not change under HTTP/2.
I am not sure why you think a browser would wait to download all the resources under HTTP/2 but not under HTTP/1.1?
Certain resources (e.g. CSS and JavaScript, unless marked with the async attribute) are render blocking and must be downloaded before the initial paint happens. In theory HTTP/2 is faster for multiple downloads, so all that should happen when you move to HTTP/2 is that these download sooner and the page paints earlier.
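As a quick way to see which resources on a given page fall into that render-blocking category, here is a rough sketch to run in the browser console. It is only an approximation of the real rules (module scripts, media queries and disabled stylesheets complicate matters), but it is usually enough to start the conversation:

```typescript
// Rough sketch: list resources that usually block the first paint.
// This approximates the real rules (module scripts, media queries, etc. are ignored).
const blockingStyles = Array.from(
  document.querySelectorAll<HTMLLinkElement>('link[rel="stylesheet"]:not([media="print"])')
).map(l => l.href);

const blockingScripts = Array.from(
  document.querySelectorAll<HTMLScriptElement>('head script[src]:not([async]):not([defer])')
).map(s => s.src);

console.log({ blockingStyles, blockingScripts });
```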
Now, the limited number of connections browsers used under HTTP/1.1 (typically 6-8 per host) created a natural queuing mechanism: the browser had to prioritize critical resources over non-critical resources like images and send them first. With HTTP/2 there is a much higher limit (typically 100-120 parallel downloads, depending on the server), so that forced queuing disappears, and there is a concern that if all the resources are downloaded in parallel they could slow each other down. For example, downloading 50 large print-quality images will use up a lot of bandwidth and might make a more critical CSS resource downloading at the same time take longer to arrive. In fact some early movers to HTTP/2 saw exactly this scenario.
This is addressed with prioritization and dependencies in HTTP/2, where the server can send some resource types (e.g. CSS, JavaScript) with a higher priority than others (e.g. images), rather than sending everything with the same priority. So even though all 51 resources are in flight at the same time, the CSS data should be sent first, with the images after. The client can also suggest a prioritization, but it's the server that ultimately decides. This does depend on the server implementation having a good prioritization strategy, so it is good to test before switching over.
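As an illustration of the client's side of that, here is a minimal sketch using Node.js's built-in `node:http2` client (the host and paths are placeholders). The weights are only suggestions to the server; whether they are honoured depends on the server implementation, as noted above:

```typescript
// Sketch: one HTTP/2 connection, CSS requested with a higher weight than an image.
// https://example.com and the paths are placeholders.
import { connect, constants } from 'node:http2';

const session = connect('https://example.com');
const { HTTP2_HEADER_PATH, HTTP2_HEADER_STATUS } = constants;

function get(path: string, weight: number): void {
  // `weight` (1-256) is sent as the stream's priority information;
  // a well-behaved server uses it to decide which response data to send first.
  const req = session.request({ [HTTP2_HEADER_PATH]: path }, { weight });
  req.on('response', headers => {
    console.log(path, '->', headers[HTTP2_HEADER_STATUS]);
  });
  req.resume(); // discard the body; we only care about ordering here
  req.end();
}

get('/styles/critical.css', 256);  // render blocking: highest weight
get('/images/hero-large.jpg', 16); // nice to have: low weight

setTimeout(() => session.close(), 5000); // crude shutdown for the sketch
```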
The other thing worth bearing in mind is that how you measure this changes under HTTP/2. If a low-priority image is queued for 4 seconds under HTTP/1.1, waiting for one of the limited number of connections to become free, and then downloads in 2 seconds, you may previously have measured that as a 2 second download time (which is technically not correct, as you weren't including the queuing time, so it was actually 6 seconds). If the same image shows as 5 seconds under HTTP/2, because it is sent immediately, you may think it is 3 seconds slower when in fact it's a full second faster. Just something to be aware of when analysing the impact of any move to HTTP/2. Because of this, it's much better to look at the overall key metrics (first paint, document complete... etc.) rather than individual requests when measuring the impact.
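A rough way to see that split in the browser is the Resource Timing API. The sketch below treats time before the request was actually sent as "queued" and the rest as "transfer"; that is only an approximation, and cross-origin resources report zeros unless they send Timing-Allow-Origin:

```typescript
// Sketch: split each resource's time into "queued/stalled" vs "transfer",
// so HTTP/1.1 and HTTP/2 runs can be compared fairly. Run in the browser console.
const entries = performance.getEntriesByType('resource') as PerformanceResourceTiming[];

for (const e of entries) {
  const queued = e.requestStart > 0 ? e.requestStart - e.startTime : 0; // waiting before the request went out
  const transfer = e.responseEnd - (e.requestStart || e.startTime);     // time to receive the response
  console.log(
    e.name,
    e.nextHopProtocol, // "http/1.1", "h2", ...
    `queued ~${queued.toFixed(0)}ms`,
    `transfer ~${transfer.toFixed(0)}ms`,
    `total ${(e.responseEnd - e.startTime).toFixed(0)}ms`
  );
}
```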
Incidentally this is a very interesting topic that goes beyond what can reasonably be expected to be covered in a StackOverflow answer. It's a shameless plug, but I cover a lot of this in a book I am writing on the topic if interested in finding out more on this.
What you mention should ideally not happen if the web server obeys the priorities the browser requests with. Over HTTP/2, the browser typically requests CSS with the highest priority, and async JS and images with lower priority. This should ensure that even if your images, JS and CSS are requested at the same time, the server sends the CSS back first.
The only case where this should not happen is if the browser or server is not configured correctly.
You can watch the priority of the various resources for any page within Chrome DevTools.
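If you want to nudge those priorities from your own code rather than just observe them, one option is the fetchpriority attribute. Treat it as a hint only: browser support varies, and the URLs below are placeholders.

```typescript
// Sketch: nudging the browser's own prioritisation with the fetchpriority attribute.
// It is a hint, not a guarantee; unsupported browsers simply ignore it.
const hero = document.createElement('img');
hero.src = '/images/hero.jpg';
hero.setAttribute('fetchpriority', 'high'); // above-the-fold image: fetch early
document.body.appendChild(hero);

const analytics = document.createElement('script');
analytics.src = '/js/analytics.js';
analytics.async = true;
analytics.setAttribute('fetchpriority', 'low'); // nothing paints later because of this
document.head.appendChild(analytics);
```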
I can understand why multiplexing and server push help speed up web page loading and reduce the workload on the server side. But I have also learned that the binary protocol, header compression, and prioritization of requests also contribute to the performance improvements of HTTP/2 over HTTP/1.1. How do these three features actually contribute to the improvements?
Binary protocol
This actually doesn’t help that much, IMHO, other than enabling multiplexing (which DOES help a lot with performance). Yes, it’s easier for a program to parse binary packets than text, but I don’t think that’s going to make a massive performance boost. The main reasons to go binary, as I say, are the other benefits it enables (multiplexing and header compression) and easier parsing, rather than raw performance.
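To make the "easier to parse" point concrete, here is a sketch that decodes the fixed 9-byte HTTP/2 frame header described in RFC 7540: a few shifts and masks, with no line endings or whitespace to worry about.

```typescript
// Sketch: decode the fixed 9-byte HTTP/2 frame header (RFC 7540, section 4.1).
// length: 24 bits, type: 8 bits, flags: 8 bits, 1 reserved bit + 31-bit stream id.
interface FrameHeader {
  length: number;
  type: number;
  flags: number;
  streamId: number;
}

function parseFrameHeader(buf: Uint8Array): FrameHeader {
  if (buf.length < 9) throw new Error('need at least 9 bytes');
  return {
    length: (buf[0] << 16) | (buf[1] << 8) | buf[2],
    type: buf[3],
    flags: buf[4],
    // Mask off the reserved high bit of the 32-bit stream identifier.
    streamId: ((buf[5] & 0x7f) << 24) | (buf[6] << 16) | (buf[7] << 8) | buf[8],
  };
}

// A HEADERS frame (type 0x1) with END_HEADERS (0x4), a 20-byte payload, on stream 1:
console.log(parseFrameHeader(new Uint8Array([0, 0, 20, 0x1, 0x4, 0, 0, 0, 1])));
// -> { length: 20, type: 1, flags: 4, streamId: 1 }
```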
Header compression
This can have a big potential impact. Most requests (and responses) repeat a LOT of data. So compressing headers, which works by replacing repeated headers with references across requests rather than by compressing within a request the way HTTP body compression works, can significantly reduce the size of requests (but less so for responses, where the headers are often not a significant portion of the total response).
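Here is a toy sketch of that cross-request idea. It is not real HPACK (which uses static and dynamic tables plus Huffman coding), but it shows why repeated headers cost almost nothing after the first request on a connection:

```typescript
// Toy sketch of the idea behind HPACK (not the real algorithm): header lines
// already sent on this connection are replaced by a small index reference.
const table: string[] = []; // per-connection "dynamic table" of header lines

function encode(headers: Record<string, string>): (string | number)[] {
  return Object.entries(headers).map(([name, value]) => {
    const line = `${name}: ${value}`;
    const idx = table.indexOf(line);
    if (idx !== -1) return idx; // already sent: a tiny index instead of the full line
    table.push(line);
    return line;                // first occurrence: send it in full
  });
}

const common = {
  'user-agent': 'Mozilla/5.0 (X11; Linux x86_64) ...',
  accept: 'text/html,application/xhtml+xml',
  cookie: 'session=abc123; theme=dark',
};

console.log(encode({ ...common, ':path': '/' }));          // everything sent in full
console.log(encode({ ...common, ':path': '/styles.css' })); // repeats become indices
```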
Prioritisation of requests
This is one of the more interesting parts of HTTP/2, which has huge potential but has not been optimised for yet. Think of it like this: imagine you have 3 critical CSS files and 3 huge images to download. Under HTTP/1.1, 6 connections would be opened and all 6 items would download in parallel. This may seem fine, but it means the less critical image files are using up bandwidth that would be better spent on the critical CSS files. With HTTP/2 you can say “download the critical CSS first with high priority and only when they are done, look at those 3 image files”. Unfortunately, despite the fact that HTTP/2 has a prioritisation model that allows prioritisation to be as complex as you want (too complex, some argue!), browsers and servers don’t currently use it well (and website owners and web developers currently have very little way to influence it at all). In fact bad prioritisation decisions can actually make HTTP/2 slower than HTTP/1.1, as the 6-connection limit is lifted and hundreds of resources can all download in parallel, all fighting over the same bandwidth. I suspect there will be a lot more research and change here in implementations, but there shouldn’t need to be much change in the spec, as it already allows for very complex prioritisation, as I mentioned.
We’ve been optimising for HTTP/1.1 for decades and have squeezed a lot out of it. I suspect we’ve a lot more to get out of HTTP/2 (and HTTP/3 when it comes along too). Check out my upcoming book if interested in finding out more on this topic.
I would like to bring your attention to something I have been re-thinking for days: the new features of the HTTP/2 protocol and their impact on web development. I would also like to ask some related questions, because my annual planning is getting less accurate because of HTTP/2.
Since HTTP/2 uses a single, multiplexed connection instead of the multiple connections of HTTP/1.x, domain sharding techniques will not be needed any more.
With HTTP/1.x you may have already put files on different domains to increase parallelism in file transfer to the web browser; content delivery networks (CDNs) do this automatically. But it doesn't help – and can hurt – performance under HTTP/2.
Q1: Will HTTP/2 minimize the need for CDNs?
Concatenating code files. Code chunks that would normally be maintained and transferred as separate files are combined into one. The browser then finds and runs the needed code within the concatenated file as needed.
Q2. Will HTTP/2 eliminate the need to concatenate files with similar extensions (css, javascript) and the usage of great Grunt and Gulp tools to do so?
Q3. Also, in order to simplify and keep the question more compact, I would ask quite generally: what other impacts of HTTP/2 on web development can you foresee?
Q1: Will HTTP/2 minimize the need for CDNs?
It will certainly shift the balance a bit, provided that you use the right software. I talk about balance because CDNs cost money and management time.
If you are using CDNs to offload traffic you still will need them to offload traffic.
If you are a smallish website (and most websites are, in numerical terms), you will have less of a reason to use a CDN, as latency can be hidden quite effectively with HTTP/2 (provided that you deploy it correctly). HTTP/2 is even better than SPDY in this respect; check this article for a use case regarding SPDY.
Also, most of the third-party content that we incorporate into our sites already uses CDNs.
Q2. Will HTTP/2 eliminate the need to concatenate files with similar extensions (css, javascript) and the usage of great Grunt and Gulp tools to do so?
Unfortunately not. Concatenation itself won't be needed any more (unless the files you are delivering are extremely small, say a few hundred bytes), but everything else is still relevant, including minification and adding those ugly query strings for cache busting.
Q3. Also, in order to simplify and keep the question more compact, I would ask quite generally what other impacts of HTTP/2 on web development you can foresee.
This is a tricky question. On the one hand, HTTP/2 arrives at a moment when the web is mature, and developers have whole stacks of things to take care of. HTTP/2 can be seen as a tiny piece to change in such a way that the entire stack doesn't crumble. Indeed, I can imagine many teams selling HTTP/2 to management this way ("It won't be a problem, we promise!").
But from a technical standpoint, HTTP/2 allows for better development workflows. For example, the multiplexing nature of HTTP/2 means that most of the contents of a site can be served over a single connection, allowing some servers to learn about interactions between assets just by observing browser behaviour. That information can be used together with other features of HTTP/2 and the modern web (specifically, HTTP/2 PUSH and preload headers) to hide a lot of latency. Think about how much work that can save developers interested in performance.
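As a simplified sketch of the PUSH side of that, here is what it looks like with Node.js's built-in `node:http2` server; the certificate paths, file contents and port are placeholders, and push must of course be enabled and accepted by the client.

```typescript
// Sketch: HTTP/2 server push with Node's built-in http2 module.
// The certificate paths, content and port are placeholders.
import { createSecureServer, constants } from 'node:http2';
import { readFileSync } from 'node:fs';

const server = createSecureServer({
  key: readFileSync('server-key.pem'),
  cert: readFileSync('server-cert.pem'),
});

server.on('stream', (stream, headers) => {
  if (headers[constants.HTTP2_HEADER_PATH] !== '/') {
    stream.respond({ ':status': 404 });
    stream.end();
    return;
  }

  // Push the stylesheet the page is known to need, before the browser asks for it.
  stream.pushStream({ ':path': '/styles.css' }, (err, pushStream) => {
    if (err) return; // push may be disabled by the client's settings
    pushStream.respond({ ':status': 200, 'content-type': 'text/css' });
    pushStream.end('body { font-family: sans-serif; }');
  });

  stream.respond({ ':status': 200, 'content-type': 'text/html' });
  stream.end('<link rel="stylesheet" href="/styles.css"><p>Hello over HTTP/2</p>');
});

server.listen(8443);
```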
Q1: Will HTTP/2 minimize the need for CDNs?
No. CDNs are primarily there to co-locate content close to the user geographically. The closer you are to the server, the faster you will get the content.
Q2. Will HTTP/2 eliminate the need to concatenate files with similar extensions (css, javascript) and the usage of great Grunt and Gulp tools to do so?
Concatenation is only one part of what a tool like Grunt/Gulp does. Linting, conversions and running tests are other things you would still need a tool for, so those tools will stay. In terms of concatenation, you would ideally move away from creating a single large concatenated file per type and move to creating smaller concatenated files per module.
Q3. Also, in order to simplify and keep the question more compact, I would ask quite generally what other impacts of HTTP/2 on web development you can foresee.
The general idea is that HTTP/2 will not make a huge change to the way we develop things, as it is a protocol-level change. Developers would ideally remove optimizations (like concatenating and sharding) which are no longer optimization techniques under HTTP/2.
How can we know how many browsers support HTTP 2.0?
A simple Wikipedia search will tell you. Browsers that support it cover at least 60% of the market, and probably more once you pick apart the "less than 10%" other browsers. That's pretty good for something that's only been a standard for a month.
This is a standard people have been waiting for for a long time. It's based on an existing protocol, SPDY, that's had some real world vetting. It gives some immediate performance boosts, and performance in browsers is king. Rapid adoption by browsers and servers is likely. Everyone wants this. Nobody wants to allow their competitors such a significant performance edge.
Since HTTP/2 is rolling out, are tricks like asset bundling still necessary?
HTTP/2 is designed to solve many of the existing performance problems of HTTP/1.1. There should be less need for tricks to bundle multiple assets together into one HTTP request.
With HTTP/2, multiple requests can be performed over a single connection. An HTTP/2 server can also push extra content to the client before the client requests it, allowing it to pre-load page assets in a single request, even before the HTML is downloaded and parsed.
This article has more details.
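A minimal sketch of that multiplexing, assuming Node.js's `node:http2` client and a placeholder host: many requests go out at once over one connection, instead of being queued behind a small pool of connections or bundled into a single asset.

```typescript
// Sketch: many requests multiplexed over a single HTTP/2 connection.
// https://example.com and the asset paths are placeholders.
import { connect } from 'node:http2';

const session = connect('https://example.com');
const assets = ['/app.css', '/app.js', '/logo.svg', '/api/config.json'];
let remaining = assets.length;

for (const path of assets) {
  const req = session.request({ ':path': path }); // all streams share one TCP/TLS connection
  let bytes = 0;
  req.on('data', chunk => (bytes += chunk.length));
  req.on('end', () => {
    console.log(`${path}: ${bytes} bytes`);
    if (--remaining === 0) session.close();
  });
  req.end();
}
```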
When can we move on to the future of technologies and stop those dirty optimizations designed mainly for HTTP 1?
Three things have to happen.
Chrome has to turn on its support by default.
This will happen quickly. Then give a little time for the upgrade to trickle out to your users.
You have to use HTTPS everywhere.
Most browsers right now only support HTTP/2 over TLS. I think everyone was expecting HTTP/2 to only work encrypted to force everyone to secure their web sites. Sort of a carrot/stick, "you want better performance? Turn on basic security." I think the browser makers are going to stick with the "encrypted only" plan anyway. It's in their best interest to promote a secure web.
You have to decide what percentage of your users get degraded performance.
Unlike something like CSS support, HTTP/2 support does not affect your content. Its benefits are mostly performance. You don't need HTTP/1.1 hacks. Your site will still look and act the same for HTTP/1.1 if you get rid of them. It's up to you when you want to stop putting in the extra work to maintain them.
Like any other hack, hopefully your web framework is doing it for you. If you're manually stitching together icons into a single image, you're doing it wrong. There are all sorts of frameworks which should make this all transparent to you.
It doesn't have to be an all-or-nothing thing either. As the percentage of HTTP/1.1 connections to your site drops, you can do a cost/benefit analysis and start removing the HTTP/1.1 optimizations which are the most hassle and the least benefit. The ones that are basically free, leave them in.
Like any other web protocol, the question is how fast will people upgrade? These days, most browsers update automatically. Mobile users, and desktop Firefox and Chrome users, will upgrade quickly. That's 60-80% of the market.
As always, IE is the problem. While the newest version of IE already supports HTTP/2, it's only available in Windows 10 which isn't even out yet. All those existing Windows users will likely never upgrade. It's not in Microsoft's best interest to backport support into old versions of Windows or IE. In fact, they just announced they're replacing IE. So that's probably 20% of the web population permanently left behind. The statistics for your site will vary.
Large institutional installations like governments, universities and corporations will also be slow to upgrade. Regardless of what browser they have standardized on, they often disable automatic updates in order to more tightly control their environment. If this is a large chunk of your users, you may not be willing to drop the HTTP/1.1 hacks for years.
It will be up to you to monitor how people are connecting to your web site, and how much effort you want to put into optimizing it for an increasingly shrinking portion of your users. The answer is "it depends on who your users are" and "whenever you decide you're ready".
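One way to gather that data, sketched here with Node.js's built-in `node:http2` module (certificate paths and port are placeholders), is to serve HTTP/2 with an HTTP/1.1 fallback on the same port and count which protocol each request arrives over:

```typescript
// Sketch: count how many requests arrive over HTTP/2 vs HTTP/1.1.
// Certificate paths and port are placeholders; allowHTTP1 enables the fallback.
import { createSecureServer } from 'node:http2';
import { readFileSync } from 'node:fs';

const counts: Record<string, number> = {};

const server = createSecureServer(
  {
    key: readFileSync('server-key.pem'),
    cert: readFileSync('server-cert.pem'),
    allowHTTP1: true, // older clients negotiate plain HTTPS/1.1 via ALPN
  },
  (req, res) => {
    counts[req.httpVersion] = (counts[req.httpVersion] ?? 0) + 1; // "2.0" or "1.1"
    res.end('ok');
  }
);

server.listen(8443);
setInterval(() => console.log('protocol share:', counts), 60_000);
```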
So I'm running a static landing page for a product/service I'm selling, and we're advertising using AdWords & similar. Naturally, page load speed is a huge factor here to maximize conversions.
Pros of HTTP/2:
Headers are compressed, so less data goes over the wire.
Server push allows the server to send resources before they are requested, which has MANY benefits, such as replacing base64-inlined images, sprites, etc.
Multiplexing over a single connection significantly improves load time.
Cons of HTTP/2:
Mandatory TLS, which slows down load speed.
So I'm torn. On one side, HTTP/2 has many improvements. On the other, maybe it would be faster to keep avoiding unnecessary TLS and continue using base64/sprites to reduce requests.
The total page size is ~1MB.
Would it be worth it?
The performance impact of TLS on modern hardware is negligible. Transfer times will most likely be network-bound. It is true that additional network round-trips are required to establish a TLS session but compared to the time required to transfer 1MB, it is probably negligible (and TLS session tickets, which are widely supported, also save a round-trip).
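If you want to check that on your own page rather than take it on trust, here is a sketch using the browser's Resource Timing entries (cross-origin resources report zeros unless they send Timing-Allow-Origin):

```typescript
// Sketch: estimate TLS handshake cost per connection from the browser's timing entries.
const entries = performance.getEntriesByType('resource') as PerformanceResourceTiming[];

for (const e of entries) {
  // secureConnectionStart is 0 when no TLS handshake happened (or the timing is hidden).
  if (e.secureConnectionStart > 0) {
    const tcp = e.secureConnectionStart - e.connectStart; // TCP connect portion
    const tls = e.connectEnd - e.secureConnectionStart;   // TLS handshake portion
    console.log(e.name, `TCP ~${tcp.toFixed(0)}ms`, `TLS ~${tls.toFixed(0)}ms`);
  }
}
```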
The evidence is that improving load speed is definitely worth the effort (see the business case for speed).
The TLS session is a pain and it is unfortunate that the browser vendors are insisting on it, as there is nothing in HTTP/2 that prevents plain text. For a low-load system, where CPU costs are not the limiting factor, TLS essentially costs you one RTT (round-trip time on the network).
HTTP/2, and especially HTTP/2 push, can save you many RTTs and thus can be a big win even with the TLS cost. But the best way to determine this is to try it for your page. Make sure you use an HTTP/2 server that supports push (e.g. Jetty), otherwise you don't get all the benefits. Here is a good demo of push with SPDY (which uses the same mechanism as HTTP/2):
How many HTTP requests do these 1000 kB require? With a page that large, I don't think it matters much for the end user experience. TLS is here to stay though... I don't think you should avoid it just because it may slow your site down. If you do it right, it won't slow your site down.
Read more about SSL not being slow anymore: https://istlsfastyet.com/
Mandatory TLS doesn't slow down page load speed if it's SPDY/3.1 or HTTP/2 based, because both support multiplexed request streams. Only non-SPDY, non-HTTP/2 TLS would be slower than non-HTTPS.
Check out https://community.centminmod.com/threads/nginx-spdy-3-1-vs-h2o-http-2-vs-non-https-benchmarks-tests.2543/ which clearly illustrates why SPDY/3.1 and HTTP/2 over TLS are faster for overall page loads. HTTP/2 allows multiplexing over several hosts at the same time, while SPDY/3.1 allows multiplexing per host.
The best thing to do is to test both non-HTTPS and HTTP/2 or SPDY/3.1 over HTTPS and see which is best for you. Since you have a static landing page, that makes testing much easier to do. You can do something similar to the page at https://h2ohttp2.centminmod.com/flags.html, where HTTP/2, SPDY and non-HTTPS are set up on the same server so that all combinations can be tested and compared.
I'm developing a service where people can stream multiple audio files at the same time.
Unfortunately, when streaming about 4 streams simultaneously, Chrome's HTTP connection limit seems to kick in: new stream requests only arrive at the server when a previous connection is closed.
Interestingly enough, I can play 10+ videos at the same time on YouTube.
What kind of technique could YouTube have used here to circumvent the browser's simultaneous http connection limit?
The crucial point here I suppose is that YouTube streams are not directly controlled by the browser, they are embedded Flash players which use streams that are handled by Flash. If you want to hand off the streaming process to an external app/library (Flash, Java etc) you can circumvent these limitations quite easily.
The other point is that YouTube has a huge CDN, so there is no guarantee you're getting any two videos from the same server, which would also help to circumvent concurrency limitations (to a point, at least).
I'm not surprised that Chrome stops you after a while, because Google did a load of research and experiments regarding browser concurrency and relative efficiency a while ago, and I remember reading somewhere that they concluded that 3-4 concurrent connections to the same server represented the most efficient data transfer architecture over straight HTTP. Annoyingly, I can't find a reputable source to reference for that (although I got it from one in the first place); however, this is related and probably part of the same research programme.
This is also the sort of research Facebook gets quite heavily involved in, and you might find some useful information over at http://developers.facebook.com/ if you can be bothered sifting through the rubbish to find it...