Since HTTP/2 is rolling out, are tricks like asset bundling still necessary?

How can we know how many browsers support HTTP/2?

A simple Wikipedia search will tell you. Supporting browsers cover at least 60% of the market, and probably more once you pick apart the long tail of browsers with under 10% share each. That's pretty good for something that's only been a standard for a month.
This is a standard people have been waiting on for a long time. It's based on an existing protocol, SPDY, that's had some real-world vetting. It gives some immediate performance boosts, and performance in browsers is king. Rapid adoption by browsers and servers is likely: everyone wants this, and nobody wants to allow their competitors such a significant performance edge.
Since HTTP/2 is rolling out, are tricks like asset bundling still necessary?
HTTP/2 is designed to solve many of the existing performance problems of HTTP/1.1. There should be less need for tricks to bundle multiple assets together into one HTTP request.
With HTTP/2 multiple requests can be performed in a single connection. An HTTP/2 server can also push extra content to the client before the client requests, allowing it to pre-load page assets in a single request and even before the HTML is downloaded and parsed.
This article has more details.
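If you want to check whether a given server will actually negotiate HTTP/2, ALPN makes that easy to probe. A sketch using only Python's standard library (the function name is ours):

```python
import socket
import ssl

def negotiated_protocol(host, port=443):
    """Open a TLS connection offering h2 and http/1.1 via ALPN and
    report which protocol the server selects ('h2' means HTTP/2)."""
    ctx = ssl.create_default_context()
    ctx.set_alpn_protocols(["h2", "http/1.1"])
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.selected_alpn_protocol()
```

For example, `negotiated_protocol("www.google.com")` typically returns `"h2"` today.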
When can we move on to newer technologies and stop those dirty optimizations designed mainly for HTTP/1?
Three things have to happen.
Chrome has to turn on its support by default.
This will happen quickly. Then give a little time for the upgrade to trickle out to your users.
You have to use HTTPS everywhere.
Most browsers right now only support HTTP/2 over TLS. I think everyone was expecting HTTP/2 to only work encrypted to force everyone to secure their web sites. Sort of a carrot/stick, "you want better performance? Turn on basic security." I think the browser makers are going to stick with the "encrypted only" plan anyway. It's in their best interest to promote a secure web.
You have to decide what percentage of your users get degraded performance.
Unlike something like CSS support, HTTP/2 support does not affect your content. Its benefits are mostly performance. You don't need HTTP/1.1 hacks. Your site will still look and act the same over HTTP/1.1 if you get rid of them. It's up to you when you want to stop putting in the extra work to maintain them.
Like any other hack, hopefully your web framework is doing it for you. If you're manually stitching together icons into a single image, you're doing it wrong. There are all sorts of frameworks which should make this all transparent to you.
It doesn't have to be an all-or-nothing thing either. As the percentage of HTTP/1.1 connections to your site drops, you can do a cost/benefit analysis and start removing the HTTP/1.1 optimizations which are the most hassle and the least benefit. The ones that are basically free, leave them in.
Like any other web protocol, the question is how fast will people upgrade? These days, most browsers update automatically. Mobile users, and desktop Firefox and Chrome users, will upgrade quickly. That's 60-80% of the market.
As always, IE is the problem. While the newest version of IE already supports HTTP/2, it's only available in Windows 10 which isn't even out yet. All those existing Windows users will likely never upgrade. It's not in Microsoft's best interest to backport support into old versions of Windows or IE. In fact, they just announced they're replacing IE. So that's probably 20% of the web population permanently left behind. The statistics for your site will vary.
Large institutional installations like governments, universities and corporations will also be slow to upgrade. Regardless of what browser they have standardized on, they often disable automatic updates in order to more tightly control their environment. If this is a large chunk of your users, you may not be willing to drop the HTTP/1.1 hacks for years.
It will be up to you to monitor how people are connecting to your web site, and how much effort you want to put into optimizing it for an increasingly shrinking portion of your users. The answer is "it depends on who your users are" and "whenever you decide you're ready".

Related

http2 domain sharding without hurting performance

Most articles consider domain sharding to hurt performance under HTTP/2, but that's not entirely true. A single connection can be reused for different domains under certain conditions:
they resolve to the same IP
for a secure connection, the same certificate must cover both domains
https://www.rfc-editor.org/rfc/rfc7540#section-9.1.1
Is that correct? Is anyone using it?
And what about CDN? Can I have some guarantees that they direct a user to the same server (IP)?
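The two reuse conditions above can be sketched as a simple check. This is a hedged sketch: the function name is ours, and the wildcard matching is simplified compared to real certificate name-matching rules (`fnmatch` lets `*` span multiple labels):

```python
from fnmatch import fnmatch

def can_coalesce(origin_ip, candidate_ip, cert_names, candidate_host):
    """Sketch of the RFC 7540 section 9.1.1 connection-reuse check: the
    new host must resolve to the same IP as the existing connection, and
    the connection's certificate (its subjectAltName entries) must cover
    the new host."""
    if origin_ip != candidate_ip:
        return False
    return any(fnmatch(candidate_host, name) for name in cert_names)
```

So a browser holding a connection to `example.com` whose cert lists `*.example.com` could, in principle, reuse it for `static.example.com` if both resolve to the same address.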
Yup that’s one of the benefits of HTTP/2 and in theory allows you to keep sharding for HTTP/1.1 users and automatically unshard for HTTP/2 users.
The reality is a little more complicated as always - due mostly to implementation issues and servers resolving to different IP addresses as you state. This blog post is a few years old now but describes some of the issues: https://daniel.haxx.se/blog/2016/08/18/http2-connection-coalescing/. Maybe it’s improved since then, but would imagine issues still exist. Also new features like the ORIGIN frame should help but are not widely supported yet.
I think however it's worth revisiting the assumption that sharding is actually good for HTTP/1.1. The costs of setting up new connections (DNS lookup, TCP setup, TLS handshake and then actually sending the HTTP messages) are not immaterial, and studies have shown the six-connection browser limit is rarely even fully used, never mind the extra connections added by sharding. Concatenation, spriting and inlining are usually much better options, and these can still be used for HTTP/2. Trying it on your site and measuring is the best way of being sure!
Incidentally, it is for these reasons (and security) that I'm less keen on using common libraries (e.g. jQuery, Bootstrap... etc.) from their CDNs instead of hosting them locally. In my opinion, the performance benefit of a user already having the version your site uses cached is overstated.
As with all these things, HTTP/1.1 will still work without sharded domains. It may (arguably) be slower, but it won't break. Most users are likely on HTTP/2, so is it really worth adding the complexity for the minority of users? Is this not a way of progressively enhancing your site for people on modern browsers (and encouraging those who aren't to upgrade)? For larger sites (e.g. Google, Facebook... etc.) the minority may still represent a large number of users and the complexity is worth it (and they have the resources and expertise to deal with it); for the rest of us, my recommendation is not to shard, to upgrade to new protocols like HTTP/2 when they become common (like now!), but otherwise to keep complexity down.

Is there any change in browser paint if you use HTTP/2?

We are thinking about moving a server with many websites to HTTP/2. One concern was that with HTTP/2, where all resources download in parallel, it could take longer for the browser to begin painting/rendering the page than with plain HTTP/1, because it would be waiting for all resources to finish downloading instead of just beginning with what is already there and repainting as more arrives.
I think this is wrong, but I found no article or good explanation with which I could prove it to those who think this could be the case.
The browser will paint when it has the resources needed to paint and this will mostly not change under HTTP/2.
I am not sure why you think a browser would wait to download all the resources under HTTP/2 but not under HTTP/1.1?
Certain resources (e.g. CSS, and JavaScript unless marked with the async attribute) are render-blocking and must be downloaded before the initial paint will happen. In theory HTTP/2 is faster for multiple downloads, so all that should happen when you move to HTTP/2 is that these download sooner, and so the page paints earlier.
Now the limited number of connections that browsers used under HTTP/1.1 (typically 6-8) created a natural queuing mechanism and the browser had to prioritize these critical resources over non-critical resources like images and send them first. With HTTP/2 there is a much higher limit (typically 100-120 parallel downloads depending on the server), so the browser no longer prioritizes and there is a concern that if all the resources are downloaded in parallel then they could slow each other down. For example downloading 50 large print-quality images will use up a lot of bandwidth and might make a more critical CSS resource downloading at the same time take longer to download. In fact some early movers to HTTP/2 saw this scenario.
This is addressed with prioritization and dependencies in HTTP/2, where the server can send some resource types (e.g. CSS, JavaScript) with a higher priority than others (e.g. images) rather than sending everything with the same priority. So even though all 51 resources are in flight at the same time, the CSS data should be sent first, with the images after. The client can also suggest a prioritization, but it's the server that ultimately decides. This does depend on the server implementation having a good prioritization strategy, so it is good to test before switching over.
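The weight-based ordering described above can be sketched in a few lines. The weights here are hypothetical, loosely mirroring the idea that render-blocking types outrank images; real HTTP/2 servers use the stream weights/dependencies the client signals:

```python
# Hypothetical priority weights: render-blocking CSS/JS high, images low.
WEIGHTS = {"css": 256, "js": 220, "font": 200, "image": 32}

def send_order(resources):
    """Given (name, type) pairs, return them in the order a prioritizing
    server would send them: highest-weight resource types first."""
    return sorted(resources, key=lambda r: WEIGHTS.get(r[1], 16), reverse=True)
```

With this, a page requesting a large image alongside its stylesheet and script still gets the CSS bytes first.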
The other thing worth bearing in mind is that how you measure this changes under HTTP/2. If a low-priority image was queued for 4 seconds under HTTP/1, waiting for one of the limited number of HTTP/1 connections to become free, and then downloaded in 2 seconds, you may have previously measured that as a 2-second download time (which is technically not correct, as you weren't including the queuing time, so it was actually 6 seconds). If the same image shows as 5 seconds under HTTP/2, because it is sent immediately, you may think it is 3 seconds slower when in fact it's a full second faster. Just something to be aware of when analysing the impact of any move to HTTP/2. Because of this, it's much better to look at the overall key metrics (first paint, document complete... etc.) than at individual requests when measuring the impact.
Incidentally this is a very interesting topic that goes beyond what can reasonably be expected to be covered in a StackOverflow answer. It's a shameless plug, but I cover a lot of this in a book I am writing on the topic if interested in finding out more on this.
What you mentioned should ideally not happen if the web server obeys the priorities the browser requests. Over HTTP/2, a browser typically requests CSS with the highest priority, and async JS and images with lower priority. This should ensure that even if your images, JS and CSS are requested at the same time, the server sends the CSS back first.
The only case where this should not happen is if the browser is not configured correctly.
You can watch priority of various resources for any page within chrome devtools.

HTTP/2 protocol impact on web development?

I would like to bring your attention to something I have been rethinking for days: the new features and impact of the HTTP/2 protocol for web development. I would also like to ask some related questions, because my annual planning is getting less accurate because of HTTP/2.
Since HTTP/2 uses a single, multiplexed connection instead of the multiple connections of HTTP/1.x, domain sharding techniques will no longer be needed.
With HTTP/1.x you may have already put files on different domains to increase parallelism in file transfer to the web browser; content delivery networks (CDNs) do this automatically. But it doesn't help (and can hurt) performance under HTTP/2.
Q1: Will HTTP/2 minimize the need for CDNs?
Concatenating code files: code chunks that would normally be maintained and transferred as separate files are combined into one. The browser then finds and runs the needed code within the concatenated file as needed.
Q2. Will HTTP/2 eliminate the need to concatenate files with similar extensions (css, javascript) and the usage of great Grunt and Gulp tools to do so?
Q3. Also, in order to simplify and keep the question compact, I would ask quite generally: what other impacts of HTTP/2 on web development do you foresee?
Q1: Will HTTP/2 minimize the need for CDNs?
It will certainly shift the balance a bit, provided that you use the right software. I talk about balance because CDNs cost money and management time.
If you are using CDNs to offload traffic you still will need them to offload traffic.
If you are a smallish website (and most websites are, in numerical terms), you will have less of a reason to use a CDN, as latency can be hidden quite effectively with HTTP/2 (provided that you deploy it correctly). HTTP/2 is even better than SPDY; check this article for a use case regarding SPDY.
Also, most of the third-party content that we incorporate into our sites already uses CDNs.
Q2. Will HTTP/2 eliminate the need to concatenate files with similar extensions (css, javascript) and the usage of great Grunt and Gulp tools to do so?
Unfortunately not entirely. Concatenation won't be needed (unless the files you are delivering are extremely small, say a few hundred bytes each), but everything else is still relevant, including minification and adding those ugly query strings for cache busting.
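The usual way to generate those cache-busting query strings is a short content hash, so the URL changes exactly when the file's bytes change. A minimal sketch (the helper name is ours; build tools do the equivalent for you):

```python
import hashlib

def busted_url(path, content):
    """Hypothetical helper: append a short content hash of the file's
    bytes as a cache-busting query string."""
    digest = hashlib.sha256(content).hexdigest()[:8]
    return f"{path}?v={digest}"
```

A stylesheet edit then yields a new `?v=...` value, so long-lived caches never serve the stale version.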
Q3. Also, in order to simplify and keep the question more compact, I would ask quite generally what other impacts of HTTP/2 on web development you can foresee.
This is a tricky question. On the one hand, HTTP/2 arrives at a moment when the web is mature, and developers have whole stacks of things to take care of. HTTP/2 can be seen as a tiny piece to change in such a way that the entire stack doesn't crumble. Indeed, I can imagine many teams selling HTTP/2 to management this way ("It won't be a problem, we promise!").
But from a technical standpoint, HTTP/2 allows for better development workflows. For example, the multiplexing nature of HTTP/2 means that most of the contents of a site can be served over a single connection, allowing some servers to learn about interactions between assets by just observing browser behaviors. The information can be used together with other features of HTTP/2 and the modern web (specifically, HTTP/2 PUSH and the pre-open headers) to hide a lot of latency. Think about how much work that can save developers interested in performance.
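One concrete flavour of the latency-hiding described above is the Link preload header, which some HTTP/2 servers also use as a trigger for server push. A sketch (the helper is hypothetical; real servers and frameworks emit these headers for you):

```python
def preload_headers(assets):
    """Hypothetical helper: build 'Link: <url>; rel=preload; as=type'
    headers from (path, type) pairs, the hint a server can send so the
    client fetches assets before the HTML references them."""
    return [("Link", f"<{path}>; rel=preload; as={kind}") for path, kind in assets]
```

A server that has learned which assets a page needs can attach these headers to the HTML response and save the client a round trip per asset.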
Q1: Will HTTP/2 minimize the need for CDNs?
No. CDNs exist primarily to co-locate content close to the user geographically. The closer you are to the server, the faster you will get the content.
Q2. Will HTTP/2 eliminate the need to concatenate files with similar extensions (css, javascript) and the usage of great Grunt and Gulp tools to do so?
Concatenation is only one of the things a tool like Grunt/Gulp does. Linting, conversions and running tests are other things you would still need a tool for, so they will stay. In terms of concatenation, you would ideally move away from creating a single large concatenated file per type and move to creating smaller concatenated files per module.
Q3. Also, in order to simplify and keep the question more compact, I would ask quite general what may be other impacts of HTTP/2 on web development as you can foresee?
The general idea is that HTTP/2 will not make a huge change to the way we develop things, as it's a protocol-level change. Developers would ideally remove optimizations (like compacting and sharding) which are no longer optimizations under HTTP/2.

Are there any browsers that support HTML5's Canvas that don't default to an 'Accept-Encoding' of gzip?

I'm creating a webapp where, upon connecting to my server, you will have one simple HTML page downloaded with one Canvas element in said page. If your browser doesn't support Canvas, you'll get a message telling you to upgrade your browser in its place. If Canvas works, then there'll be some interactivity between my server and the canvas element.
Since I'm writing my own server, I don't really feel like properly adhering to the standard's rules for 'Accept-Encoding', since writing a function to properly check which compression is acceptable is something I'd rather avoid (there are a lot of other things I'd rather work on in my webapp). However, I feel that if a browser can support HTML5's Canvas, then I can assume it'll deal just fine with gzipping, and I can have all the interactivity between the browser and my site be gzipped without worrying about failure.
Does anybody know of any browsers that have HTML5 capabilities (specifically Canvas in my case) but take issue with Gzipped HTTP responses?
NOTE - I have had 0 experience with non-desktop browsers. My app isn't targeting mobile devices (resolution isn't large enough for what I'm working on), but I would be curious to know whether or not this holds for mobile browsers as well.
Best, and thanks for any responses in advance, Sami
Note that while I cannot think of any browsers with this limitation, HTTP proxies might impose it. Since this is at the transport layer, you can't guarantee support for optional portions.
I would advise against making any such assumptions.
The browser in question may support Canvas, but it could still sit behind a proxy which for some unknown reason does not support gzipped responses.
You could instead put your custom web server behind a proxy that is widely used, such as Apache or Squid, and let that proxy negotiate with the client for you. This way your own web server would only have to deal with a single client, which could simplify its implementation significantly. This intermediate proxy could also take care of many security issues for you so that you won't have to worry quite as much about hackers pwning your web server.
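That said, if you do decide to handle the header yourself, a minimal Accept-Encoding check is only a few lines. A hedged sketch that ignores RFC edge cases such as multiple parameters per coding:

```python
def gzip_acceptable(accept_encoding):
    """Minimal sketch of an Accept-Encoding check: split the header on
    commas, read an optional q-value, and report whether gzip (or '*')
    is acceptable (i.e. present with q > 0)."""
    for part in accept_encoding.split(","):
        coding, _, params = part.strip().partition(";")
        q = 1.0
        if params.strip().startswith("q="):
            try:
                q = float(params.strip()[2:])
            except ValueError:
                q = 0.0
        if coding.strip().lower() in ("gzip", "*") and q > 0:
            return True
    return False
```

So `"gzip, deflate"` passes, while `"gzip;q=0"` (an explicit refusal) does not.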
Here's an article indicating that 10% of browsers did not support gzip as of 2009: http://www.stevesouders.com/blog/2009/11/11/whos-not-getting-gzip/
That being said, I would think any browser that has support for canvas would also support gzip (it is an easy piece of code to add).

HTTP Tools for analysis and capture of requests/response

I am looking for tools that can be used for debugging web applications. I have narrowed my search to the following tools:
HttpWatch
Fiddler
ieHTTPHeaders
LiveHTTPHeaders
It would be great if some of you with experience of these tools could discuss their pros and cons (features that you like, or that you think are missing in some of the tools but present in others). I am mostly torn between HttpWatch and Fiddler; I would prefer Fiddler (being free) if it fulfills all or most of HttpWatch's features (however, I am ready to pay for HttpWatch if it's worth it).
P.S. - I know HttpWatch and Fiddler are far more powerful than the other two tools (let me know if you disagree).
I am sure most of you would want more details as to what I would exactly like to do with these tools however I would like if you could compare these tools taking a broader perspective in mind comparing them as tools in general.
** Disclaimer: Posted by Simtec Limited **
Here's a list of the main advantages of HttpWatch (our product) and Fiddler. Of course we're biased, but we've tried to be objective:
HttpWatch Advantages
Shows requests that were read from the browser cache without going onto the network
Shows page level events, e.g. Render Start, DOM Load, etc
Handles SSL traffic without certificate warnings or requiring changes to trusted root CAs
Reduces 'observer effect' by not requiring HTTP proxy at network level
Groups requests by page
Fiddler Advantages
Works with almost any HTTP client not just Firefox and IE
Can intercept traffic from clients on non-Windows platforms, e.g. mobile devices
Requests can be intercepted and modified on the fly, e.g. change cookie value
Supports plugins to add extra functionality
Wireshark works at the network layer and of course gives you more information than the other tools you have mentioned here; however, if you want to debug web applications by breaking on requests/responses, modifying them and replaying, Fiddler is the tool for you!
Fiddler cannot, however, show TCP-level information; in such cases you will need Network Monitor or Wireshark.
If you specify what exactly you want to do with the 'debugger', one can suggest what's more appropriate for the job.
Fiddler is good and simple to use. Wireshark is also worth considering since it gives a lot of extra information.
You could also use Wireshark which allows you to analyze many protocols including TCP/IP.
A lab exercise from a University lecture on using Wireshark to analyze HTTP can be found here: Wireshark Lab: HTTP
Take a look at HTTP Debugger Pro.
It works with all browsers and custom software and doesn't change proxy settings.