How to test HTTP/2 implementation on non-supporting browser? - nginx

To implement HTTP/2 support on nginx/1.11.1, I'm going to redirect all HTTP requests to HTTPS.
In this case, how will bots and browsers that don't support the HTTP/2 protocol behave and render the page?
Is there a way for me to simulate HTTP/1.1 browser behavior in Chrome Developer Tools?

You are mixing two concepts here that are somewhat related but largely different: the HTTP-to-HTTPS redirect, and HTTP/1.1 vs HTTP/2 negotiation.
Redirecting HTTP requests to HTTPS is fine. Virtually every client (browser, bot, etc.) available these days is capable of making HTTPS requests.
As for HTTP/1.1 vs HTTP/2, nginx will fall back to HTTP/1.1 if the client doesn't support HTTP/2: the protocol is negotiated via ALPN during the TLS handshake, so a client that never advertises h2 simply gets HTTP/1.1 over the same port.
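If you want to watch that negotiation from the client side without hunting down an old browser, a small ALPN probe is enough. Below is a minimal sketch using Python's standard library, with example.com standing in for your own host; it offers the server different protocol lists and prints what the server picks, which is effectively what a non-HTTP/2 client does:

    import socket
    import ssl

    HOST = "example.com"  # placeholder for your own HTTPS host

    def negotiated_protocol(alpn_protocols):
        """Open a TLS connection advertising the given ALPN protocols
        and return what the server actually selected."""
        ctx = ssl.create_default_context()
        ctx.set_alpn_protocols(alpn_protocols)
        with socket.create_connection((HOST, 443)) as sock:
            with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
                return tls.selected_alpn_protocol()

    # A modern browser offers both; an old client effectively offers only HTTP/1.1.
    print(negotiated_protocol(["h2", "http/1.1"]))  # 'h2' if the server has HTTP/2 enabled
    print(negotiated_protocol(["http/1.1"]))        # server falls back to 'http/1.1'

Chrome itself can also be started with the --disable-http2 command-line flag if you want to watch the HTTP/1.1 fallback directly in DevTools.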
Last but not least, this question has very little to do with Stack Overflow; it would be more appropriate on Server Fault or Super User.

Potentially interesting
TCP retransmissions may increase, which on poorly configured devices can lead to aborted connections.

Related

HTTP2 causes more requests/longer load time?

I am starting to look into HTTP/2, running tests on Windows 10/IIS 10. From what I understand, HTTP/2 is enabled there by default for secure connections. Yet when I browse a local site from Chrome 67.0.3396.99, HTTP/2 seems slower and issues more requests.
HTTP connection: (screenshot)
HTTPS connection: (screenshot)
Any idea why this is happening?
Your screenshot shows 8 additional requests being loaded over HTTPS so you are not comparing like for like. Investigate what those are and you'll likely have your answer.
Additionally, while the latest version of IIS uses HTTP/2 by default, you are better off adding the Protocol column to the Network tab to confirm it is actually being used. That way you know whether you are comparing HTTP to HTTPS or HTTP to HTTP/2 (over HTTPS).
HTTP/2 is primarily faster over high-latency connections, so you may not notice much difference over low-latency connections (e.g. if testing against localhost), but it shouldn't really be any slower because of this (except perhaps for a small additional SSL/TLS negotiation time on the initial HTTPS connection).
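If you would rather confirm the protocol from a script than from the DevTools column, a quick sketch with the third-party httpx library works too (assuming pip install httpx[http2]; the localhost URL and verify=False are placeholders for a local test site with a self-signed certificate):

    import httpx  # third-party; install with: pip install httpx[http2]

    # Report which protocol actually served the page, so you know whether you are
    # comparing HTTP/1.1-over-TLS with HTTP/2 rather than plain HTTP with HTTPS.
    with httpx.Client(http2=True, verify=False) as client:  # verify=False only for a local test certificate
        resp = client.get("https://localhost/")             # placeholder for your local IIS site
        print(resp.http_version, resp.status_code)          # e.g. "HTTP/2 200" or "HTTP/1.1 200"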

HTTP/2 compatibility with old/ unsupported browsers

How does a website behave on a client that does not support the HTTP/2 protocol? Is there backward compatibility, i.e. does the server fall back to HTTP/1.x?
Standard web servers will handle HTTP/1.x requests just fine and reply with HTTP/1.x responses. There are simply too many browsers out there that don't speak HTTP/2 yet for a server to drop HTTP/1.x support completely.
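As a quick illustration, this is what an HTTP/1.1-only client sees against such a server; the sketch uses Python's standard library with example.com as a stand-in host. The server answers with an ordinary HTTP/1.1 response even if it also speaks HTTP/2 to newer clients:

    import http.client

    # http.client never negotiates HTTP/2, so the server simply answers over
    # HTTP/1.1, exactly as it would for an old browser.
    conn = http.client.HTTPSConnection("example.com")  # placeholder host
    conn.request("GET", "/")
    resp = conn.getresponse()
    print(resp.version, resp.status)  # version 11 means the reply came back as HTTP/1.1
    conn.close()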

Should simultaneous HTTP connections to a new HTTP 1.1 server create one connection or several connections?

Question
If 2 HTTP requests are made to the same server at the same time from a fresh start, e.g. GET /image1.png HTTP/1.1 and GET /image2.png HTTP/1.1 with no previous connection to the server, should 1 TCP connection be made or 2?
Info
Persistent connections are supported by default in HTTP/1.1; HTTP/1.0 uses the Connection: Keep-Alive header.
It seems pretty clear from reading the RFC that if the above requests are made one after the other, the second request should reuse the connection.
HTTP pipelining means sending multiple requests down the same connection without first waiting for a response. I am not sure where this fits into the answer, though.
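To make the pipelining idea concrete, here is a minimal sketch (assuming a hypothetical local server on port 80 that tolerates pipelined requests) in which both requests are written before any response is read:

    import socket

    # HTTP/1.1 pipelining: both GETs go out on one connection before any
    # response bytes are read; responses must come back in the same order.
    requests = (
        b"GET /image1.png HTTP/1.1\r\nHost: localhost\r\n\r\n"
        b"GET /image2.png HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n"
    )

    with socket.create_connection(("localhost", 80)) as sock:
        sock.sendall(requests)           # two requests, no waiting in between
        raw = b""
        while chunk := sock.recv(4096):  # read until the server closes the connection
            raw += chunk

    print(raw.count(b"HTTP/1.1 "), "responses came back on the single connection")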
If 2 HTTP requests are made to the same server at the same time from a fresh start, e.g. GET /image1.png
If the requests are made by the browser simultaneously (and there is no HTTP proxy server), then there will be two connections made to the server (unless HTTP pipelining is enabled). Per the Wikipedia article on pipelining:
Out of all the major browsers, only Opera based on Presto layout engine had a fully working implementation that was enabled by default. In all other browsers HTTP pipelining is disabled or not implemented.
Internet Explorer 8 does not pipeline requests, due to concerns regarding buggy proxies and head-of-line blocking.
Mozilla browsers (such as Mozilla Firefox, SeaMonkey and Camino) support pipelining, however it is disabled by default. Pipelining is disabled by default to avoid issues with misbehaving servers. When pipelining is enabled, Mozilla browsers use some heuristics, especially to turn pipelining off for older IIS servers.
Konqueror 2.0 supports pipelining, but it's disabled by default.
Google Chrome supports pipelining for HTTP in the stable release as a non-default option (starting with version 18). There is no support for pipelining HTTPS yet. As of version 26, the flag to enable HTTP pipelining in Chrome has been disabled.
So, probably two connections.
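For comparison, the behaviour described above (two simultaneous requests, each on its own connection) can be sketched like this with Python's standard library; example.com and the image paths are placeholders:

    import http.client
    from concurrent.futures import ThreadPoolExecutor

    HOST = "example.com"  # placeholder server

    def fetch(path):
        # Each worker opens its own TCP connection, mirroring what browsers do
        # for simultaneous requests when pipelining is disabled.
        conn = http.client.HTTPSConnection(HOST)
        conn.request("GET", path)
        resp = conn.getresponse()
        body = resp.read()
        conn.close()
        return path, resp.status, len(body)

    with ThreadPoolExecutor(max_workers=2) as pool:
        for path, status, size in pool.map(fetch, ["/image1.png", "/image2.png"]):
            print(path, status, size)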

Which HTTP features are different in HTTPS?

Wikipedia defines HTTPS as a security layer over HTTP:
Technically, it is not a protocol in and of itself; rather, it is the result of simply layering the Hypertext Transfer Protocol (HTTP) on top of the SSL/TLS protocol, thus adding the security capabilities of SSL/TLS to standard HTTP communications.
Logically, it implies that every feature and aspect of HTTP (e.g. methods and status codes) exists in HTTPS.
Should I expect any caveats or differences when switching an existing HTTP REST interface to HTTPS?
There doesn't seem to be anything you can do with HTTP but not with HTTPS. The only limitations/differences relate to the fact that the connection is encrypted. As Eugene mentioned, this includes the fact that HTTPS responses cannot be cached by intermediate proxies. There are, however, some caveats:
HTTP inline content inside HTTPS page
If you start using HTTPS for sites where you originally used HTTP, problems might arise with HTTP inline content, e.g. if you use third-party HTTP services or cross-domain content:
scripts: the Google Maps API
iframes: other sites, Facebook, Google Ads, ...
images: static Google Maps, ...
In that case, many browsers will block the "insecure" HTTP content inside the HTTPS page! For the user, it is very hard to switch this off (especially in Firefox).
The only reliable way around this is to use protocol-relative URLs. So, instead of:
<script src="http://maps.googleapis.com/maps/api/js?v=3.exp&sensor=false"></script>
which would break on an HTTPS page, you just use
<script src="//maps.googleapis.com/maps/api/js?v=3.exp&sensor=false"></script>
which loads over HTTP on an HTTP page and over HTTPS on an HTTPS page. This fixes the problem.
The downside, of course, is that you end up needlessly encrypting a large amount of network traffic that isn't sensitive and wouldn't normally have to be encrypted. This is the cost of the browsers' strict approach to security (a year ago Firefox showed no warning in this situation and I was perfectly happy; the world changes...).
If you don't have a signed SSL certificate for your domain
Another caveat is that if you don't have an SSL certificate for your domain signed by a trusted CA, then users who access your site over HTTPS will have to go through a scary 4-5 step procedure to accept the certificate. It is unprofessional to put an average user (unaware of the issue) through this, so you will have to buy a certificate. If you cannot afford one, browser paranoia often forces you back to the insecure HTTP protocol instead of HTTPS. Again, 6-7 years ago this wasn't the case.
Mixing HTTP and HTTPS - cookie and authorization problems
If you use both HTTP and HTTPS within the same session, you might run into problems because they are sometimes treated as separate sites (even if the rest of the URL is the same). This can be the case with cookies: in some situations they will not be shared between HTTP and HTTPS. HTTP authentication (RFC 2617) is also not shared between HTTP and HTTPS. However, this type of authentication is now very rare on the web, possibly due to the lack of customization of the login form.
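One practical consequence is the Secure cookie attribute: a cookie marked Secure is only ever sent over HTTPS, so it is effectively never shared with the plain-HTTP side of the site. A minimal sketch with Python's standard library (the cookie name and values are illustrative only):

    from http import cookies

    # A Secure cookie is only transmitted over HTTPS, so it is never shared with
    # the plain-HTTP half of a mixed HTTP/HTTPS site.
    jar = cookies.SimpleCookie()
    jar["session"] = "abc123"               # illustrative value
    jar["session"]["secure"] = True
    jar["session"]["domain"] = "example.com"
    print(jar.output())                     # Set-Cookie: session=abc123; Domain=example.com; Secure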
So, if you start using HTTPS, easiest way is then to use HTTPS only.
After several years of running REST services over HTTPS, I am not aware of any other caveats.
Performance Considerations
HTTP vs HTTPS performance
HTTPS vs HTTP speed comparison
HTTPS Client/Browser Caching
Top 7 Myths about HTTPS - note the commentary on HTTPS caching, which browsers handle differently. It's from 2011 though, so browsers may have changed since.
Will web browsers cache content over https
More on why there is no HTTPS proxy caching
Can a proxy server cache SSL GETs? If not, would response body encryption suffice?
UPGRADE command in Websockets via HTTPS
While the WebSocket protocol itself is unaware of proxy servers and firewalls, it features an HTTP-compatible handshake so that HTTP servers can share their default HTTP and HTTPS ports (80 and 443) with a WebSocket gateway or server. The WebSocket protocol defines a ws:// and wss:// prefix to indicate a WebSocket and a WebSocket Secure connection, respectively. Both schemes use an HTTP upgrade mechanism to upgrade to the WebSocket protocol.
http://en.wikipedia.org/wiki/WebSocket
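For reference, the upgrade mechanism mentioned above is an ordinary HTTP/1.1 request carrying Upgrade headers; the wss:// variant simply performs the same exchange inside TLS. A minimal sketch of the client side of that handshake, headers only, with example.com and /chat as placeholder values:

    import base64
    import os

    # Client side of the WebSocket opening handshake (RFC 6455): a plain
    # HTTP/1.1 GET with Upgrade headers. Over wss:// the same bytes travel inside TLS.
    key = base64.b64encode(os.urandom(16)).decode()  # random nonce; the server echoes it back hashed
    handshake = (
        "GET /chat HTTP/1.1\r\n"
        "Host: example.com\r\n"
        "Upgrade: websocket\r\n"
        "Connection: Upgrade\r\n"
        f"Sec-WebSocket-Key: {key}\r\n"
        "Sec-WebSocket-Version: 13\r\n"
        "\r\n"
    )
    print(handshake)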
As someone who writes REST services, I do not see any caveats when switching a REST interface from HTTP to HTTPS. If you do run into any, you would most likely have them over plain HTTP as well.

Http 1.1 pipelining support

I enabled HTTP pipelining support in Google Chrome and observed some problems in how data is received, even on big sites like amazon.com. What is the current support for pipelining among major servers? I wonder if the issues could also be caused by our transparent proxy (Microsoft TMG), although http://technet.microsoft.com/en-us/library/cc302548.aspx mentions that
"ISA Server does not implement pipelining. Client request pipelining is supported, allowing a client to make multiple requests without waiting for each response. However, pipelining when sending requests to the Web server is not supported." I think this should not cause data to be received incorrectly from a pipelining-aware web server.
