Response.Flush with HTTP/2 - asp.net

Background
In versions of Safari that support HTTP/2 (i.e. v9+) running on macOS "El Capitan" v10.11 or newer, a page served from IIS 10 over HTTP/2 (e.g. on Windows Server 2016 / Windows 10) will not load if it contains a "Response.Flush": it simply hangs with a white screen, and CPU usage on the web server spikes while this happens.
This thread suggests that when Response.Flush is used, IIS switches the protocol from HTTP/2 back to HTTP/1.1. Safari cannot handle this, while all other browsers seemingly can.
Demos from the link above:
Working - http://limoeventplanner.com/safari-test.asp
Not working - https://limoeventplanner.com/safari-test.asp
I appreciate that the solution to this problem may lie elsewhere (I currently have a bug open with webkit), so I will try to make my questions focused...
TL;DR
Does using Response.Flush still make sense in an HTTP/2 environment?
Is the "downgrade" to HTTP/1.1 by IIS the expected behaviour in this scenario? If so, why?
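For context on what a flush implies on the wire: under HTTP/1.1, a response of unknown length that is flushed early must use chunked transfer encoding, whereas under HTTP/2 the flushed bytes simply become DATA frames on the stream, so there is no protocol-level reason a flush cannot work. A minimal sketch of the HTTP/1.1 chunked framing that each flush produces (illustrative only; IIS does this internally):

```python
def chunk(data: bytes) -> bytes:
    """Frame one flushed buffer as an HTTP/1.1 chunk:
    hex length, CRLF, payload, CRLF."""
    return b"%x\r\n%s\r\n" % (len(data), data)

def last_chunk() -> bytes:
    """Zero-length chunk that terminates the response body."""
    return b"0\r\n\r\n"

# Two flushes followed by end-of-response:
body = (chunk(b"<html><body>first part")
        + chunk(b"second part</body></html>")
        + last_chunk())
```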

Related

Why is HTTP/2 slower for me in Firefox?

There's a very interesting HTTP/2 demo that Akamai have on their site:
https://http2.akamai.com/demo
HTTP/2 (the future of HTTP) allows assets to be downloaded concurrently over a single TCP connection, reducing the need for sprite sheets and concatenation... As I understand it, it should always be quicker on sites with lots of requests (like in the demo).
When I try the demo in Chrome or Safari it is indeed much faster, but when I've tested it in Firefox it's consistently SLOWER. Same computer, same connection.
Why is this?
HTTP/2 is apparently supported by all major browsers, including Firefox, so it should work fine, but in this real-world demonstration it is slower 80% of the time. (In Chrome and Safari it's faster 100% of the time.)
I tried again on the following Monday after ensuring I'd cleared all my caches:
My OS: El Capitan Version 10.11.3 (15D21) with Firefox Version 44.0.2
UPDATE (APR 2016)
Now running Firefox 45.0.1:
Still slower!
You seem to have a pretty small latency and a very fast network.
My typical results for HTTP/1.1 are latency=40ms, load_time=3.5s, and HTTP/2 is consistently 3 times faster.
With a network such as yours, other effects may come into play.
In my experience one of the most important is the cipher that is actually negotiated.
HTTP/2 mandates the use of very strong ciphers, while HTTP/1.1 (over TLS) allows for far weaker, and therefore faster, ciphers.
In order to compare apples to apples, you would need to make sure that the same cipher is used. For me, for this Akamai demo, the same cipher was used.
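For reference, RFC 7540 (Appendix A) blacklists a long list of older suites for HTTP/2; the surviving ones are roughly the ephemeral-key AEAD suites. A deliberately simplified heuristic for OpenSSL-style suite names — my own sketch, not the full blacklist:

```python
def http2_compatible(cipher_name: str) -> bool:
    """Rough check whether a cipher suite is acceptable for HTTP/2:
    it must offer forward secrecy (ephemeral key exchange) and use an
    AEAD mode such as GCM or ChaCha20-Poly1305. This is a
    simplification of the RFC 7540 Appendix A blacklist."""
    has_pfs = cipher_name.startswith(("ECDHE-", "DHE-"))
    is_aead = "GCM" in cipher_name or "CHACHA20" in cipher_name
    return has_pfs and is_aead

# ECDHE + AES-GCM passes; static-RSA or CBC-mode suites do not.
```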
The other thing that may be important is that the HTTP/1.1 sources are downloaded from http1.akamai.com, while for HTTP/2 they are downloaded from http2.akamai.com. For me they resolve to different addresses.
One should also check how precise the time reported in the demo is :)
The definitive answer can only come from a network trace with tools like Wireshark.
For networks worse than yours, probably the majority, HTTP/2 is typically a clear winner due to HTTP/2 optimizations related to latency (in particular, multiplexing).
Latency matters more than absolute load time if you're mixing small and big resources. E.g. if you're loading a very large image plus a small stylesheet, then with HTTP/2's multiplexing over a single connection the stylesheet can finish while the image is still loading. The page can then be rendered with the final styles and, assuming the image is progressive, will also display a low-res version of the image.
In other words, the tail end of a load might be much less important if it's caused by a few big resources.
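That reasoning can be put into a toy model (all numbers hypothetical): idealized HTTP/1.1 pays a round trip per batch of parallel connections, while HTTP/2 multiplexing pays it roughly once.

```python
import math

def h1_load_time(n_resources, rtt, transfer, max_conns=6):
    """Idealized HTTP/1.1: up to `max_conns` parallel connections,
    one request per connection per round trip."""
    batches = math.ceil(n_resources / max_conns)
    return batches * (rtt + transfer)

def h2_load_time(n_resources, rtt, transfer):
    """Idealized HTTP/2: every request multiplexed on one connection,
    so the round-trip latency is paid (roughly) once."""
    return rtt + n_resources * transfer

# Akamai-demo-like workload: 200 tiny images, 100 ms RTT, 1 ms transfer each.
# HTTP/1.1: ceil(200/6) = 34 batches * 101 ms, roughly 3.4 s
# HTTP/2:   100 ms + 200 * 1 ms = 0.3 s
```

Note that with an RTT near zero this naive model can even favor the parallel HTTP/1.1 connections, which echoes the point above that on a very fast, low-latency network other effects come into play.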
That said, the demo page actually loads faster over HTTP/2 for me on FF Nightly most of the time, although there is some variance. You might need better measurements.

Serving HTTP version of site to those who don't support HTTP2

I'd like to move my client's site entirely to HTTPS in order to enable HTTP/2, but I was wondering: is it OK (in the eyes of search engines) to keep serving plain HTTP to older clients that don't support HTTP/2 (of which there is a lot of traffic, and which would otherwise suffer a performance hit)?
Is this dangerous to do from an SEO point of view? and
could you do the detection with tools like WURFL?
I want to stay current and offer improved perf/security to those on newer browsers but don't want those on older browsers in developing countries to suffer.
For what it's worth, I did some tests a few weeks ago and got the impression that Google's spiders don't see HTTP/2 yet. But as #sbordet pointed out, the upgrade to HTTP/2 is optional, so just make sure you have a site that also responds over HTTP/1.1. Here are a few more thoughts:
Google's algorithms will penalize slower sites, but it is unlikely that you will take a big performance hit from using HTTPS in your servers.
Using HTTPS can actually boost your SEO. Doesn't have anything to do with HTTP/2.
Popular browsers that don't support HTTP/2: Safari and IE. Safari doesn't support any TLS cipher suite compatible with HTTP/2, AFAIK. But that won't cause problems as long as you list HTTP/2-compatible suites first in your TLS server hello: ECDHE-RSA-AES128-GCM-SHA256 and ECDHE-RSA-AES256-GCM-SHA384 are the ones I know of. Then you can list weaker suites.
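As a sketch of that ordering idea, here is how you might put the two GCM suites ahead of a weaker fallback in Python's ssl module (which names are available depends on the local OpenSSL build, and your actual server software will have its own configuration syntax):

```python
import ssl

# HTTP/2-compatible GCM suites first, then a weaker CBC fallback for
# old clients; the server then prefers an AEAD suite whenever the
# client offers one.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.set_ciphers(
    "ECDHE-RSA-AES128-GCM-SHA256:"
    "ECDHE-RSA-AES256-GCM-SHA384:"
    "ECDHE-RSA-AES128-SHA256"  # weaker fallback, listed last
)
names = [c["name"] for c in ctx.get_ciphers()]
```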
You don't need to serve different content depending on whether you use HTTP/2 or HTTP/1.1, as your question title may hint (sorry if I misunderstood).
Also, just because you updated to HTTP/2, it does not mean that your server cannot serve HTTP/1.1 anymore.
You can easily update to HTTP/2, and retain HTTP/1.1 support for older devices or networks that do not support or do not allow HTTP/2 traffic.
Whether a client and a server can speak HTTP/2 is negotiated: the server uses HTTP/2 only if it detects that the client supports it; otherwise it falls back to HTTP/1.1. Therefore you don't risk making your site unavailable to older browsers in developing countries.
Then again, HTTP/2 implementations may vary, but typically they have to be prepared for clients that don't speak HTTP/2 and use HTTP/1.1 for those (otherwise they would not be able to serve content, and the service would appear to be down).
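The negotiation described above happens via TLS ALPN: the client advertises which protocols it speaks, and the server picks one. A toy version of the server-side choice (function name and shape are illustrative, not a real server API):

```python
def select_protocol(client_offers, server_supported=("h2", "http/1.1")):
    """Server-side ALPN choice: take the first protocol the server
    supports that the client also offered, falling back to HTTP/1.1
    for clients that don't speak HTTP/2 (or offer nothing at all)."""
    for proto in server_supported:
        if proto in client_offers:
            return proto
    return "http/1.1"
```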

SignalR on IIS 7.5 always uses Long Polling with every Browser

I know SignalR has its transport hierarchy: WebSocket -> Server-Sent Events -> Forever Frame -> Long Polling
But when I check the console in every browser, I notice that the transport is always Long Polling.
I'm using Windows 7, IIS Express 7.5 and Visual Studio 2013 (SignalR 2.0 of course).
I know WebSocket is only supported from IIS 8 onwards, but at least SSE, or Forever Frame for IE, should work.
For example in Google Chrome I get this:
That means Chrome is trying to use SSE, right? But why is it cancelled?
And here a screenshot of Fiddler with Internet explorer:
It's blue... and the code is 200. (And why are there different ports? The site runs under port 4040, but where does 11437 come from?)
There isn't even an explanation of why IE doesn't continue with SSE.
I mean, SSE/Forever Frame does work with IIS 7.5, doesn't it?
Thank you in advance!
PS: Before you ask, I am at home and not behind a proxy
The SignalR requests to port 11437 are being made by Visual Studio's new Browser Link feature which can be disabled.
Can you show us your server-side code (particularly anything in OnConnected)? It would also be helpful to see the responses to the SSE and ForeverFrame /connect requests.
Lastly, looking at SignalR's server-side tracing could be helpful.
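For reference, Browser Link (the source of the port-11437 requests mentioned above) can be turned off from the Visual Studio toolbar, or via the documented vs:EnableBrowserLink appSetting in web.config:

```xml
<!-- web.config: disable Visual Studio Browser Link so its
     SignalR requests (e.g. to port 11437) stop appearing. -->
<configuration>
  <appSettings>
    <add key="vs:EnableBrowserLink" value="false" />
  </appSettings>
</configuration>
```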
Well this is a bit late, but I want to resolve this anyway.
The reason was Bitdefender Internet Security 2013.
It buffers requests or something like that, I don't know exactly :X
Anyways, I uninstalled it and that did the trick :D

Http requests / concurrency?

Say a website on my localhost takes about 3 seconds to do each request. This is fine, and as expected (as it is doing some fancy networking behind the scenes).
However, if I open the same URL in several tabs (in Firefox) and then reload them all at the same time, it appears to load each page sequentially rather than all at once. What is this all about?
I have tried it on Windows Server 2008 IIS and Windows 7 IIS.
It really depends on the web browser you are using and how tab support in it has been programmed.
It is probably using a single thread to load each tab in turn, which would explain your observation.
Edit:
As others have mentioned, it is also a very real possibility that the web server running on your localhost is single-threaded.
If I remember correctly, the HTTP standard limits the number of concurrent connections to the same host to 2. This is the reason high-load websites use CDNs (content delivery networks).
network.http.max-connections 60
network.http.max-connections-per-server 30
The above two values determine how many connections Firefox makes to a server. If the threshold is breached, it will pipeline the requests.
Each browser implements it in its own way. The requests are made in such a way to maximize the performance. Moreover, it also depends on the server (localhost which is slower).
Your local web server configuration might allow only one thread, so each request waits for the previous one to finish.
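That serialization effect can be demonstrated with a small self-contained experiment (Python here rather than IIS, purely to illustrate): a single-threaded server answers three slow requests one after another, while a threaded one overlaps them.

```python
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer, ThreadingHTTPServer

class SlowHandler(BaseHTTPRequestHandler):
    """Simulates a page that takes 0.3 s of work per request."""
    def do_GET(self):
        time.sleep(0.3)
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # keep the output quiet
        pass

def total_time(server_cls, n_requests=3):
    """Fire n concurrent requests at a fresh server of the given class
    and return the wall-clock time until all of them have completed."""
    srv = server_cls(("127.0.0.1", 0), SlowHandler)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    url = "http://127.0.0.1:%d/" % srv.server_address[1]
    start = time.time()
    workers = [threading.Thread(target=urllib.request.urlopen, args=(url,))
               for _ in range(n_requests)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    elapsed = time.time() - start
    srv.shutdown()
    srv.server_close()
    return elapsed

# HTTPServer handles the 3 requests back to back (~0.9 s);
# ThreadingHTTPServer overlaps them (~0.3 s).
```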

Adobe Flex: Why do I get intermittent SecurityErrorEvents on some browsers?

Our flex app talks back to its originating server over a TCP-socket connection. This requires an allowance from the server in question and thus we've set up a socket policy server at the host (source code at pastie.org/791060).
This has worked fine on many permutations of Firefox, Safari, Windows and Mac OS X, but yesterday we discovered problems with IE 7 on Windows XP. In about 50% of cases a SecurityErrorEvent is raised upon socket.connect. This happens despite calling Security.loadPolicyFile("xmlsocket://:843") before connecting, and despite observing (with tcpdump) the socket policy server transmitting the policy data to the client. The error can often be cleared by reloading the Flash app in question, while restarting IE brings it back.
Why do we see these intermittent errors, and what can we do about them?
Regards,
Ville Jutvik
Jutvik Solutions
I've pinned the issue down to a bad socket policy server implementation. It hung up too early in the TCP conversation with the Flash client (it didn't wait for the request string), causing the connection errors under some circumstances, notably IE 7 on Windows XP. I didn't know it was so easy to create havoc at the TCP level from user level...
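For anyone hitting the same bug: the crucial detail is that a socket policy server must keep reading until it has received the client's full NUL-terminated policy-file request before it replies or closes. A minimal sketch (port handling and policy contents are illustrative; this is not the pastie code):

```python
import socket
import threading

POLICY = (b'<?xml version="1.0"?>'
          b'<cross-domain-policy>'
          b'<allow-access-from domain="*" to-ports="*"/>'
          b'</cross-domain-policy>\x00')

def handle(conn):
    """Read the client's request up to its NUL terminator *before*
    replying -- hanging up early was the bug described above."""
    request = b""
    while not request.endswith(b"\x00"):
        data = conn.recv(1024)
        if not data:  # client hung up first
            conn.close()
            return
        request += data
    if request == b"<policy-file-request/>\x00":
        conn.sendall(POLICY)
    conn.close()

def serve(port=0):
    """Flash looks on port 843 in production; port 0 picks an
    ephemeral port, which is handy for local testing."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(5)
    threading.Thread(target=lambda: handle(srv.accept()[0]),
                     daemon=True).start()
    return srv
```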
Heath: Thank you for your time. I will keep your hypothesis of the firewall acting up in my mind because I will surely encounter it later on as our testing progresses.
/Ville
