SignalR not closing some connections

My web app has around 200 users online. But when I check the SignalR connections after a day, the count is close to 5,000, and most of them are 2-10 hours old.
It starts out fine, but grows by roughly 500 connections per hour. It seems like some connections just don't close.
And when I try to send a message to all SignalR clients, my app hangs with CPU at 100%.
What could be the issue? SignalR version is 2.2.0.

If you're using any kind of reverse proxy or tunnel between IIS and the public internet, make sure everything is up to date.
For me it turned out to be caused by an out-of-date Cloudflare Argo tunnel.
SignalR Core connections not being closed and bringing down IIS
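If there is no proxy or tunnel in the path, it also helps to confirm on the server which connections never raise OnDisconnected. Below is a minimal diagnostic sketch for a SignalR 2.x hub; the hub name and the tracing are placeholders, not something from the original question:

    using System;
    using System.Collections.Concurrent;
    using System.Diagnostics;
    using System.Threading.Tasks;
    using Microsoft.AspNet.SignalR;

    public class DiagnosticsHub : Hub
    {
        // Connection start times, keyed by connection id.
        private static readonly ConcurrentDictionary<string, DateTime> Connections =
            new ConcurrentDictionary<string, DateTime>();

        public override Task OnConnected()
        {
            Connections[Context.ConnectionId] = DateTime.UtcNow;
            Trace.TraceInformation("Connected {0} (total {1})",
                Context.ConnectionId, Connections.Count);
            return base.OnConnected();
        }

        public override Task OnDisconnected(bool stopCalled)
        {
            DateTime startedUtc;
            if (Connections.TryRemove(Context.ConnectionId, out startedUtc))
            {
                // stopCalled == false means the server only gave up after its
                // disconnect timeout; a large share of these suggests clients or
                // an intermediary are dropping sockets without closing them.
                Trace.TraceInformation("Disconnected {0} after {1} (stopCalled={2}, total {3})",
                    Context.ConnectionId, DateTime.UtcNow - startedUtc, stopCalled, Connections.Count);
            }
            return base.OnDisconnected(stopCalled);
        }
    }

If the connection count in this log keeps climbing while the user count stays flat, the next place to look is whatever sits between the clients and IIS, as the answer above suggests.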

Related

IIS with secure socket connection keeps responding after stopping website

I'm running IIS 10.0 on Windows Server 2019 Standard, hosting a simple ASP.NET Framework 4.7.2 application with long-running WebSocket connections and SignalR.
Stopping the website works fine and the sockets are closed when I'm using non-secure sockets. However, with a secure socket connection (TLS/SSL), the worker process hangs for as long as the sockets are open. The client continues to send requests and receive responses from the server. The only way to fix this is to have the client restart the connection.
Both direct WebSockets and SignalR cause this issue: the application keeps running after I try to stop the website, transmitting and receiving messages over the secure socket; as soon as the socket closes, the worker process dies as expected.
Here is a similar issue with no response: Server keeps sending ping messages to client after IIS site is stopped.
Connections do not time out after the app pool shutdown time limit (90 s).
Here is a screenshot of the active connections
What could be causing this, and how do I make sure these connections are dropped when a stop/recycle is requested by IIS?
Update:
If I change the port but still use a secure connection, the problem goes away. On website stop, the connections are dropped and the worker process dies as expected. So it seems to have something to do with port 443...
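There is no accepted answer here, but one workaround while the root cause is open is to have the application actively ask clients to disconnect when IIS signals a stop, instead of letting the open secure sockets keep w3wp.exe alive. A sketch for SignalR 2.x on .NET Framework; the hub name, the serverShuttingDown client method, and the client-side handling are all assumptions:

    using System.Web.Hosting;
    using Microsoft.AspNet.SignalR;

    // Clients are assumed to listen for "serverShuttingDown" and call
    // connection.stop() when they receive it (client-side code not shown).
    public class NotificationsHub : Hub
    {
    }

    // Registered with ASP.NET hosting so we get a callback when IIS asks the
    // application to stop; at that point we tell clients to drop their
    // connections rather than waiting for the sockets to drain.
    public class SignalRShutdownGuard : IRegisteredObject
    {
        public SignalRShutdownGuard()
        {
            HostingEnvironment.RegisterObject(this);
        }

        public void Stop(bool immediate)
        {
            var hub = GlobalHost.ConnectionManager.GetHubContext<NotificationsHub>();
            hub.Clients.All.serverShuttingDown();
            HostingEnvironment.UnregisterObject(this);
        }
    }

    // In Global.asax / Startup, create the guard once at application start:
    // new SignalRShutdownGuard();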

SignalR (Azure) drops web socket connection every 15 min

We have an SPA (Angular) with ASP.NET Core on the backend. We are using Azure SignalR for communication.
Problem: the SignalR client drops its WebSocket connection every 15 minutes. That's confirmed by the browser's Network tab as well as the Azure SignalR service logs.
It sounds like either the SignalR client library has some timeout, or maybe the WebSockets themselves do.
Tested on different environments (different Azure SignalR service instances), different browsers (Chrome, Firefox), different browser locations (behind different networks), and different ASP.NET hosting options (IIS, IIS Express, Azure App Service). The result is always the same: the WebSocket connection lasts exactly 15 minutes.
One interesting fact: it fails not only at a specific interval, but also at specific times: the 0th, 15th, 30th, and 45th minute of every hour.
I guess this can be fixed with some configuration, but the default keep-alive and other timeouts look fine.
Browser logs
Azure SignalR logs
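For reference, the keep-alive and timeout defaults mentioned above live in HubOptions on the server; here is a sketch of where they are configured when using Azure SignalR (the interval values are illustrative, not recommendations). An exact quarter-hour schedule, though, points at something other than these defaults, so the client should be prepared to reconnect in any case.

    using System;
    using Microsoft.Extensions.DependencyInjection;

    public static class SignalRSetup
    {
        // Called from Startup.ConfigureServices.
        public static void AddRealtime(IServiceCollection services)
        {
            services.AddSignalR(hubOptions =>
            {
                // How often the server pings idle clients.
                hubOptions.KeepAliveInterval = TimeSpan.FromSeconds(15);
                // How long the server waits for any client traffic before it
                // considers the connection dead.
                hubOptions.ClientTimeoutInterval = TimeSpan.FromSeconds(30);
            })
            .AddAzureSignalR(); // connection string comes from configuration
        }
    }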

IIS and HTTP pipelining, processing requests in parallel

Is it possible to configure IIS in such a way that it can handle multiple HTTP requests that arrive on the same TCP socket in HTTP pipelining mode in parallel?
We have a problem where multiple requests are done by a web client in a single TCP socket, using HTTP pipelining. The client basically sends let's say 10 requests at once, and then the server sends 10 responses (in the same order as the requests). Our server takes quite some time for each request, mostly waiting for external IO. It would be much more efficient if IIS could start to work on all 10 requests in parallel, then serialize the responses in the correct order back to the client. Obviously, the server would need some way to cache responses if e.g. response 3 is available earlier than response 2.
Is that possible somehow? Maybe this is not possible in IIS, or I'm just searching for the wrong keywords... We are running IIS 7.5 and ASP.NET 4.5 on Windows Server 2008 R2.
We came across the same issue in IIS 7.5.
Our solution was to enable a "Web Garden"... and it really does work well! The catch is that you can't have a "session"-based web site, because in-process session state won't survive being spread across multiple processes. So if you have clients "logging in", you will have to reconfigure that part. (We used cookies to store an encrypted token - anyway, that's beside the point.)
Go to:
Internet Information Services (IIS) Manager > Application Pools
Select the Pool being used (you should have a pool per site)
Click Advanced Settings...
Find "Maximum Worker Processes" and crank that sucker!
The number of processes you push it up to depends entirely on how much RAM your system has. You can of course monitor and tune this yourself.
With a "Web Garden" enabled, you will notice (in Process Explorer or something similar) that IIS spawns additional instances of w3wp.exe as requests come in, up to the maximum you specified. New requests simply get processed by the next available worker process, enabling true parallel request processing in IIS. If two requests come in moments apart and request 2 completes before request 1, request 2 sends its response first.
IIS uses the HTTP Server API (which uses HTTP.sys), so I did a simple test:
wrote an HTTP server using this API,
wrote a Winsock client that opens a connection and sends 2 HTTP requests.
I observed that if I called HttpReceiveHttpRequest twice on the server (without sending the response for the first request), it doesn't receive the second request (basically, the second call blocks). This holds true for both PUT and GET requests.
It appears that HTTP.sys is in fact serializing requests to IIS on a single connection; I couldn't find any configuration on HTTP.sys that might modify this behavior.
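Roughly the same experiment can be reproduced without the native API by using HttpListener, which is also built on HTTP.sys. This is only a sketch: the port, the paths, and the two-second wait are arbitrary, and the prefix may require running elevated or registering a URL ACL.

    using System;
    using System.Net;
    using System.Net.Sockets;
    using System.Text;
    using System.Threading.Tasks;

    class PipeliningProbe
    {
        static void Respond(HttpListenerContext ctx)
        {
            byte[] body = Encoding.ASCII.GetBytes("ok");
            ctx.Response.ContentLength64 = body.Length;
            ctx.Response.OutputStream.Write(body, 0, body.Length);
            ctx.Response.Close();
        }

        static void Main()
        {
            var listener = new HttpListener();
            listener.Prefixes.Add("http://localhost:8080/");
            listener.Start();

            Task server = Task.Run(() =>
            {
                HttpListenerContext first = listener.GetContext();
                Console.WriteLine("Got {0}", first.Request.RawUrl);

                // If HTTP.sys serializes pipelined requests on a connection, this
                // second GetContext will not complete until the first response
                // has been written.
                Task<HttpListenerContext> second = Task.Run(() => listener.GetContext());
                Console.WriteLine("Second request delivered before first response: {0}",
                    second.Wait(TimeSpan.FromSeconds(2)));

                Respond(first);
                Respond(second.Result);
            });

            // Client: one socket, two pipelined GETs, close after the second.
            using (var client = new TcpClient("localhost", 8080))
            using (NetworkStream stream = client.GetStream())
            {
                byte[] bytes = Encoding.ASCII.GetBytes(
                    "GET /first HTTP/1.1\r\nHost: localhost\r\n\r\n" +
                    "GET /second HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n");
                stream.Write(bytes, 0, bytes.Length);

                var buffer = new byte[8192];
                int read;
                while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
                {
                    Console.Write(Encoding.ASCII.GetString(buffer, 0, read));
                }
            }

            server.Wait();
        }
    }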
As you can see, while the requests from all users all over the web just keep being added to the queue and building up (green), only a single request is executing (blue).
This doesn't really answer the question, but it's a beautiful illustration of this disastrous situation.

Round robin load balancing options for a single client

We have a BizTalk server that makes frequent calls to a web service that we also host.
The web service is hosted on 4 servers with a DNS load balancer sitting in front of them. The theory is that each subsequent call to the service will round-robin across the servers and balance the load.
However, this does not work, presumably because the result of the DNS lookup is cached for a short time on the client. The result is that we get a flood of requests to one server before the client moves on to the next.
Is that presumption correct and what are the alternative options here?
A bit more googling suggested that I can disable client-side DNS caching: http://support.microsoft.com/kb/318803
...however, that article states the default cache time is 1 day, which is not consistent with my experience.
You need to load balance at a lower level, with NLB clustering on Windows or LVS on Linux (or an equivalent piece of software). If clients of the web service keep an HTTP connection open for longer than a single request/response, you still might not get the granularity of load balancing you are looking for, so you may have to reconfigure your application servers if that is the case.
The solution we finally went with was Application Request Routing (ARR), an IIS extension. In tests it did what we wanted, and it is far easier for us (as developers) to get up and running compared to a hardware load balancer.
http://www.iis.net/download/ApplicationRequestRouting
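A related detail when the caller is a .NET process (a BizTalk host instance is one): by default a keep-alive connection stays pinned to the first address the DNS name resolved to, regardless of the DNS TTL. If you can run configuration code in the calling process, two ServicePoint settings help; a sketch, with arbitrary 30-second values:

    using System;
    using System.Net;

    public static class DnsRoundRobinClientSetup
    {
        public static void Configure(Uri serviceUri)
        {
            // Re-resolve the host name after 30 seconds instead of caching the
            // result for the lifetime of the ServicePoint.
            ServicePointManager.DnsRefreshTimeout =
                (int)TimeSpan.FromSeconds(30).TotalMilliseconds;

            // Retire keep-alive connections periodically so new connections can
            // land on a different server behind the same DNS name.
            ServicePoint servicePoint = ServicePointManager.FindServicePoint(serviceUri);
            servicePoint.ConnectionLeaseTimeout =
                (int)TimeSpan.FromSeconds(30).TotalMilliseconds;
        }
    }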

Is it possible to disable TCP slow start with Windows 2003 and IIS 6

Is it possible to disable TCP slow start with Windows 2003 and IIS 6.0?
Or is it possible to raise the initial window above 3?
I'm basing this on a post where the author claims that Google and Microsoft raise the window to 8 for their connections to decrease page load times.
This is also related to an issue we had on our network where page loads from a remote site were 20 times slower than over a local connection. It was eventually tracked back to a load-balancer setting; forcing the load balancer back to the standard slow-start default made the problem disappear.
This has resulted in a bit of a pushing match between the network and app development teams. The network team is convinced that it is something done in the application, or at the very least a change made on the web servers. As you can guess, I'm trying to convince them that it was a network setting on the load balancer that caused the issue.
