Increasing SignalR DisconnectTimeout and KeepAlive increases the volume of SSE connections and reconnect requests

We updated our SignalR DisconnectTimeout from 36 seconds to 60 seconds, and our KeepAlive from 12 seconds to 20 seconds.
After this change we noticed an increase in connections using SSE (Server-Sent Events), and these connections would reconnect every 40 seconds. The reconnect would succeed, but in the SSE transport a reconnect only happens after the connection has dropped. We are using SignalR version 2.4.1.
Does anyone know why increasing these two configurations for SignalR can cause an increase in reconnects and sessions using SSE?
Our other SignalR global configuration values are:
ConnectionTimeout = 110 seconds (Default)
TransportConnectTimeout = 5 seconds (Default)
DefaultMessageBufferSize = 1000
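For reference, a minimal sketch of how these values are typically set in a SignalR 2.x OWIN startup class (the numbers mirror the configuration above; the `Startup` class name is the usual convention, not something from this question):

```csharp
using System;
using Microsoft.AspNet.SignalR;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Set DisconnectTimeout before KeepAlive: SignalR requires
        // KeepAlive to be no more than 1/3 of DisconnectTimeout.
        GlobalHost.Configuration.DisconnectTimeout = TimeSpan.FromSeconds(60);
        GlobalHost.Configuration.KeepAlive = TimeSpan.FromSeconds(20);
        GlobalHost.Configuration.ConnectionTimeout = TimeSpan.FromSeconds(110);      // default
        GlobalHost.Configuration.TransportConnectTimeout = TimeSpan.FromSeconds(5);  // default
        GlobalHost.Configuration.DefaultMessageBufferSize = 1000;

        app.MapSignalR();
    }
}
```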

Related

Npgsql - random idle connection spikes under load

I have an ASP.NET Core server communicating with a Postgres DB hosted on the same machine.
The server is under relatively high load (5-6k requests per second).
The database receives 3-4k transactions per second.
All queries are very fast - pg_stat_statements shows a mean execution time of 0.7 milliseconds.
Server load stays low (CPU under 20%, memory under 60%).
The problem is that despite there being only 3-5 active connections, idle connections fluctuate between 30-100, and sometimes spike into thousands. With thousands of idle connections I either exhaust connection pool, or run out of memory.
Dapper and Entity Framework are used in the data access layer. Dapper connections are disposed with using:
using var db = new NpgsqlConnection(_connectionString);
What can be the reason for idle connection spikes?
The baseline of 100 idle connections that you see is caused by Minimum Pool Size in the connection string - without it the idle-connection graph would be even more jagged, oscillating between 20 and 140 connections, and spiking into the thousands.
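A sketch of the Npgsql connection-string pooling knobs that shape this behavior; the values (and the host/database/credential placeholders) are illustrative, not a recommendation for this workload:

```csharp
// "Connection Idle Lifetime" controls how long an idle pooled connection
// survives before being pruned; "Connection Pruning Interval" controls how
// often the pruner runs. Together they determine how quickly an idle spike decays.
var connectionString =
    "Host=localhost;Database=app;Username=app;Password=secret;" +
    "Minimum Pool Size=10;" +          // baseline of warm connections kept open
    "Maximum Pool Size=100;" +         // hard cap; further open attempts wait
    "Connection Idle Lifetime=60;" +   // seconds before an idle connection is pruned
    "Connection Pruning Interval=10;"; // seconds between pruning passes
```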

Why did retransmission occur 16 seconds later?

I use JMeter to test a Tomcat web application running behind an F5 load balancer.
From the pcap file I captured on the JMeter node I see that there is a duplicate ACK, and then
16 seconds later the client retransmits the packet and then does nothing for 20 seconds.
Since Tomcat's default socket timeout is set to 20 seconds, the client received an HTTP error.
My question is:
What could cause the client's TCP stack to retransmit after 16 seconds and then do nothing for 20 seconds? Is it too busy? Is there a way to find out the reason?

SignalR keep alive timeout

The SignalR wiki has this section on the Reconnecting event:
Reconnecting client event.
Raised when (a) the transport API detects that the connection is lost, or (b) the keepalive timeout period has passed since the last message or keepalive ping was received. The SignalR client code begins trying to reconnect. You can handle this event if you want your application to take some action when a transport connection is lost. The default keepalive timeout period is currently 20 seconds.
The section on timeouts describes the three configuration values, i.e. DisconnectTimeout, KeepAlive & ConnectionTimeout.
My question is: if I decrease the KeepAlive value to, say, 5 seconds, or increase it to, say, 30 seconds, does the keep-alive timeout after which the client starts to reconnect change automatically, or does it still default to 20 seconds as mentioned above?
If not, is there a way I could set the keep-alive timeout via code?
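A sketch of changing KeepAlive server-side in SignalR 2.x. In my reading of the behavior, the client derives its keep-alive timeout from the server's negotiation response rather than hard-coding 20 seconds, so changing the server value should propagate to clients - but verify this against your SignalR version:

```csharp
using System;
using Microsoft.AspNet.SignalR;

// e.g. in Application_Start or the OWIN Startup class, before MapSignalR():
GlobalHost.Configuration.KeepAlive = TimeSpan.FromSeconds(5);

// To raise KeepAlive above 1/3 of DisconnectTimeout, raise DisconnectTimeout first:
GlobalHost.Configuration.DisconnectTimeout = TimeSpan.FromSeconds(90);
GlobalHost.Configuration.KeepAlive = TimeSpan.FromSeconds(30);
```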

SignalR KeepAlive vs ConnectionTimeout

In SignalR (1.2.2), What is the difference between a KeepAlive and ConnectionTimeout?
With a keep alive actively pinging the server, the connection will never timeout. So what is the point of ConnectionTimeout?
Am I confusing ConnectionTimeout with a timeout associated while establishing a new connection?
I found the answer on the wiki shortly after posting the question. Pretty much ConnectionTimeout has no effect when a KeepAlive is set.
The wiki says:
ConnectionTimeout - Represents the amount of time to leave a connection open before timing out. Default is 110 seconds.
KeepAlive - Representing the amount of time to wait before sending a keep alive packet over an idle connection. Set to null to disable keep alive. This is set to 30 seconds by default. When this is on, the ConnectionTimeout will have no effect.
ConnectionTimeout
This setting represents the amount of time to leave a transport connection open and waiting for a response before closing it and opening a new connection. The default value is 110 seconds.
KeepAlive
This setting represents the amount of time to wait before sending a keepalive packet over an idle connection. The default value is 10 seconds. This value must not be more than 1/3 of the DisconnectTimeout value.
KeepAlive also means you are holding an open resource - the connection. CPU is spent handling it every 10 seconds, for instance. KeepAlive simply stops the server from dropping the connection, effectively saying "Yes, I'm small and slow, but I'm still alive and sending you packets."
ConnectionTimeout behaves similarly until a reconnect, and a reconnect may never happen. After the timeout the resource (the connection) is closed and reopened. ConnectionTimeout is more like "OK, give me 110 seconds and I'll decide what to do during that period. After the timeout we can talk again if required."
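To illustrate the relationship quoted above: since ConnectionTimeout has no effect while keep-alive is on, the only way to make it govern the connection is to disable keep-alive. A sketch of the server configuration:

```csharp
using System;
using Microsoft.AspNet.SignalR;

// Disable keep-alive packets entirely; ConnectionTimeout now governs how
// long an open transport connection waits for data before being recycled.
GlobalHost.Configuration.KeepAlive = null;
GlobalHost.Configuration.ConnectionTimeout = TimeSpan.FromSeconds(110);
```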

In-practice ideal timeout length for idle HTTP connections

On an embedded device, what is a practical amount of time to allow idle HTTP connections to stay open?
I know back in the olden days of the net, circa 1999, internet chat rooms would sometimes just hold the connection open and send replies as they came in. Idle timeouts and HTTP connection lifetimes needed to be longer in those days...
How about today with ajax and such?
REASONING: I am writing a transparent proxy for an embedded system with low memory. I am looking for ways to prevent DoS attacks.
My guess would be 3 minutes, or 1 minute. The system has extremely limited RAM, and it's okay if it breaks rare and unpopular sites.
In the old days (about 2000), an idle timeout of up to 5 minutes was standard. These days it tends to be 5 to 50 seconds. Apache's default is 5 seconds, with some special apps defaulting to 120 seconds.
So my assumption is that with AJAX, long held-open HTTP connections are no longer needed.
How about allowing idle HTTP connections to remain open unless another connection request comes in? If a connection is open and no one else is trying to communicate, the open connection doesn't hurt anything. If someone else does try to communicate, send a FIN+ACK on the first connection and open the second. Many HTTP clients will try to fetch multiple files over the same connection if possible, but can reconnect between files if necessary.
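The evict-on-demand policy described above can be sketched roughly as follows. This is a hypothetical illustration, not code from the question: keep idle connections open, and only when a new client arrives with no slot free, close the connection that has been idle the longest (closing the socket sends the FIN):

```csharp
using System;
using System.Collections.Generic;
using System.Net.Sockets;

class IdleConnectionPool
{
    private readonly int _maxConnections;
    // The longest-idle connection sits at the front of the list.
    private readonly LinkedList<(Socket Socket, DateTime IdleSince)> _idle = new();

    public IdleConnectionPool(int maxConnections) => _maxConnections = maxConnections;

    public void Admit(Socket incoming)
    {
        if (_idle.Count >= _maxConnections)
        {
            // No slot free: evict the longest-idle connection to make room.
            var oldest = _idle.First.Value;
            _idle.RemoveFirst();
            oldest.Socket.Close();
        }
        _idle.AddLast((incoming, DateTime.UtcNow));
    }
}
```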
