ASP.NET SignalR - Too many connections on IIS in AuthorizeRequest state

I have an ASP.NET application using SignalR (2.2.1.0) in production, and sometimes there are more than 1000 connections stuck in the AuthorizeRequest state.
Is there any way to limit the number of connections? Could a timeout setting help me solve this problem?

Please try setting the property below.
The appConcurrentRequestLimit attribute specifies the maximum number of concurrent requests that can be queued for an application.
https://learn.microsoft.com/en-us/iis/configuration/system.webserver/serverruntime
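For illustration only, a minimal sketch of setting that attribute programmatically with Microsoft.Web.Administration; it assumes the code runs elevated on the IIS server, and the value 1000 is just an example (the IIS default is 5000). The same change can be made by editing applicationHost.config directly or via the IIS Configuration Editor.

using Microsoft.Web.Administration; // requires a reference to Microsoft.Web.Administration.dll

class ConfigureServerRuntime
{
    static void Main()
    {
        using (var serverManager = new ServerManager())
        {
            // Server-level serverRuntime section in applicationHost.config.
            Configuration config = serverManager.GetApplicationHostConfiguration();
            ConfigurationSection serverRuntime = config.GetSection("system.webServer/serverRuntime");

            // Example value; tune it to what your application can realistically queue.
            serverRuntime["appConcurrentRequestLimit"] = 1000;

            serverManager.CommitChanges();
        }
    }
}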
In addition, please have a look at these links:
Set limit concurrent connections for websocket on iis 8
SignalR / Websockets connection limitations and best practices

Related

ODP.NET Connection Pooling Issues - Fault Tolerance After Database Goes Down

I have a Web API service using ODP.NET to make connections to several Oracle databases. Normally the web service is hit several times a second and never has long periods of inactivity. In our test site, however, we did not use it for 2-3 days. This morning, we hit the service and got "connection request timeout" exceptions from ODP.NET, suggesting that the connection pool was out of available connections. We are closing the connections after use. The service was working fine before that period, but today the very first query got the timeout exception. Our app pool in IIS is configured to never reset.
My question, then, is: what can cause the connection pool to fill with bad connections after a period of inactivity, such that these connections are not cleaned up in the usual 3-minute cycle? It only happened to 2 out of 3 of our databases, and Validate Connection=true is set for all of them.
EDIT
So after talking to the DBA, there is a difference between a connection/session being killed manually or by timeout and the database server severing the TCP connections. In this case, the TCP connections were severed as part of a regular backup (why is not important here). I guess this happens when the whole database server goes offline at once. I think the basis of the question still applies, though: why is ODP.NET unable to clean up severed connections over time? There is a performance counter that refers to "Stasis" connections; could those connections be stuck in that state? I would think that it should be able to see that a connection is no longer active (Validate Connection=True), kill it, and not return it to the pool.
Granted, this problem can be solved by just resetting the app pool every time the database goes down. I would still like to configure ODP.NET connection pooling to be more fault tolerant.
I have run into this same issue, and the only solution I have found is to use the Connection Lifetime connection string parameter in conjunction with Validate Connection.
In my particular case, the connection timeout was set at the server, and the connections in the pool would time out but not be removed from the pool, resulting in errors.
Setting both the Connection Lifetime and the Validate Connection parameters has resolved the issue.
Make sure the Connection Lifetime value that you choose is less than the server connection inactivity timeout.
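As a sketch of what that might look like (the credentials, data source, and numbers below are placeholders, not recommendations; Connection Lifetime is in seconds and should stay below the server-side inactivity timeout):

using Oracle.ManagedDataAccess.Client; // or Oracle.DataAccess.Client for unmanaged ODP.NET

// Validate Connection tests each pooled connection before it is handed out;
// Connection Lifetime retires connections older than the given number of seconds.
var connectionString =
    "User Id=app_user;Password=secret;" +          // placeholder credentials
    "Data Source=//db-host:1521/ORCLPDB1;" +       // placeholder data source
    "Pooling=true;Min Pool Size=1;Max Pool Size=50;" +
    "Validate Connection=true;Connection Lifetime=120;";

using (var connection = new OracleConnection(connectionString))
{
    connection.Open();
    // ... run queries ...
}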
The recommended solution is to use ODP.NET Fast Connection Failover (FCF). FCF automatically removes invalid connections from the pool, so you don't need to use Validate Connection or Connection Lifetime, nor clear the pool.
To use FCF, set "HA events=true", use connection pooling, and have your DBA set up Fast Application Notification (FAN) on the server side. FAN is what alerts the ODP.NET pool when a DB service or node goes down or is rebooted. Upon receiving the message, ODP.NET knows which connections to remove from the pool and removes them, leaving all other valid connections untouched.
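For illustration, a minimal sketch of such a connection string, assuming pooling is enabled and FAN has been configured on the server (the credentials and TNS alias are placeholders):

using Oracle.ManagedDataAccess.Client;

// "HA Events=true" only has an effect when pooling is enabled and FAN is set up server-side.
var connectionString =
    "User Id=app_user;Password=secret;" +   // placeholder credentials
    "Data Source=MYDB_ALIAS;" +             // placeholder TNS alias
    "Pooling=true;HA Events=true;";

using (var connection = new OracleConnection(connectionString))
{
    connection.Open();
    // Dead connections are purged from the pool automatically when FAN "down" events arrive.
}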
Something else is going on here. Min Pool Size and some of the other settings help when a connection is severed by things like DBA-configured idle timeouts and firewall TCP idle timeouts; a 'connection request timeout' occurs when creating a new connection.
This could be a simple network problem. Something could be interfering with DNS resolution of the servers. Another cause is not having fully qualified entries in tnsnames. I've been bitten by the latter a couple of times.
The other issue is the one you've already recognized - full pool.
Double-check that you don't have a connection leak somewhere. A missing .Close is one thing, but if you're not using a 'using' statement, a try/finally is required, since an unhandled exception could be thrown before the .Close.
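A minimal sketch of the leak-proof pattern (the query and the connection string parameter are only placeholders):

using Oracle.ManagedDataAccess.Client;

static class Db
{
    // 'using' disposes the connection (returning it to the pool) even if the
    // query throws before an explicit .Close() is reached.
    public static object RunScalar(string connectionString)
    {
        using (var connection = new OracleConnection(connectionString))
        using (var command = new OracleCommand("SELECT 1 FROM dual", connection))
        {
            connection.Open();
            return command.ExecuteScalar();
        }
    }
}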
To start, I would use perfmon to monitor some of the connection statistics - NumberOfPooledConnections, NumberOfActiveConnections, etc.

SignalR connections skewing response time in IIS using ARR

We are using application request routing with 5 servers running IIS 7.5 and have just recently implemented a messaging system using SignalR in our application.
The SignalR connections are working as we expect (with the only drawback being that a message sent from one server doesn't get activated on the other 4).
The problem(?) we are having is that the response times of some IIS requests shown in the load balancer (ARR) sometimes come up as 2-3 minutes, which I assume is because of connections using something like long polling.
Our ARR is set to load balance using lowest response time, but it seems like this metric will be completely incorrect because of these SignalR connections. Is there any way to fix these connections so they don't get used in the ARR calculation for response time? Are we stuck having to move the SignalR messages to a separate server to avoid this type of thing (which admittedly would solve other things as well)?
I think the article below will help you. Try setting the "response buffer threshold" to 0 in ARR:
http://matthewmanela.com/blog/using-signalr-in-an-arr-cluster/

Is there a max http connection limit in WinRT application?

I'm making a WinRT application and I found some strange behavior: I can't open more than a few parallel HTTP requests to my server. The number is about 4-6 requests (I don't know the exact number).
New requests get stuck somewhere inside the client app.
I have independent instances of HttpClient, and it seems they share this limit, so it's not per client, it's per app.
I am aware of the HTTP connection limit in browsers; does WinRT have the same behavior? How can it be tuned?
This appears to be different in Windows 8.1, as you can set the maximum number of connections via HttpBaseProtocolFilter.MaxConnectionsPerServer.
Note that this requires using the new HttpClient in Windows.Web.Http.
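A hedged sketch of what that looks like (the URI and the value 20 are just examples):

using System;
using System.Threading.Tasks;
using Windows.Web.Http;
using Windows.Web.Http.Filters;

static class Downloader
{
    public static async Task<string> GetWithRaisedLimitAsync(Uri uri)
    {
        // Raise the per-server connection limit before creating the client (Windows 8.1+ only).
        var filter = new HttpBaseProtocolFilter
        {
            MaxConnectionsPerServer = 20 // example value
        };

        using (var client = new HttpClient(filter))
        {
            // All requests made through this client share the raised limit.
            return await client.GetStringAsync(uri);
        }
    }
}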
It seems that the limit is per domain.
So I set up subdomains for my server's domain and call the server in round-robin fashion.

HTTP Keep Alive in a large Web Applications

I have a web application deployed on IIS 7.0. The application is accessed by a large number of users and manipulates large amounts of data. My question concerns the HTTP Keep-Alive option, which is set to true by default.
Is it a better approach to set HTTP Keep-Alive to false or to true?
If it is set to true, is it a good approach to use a timeout?
Keep-alive should normally be used to handle the requests that immediately follow an HTML request. Let's say on the first visit to your site I get an HTML page with 5 CSS files, 5 JS files, and 25 images; I will use my HTTP connection, which is still alive, to request these things (well, depending on the browser, I may use 3 connections to speed this up).
To handle this we usually use a keep-alive of 2 or 3 seconds. A longer keep-alive means the connection is waiting for the next page that the user may request. This may be a valid way of thinking: next time the user wants a page, we avoid losing time establishing the HTTP connection (which can be the longest part of the request/response time). But for your server it means that most of the HTTP connections it is handling are doing... nothing. And you will reach your connection limit (W3SVC/MaxConnections, with a ridiculously low default of 10) with connections that are doing nothing. Really bad. So a long keep-alive needs a big web server and should be used only if your application really needs it.
If you use keep-alive on a 'classical' website you must change the connection timeout (2 minutes by default). In Apache you would have two settings, a keep-alive timeout (5 s by default) and a connection timeout (2 min). In IIS it seems the same timeout setting is used for both. So do not set it to 2 s (a client that is really slow in sending its request would time out), but something like 10 s is probably enough. One option is to disallow keep-alive and make the browser open more connections. Another is to use a modern web server (like nginx or Cherokee, for example) which handles keep-alive connections in a more elegant and resource-efficient way than Apache or IIS.
Even if you do not use keep-alive, what is the reason for waiting 2 minutes for a client timeout? That is certainly too high; decrease it to something like 60 s.
Then you should check several other settings related to timeouts (ConnectionTimeout, HeaderWaitTimeout, MinFileBytesPerSec) and this nice answer on performance settings in the registry.
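As a hedged illustration of lowering the IIS connection timeout programmatically (assuming Microsoft.Web.Administration is referenced and the code runs elevated; "Default Web Site" and 60 seconds are only example values, and the same setting is available in IIS Manager under the site's Advanced Settings):

using System;
using Microsoft.Web.Administration; // requires a reference to Microsoft.Web.Administration.dll

class LowerConnectionTimeout
{
    static void Main()
    {
        using (var serverManager = new ServerManager())
        {
            // Reduce the idle connection timeout for one site from the 2-minute default.
            Site site = serverManager.Sites["Default Web Site"]; // example site name
            site.Limits.ConnectionTimeout = TimeSpan.FromSeconds(60);
            serverManager.CommitChanges();
        }
    }
}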
This article will give you more insight; don't forget to check the "How do we fix it?" section:
http://mocko.org.uk/b/2011/01/23/http-keepalive-considered-harmful/
I don't think it's a good idea to keep all users connected.
Because:
A user may simply open your site and not use it - why should we keep the connection alive for a long time?
Keeping many connections alive is expensive (more memory).
Use a connection timeout (a maximum of 5 minutes should be OK).
BUT: if your application is a live chat, you should keep all connections alive. In that case it is better to use Ajax long-polling requests + Node.js + some fast NoSQL DB to store chat messages.

How many concurrent outbound HttpWebRequest calls can be made in ASP.NET / IIS7?

I'm writing an ASP.NET web application which will run on Windows Server 2008 (IIS7).
Each page's codebehind will need to make at least one synchronous web service call to an external server using HttpWebRequest and GET.
My question - is there any limit to the number of outbound HttpWebRequest calls I can make? (assume that the server I'm calling has no limit)
Is there any means to pool these connections to make the app scale better? Would a web garden configuration help?
By default, you are limited to two connections to an HTTP/1.1 server and four connections to an HTTP/1.0 server. So your ASP.NET app will have serious throughput problems if, for example, you are trying to issue more than two outstanding requests to an HTTP/1.1 server.
You will need to increase the connection limit, either per server, or globally.
For example, globally:
ServicePointManager.DefaultConnectionLimit = 10; // allow 10 outstanding connections
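To illustrate where such code typically lives and the per-server alternative mentioned above, here is a hedged sketch; the Global.asax placement, URL, and limit values are assumptions, not recommendations:

using System;
using System.Net;
using System.Web;

// Hypothetical Global.asax code-behind.
public class Global : HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        // Global default applied to new ServicePoints (outbound connections per remote host).
        ServicePointManager.DefaultConnectionLimit = 48; // example value

        // Or tune one specific endpoint instead of the global default.
        ServicePoint backend =
            ServicePointManager.FindServicePoint(new Uri("https://backend.example.com/")); // placeholder URL
        backend.ConnectionLimit = 24; // example value
    }
}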
Hope this helps.
I think your question should be geared toward network configurations.
I'd say you are asking for trouble if every page depends on a synchronous external call. What if you get N requests that hang on the external web service(s)? You will then have issues on your end and be able to do nothing about it.
Have you considered async calls with callbacks?
EDIT: Asynchronous Pages in ASP.NET 2.0
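As a hedged sketch of that approach (an ASP.NET 2.0-style asynchronous page using RegisterAsyncTask; the page class, backend URL, and ResultLabel control are hypothetical, and the .aspx must declare Async="true" in its @ Page directive):

using System;
using System.IO;
using System.Net;
using System.Web.UI;

// Code-behind for a page declared with <%@ Page Async="true" ... %>.
public partial class ProductPage : Page
{
    private HttpWebRequest _request;

    protected void Page_Load(object sender, EventArgs e)
    {
        // The worker thread is released while the backend call is in flight.
        RegisterAsyncTask(new PageAsyncTask(BeginCall, EndCall, null, null));
    }

    private IAsyncResult BeginCall(object sender, EventArgs e, AsyncCallback cb, object state)
    {
        _request = (HttpWebRequest)WebRequest.Create("https://backend.example.com/data"); // placeholder URL
        return _request.BeginGetResponse(cb, state);
    }

    private void EndCall(IAsyncResult ar)
    {
        using (var response = (HttpWebResponse)_request.EndGetResponse(ar))
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            ResultLabel.Text = reader.ReadToEnd(); // ResultLabel is a hypothetical Label in the markup
        }
    }
}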
The following link points to a really great article on optimizing ASP.NET:
http://www.codeproject.com/KB/aspnet/10ASPNetPerformance.aspx
Hope it helps ;)
