Slow SQL Connection - asp-classic

I am running an ASP classic website on a new dedicated server with SQL Server 2012 on it. All is running nicely but the database connection takes about 5.1 seconds to establish when a page is loaded. If the page is refreshed the connection is instant (presumably due to connection pooling?) but if the page is reloaded a few minutes later the connection takes 5.1 seconds to establish again.
Are there any settings that I can change to speed things up?

Have you tried pinning your tables to memory on the SQL side?
Check in Chrome's developer tools, under the Network tab: you will most likely find that the delay shows up as what Chrome's networking tools call "waiting", which means the browser is waiting for a response from the server.
http://www.mcobject.com/in_memory_database
and you could try the DBCC PINTABLE command, so that the query doesn't have to read from a cold cache each time (note, though, that DBCC PINTABLE is deprecated and has no effect on SQL Server 2012).

Related

Blazor Server # of connections per user is limited by the browser

I am in the process of building a Blazor Server-Side database application.
One of my requirements is that the user can open each website page in a different tab.
I have found that after 5 tabs are opened, any new pages are blocked from rendering. If I close one page, then the 6th page can render. Apparently this is due to the fact that browsers can support a limited number of SignalR connections at one time. I have read the limit for Chrome is 6 at a time (although I can only get 5 working).
Error Messages in Chrome:
Error: Connection disconnected with error 'Error: Server returned handshake error: Handshake was canceled.'
Error: Error: Server returned handshake error: Handshake was canceled.
Uncaught (in promise) Error: Cannot send data if the connection is not in the 'Connected' State.
at e.send (blazor.server.js:1)
Is there a solution for this problem? Or do I need to explore porting to Blazor Client?
I found the following article about this topic but I am not sure if it can be applied to a Blazor application:
SignalR and Browser Connection limit
It's a little scary as I have already built quite a bit of code, and don't want to spend too much time trying to hack a workaround.
I finally managed to replicate it on my internal network; it seems to have been resolved now that I have installed WebSockets.
Open Server Manager
Open Add Roles and Features
Expand WebServer (IIS)
Expand Application Development
Select WebSocket Protocol
After installing this, I opened 20 tabs of my Blazor Server application, each one on a different page, and the issue did not recur (I also did a couple of refreshes on each to be sure).
I came across this after reading
Blazor works best when using WebSockets as the SignalR transport due to lower latency, reliability, and security. Long Polling is used by SignalR when WebSockets isn't available or when the app is explicitly configured to use Long Polling.
From the Blazor docs.

ODP.NET Connection Pooling Issues - Fault Tolerance After Database Goes Down

I have a WebAPI service using ODP.NET to make connections to several Oracle databases. Normally the web service is hit several times a second and never has long periods of inactivity. In our test site, however, we did not use it for 2-3 days. This morning, we hit the service and got "connection request timeout" exceptions from ODP.NET, suggesting that the connection pool was out of available connections. We are closing the connections after use. The service was working fine before this period, but today the very first query got the timeout exception. Our app pool in IIS is configured to never reset.
My question then is, what can cause the connection pool to fill with bad connections after a period of inactivity, where these connections are not cleaned up in the usual 3 minute cycle? It only happened to 2 out of the 3 of our databases, and Validate Connection=true is set for all of them.
EDIT
So after talking to the DBA: there is some difference between a connection/session being killed manually or by timeout, and the database server severing the TCP connections. In this case, the TCP connection was severed as part of a regular backup (why is not important here). I guess this happens when the whole database server goes offline at once. The basis of the question still applies, I think: why is ODP.NET unable to clean up severed connections over time? There is a performance counter that refers to "Stasis" connections; could those connections be stuck in that state? I would think that it should be able to see that a connection is no longer active (Validate Connection=true), kill it, and not return it to the pool.
Granted, this problem can be solved by just resetting the app pool every time the database goes down. I would still like to configure ODP.NET connection pooling to be more fault tolerant.
I have run into this same issue, and the only solution I have found is to use the Connection Lifetime connection string parameter in conjunction with Validate Connection.
In my particular case, the connection timeout was set at the server and the connections in the pool would timeout, but not be sniped out of the pool, resulting in errors.
Setting both the Connection Lifetime and the Validate Connection parameters has resolved the issue.
Make sure the Connection Lifetime value that you choose is less than the server connection inactivity timeout.
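For illustration, such a pool might be configured like this (credentials and data source are placeholders; Connection Lifetime is in seconds and, per the advice above, should stay below the server's inactivity timeout):

```
User Id=myuser;Password=...;Data Source=MyTnsAlias;
Pooling=true;Validate Connection=true;Connection Lifetime=120
```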
The recommended solution is to use ODP.NET Fast Connection Failover (FCF). FCF automatically removes invalid connections from the pool, so you don't need Validate Connection or Connection Lifetime, nor do you need to clear the pool.
To use FCF, set "HA events=true", use connection pooling, and have your DBA set up Fast Application Notification (FAN) on the server side. FAN is what alerts the ODP.NET pool when a DB service or node goes down or rebooted. Upon receiving the message, ODP.NET knows which connections to remove from the pool and removes them, leaving all other valid connections untouched.
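A sketch of the equivalent FCF-based configuration (names are placeholders again; this only helps once the DBA has enabled FAN on the server side):

```
User Id=myuser;Password=...;Data Source=MyRacService;
Pooling=true;HA Events=true
```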
Something else is going on here. Min Pool Size and some of the other settings help when the connection is severed by things like DBA-configured idle timeouts and firewall TCP idle timeouts; "connection request timeout" occurs when creating a new connection.
This could be a simple network problem. Something could be interfering with DNS resolution of the servers. Another case is not having fully qualified entries in tnsnames. I've been bitten by the latter a couple of times.
The other issue is the one you've already recognized - full pool.
Double check that you don't have a connection leak somewhere. A missing .Close is one thing, but if you're not using a 'using' statement, a try/finally is required, as an unhandled exception could be thrown prior to the .Close.
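The guard pattern is the same in any language; here is a minimal Python sketch of it, with a hypothetical `Connection` class standing in for a pooled ODP.NET connection:

```python
class Connection:
    """Stand-in for a pooled DB connection (hypothetical)."""
    def __init__(self):
        self.closed = False

    def query(self, fail=False):
        # simulate a query that may throw mid-request
        if fail:
            raise RuntimeError("query failed")

    def close(self):
        self.closed = True

def do_work(conn, fail=False):
    # try/finally guarantees close() runs even when the query
    # raises, so the connection always goes back to the pool
    try:
        conn.query(fail=fail)
    finally:
        conn.close()
```

Without the finally (or a `using`/`with` equivalent), the first unhandled exception skips the close and leaks one pooled connection; enough of those and new requests hit the pool limit.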
I would use perfmon to monitor some of the connection statistics to start: NumberOfPooledConnections, NumberOfActiveConnections, etc.

Different server & browser HTTP status codes

I have a small Python web application running on nginx with unicorn. The web application refreshes its page automatically every minute.
Every day I see that around the same hour, the browser reports a 504 Gateway Time-out error and the application stops refreshing obviously.
I checked it with both Chrome and Firefox, on two different client machines and two different server machines, and found that it happens almost every day at the same time (a different time for each web server).
The weird thing is that, looking at the web server access log, I can identify these calls and they are reported with a 200 OK status code.
Could it be that the browser reports a different error code than the server due to connection issues? Any ideas on how I should keep investigating?
We found out that our server indeed had a maintenance procedure which blocked access to it. Although the server finished the request after a while, the browser "gave up" and returned a timeout error. Once the maintenance procedure was canceled, the issue was resolved.
Yes - the server is able to serve the page OK, so it returns 200, but the client cannot finish the connection.
It could be that part of your infrastructure (a firewall?) is choosing to update or restart, although the odds of this happening at the exact same time as your request are slim unless it's a long-running request or a gateway outage.
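For what it's worth, this kind of mismatch (upstream logs a 200, browser sees a 504) is exactly what a reverse-proxy timeout produces. A hedged nginx sketch, with a placeholder upstream address:

```
location / {
    proxy_pass http://127.0.0.1:8000;
    # nginx gives up and returns 504 to the browser if the
    # upstream takes longer than this (default 60s), even though
    # the upstream may finish the request later and log a 200
    proxy_read_timeout 300s;
}
```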

SignalR can't reconnect after the application has been idle for a long time

My application is using the WebSocket protocol and the whole connection and communication process is working well. However, after a long time with the user away, the WS connection is broken; a new call to /signalr/negotiate occurs, but there is no call to ws://localhost/signalr/connect. Inspecting the response from negotiate, it's all OK.
The application was running with an old client script version. I've updated it to version 1.1.2 and it's all fine now.

browser waiting for server indefinitely

I have an issue with a long-running page: an ASP.NET page takes about 20 minutes to be generated and served to the browser.
The server successfully completes the response (according to logs and the ASP.NET web trace) and, I assume, sends it to the browser. However, the browser never receives the page. Both IE8 and Firefox 3 keep spinning and spinning (I've let them run for several hours; nothing happens).
This issue only appears on the shared host server. When I run the same app on dev machine or internal server, everything works fine.
I've tried Fiddler and packet sniffing, and it looks like the server doesn't send anything back. It doesn't even send keep-alive packets. Yet neither browser times out after its pre-defined timeout period (1 hour in IE, I believe; not sure what it is in Firefox).
The last packet server sends back is ACK to the POST from the browser.
I've tried this from different client machines, to ensure it's not a broken configuration on my machine.
How can I further diagnose this problem? Why doesn't the browser time out, even though there are no keep-alive packets?
p.s. server is Windows 2003, so IIS6. It used to work fine on shared hosting, but they've changed something (when they moved to new location) and it broke. Trying to figure out what.
p.p.s. I know I can change page design to avoid page taking this long to get served. I will do this, but I would also like to find the cause of this problem. I'd like to stay focused on this issue and avoid possible alternative designs for the page (using AJAX or whatever else).
Check the server's connection timeout (on the Web Site properties page).
A better approach would be to send the request, start the calculation on the server, and serve a page with a Javascript timer that keeps sending requests to itself. Upon post-back, this page checks whether the server process has completed. While the process is still running, it responds with another timer. Once it has completed, it redirects to the results.
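That poll-until-done pattern can be sketched in Python (the browser-side JavaScript timer maps to the polling loop here; all names are illustrative, and the sleeps are shortened for the example):

```python
import threading
import time

results = {}

def long_calculation(job_id):
    # stands in for the ~20-minute page generation
    time.sleep(0.1)
    results[job_id] = "report ready"

def start_job(job_id):
    # return to the client immediately; work continues in the background
    t = threading.Thread(target=long_calculation, args=(job_id,))
    t.start()
    return t

def poll(job_id, interval=0.05, timeout=5.0):
    # what each timer-driven post-back does: check whether the
    # background job has finished, and only "redirect" (return
    # the result) once it has
    deadline = time.monotonic() + timeout
    while job_id not in results:
        if time.monotonic() > deadline:
            raise TimeoutError("job still running")
        time.sleep(interval)
    return results[job_id]

start_job("job-1")
print(poll("job-1"))
```

Because each poll request completes quickly, no single HTTP connection ever has to stay open for the full 20 minutes, which sidesteps the proxy/browser timeouts entirely.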
Would you clarify how you fixed this problem? It seems I have the same one. I'm generating a .docx report and both browsers (IE10 and Fx37) wait indefinitely for a response (keep on spinning).
But it works great in VS2012's IIS Express and on my localhost IIS7.
The server, though, has IIS 6.1.
And when it works on localhost, it takes just several minutes to get the report.
