My application is developed in classic ASP, but it also uses ASP.NET as I am migrating the application to .NET. It uses SQL Server as the database and is hosted on Windows Server 2003.
Now the problem is that the application works perfectly fine for a long time, but then SQL Server starts giving timeout errors and cannot fulfill any of the requests made. Restarting SQL Server or even IIS doesn't fix it; ultimately I have to restart the whole server every time, and that is the only thing that fixes the problem.
Any idea what might be causing the problem? Just to give a rough idea, the site is used by around 300 people at peak times. I am certainly closing connections everywhere: the end code on each page closes the connection, and if an error occurs before the end of the page, the exception handler closes it. So I am sure that failing to close connections isn't the issue, and the SQL logs show no open connections. Our server, a single box, runs SQL Server, IIS, and iMail (our mail server). Restarting SQL Server did not solve the problem; only restarting Windows Server worked. From Perfmon, I/O usage is quite high. Any suggestions?
Thanks,
At the very least, are you closing the connection to the database once you are done using it in the code? Also, what does your connection string look like? Does it use connection pooling?
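For reference, pooling is on by default with the SqlClient provider; a typical pooled connection string looks something like this (the server and database names here are only placeholders):

Server=MYSERVER;Database=MyAppDb;Integrated Security=SSPI;Pooling=true;

If pooling has been switched off (Pooling=false), or the pool is exhausted because connections are never closed, that can produce exactly the kind of timeouts you describe.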
EDIT: I saw your comments. Are there pending transactions to be committed?
It sounds a lot like there's an unmanaged resource of some kind that you aren't cleaning up properly. We don't have enough information to know exactly what that resource might be, so all we can do is guess.
My first instinct is database connections, except that restarting SQL Server should fix it if that were the case. Next on the list is file handles and threads, so if you do any multithreading work or extra file I/O, that would be something to look at. Remember, in ASP.NET, the using statement (not directive) is your friend.
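A minimal sketch of the pattern (the table name and connection string are placeholders, not from your app):

// Requires: using System.Data.SqlClient;
static int CountOrders(string connectionString)
{
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Orders", conn))   // "Orders" is only an example table
    {
        conn.Open();
        return (int)cmd.ExecuteScalar();
    }
    // Both objects are disposed when the using block exits, even if an exception
    // is thrown, so the connection always goes back to the pool.
}

The same idea applies to readers, file streams, and anything else that implements IDisposable.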
First, you need to talk to your DBA... they can check the number of open connections, table locks, slow-running queries, etc.
My gut reaction is that you aren't closing your connections somewhere, or your connection pool limit is too low.
Are you doing regular database maintenance? Rebuilding / defragmenting indexes, recalculating statistics (unless it's set to do this automatically). Check the size of your transaction log, etc.
Related
Background: We are trying to migrate a large, complex web application written in classic ASP from Windows Server 2003 to Windows Server 2012 R2.
Everything is working without errors, but the new server is extremely slow to serve the ASP pages. With a single user on the site, response times on the order of 2-3 seconds for ASP pages are common. Equally large AJAX calls and JavaScript pages are served and processed in under 100 ms.
When the site receives a moderate level of load (more than approx. 50 users) it becomes unusably slow. Normal load for the production site is several thousand users.
There does not appear to be any correlation with the amount of data returned or with the database connection. We are using SQL Server 2008 R2 for the database.
The Web application server is in a DMZ and uses a hosts file entry for the database server which is in our general intranet. Database queries process extremely fast (within milliseconds).
I've tried profiling the web server memory usage, disk I/O and network usage, and found no evidence of memory leaks. Query profiling shows no lag in processing database calls.
Update after running Failed Request Tracing
I set up tracing to be triggered for classic ASP requests taking longer than 1 second
Maximum time shown by detail logs for each trace from request start to request completion: 140ms
Total request times logged ranged from 1094ms to 1453ms - so the actual request is taking an order of magnitude longer than the events logged by the failed request trace.
What are common fixes for this performance problem?
There are reports of classic ASP sites being slow if the connection string uses the machine name\instance name instead of an IP address, especially if SQL is running on a non-standard port. Maybe try changing the connection string, e.g.:
Server=10.10.10.123,1433\myInstanceName;
Reference: forums.iis.net
I am unable to comment since I do not have enough reputation, so I am asking questions as an answer. I will remove this once I get the answers.
What driver are you using to connect in your connection string?
I did see your comment about the hosts file; can you please try a direct IP in the connection string? Please do not remove the hosts file entry.
Can you try a small new web application in ASP with just a minimal database listing? Is that also slow?
Again, try the same new application without a database connection and time the difference.
Do you have On Error Resume Next in the code? Are you failing on any file/log permission issue that is not getting reported?
Try disabling Microsoft Defender. There are serious slowdowns after an update. They modified vbscript.dll which is what executes ASP code.
If you cannot live without Windows Defender, you can replace vbscript.dll with an older version.
We are trying to diagnose an issue that occurred in our production environment last week. Long story short, the database connection pool seemed to be full of active connections from our ASP.NET 3.5 app that would not clear, even after restarting the application pool and IIS.
The senior DBA said that because the network connections occur at the operating system level, recycling the app and IIS did not sever the actual network connections, so SQL Server left the database connections to continue running, and our app was still unable to reach the database.
In looking up ways to force a database connection pool to reset, I found the static method SqlConnection.ClearAllPools(), with documentation explaining what it does, but little to nothing explaining when to call it. It seems like calling it at the beginning of Application_Start and the end of Application_End in my global.asax.cs is a good safety measure to protect the app from poisoned connection pools, though it would of course incur a performance hit on startup/shutdown times.
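To make it concrete, this is roughly the sketch I had in mind for global.asax.cs (just the idea, not something I have tested under failure conditions):

// Requires: using System.Data.SqlClient;
protected void Application_Start(object sender, EventArgs e)
{
    SqlConnection.ClearAllPools();   // defensive: start the new app domain with empty pools
}

protected void Application_End(object sender, EventArgs e)
{
    SqlConnection.ClearAllPools();   // drop any pooled connections before the app domain unloads
}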
Is what I've described a good practice? Is there a better one? The goal is to allow a simple app restart to reset an app's mangled connection pool without having to restart the OS or the SQL Server service, which would affect many other apps.
Any guidance is much appreciated.
When a process dies, all network connections are always, always, always closed immediately. That's at the TCP level. It has nothing to do with ADO.NET and goes for all applications: kill the browser and all downloads stop; kill the FTP client and all connections are closed immediately.
Also, the connection pool is per process. So clearing it when starting the app is useless because the pool is empty, and clearing it at shutdown is not necessary because all connections will (gracefully) shut down moments later anyway.
Probably, your app is not returning connections to the pool. You must dispose of all connections after use in all cases. If you fail to do that, dangling connections will accumulate for an indefinite amount of time.
Clearing the pool does not free up dangling connections because those appear to be in use. How could ADO.NET tell that you'll never use them again? It can't.
Look at sys.dm_exec_connections to see who is holding connections open. You might increase the ADO.NET pool size as a stop-gap measure. SQL Server can take over 30k connections per instance. You'll normally never saturate that.
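If you do raise the pool size as a stop-gap, it is only a connection string change, something like the following (the server, database, and value of 200 are illustrative; the default Max Pool Size is 100):

Server=MYSERVER;Database=MyAppDb;Integrated Security=SSPI;Max Pool Size=200;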
I've got a number of ASP.Net websites (.Net v3.5) running on a server with a SQL 2000 database backend. For several months, I've been receiving seemingly random InvalidOperationExceptions with the message "Internal connection fatal error". Sometimes there's a few days in between, while other times there are multiple errors per day.
The exception is not limited to one site in particular, though they share business and data access assemblies. The error seems to always be thrown from SqlClient.TdsParser.Run(). It sometimes is thrown from old-school direct SqlCommand.Execute() calls, while other times it is thrown from Linq2Sql code.
I've been assured by the network guys that there are no errors or packets lost on their end. Has anyone else experienced this? Could it be a driver problem? We have been unable as of yet to pinpoint a specific trigger for this exception.
We're running IIS 6 on Windows Server 2003.
After a few months of ignoring this issue, it started to reach a critical mass as traffic gradually increased. Under heavy load, including some crawlers, things got crazy and these errors poured in nonstop.
Through trial and error, we eventually tracked down a handful of SqlCommand or LINQ queries whose SqlConnection wasn't closed immediately after use. Instead, through some sloppy programming originating from a misunderstanding of LINQ connections, the DataContext objects were disposed (and connections closed) only at the end of a request rather than immediately.
Once we refactored these methods to immediately close the connection with a C# "using" block (freeing up that pool for the next request), we received no more errors. While we still don't know the underlying reason that a connection pool would get so mixed up, we were able to cease all errors of this type. This problem was resolved in conjunction with another similar error I posted, found here: Why is my SqlCommand returning a string when it should be an int?
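The shape of the fix was essentially this, as a simplified sketch (the context type, table, and query are stand-ins for our real ones):

// Before: the DataContext, and the connection it held, lived until the end of the request.
// After: scope it with "using" so the connection is returned to the pool immediately.
// Requires: using System.Collections.Generic; using System.Linq;
static List<Order> GetOrdersForCustomer(string connectionString, int customerId)
{
    using (var db = new MyDataContext(connectionString))   // MyDataContext stands in for our generated LINQ to SQL context
    {
        return db.Orders
                 .Where(o => o.CustomerId == customerId)
                 .ToList();   // materialize the results before the context is disposed
    }
}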
Sounds like the database connection is getting dropped or timing out.
We recently had similar issues moving to IIS 6 from IIS 5, connecting to SQL 2000. Our issue was solved by increasing the number of ephemeral ports available.
Look at the usage of ephemeral ports by the IIS server. The default maximum number of ports available is normally 4000. You might want to consider increasing this if the sites on your server are particularly busy or your application is making a lot of database calls.
You can monitor these first to see if you are going over the maximum limit.
Search the Microsoft Knowledge Base for "MaxUserPort" and "TcpTimedWaitDelay" and make the necessary registry changes. Make sure you back up the registry or snapshot the server before making the changes. You will need to reboot for the changes to take effect.
You should double-check that your database and recordset connections are being closed after use. Not closing them will use up this port range unnecessarily.
Check the efficiency of your stored procedures anyway, as they might be taking longer than they need to.
"If you rapidly open and close 4000 sockets in less than four minutes, you will reach the default maximum setting for client anonymous ports, and new socket connection attempts fail until the existing set of TIME_WAIT sockets times out." - from http://support.microsoft.com/kb/328476
Check your server's LOG folder (\program files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG or similar) for files named SqlDump*.mdmp and SqlDump*.txt. If you do find any you'll have to take it to Product Support.
I was creating a new EF Core project and was trying to create the database on an external Linux server instead of a Windows Server or local one. After hours of searching I found out that I was connecting to MySQL instead of Microsoft SQL Server.
I found it weird that everyone was using port 1433 instead of the usual 3306. So to fix my 'Internal connection fatal error' I had to set up a Docker instance of SQL Server bound to its default port of 1433.
It literally was that simple. On Docker Hub, look for "microsoft-mssql-server" and run the image as neatly described in its documentation. Everything works now and I am able to push my database from my EF Core project to the external server.
I have an application with a file receive location. After the host instance has been running for a few hours, the receive location fails to identify new files dropped into the folder that it is monitoring. It doesn't forget about them altogether; it's just that performance grinds to a crawl. The receive location is configured to poll the target folder every 60 seconds, but after the host instance has been running for an hour or so, it seems that the target folder is being polled only every thirty minutes. If I restart the host instance, the files waiting in the target folder are collected right away and performance is fine for the next hour or so.
The same application runs fine in a different environment.
There are no obvious entries in the event log related to the problem.
All the BizTalk SQL jobs are running fine except for Backup BizTalk Server (BizTalkMgmtDb).
Any suggestions gratefully received.
Thanks
Rob
Here are some additional tools which may help you identify and diagnose BizTalk database issues.
BizTalk MsgBox Viewer
Here is a tool to repair identified errors:
Terminator
Use at your own risk... read the blogs and docs. Start with the MsgBox Viewer and let us know your results.
Without more details, the biggest tell is that your Backup job is failing. If the backup job is failing, it may not be properly configured. If it is properly configured and still failing, then you've got other issues. Can you give us some more information about your BizTalk install?
What version are you running?
What are your database sizes?
What are your purge and archive settings like?
Are there any long-running blocks in your SQL Server DB coming from BizTalk?
Another thing to consider is the user accounts the send, receive and orchestration hosts are running under. Please check the BizTalk Administration Console. If they are all running under the same account, sometimes the orchestrations can starve the send and receive processes of CPU time. I believe priority is given to orchestrations, then receive, then send. Even if you are just developing, it is useful to use separate accounts for this. It also improves security.
The Wrox BizTalk Server 2006 book will also supply tuning advice.
What other things are going on with the server? Is BizTalk pegged otherwise or is it idle?
You mention that the solution does not have any problems in another environment, so it's likely that there is a configuration problem.
Check the following:
** On SQL Server, set an upper memory limit for SQL Server. By default, SQL Server uses whatever it can get and then hangs onto it, so set a reasonable limit so that your system can operate without spending a lot of time paging memory to and from your hard drive(s).
** Ensure that you have available disk space - maybe you are running low - this can lead to all kinds of strange problems.
** Try to split up the system's paging file among its physical drives (if you have more than one drive on the system). Also consider using a faster drive, or if you have lots of cash lying around, get a SAN.
** In BizTalk, is tracking enabled? If so, are you also tracking message bodies? Disable tracking or message body tracking and see if there is a difference.
** Start Performance Monitor and watch the following counters when running your solution:
Object: BizTalk Messaging
Instance: (select the receiving host) %%
Counter: Documents Received/Sec
Object: BizTalk Messaging
Instance: (select the transmitting host) %%
Counter: Documents Sent/Sec
Object: XLANG/s Orchestrations
Instance: (select the processing host) %%
Counter: Orchestrations Completed/Sec.
%% You may have only one host, so just use it. Since BizTalk configurations vary, I am using generic names for hosts.
The preceding counters monitor the most basic aspects of your server, but may help to narrow down places to look further. You can, of course, add CPU and Memory too. If you have time (days...maybe weeks) you could monitor for processes that allocate memory and never release it. Use the following counter...
Object: Memory
Counter: Pool Nonpaged Bytes
A slow, steady rise in this counter indicates that a process is not releasing memory, which affects everything on the system.
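If you would rather sample that counter from a small console app instead of the Perfmon UI, here is a minimal sketch (it assumes the standard Windows "Memory" counter category is available on the box):

// Requires a reference to System.dll for System.Diagnostics.PerformanceCounter.
using System;
using System.Diagnostics;
using System.Threading;

class PoolNonpagedMonitor
{
    static void Main()
    {
        // Same counter as above: Memory \ Pool Nonpaged Bytes
        var counter = new PerformanceCounter("Memory", "Pool Nonpaged Bytes");
        while (true)
        {
            Console.WriteLine("{0:u}  Pool Nonpaged Bytes: {1:N0}", DateTime.UtcNow, counter.NextValue());
            Thread.Sleep(60000);   // sample once a minute; a steady climb over days suggests a leak
        }
    }
}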
Let us know how things turn out!
I had the same problem: when my orchestration was idle for some time, it took a long time to process the first message. An article by EvYoung helped me solve this problem.
"This is caused by application domain unloading within the BizTalk host process. If an AppDomain is shutdown after idle, the next message that comes needs to wait for the Orchestration to compile again. Depending on the complexity of your design, this can be a noticeable wait. To prevent this in low latency requirement scenario, you can modify the BTSNTSVC.EXE.config file and set SecondsIdleBeforeShutdown property to -1. This will prevent AppDomain shutdown due to idle."
You can find the article in here:
http://blogs.msdn.com/b/biztalkcpr/archive/2008/05/08/thoughts-on-orchestration-performance.aspx
It took me too long to respond, but I thought it might help someone. Cheers :)
Some good suggestions from others. I will add:
Do you have any custom receive pipeline components on the receive location? If so, perhaps one is leaking memory or calling some external component (e.g. a database) that is taking a long time?
How big are the files you are receiving?
On the File transport properties of your receive location, turn "file renaming" on; do the files get renamed within 60 seconds?
While tracing the active connections on my DB, I found that sometimes the number of connections exceeds 100. Is that normal?
After a few minutes it returns to 20 or 25 active connections.
More details about my problem:
Traffic on the site is around 200 visitors per day.
Why am I asking? Because the default Max Pool Size in the ASP.NET connection string is 100.
Also, I am using connection pooling in the website on IIS.
That really depends on your site and your traffic. I've seen a site peak at over 350 active connections to SQL during its busiest time. That was for roughly 7,000 concurrent web users, on two web servers, plus various backend processes.
Edit
Some additional information that we need to give you a better answer:
How many web processes hit your SQL server? For example, are you using web gardens? Do you have multiple servers, and if so, how many? This is important because then you can calculate how many connections you can have by figuring out how many worker threads per process you have configured. Assume the worst case: every thread running, each adding a connection to the pool.
Are you using connection pooling? If so, you're going to see the connections stick around after the user's request ends. By default it's enabled.
How many concurrent users do you have?
But I think you're going after this wrong: you're having an issue with no free connections available in your pool. The first thing I'd look for is any leaked connections (connections being held open longer than they should be). For example, passing a data reader up to the web page could be a sign of this.
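To make the data reader point concrete, this is the kind of refactoring I mean, as a sketch with made-up table and column names: read everything inside the using blocks and return plain data, instead of handing an open reader up to the page.

// Requires: using System.Collections.Generic; using System.Data.SqlClient;
static List<string> GetCustomerNames(string connectionString)
{
    var names = new List<string>();
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("SELECT Name FROM Customers", conn))
    {
        conn.Open();
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read())
                names.Add(reader.GetString(0));
        }
    }   // the connection goes back to the pool here, not at the end of the request
    return names;
}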
The next thing is to evaluate the default settings. Maybe you should run a web garden, which should give you more connections, or increase the number of connections available.
The last thing I would do is try to optimize queries like in your last question. Let's say you cut those queries in half; all you've done is bought yourself more time until more users come onto the system, and you're right back here, only this time you might not be able to optimize that query yet again.
You're leaving out some details, making it difficult to answer correctly, but...
It depends, really. If you're not using connection pooling, then each time a page that requires database access is hit, a new connection is going to be opened. So sure, it could be perfectly normal.
I would also look into caching. Cache pages, cache query results, etc. You might be surprised how many times you go back to the database to get a list of US States...
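For instance, a small sketch of caching that list with the ASP.NET cache so it is only queried once per hour (GetStatesFromDatabase is a placeholder for your own data access call):

// Requires: using System; using System.Collections.Generic; using System.Web; using System.Web.Caching;
static List<string> GetUsStates()
{
    var states = HttpRuntime.Cache["UsStates"] as List<string>;
    if (states == null)
    {
        states = GetStatesFromDatabase();   // placeholder: your real database call
        HttpRuntime.Cache.Insert("UsStates", states, null,
            DateTime.UtcNow.AddHours(1), Cache.NoSlidingExpiration);   // absolute 1-hour expiry
    }
    return states;
}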