I'm trying to run a trace with SQL Server Profiler against an ASP.NET website application running in the Visual Studio development server.
However, whenever the trace is running, all DB requests from the web application fail with the error message:
"Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding."
If I stop the trace, the web application works again.
Any input on this is appreciated.
You just need to increase the CommandTimeout on the SQL command while you are debugging, and increase the application pool timeout values as well.
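For example, a minimal sketch (the connection string and query are placeholders; the default CommandTimeout is 30 seconds):

using System.Data.SqlClient;

// Illustrative: raise the command timeout while the profiler trace is running.
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("EXEC dbo.MyLongQuery", conn))  // placeholder query
{
    cmd.CommandTimeout = 120;  // seconds; default is 30
    conn.Open();
    cmd.ExecuteNonQuery();
}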
Once you get around the profiler timeout issue, you should look at tuning your database (if you haven't already, although it doesn't sound like it).
I had a similar issue recently, and it turned out to be IO blocking due to high reads on certain queries/statements. Getting the profiler to run on top of an already sluggish database was difficult. We had to run the profiler in ten-minute sections at quieter times, although this does not help to identify the biggest issues under the heaviest loads.
Once we got the profiler to capture data (on SQL Server 2005) and implemented the indexes and statistics recommended by the Database Tuning Advisor (DTA), the database was running at expected performance levels again.
I would recommend you read this free ebook on SQL Server Profiler:
http://www.red-gate.com/products/SQL_Response/offers/mastering_sql_profiler_ebook.htm
It details how to run lightweight traces that will help the DTA recommend indexes and statistics that will improve the performance of your database and also identify some slow running queries that could be located in your code.
The trace you are running could be tipping your database over the edge, so running it in 10-20 minute sections might be more feasible.
If you have IO blocking issues, they affect SQL Server as a whole, and Management Studio will seem unresponsive at times.
Is it possible that you're accidentally stuck in single-user mode?
Try this:
ALTER DATABASE [database name] SET MULTI_USER;
Background: We are trying to migrate a large, complex web application written in classic ASP from Windows Server 2003 to Windows Server 2012 R2.
Everything is working without errors, but the new server is extremely slow to serve the ASP pages. With a single user on the site, response times in the order of 2-3 seconds for ASP pages are common. Equally large AJAX calls and JavaScript pages are served and process in under 100ms.
When the site receives a moderate level of load (more than approx. 50 users) it becomes unusably slow. Normal load for the production site is several thousand users.
There does not appear to be a correlation between the amount of data returned or the database connection. We are using SQL Server 2008 R2 for the database.
The Web application server is in a DMZ and uses a hosts file entry for the database server which is in our general intranet. Database queries process extremely fast (within milliseconds).
I've tried profiling the web server memory usage, disk I/O and network usage, and found no evidence of memory leaks. Query profiling shows no lag in processing database calls.
Update after running Failed Request Tracing
I set up tracing to be triggered for classic ASP requests taking longer than 1 second.
Maximum time shown by the detail logs for each trace, from request start to request completion: 140ms.
Total request times logged ranged from 1094ms to 1453ms, so the actual request is taking an order of magnitude longer than the events logged by the failed request trace.
What are common fixes for this performance problem?
There are reports of classic ASP sites being slow if the connection string uses the machine name\instance name instead of an IP address, especially if SQL Server is running on a non-standard port. Maybe try changing the connection string, e.g. (the port number goes after the instance name):
Server=10.10.10.123\myInstanceName,1433;
Reference: forums.iis.net
I am unable to comment since I do not have enough reputation, so I am asking questions as an answer. I will remove this once I get the answers.
What driver are you using to connect in your connection string?
I saw your comment about the hosts file; can you please try a direct IP in the connection string? Please do not remove the hosts file entry.
Can you try a small new web application in ASP with just a minimal database listing? Is that also slow?
Then try the same new application without a database connection and time the difference.
Do you have On Error Resume Next in the code? Are you failing on any file/log permission issue that is not getting reported?
Try disabling Microsoft Defender. There have been serious slowdowns after an update; they modified vbscript.dll, which is what executes ASP code.
If you cannot live without Windows Defender, you can replace vbscript.dll with an older version.
I have a BizTalk application which loops over an XML document and sends data to a SQL Server database. The orchestration works fine on the DEV machine throughout the process and is consistent. But if I process the same file on the QA machine, it starts at the same speed and then the performance keeps degrading. There is no issue with the database objects, and the throttling settings are the same as on DEV. I restarted the machine. Not sure why QA is reacting this way for this application.
What are the areas to be checked?
There are various factors which can cause this and affect your solution's performance overall:
- Is QA a shared environment, i.e. are there other solutions on it which may cause the slowdown?
- If you are sharing the host on which the orchestration is running, that host might be throttling for various reasons, such as memory pressure. Use performance counters to monitor the host throttling state (see the sketch after this list).
- You may have too many persistence points in the orchestration, since you are looping and sending messages to the SQL database in a loop. If you are using a Send shape, it will create a persistence point per send in the loop, which will degrade performance considerably.
- Isolate the issue, i.e. whether the orchestration is running slow or the send to SQL is taking time.
- Tracking is turned on but the DTA jobs are not running.
- Message cleanup jobs are not running as expected in QA.
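To watch the throttling state, here is a minimal sketch using the BizTalk Message Agent performance counters (the host instance name "BizTalkServerApplication" is a placeholder for your host; a non-zero value means the host is throttling):

using System;
using System.Diagnostics;
using System.Threading;

class ThrottlingMonitor
{
    static void Main()
    {
        // The instance name is the BizTalk host name - adjust for your environment.
        using (var delivery = new PerformanceCounter("BizTalk:Message Agent",
            "Message delivery throttling state", "BizTalkServerApplication", true))
        using (var publishing = new PerformanceCounter("BizTalk:Message Agent",
            "Message publishing throttling state", "BizTalkServerApplication", true))
        {
            while (true)
            {
                // 0 = not throttling; other values identify the throttling condition.
                Console.WriteLine("delivery={0} publishing={1}",
                    delivery.NextValue(), publishing.NextValue());
                Thread.Sleep(5000);
            }
        }
    }
}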
I wrote a blog about how to use SQL Server Profiler to capture the RPC call from BizTalk to SQL Server. You could isolate whether SQL is causing the issue that way; capture the RPC call on DEV or QA, and then try running just the stored procedure on QA. If it doesn't run as quickly as on DEV, that's your problem. If it does, look at your BizTalk artifacts.
Here's the blog: http://blog.tallan.com/2015/01/09/capturing-and-debugging-a-sql-stored-procedure-call-from-biztalk/
The BizTalk host throttled because DatabaseSize exceeded the configured throttling limit. Also, the SQL Server Agent was not running on the server, so the purge processes did not run. This looks to have built up the database size over time until BizTalk throttled the application due to low resources.
I have an IIS Web Server that hosts 400 web applications (distributed across 30 application pools). They are both ASP.NET applications and WCF Services end points. The server has 32GB of RAM and is usually running fast; although it's running at 95% memory usage. Worker processes each take between 500MB and 1.5GB of RAM.
I also have another box running SQL Server. That one has plenty of free memory.
Sometimes, the web server starts throwing SQL timeout exceptions: a few per minute at first, rapidly increasing to hundreds per minute, effectively taking the server down. This problem affects applications in all pools. Some requests still complete, but most of them don't. While this happens, the CPU usage on the server is around 30% (which is the normal load on that box).
While this is happening, we can still use SQL Server Management Studio (from the IIS Server) to execute requests successfully (and fast).
The fix is to restart IIS. And then everything goes back to normal until the next time.
Because the server is running with very low memory, I feel like this is the cause. But I cannot explain the relationship between low memory and sudden bursts of SQL Timeout exceptions.
Any idea?
Memory pressure can trigger paging and garbage collection. Both introduce latency which would not be present otherwise.
GC'ing 32GB of data can take seconds. Why would all app processes GC at the same time? Because at about 95% memory utilization, Windows signals a "low memory" event that the CLR listens for; the CLR will then try to release memory to help other processes.
If the applications get into a paging frenzy that would also explain huge delays in normal execution.
This is just guessing, though. You can try proving it by looking at the "Hard page faults/sec" counter. There also must be a counter for "full GC" or "Gen 2 GC".
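If you want to test the GC theory from code, a rough sketch like this logs Gen 2 collection counts over time for the current process (the polling interval is arbitrary; for the w3wp.exe processes themselves you would read the ".NET CLR Memory\# Gen 2 Collections" counter instead):

using System;
using System.Threading;

class GcWatch
{
    static void Main()
    {
        int last = GC.CollectionCount(2);
        while (true)
        {
            int gen2 = GC.CollectionCount(2);
            if (gen2 != last)
                Console.WriteLine("{0:HH:mm:ss} Gen 2 collections so far: {1}", DateTime.Now, gen2);
            last = gen2;
            Thread.Sleep(1000);
        }
    }
}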
The fix would be to run with a bigger margin below the physical memory limit.
The first problem is to discover where the timeout is happening. Can you tell from the stack trace if the timeout is happening when executing a request against the database, or when connecting to the database? (Or even connecting to the web server?)
Timeouts executing database requests can have a variety of causes. The problem might be in the database, with blocking processes, database maintenance (also locking), deadlocks, etc. When the apps are running slowly, do you see a lot of entries in sys.dm_exec_requests, and if so, what are their wait types?
Even if you can run SQL in the query window while the web server is timing out, that doesn't mean there isn't massive blocking or deadlocking going on.
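A quick way to snapshot what the server is waiting on while the timeouts are happening is to query sys.dm_exec_requests from a small console app; a sketch (the connection string is a placeholder):

using System;
using System.Data.SqlClient;

class WaitSnapshot
{
    static void Main()
    {
        using (var conn = new SqlConnection("Server=dbserver;Database=master;Integrated Security=true"))
        using (var cmd = new SqlCommand(
            "SELECT session_id, status, wait_type, wait_time, blocking_session_id " +
            "FROM sys.dm_exec_requests WHERE session_id > 50", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine("{0} {1} {2} {3}ms blocked by {4}",
                        reader["session_id"], reader["status"], reader["wait_type"],
                        reader["wait_time"], reader["blocking_session_id"]);
            }
        }
    }
}

Sessions with a non-zero blocking_session_id point at blocking rather than slow queries.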
If it is a timeout connecting to the database, then it is possible the ADO connection pools are overwhelmed and not getting cleaned up, or the database has a connection limit, and the web services are timing out waiting for a connection.
One of the best ways to find out what is going on is to capture a memory dump of the w3wp.exe process and analyze it. Even if you aren't adept at a debugger like WinDbg, Microsoft's DebugDiag tool can produce some nice reports with helpful information.
SqlCommand.CommandTimeout
This property is the cumulative time-out for all network reads during command execution or processing of the results. A time-out can still occur after the first row is returned, and does not include user processing time, only network read time.
It is a client-based timeout. If work is getting queued due to memory constraints, that could cause a timeout.
Are you retrieving a lot of data from these queries?
If some queries return a lot of data, consider breaking them up and giving the user Next and Prior buttons.
Have you considered async, like BeginExecuteReader?
The advantage is no timeout, and it does not tie up the calling thread.
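Note: on .NET Framework versions before 4.5, the Begin/End async methods on SqlCommand need "Asynchronous Processing=true" in the connection string, if I remember correctly.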
isExecutingFTSindexWordOnce = true;
sqlCmdFTSindexWordOnce.BeginExecuteNonQuery(callbackFTSindexWordOnce, sqlCmdFTSindexWordOnce);
// isExecutingFTSindexWordOnce set to false in the callback
Debug.WriteLine("Calling thread active");
But I agree with your comment about how to respond to the request, since the answer does not come back on the calling thread.
Sorry, I am used to WPF, where I just update a public property in the callback.
My application is developed in classic ASP, but it also uses ASP.NET, as I am migrating the application to .NET. It uses SQL Server as its database and is hosted on Windows Server 2003.
Now the problem is that the application continues to work perfectly fine for a long time, but then after some time SQL Server gives a timeout error and cannot fulfill any of the requests made. It doesn't get fixed even when I restart SQL Server or IIS; ultimately I have to restart the server every time, which is the only thing that fixes the problem.
Any idea what might be causing the problem? Just to give a rough idea, the site is used by around 300 people at peak times.
I am certainly closing the connection everywhere: the end code on each page closes the connection, and if an error occurs before the end of the page, the exception handler closes it. So I am sure that closing the connection isn't the issue, and the SQL logs show no open connections. Our server, a single box, runs SQL Server, IIS, and iMail (our mail server). Restarting SQL Server did not solve the problem; only restarting Windows Server worked. From Perfmon, IO usage is quite high. Any suggestions?
Thanks,
At the very least, are you closing the connection to the database once you are done using it in the code? Also, what does your connection string look like? Does it use connection pooling?
EDIT: I saw your comments. Are there pending transactions to be committed?
It sounds a lot like there's an unmanaged resource of some kind that you aren't cleaning up properly. We don't have enough information to know exactly what that resource might be, so all we can do is guess.
My first instinct is database connections, except that restarting SQL Server should fix it if that were the case. Next on the list are file handles and threads, so if you do any multithreading work or extra file IO, that would be something to look at. Remember, in ASP.NET, the using statement (not the directive) is your friend.
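For example, a minimal sketch of the pattern (the connection string and query are placeholders):

using System.Data.SqlClient;

// The connection is closed and returned to the pool when the block exits,
// even if an exception is thrown.
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Orders", conn))  // placeholder query
{
    conn.Open();
    int count = (int)cmd.ExecuteScalar();
}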
First, you need to talk to your DBA... they can check the number of open connections, table locks, slow-running queries, etc.
My gut reaction is that you aren't closing your connections somewhere, or your connection pool limit is too low.
Are you doing regular database maintenance? Rebuilding / defragmenting indexes, recalculating statistics (unless it's set to do this automatically). Check the size of your transaction log, etc.
I've got a number of ASP.Net websites (.Net v3.5) running on a server with a SQL 2000 database backend. For several months, I've been receiving seemingly random InvalidOperationExceptions with the message "Internal connection fatal error". Sometimes there's a few days in between, while other times there are multiple errors per day.
The exception is not limited to one site in particular, though they share business and data access assemblies. The error seems to always be thrown from SqlClient.TdsParser.Run(). It sometimes is thrown from old-school direct SqlCommand.Execute() calls, while other times it is thrown from Linq2Sql code.
I've been assured by the network guys that there are no errors or packets lost on their end. Has anyone else experienced this? Could it be a driver problem? We have been unable as of yet to pinpoint a specific trigger for this exception.
We're running IIS 6 on Windows Server 2003.
After a few months of ignoring this issue, it started to reach a critical mass as traffic gradually increased. Under heavy load, including some crawlers, things got crazy and these errors poured in nonstop.
Through trial and error, we eventually tracked down a handful of SqlCommand or LINQ queries whose SqlConnection wasn't closed immediately after use. Instead, through some sloppy programming originating from a misunderstanding of LINQ connections, the DataContext objects were disposed (and connections closed) only at the end of a request rather than immediately.
Once we refactored these methods to immediately close the connection with a C# "using" block (freeing up that pool for the next request), we received no more errors. While we still don't know the underlying reason that a connection pool would get so mixed up, we were able to cease all errors of this type. This problem was resolved in conjunction with another similar error I posted, found here: Why is my SqlCommand returning a string when it should be an int?
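For reference, the refactored shape looked roughly like this (the DataContext subclass and query here are illustrative, not our actual code):

// Dispose the DataContext, and with it the underlying connection, as soon as
// the results are materialized, instead of at the end of the request.
private List<string> GetCustomerNames(string connectionString)
{
    using (var db = new MyDataContext(connectionString))  // MyDataContext is illustrative
    {
        return db.Customers.Select(c => c.Name).ToList();  // materialize before disposing
    }
}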
Sounds like the database connection is getting dropped or timing out.
We recently had similar issues moving from IIS 5 to IIS 6, connecting to SQL 2000. Our issue was solved by increasing the number of ephemeral ports available.
Look at the usage of ephemeral ports by the IIS server. The default maximum number of ports available is normally 4000. You might want to consider increasing this if the sites on your server are particularly busy or your application is making a lot of database calls.
You can monitor these first to see if you are going over the limit.
Search the Microsoft Knowledge Base for "MaxUserPort" and "TcpTimedWaitDelay" and make the necessary registry changes. Make sure you back up the registry or snapshot the server before making the changes. You will need to reboot for the changes to take effect.
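As a sketch, the two values live under the Tcpip parameters key and could be set like this (the values shown are examples only, not recommendations):

using Microsoft.Win32;

class PortTuning
{
    static void Main()
    {
        const string key = @"HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters";
        // Example values: raise the ephemeral port ceiling and shorten TIME_WAIT (seconds).
        Registry.SetValue(key, "MaxUserPort", 65534, RegistryValueKind.DWord);
        Registry.SetValue(key, "TcpTimedWaitDelay", 30, RegistryValueKind.DWord);
    }
}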
You should double-check that your database and recordset connections are being closed after use; not closing them will use up this port range unnecessarily.
Check the efficiency of your stored procedures anyway, as they might be taking longer than they need to.
"If you rapidly open and close 4000 sockets in less than four minutes, you will reach the default maximum setting for client anonymous ports, and new socket connection attempts fail until the existing set of TIME_WAIT sockets times out." - from http://support.microsoft.com/kb/328476
Check your server's LOG folder (\program files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG or similar) for files named SqlDump*.mdmp and SqlDump*.txt. If you do find any you'll have to take it to Product Support.
I was creating a new EF Core project and was trying to create the database on an external Linux server instead of a Windows server or a local one. After hours of searching, I found out that I was using MySQL instead of Microsoft SQL Server.
I found it weird that everyone was using port 1433 instead of the usual 3306. So to fix my 'Internal connection fatal error' I had to set up a Docker instance of SQL Server bound to its default port of 1433.
It literally was that simple. In the Docker repo, look for "microsoft-mssql-server" and run the image as neatly described in its description. Everything works now, and I am able to push my database from my EF Core project to the external server.
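For reference, a minimal sketch of pointing an EF Core DbContext at the containerized SQL Server (the host name, database, and credentials are placeholders; note the port is 1433, not MySQL's 3306):

using Microsoft.EntityFrameworkCore;

public class AppDbContext : DbContext
{
    protected override void OnConfiguring(DbContextOptionsBuilder options)
    {
        // Placeholder server and credentials.
        options.UseSqlServer(
            "Server=my-linux-host,1433;Database=AppDb;User Id=sa;Password=<password>;");
    }
}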