SQL connection timeout - asp.net

We have been facing weird connection timeouts on one of our websites.
Our environment is composed of an IIS 7 web server (running on Windows Server 2008 R2 Standard Edition) and an SQL Server 2008 database server.
When debugging the website functionality that provokes the timeout, we notice that the connection itself takes milliseconds to complete, but the SqlCommand, which invokes a stored procedure on the database, hangs for several minutes during execution, then raises the timeout exception.
On the other hand, when we run the stored procedure directly on the database, it takes only 2 seconds to correctly finish execution.
We already tried the following:
Modified SqlCommand timeout on the website code (see the sketch after this list)
Modified execution timeout on the web.config file
Modified sessionState timeout on the web.config file
Modified authorization cookie timeout on the web.config file
Modified the connection timeout on the website properties on IIS
Modified the application pool shutdown time limit on IIS
Checked the application pool idle timeout on IIS
Checked the execution timeout on the SQL Server properties (it's set to 0, unlimited)
Tested the stored procedure directly on the database with other parameters
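For reference on the first item: the timeout that governs how long the query itself may run is CommandTimeout on the SqlCommand; the connection-level timeouts only cover opening the connection. A minimal sketch of that kind of change, with placeholder names and values:

using System.Data;
using System.Data.SqlClient;

// Placeholder connection string and procedure name.
string connectionString = "Server=dbserver;Database=MyDb;Integrated Security=true";

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("dbo.MyStoredProc", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    // Seconds the command may run before a timeout SqlException; default is 30.
    // SqlConnection.ConnectionTimeout only applies to Open(), not to execution.
    cmd.CommandTimeout = 300;

    conn.Open();
    cmd.ExecuteNonQuery();
}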
We appreciate any help.
Nirav

I've had this same issue with a stored procedure that was a search feature for the users. I tried everything, including ARITHABORT etc. The SP joined many tables, as the users could search on anything. Many of the parameters for the SP were optional, meaning they had a default value of NULL in the SP. Nothing worked.
I "fixed" it by making sure my ADO.NET code only added parameters where the user selected a value. The SP went from many minutes to seconds in execution time. I'm assuming that SQL Server handled the execution plan better when only parameters with actual values were passed to the SP.
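A minimal sketch of that approach, with made-up procedure, parameter, and variable names (the real ones will differ): only the criteria the user actually filled in become SqlParameters, and the procedure's NULL defaults cover the rest.

using System;
using System.Data;
using System.Data.SqlClient;

// Hedged sketch; "dbo.SearchOrders", "@CustomerName", and "@OrderDate" are placeholders.
string connectionString = "Server=dbserver;Database=Sales;Integrated Security=true";
string customerName = "Smith";   // value the user typed, or null/empty if left blank
DateTime? orderDate = null;      // left blank by the user in this example

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("dbo.SearchOrders", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;

    // Only add parameters that carry an actual value.
    if (!string.IsNullOrEmpty(customerName))
        cmd.Parameters.AddWithValue("@CustomerName", customerName);
    if (orderDate.HasValue)
        cmd.Parameters.AddWithValue("@OrderDate", orderDate.Value);

    conn.Open();
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            // map each row to your result type here
        }
    }
}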
Note that this was for SQL Server 2000.

A few years ago I had a similar problem when migrating an app from SQL2000 to SQL2008.
I added OPTION (RECOMPILE) to the end of all the stored procs in the database that was having problems. In my case it had to do with parameters that were very different between calls to the stored proc. Forcing the proc to recompile will force SQL to come up with a new execution plan instead of trying to use a cached version that may be sub-optimal for the new params.
And in case you haven't done it already, check your indexes. Nothing can kill db performance like the lack of a badly needed index. Here is a good link (http://sqlfool.com/2009/04/a-look-at-missing-indexes/) to a query that will display missing indexes.

Super-super late suggestion, but it might come in handy for others. A typical issue I have seen, particularly applicable to Java clients, is the following:
You have a query which takes a string as a parameter. That string is a search criterion on a varchar(N) column in the database. However, you submit the string param in the query as Unicode (nvarchar(N)). This results in a full-table scan and conversion of every single field value to Unicode for proper comparison, to avoid potential data loss (if SQL Server converted the input param to non-Unicode, it might lose information).
Simple test: run the query twice (for the sake of simplicity, I'm assuming it's an SP):
exec spWhatever 'input'
exec spWhatever N'input'
See how they behave. Also, you may want to take a look at the Recent Expensive Queries section of Activity Monitor in SSMS and check the execution plan to clarify the situation.
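If the calling code is ADO.NET, one way to avoid the implicit conversion is to declare the parameter explicitly as varchar rather than letting the provider default to nvarchar. A hedged sketch, where the connection string, parameter name, and length are placeholders:

using System.Data;
using System.Data.SqlClient;

string connectionString = "Server=dbserver;Database=MyDb;Integrated Security=true";  // placeholder

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("spWhatever", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    // AddWithValue would send the string as nvarchar; typing it as VarChar
    // matches the varchar(N) column and avoids converting every row.
    cmd.Parameters.Add("@SearchText", SqlDbType.VarChar, 50).Value = "input";

    conn.Open();
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            // consume results
        }
    }
}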
Cheers,
Erik

Related

BizTalk 2013 R2 WCF-SQL adapter having random issues

Ahoy,
We have two BizTalk applications in BizTalk 2013 R2 that seem to be having random issues. Both applications follow the same process.
Pull data from a WCF endpoint.
Delete data from a database via a stored procedure.
Insert the new data that was pulled, via a WCF-SQL call.
Both applications worked great during our testing for quite a while. But, over time, we've had a few issues crop up with the insert via the WCF-SQL call.
A fatal error occurred while reading the input stream from the network. The session will be terminated (input error: 64, output error: 0).
This error showed up in the SQL Server logs. We had this one for about a day and then it just went away. Everything else continued to work fine on that target SQL Server. It was only BizTalk that had issues.
Our latest error is where the request to the WCF-SQL insert happens (the data is actually inserted), but there is never a response. So, the Send Port continues to try to send for its retries and the Orchestration just dehydrates.
We tinkered with every setting throughout the application to try to solve this, but only deleting the application and redeploying fixed it (for now at least).
So, I guess my question is whether or not anyone else has had these sorts of issues with BizTalk having "random" errors like this where it'll work great and then go downhill like we've seen?
I'd really prefer to have something stable that is minimal maintenance. This is an enterprise product after all.
I've had issues similar to this happen when moving between environments where there were data differences, e.g. a column full of NULLs in QA and a column full of actual data in PROD. There are a few things you can try.
Use SQL Server Profiler to capture the RPC call coming from BizTalk, and try running it directly on the SQL Server BizTalk is calling remotely (wrap it in a transaction you roll back at the end if this is production). Does it take longer than expected to run? Debug the procedure to find the pain points and optimize if possible. I've written a blog about how to do this here: http://blog.tallan.com/2015/01/09/capturing-and-debugging-a-sql-stored-procedure-call-from-biztalk/
Up the timeout settings in the binding configuration for the send port to ensure that it is not timing out before SQL can finish doing its work.
Up the System.Transactions timeout in Machine.config to ensure that MSDTC isn't causing issues: http://blogs.msdn.com/b/madhuponduru/archive/2005/12/16/how-to-change-system-transactions-timeout.aspx and http://blog.brandt-lassen.dk/2012/11/overriding-default-10-minutes.html
If possible, do a data compare between the TEST/QA and PROD databases. Look for significant differences, especially in columns that you are using in JOIN conditions and WHERE clauses.

Inexplicable query timeout with ASP.NET and SQL Server

Sometimes, my web application is throwing timeout exceptions when trying to execute a specific stored procedure. From that moment on, the stored procedure will never execute again until I reboot the database server.
The strange thing is that I can execute the stored procedure manually from within SQL Server Management Studio, and the execution time is correct (about 0.2 seconds).
But if the same exact call is made from the webserver... Timeout. How is that possible?
I am using SQL Server 2012 and I'm mapping the stored procedure in my code using Linq2Sql.
Additional information: I have tried running the "detect blocking" sql from this blog post: http://blog.sqlauthority.com/2010/10/06/sql-server-quickest-way-to-identify-blocking-query-and-resolution-dirty-solution/ but no rows are returned.
Possibly a bad query plan is cached in SQL Server. You could try recompiling the stored procedure with the WITH RECOMPILE option. With this option the plan is not cached and the procedure is recompiled every time it is called.
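If altering the procedure is not convenient, a related option (my suggestion, not part of the answer above) is to mark it for a one-off recompile with sp_recompile, so the next execution builds a fresh plan. A minimal sketch; the connection string and procedure name are placeholders:

using System.Data;
using System.Data.SqlClient;

string connectionString = "Server=dbserver;Database=MyDb;Integrated Security=true";  // placeholder

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("sp_recompile", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.AddWithValue("@objname", "dbo.MyProc");  // placeholder procedure name
    conn.Open();
    // Marks the procedure so its plan is recompiled on the next execution.
    cmd.ExecuteNonQuery();
}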

Asp.net SQL backed sessions

If I have sessions backed by SQL Server and run a command sequence like
HttpContext.Current.Session["user"]
HttpContext.Current.Session["user"]
Will this make 2 requests to the session DB table to fetch the value, or does asp.net do anything special with the Session object to prevent multiple DB hits?
Definitely yes, ASP.NET does something special here.
I have SQL Server session state set up and I ran Profiler on it, and could clearly see optimized DB calls.
In fact there are optimizations for getting multiple session items in one shot.
For example, the code below also results in a SINGLE optimized set of calls (note: it's not one plain DB call per session item, but an optimized set):
HttpContext.Current.Session["user"]
HttpContext.Current.Session["userTwo"]
NOTE: Tested in .NET 4
You can implement your own session state provider if you need.
http://msdn.microsoft.com/en-us/library/ms178587.aspx

LINQ to SQL Stored Procedure. Exception says it times out but it is not timing out

Website using .NET Framework v3.5, SQL Server 2008, written in C#
I have a stored procedure which I have added to my DBML by dragging it across from the server explorer.
In its properties, the return type is set to Auto-generated type.
The procedure takes < 1 second to run from within SQL Mgmt Studio for all inputs.
However, from the code, for one particular input (which takes < 1 second in Mgmt Studio) it hangs and then throws:
System.Data.SqlClient.SqlException: Timeout expired.
This didn't always happen for this one input! It used to also work fine when called from the code. The last time it didn't work I deleted and re-added the same stored procedure to the DBML. This "fixed" it, and that input ran fine and in the same time as all the others. However this is not an adequate fix! It has happened again and I can't keep deleting and re-adding as required.
I made no changes to the data that's being returned during the point at which it was "fixed", so I can't think what the problem could be. Any help on this would be much appreciated!
Exception says it times out but it is not timing out
If it says it's timing out, it's timing out. The only question is "why"?
Run a SQL Server Profiler trace against your database and see what query is actually going to the server. It's possible that another query is being issued too. It's possible there is another transaction interfering in your production scenario.
It turns out that this is parameter sniffing - this is explained in another post: Executing stored proc from DotNet takes very long but in SSMS it is immediate
Also, be sure that the stored procedure is not being held up inside of a transaction, waiting for another process to complete. I just ran across this with a Linq to Sql stored procedure being called multiple times within a transaction. It gave me a timeout expired error and I just realized it was waiting for a previous call to complete, and thus timing out.

AS 400 Performance from .Net iSeries Provider

First off, I am not an AS 400 guy - at all. So please forgive me for asking any noobish questions here.
Basically, I am working on a .Net application that needs to access the AS400 for some real-time data. Although I have the system working, I am getting very different performance results between queries. Typically, when I make the 1st request against a SPROC on the AS400, I am seeing ~ 14 seconds to get the full data set. After that initial call, any subsequent calls usually only take ~ 1 second to return. This performance improvement remains for ~ 20 mins or so before it takes 14 seconds again.
The interesting part is that, if the stored procedure is executed directly in iSeries Navigator, it always returns within milliseconds (no change in response time).
I wonder if it is a caching / execution plan issue, but I can only apply my SQL Server logic to the AS400, which is not always a match.
Any suggestions on what I can do to receive a more consistent response time, or simply insight as to why the AS400 is acting in this manner when using the iSeries Data Provider for .Net? Is there a better access method that I should use?
Just in case, here's the code I am using to connect to the AS400
Dim Conn As New IBM.Data.DB2.iSeries.iDB2Connection(ConnectionString)
Dim Cmd As New IBM.Data.DB2.iSeries.iDB2Command("SPROC_NAME_HERE", Conn)
Cmd.CommandType = CommandType.StoredProcedure

Using Conn
    Conn.Open()
    Dim Reader = Cmd.ExecuteReader()
    Using Reader
        While Reader.Read()
            'Do Something
        End While
        Reader.Close()
    End Using
    Conn.Close()
End Using
EDIT: after looking into this issue a bit and reading some of the comments below, I am beginning to wonder if I am experiencing this due to the gains from connection pooling? Thoughts?
I've found the Redbook Integrating DB2 Universal Database for iSeries with Microsoft ADO .NET useful for diagnosing issues like these.
Specifically look into the client and server side traces to help isolate the issue. And don't be afraid to call IBM software support. They can help you set up profiling to figure out the issue.
You may want to try a different driver to connect to the AS400-DB2 system. I have used 2 options:
the standard jt400.jar driver, to create a simple Java web service to get my data
the drivers from the company called HIT software (www.hitsw.com)
Obviously the first option would be the slower of the two, but that's the free way of doing things.
Each connection to the iSeries is backed by a job. Upon the first connection, the iSeries driver needs to create the connection pool, start a job, and associate that job with the connection object. When you close a connection, the driver will return that object to the connection pool, but will not end the job on the server.
You can turn on tracing to determine what is happening on your first connection attempt. To do so, add "Trace=StartDebug" to your connection string, and enable trace logging on the box that is running your code. You can do this by using the 'cwbmptrc' tool in the Client Access program directory:
c:\Program Files (x86)\IBM\Client Access>cwbmptrc.exe +a
Error logging is on
Tracing is on
Error log file name is:
C:\Users\Public\Documents\IBM\Client Access\iDB2Log.txt
Trace file name is:
C:\Users\Public\Documents\IBM\Client Access\iDB2Trace.txt
The trace output will give you insight into what operations the driver is performing and how long each operation takes to complete. Just don't forget to turn tracing off once you are done (cwbmptrc.exe -a)
If you don't want to mess with the tracing, a quick test to determine if connection pooling is behind the delay is to disable it by adding "Pooling=false" to your connection string. If connection pooling is the reason that your 2nd attempt is much faster, disabling connection pooling should make each request perform as slowly as the first.
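A minimal C# sketch of that test (the question's code is VB, but the same connection-string keyword applies); the server name, credentials, and procedure name are placeholders:

using System.Data;
using IBM.Data.DB2.iSeries;

// "Pooling=false" is the keyword suggested above; everything else is a placeholder.
string connectionString = "DataSource=MY_ISERIES;UserID=MYUSER;Password=*****;Pooling=false";

using (var conn = new iDB2Connection(connectionString))
using (var cmd = new iDB2Command("SPROC_NAME_HERE", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    conn.Open();   // with pooling off, every Open() pays the full job start-up cost
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            // process each row, as in the VB sample above
        }
    }
}

If every call now takes roughly as long as the first one used to, the pooling/job start-up explanation fits; remove Pooling=false again once the test is done.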
I have seen similar performance from iSeries SQL (ODBC) queries for several years. I think it's part of the nature of the beast-- OS/400 dynamically moves data from disk to memory when it's accessed.
FWIW, I've also noticed that the iSeries is more like a tractor than a race car. It deals much better with big loads. In one case, I consolidated about a dozen short queries into a single monstrous one, and reduced the execution time from something like 20 seconds to about 2 seconds.
I have had to pull data from the AS/400 in the past, basically there were a couple of things that worked for me:
1) Dump data into a SQL Server table nightly where I could control the indexes; the native SqlClient beats the IBM DB2 .NET client every time
2) Talk to one of your AS400 programmers and make sure the command you are using is hitting a logical file as opposed to a physical one (a physical file in their world is akin to one of our tables, a logical file to one of our views)
3) Create Views using a Linked Server on SQL server and query your views.
I have observed the same behavior when connecting to iSeries data from Java solutions hosted on WebSphere Application Server (WAS) as well as .Net solutions hosted on IIS. The first call of the day is always more "expensive" than the second.
The delay on the first call is caused by the "setup" time for the iSeries to set up the job to service the request (the job name is QZDASOINIT in subsystem QUSRWRK). Subsequent calls will reuse the existing jobs that stay active waiting for more incoming requests.
The number of QZDASOINIT jobs and how long they stay active is configurable on the iSeries.
One document on how to tune your prestart job entries:
http://www.ibmsystemsmag.com/ibmi/tipstechniques/systemsmanagement/Tuning-Prestart-Job-Entries/
I guess it would be a reasonable assumption that there is also some overhead to the "first call of the day" on both WAS and IIS.
Try creating a stored procedure. This will create and cache your access plan with the stored procedure, so the optimizer doesn't have to look in the SQL cache or reoptimize.
