First off, I am not an AS/400 guy at all, so please forgive me for asking any noobish questions here.
Basically, I am working on a .NET application that needs to access the AS/400 for some real-time data. Although I have the system working, I am getting very different performance results between queries. Typically, the first request against a stored procedure on the AS/400 takes ~14 seconds to return the full data set. After that initial call, subsequent calls usually take only ~1 second. This improvement lasts for about 20 minutes or so before a call takes 14 seconds again.
The interesting part is that if the stored procedure is executed directly in iSeries Navigator, it always returns within milliseconds (no change in response time).
I wonder if it is a caching / execution plan issue, but I can only apply my SQL Server logic to the AS/400, which is not always a match.
Any suggestions on what I can do to get a more consistent response time, or simply insight into why the AS/400 acts this way when accessed through the iSeries Data Provider for .NET? Is there a better access method I should use?
Just in case, here's the code I am using to connect to the AS/400:
Dim Conn As New IBM.Data.DB2.iSeries.iDB2Connection(ConnectionString)
Dim Cmd As New IBM.Data.DB2.iSeries.iDB2Command("SPROC_NAME_HERE", Conn)
Cmd.CommandType = CommandType.StoredProcedure

Using Conn
    Conn.Open()
    Dim Reader = Cmd.ExecuteReader()
    Using Reader
        While Reader.Read()
            'Do Something
        End While
        Reader.Close()
    End Using
    Conn.Close()
End Using
EDIT: After looking into this a bit and using some of the comments below, I am beginning to wonder if I am experiencing this due to the gains from connection pooling. Thoughts?
I've found the Redbook Integrating DB2 Universal Database for iSeries with Microsoft ADO .NET useful for diagnosing issues like these.
Specifically look into the client and server side traces to help isolate the issue. And don't be afraid to call IBM software support. They can help you set up profiling to figure out the issue.
You may want to try a different driver to connect to the AS/400 DB2 system. I have used two options:
the standard jt400.jar driver to create a simple Java web service to get my data
the drivers from a company called HIT Software (www.hitsw.com)
Obviously the first option would be the slower of the two, but that's the free way of doing things.
Each connection to the iSeries is backed by a job. Upon the first connection, the iSeries driver needs to create the connection pool, start a job, and associate that job with the connection object. When you close a connection, the driver will return that object to the connection pool, but will not end the job on the server.
You can turn on tracing to determine what is happening on your first connection attempt. To do so, add "Trace=StartDebug" to your connection string, and enable trace logging on the box that is running your code. You can do this by using the 'cwbmptrc' tool in the Client Access program directory:
c:\Program Files (x86)\IBM\Client Access>cwbmptrc.exe +a
Error logging is on
Tracing is on
Error log file name is:
C:\Users\Public\Documents\IBM\Client Access\iDB2Log.txt
Trace file name is:
C:\Users\Public\Documents\IBM\Client Access\iDB2Trace.txt
The trace output will give you insight into what operations the driver is performing and how long each operation takes to complete. Just don't forget to turn tracing off once you are done (cwbmptrc.exe -a).
If you don't want to mess with the tracing, a quick test to determine if connection pooling is behind the delay is to disable it by adding "Pooling=false" to your connection string. If connection pooling is the reason that your second attempt is much faster, disabling connection pooling should make each request perform as slowly as the first.
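For example, here is a minimal C# sketch of that test, using the same iDB2 provider and stored procedure name from the question above; the connection string values are placeholders for your environment:

using System;
using System.Data;
using System.Diagnostics;
using IBM.Data.DB2.iSeries;

class PoolingTest
{
    static void Main()
    {
        // "Pooling=false" disables the pool for the comparison; append "Trace=StartDebug"
        // (see above) if you also want driver tracing. The values here are placeholders.
        const string connectionString =
            "DataSource=MY_ISERIES;UserID=MY_USER;Password=MY_PASSWORD;Pooling=false";

        for (int attempt = 1; attempt <= 2; attempt++)
        {
            var watch = Stopwatch.StartNew();

            using (var conn = new iDB2Connection(connectionString))
            using (var cmd = new iDB2Command("SPROC_NAME_HERE", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                conn.Open();

                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // Consume the rows so the timing covers the full result set.
                    }
                }
            }

            watch.Stop();
            Console.WriteLine($"Attempt {attempt}: {watch.ElapsedMilliseconds} ms");
        }
    }
}

If pooling is what makes the second call fast, both attempts should now take roughly as long as the slow first call.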
I have seen similar performance from iSeries SQL (ODBC) queries for several years. I think it's part of the nature of the beast-- OS/400 dynamically moves data from disk to memory when it's accessed.
FWIW, I've also noticed that the iSeries is more like a tractor than a race car. It deals much better with big loads. In one case, I consolidated about a dozen short queries into a single monstrous one, and reduced the execution time from something like 20 seconds to about 2 seconds.
I have had to pull data from the AS/400 in the past, basically there were a couple of things that worked for me:
1) Dump the data into a SQL Server table nightly, where I could control the indexes; the native SqlClient beats the IBM DB2 .NET client every time.
2) Talk to one of your AS/400 programmers and make sure the command you are using is hitting a logical file as opposed to a physical one (physical vs. logical in their world is akin to our tables vs. views).
3) Create views using a Linked Server on SQL Server and query those views.
I have observed the same behavior when connecting to iSeries data from Java solutions hosted on WebSphere Application Server (WAS) as well as .NET solutions hosted on IIS. The first call of the day is always more "expensive" than the second.
The delay on the first call is caused by the "setup" time for the iSeries to set up the job that services the request (the job name is QZDASOINIT in subsystem QUSRWRK). Subsequent calls reuse the existing jobs, which stay active waiting for more incoming requests.
The number of QZDASOINIT jobs and how long they stay active is configurable on the iSeries.
One document on how to tune your prestart job entries:
http://www.ibmsystemsmag.com/ibmi/tipstechniques/systemsmanagement/Tuning-Prestart-Job-Entries/
I guess it would be a reasonable assumption that there is also some overhead to the "first call of the day" on both WAS and IIS.
Try creating a stored procedure. This will create and cache your access plan with the stored procedure, so the optimizer doesn't have to look in the SQL cache or reoptimize.
First some background:
1. Understand how connection pooling is being used by Npgsql in an ASP.NET REST API.
Environment:
- We have a REST API controller that first queries a list of items (from RDS); then, for each item in the list, we need to obtain some additional values, so we use a Parallel.ForEach statement (a rough sketch of this pattern is shown below).
Every time we use a connection, we dispose of it properly.
I've seen that every time this endpoint is called, the number of connections increases and then they are removed OK.
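For context, here is a hypothetical reconstruction in C# of the pattern described above; the SQL text, table, and method names are invented purely for illustration:

using System.Collections.Generic;
using System.Threading.Tasks;
using Npgsql;

class ItemLookup
{
    static void LoadDetails(IEnumerable<int> itemIds, string connectionString)
    {
        Parallel.ForEach(itemIds, id =>
        {
            // Each iteration borrows a physical connection from the pool and returns it
            // when the using block disposes the NpgsqlConnection.
            using (var conn = new NpgsqlConnection(connectionString))
            using (var cmd = new NpgsqlCommand(
                "SELECT value FROM item_details WHERE item_id = @id", conn))
            {
                cmd.Parameters.AddWithValue("id", id);
                conn.Open();
                var value = cmd.ExecuteScalar();
                // ... use the value per item ...
            }
        });
    }
}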
Process:
I've followed http://www.npgsql.org/doc/performance.html#performance-counters to check how Npgsql is handling connections, and also added the following to the connection string:
"CommandTimeout=50000;TIMEOUT=1024;POOLING=True;MINPOOLSIZE=1;MAXPOOLSIZE=100;Use Perf Counters=true;"
but I found a strange outcome:
NumberOfNonPooledConnections and NumberOfPooledConnections are always the same in my case (56); we are using a Parallel.ForEach to query several items.
The value for NumberOfActiveConnectionPools is 1.
At first I couldn't really understand how this was working. Was it really using the connection pool?
Then I stopped the process, removed ";POOLING=True;" from the connection string, and got the same result.
Finally, I set ";POOLING=false;" and executed again; now NumberOfPooledConnections went through the roof, reaching 2378, and then it started timing out when opening new connections.
I also noted in the RDS performance metrics that the number of connections never exceeded 110.
So the questions would be:
What would be the criteria for setting the MaxPoolSize parameter? 100 seems to be the usual value.
In ASP.NET, is the connection pool handled per instance? That is, will all connections made from the same Application Pool in IIS be reused, or is it per execution?
First, ASP.NET (the web side) has absolutely no effect on Npgsql's connection pooling or on ADO.NET in general, so it's better to reason about Npgsql and ADO.NET without thinking about web.
Second, you aren't saying which version of Npgsql you're using.
Beyond that, before looking at performance counters, what exactly is the problem you are seeing? Are you seeing too many connections at the PostgreSQL side? You can check this by querying pg_stat_activity.
If Npgsql pooling is on (Pooling=true in the connection string, it's also the default), then when you call NpgsqlConnection.Open() a physical connection will be taken from the pool if one is available. When you close or dispose that NpgsqlConnection, it will be returned to the pool to be reused later. If you're seeing physical connections going up too much at the PostgreSQL side, that is a probable sign that you are forgetting to close/dispose a connection in your code and you have a leak.
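As a rough sketch of that check, the following C# snippet counts the sessions your application holds by querying pg_stat_activity through Npgsql; the application_name filter is an assumption and only applies if you set "Application Name" in your connection string (otherwise drop the WHERE clause):

using System;
using Npgsql;

class ConnectionCount
{
    static void Main()
    {
        // Placeholder connection string -- point it at your PostgreSQL/RDS instance.
        const string connectionString =
            "Host=my-rds-host;Database=mydb;Username=me;Password=secret";

        using (var conn = new NpgsqlConnection(connectionString))
        using (var cmd = new NpgsqlCommand(
            "SELECT count(*) FROM pg_stat_activity WHERE application_name = @app", conn))
        {
            cmd.Parameters.AddWithValue("app", "MyRestApi");
            conn.Open();
            Console.WriteLine($"Open sessions for MyRestApi: {cmd.ExecuteScalar()}");
        }
    }
}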
The performance counters feature can be useful to understand what's happening, but unfortunately it isn't well-tested and may contain bugs. So please make sure there's an actual issue before starting to look at it (and at the very least report the Npgsql version you're using).
Ahoy,
We have two BizTalk applications in BizTalk 2013 R2 that seem to be having random issues. Both applications follow the same process:
Pull data from a WCF endpoint.
Delete data from a database via a stored procedure.
Insert the new data that was pulled via WCF-SQL call.
Both applications worked great during our testing for quite a while. But, over time, we've had a few issues crop up with the insert via the WCF-SQL call.
A fatal error occurred while reading the input stream from the network. The session will be terminated (input error: 64, output error: 0).
This error showed up in the SQL Server logs. We had this one for about a day and then it just went away. Everything else continued to work fine on that target SQL Server; it was only BizTalk that had issues.
Our latest error is one where the request for the WCF-SQL insert happens (the data is actually inserted), but there is never a response. So the Send Port keeps retrying the send, and the Orchestration just dehydrates.
We tinkered with every setting throughout the application to try to solve this, but only deleting the application and redeploying fixed it (for now, at least).
So, I guess my question is whether anyone else has had these sorts of "random" errors with BizTalk, where it works great for a while and then goes downhill like we've seen.
I'd really prefer to have something stable that is minimal maintenance. This is an enterprise product after all.
I've had issues similar to this happen when moving between environments where there were data differences, e.g. a column full of NULLs in QA and a column full of actual data in PROD. There are a few things you can try.
Use SQL Server Profiler to capture the RPC call coming from BizTalk, and try running it directly on the SQL Server BizTalk is calling remotely (wrap it in a transaction you roll back at the end if this is production). Does it take longer than expected to run? Debug the procedure to find the pain points and optimize if possible. I've written a blog about how to do this here: http://blog.tallan.com/2015/01/09/capturing-and-debugging-a-sql-stored-procedure-call-from-biztalk/
Up the timeout settings in the binding configuration for the send port to ensure that it is not timing out before SQL can finish doing its work.
Up the System.Transactions timeout in Machine.config to ensure that MSDTC isn't causing issues: http://blogs.msdn.com/b/madhuponduru/archive/2005/12/16/how-to-change-system-transactions-timeout.aspx and http://blog.brandt-lassen.dk/2012/11/overriding-default-10-minutes.html
If possible, do a data compare between the TEST/QA and PROD databases. Look for significant differences, especially in columns that you are using in JOIN conditions and WHERE clauses.
We have been facing weird connection timeouts on one of our websites.
Our environment is composed of an IIS 7 web server (running on Windows Server 2008 R2 Standard Edition) and an SQL Server 2008 database server.
When debugging the website functionality that provokes the timeout, we notice that the connection itself takes milliseconds to complete, but the SqlCommand, which invokes a stored procedure on the database, hangs for several minutes during execution, then raises the timeout exception.
On the other hand, when we run the stored procedure directly on the database, it takes only 2 seconds to correctly finish execution.
We already tried the following:
Modified SqlCommand timeout on the website code
Modified execution timeout on the web.config file
Modified sessionState timeout on the web.config file
Modified authorization cookie timeout on the web.config file
Modified the connection timeout on the website properties on IIS
Modified the application pool shutdown time limit on IIS
Checked the application pool idle timeout on IIS
Checked the execution timeout on the SQL Server properties (it's set to 0, unlimited)
Tested the stored procedure directly on the database with other parameters
We appreciate any help.
Nirav
I've had this same issue with a stored procedure that was a search feature for the users. I tried everything, including ARITHABORT, etc. The SP joined many tables, as the users could search on anything. Many of the parameters for the SP were optional, meaning they had a default value of NULL in the SP. Nothing worked.
I "fixed" it by making sure my ADO.NET code only added parameters where the user selected a value. The SP went from many minutes to seconds in execution time. I'm assuming that SQL Server handled the execution plan better when only parameters with actual values were passed to the SP.
Note that this was for SQL Server 2000.
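A minimal C# sketch of that workaround, with placeholder procedure and parameter names, might look like this: parameters the user left blank are simply never added, so they fall back to their NULL defaults inside the procedure.

using System.Data;
using System.Data.SqlClient;

class SearchRepository
{
    public static SqlCommand BuildSearchCommand(SqlConnection conn, string name, int? categoryId)
    {
        var cmd = new SqlCommand("dbo.SearchItems", conn)
        {
            CommandType = CommandType.StoredProcedure
        };

        // Only add a parameter when the user actually supplied a value.
        if (!string.IsNullOrEmpty(name))
            cmd.Parameters.Add("@Name", SqlDbType.VarChar, 100).Value = name;

        if (categoryId.HasValue)
            cmd.Parameters.Add("@CategoryId", SqlDbType.Int).Value = categoryId.Value;

        return cmd;
    }
}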
A few years ago I had a similar problem when migrating an app from SQL2000 to SQL2008.
I added OPTION (RECOMPILE) to the end of all the stored procs in the database that was having problems. In my case it had to do with parameters that were very different between calls to the stored proc. Forcing the proc to recompile will force SQL to come up with a new execution plan instead of trying to use a cached version that may be sub-optimal for the new params.
And in case you haven't done it already, check your indexes. Nothing can kill db performance like lack of a badly needed index. Here is a good link (http://sqlfool.com/2009/04/a-look-at-missing-indexes/) on a query that will display missing indexes.
Super-late suggestion, but it might come in handy for others. A typical issue I have seen, particularly applicable to Java clients, is the following:
You have a query which takes a string as a parameter. That string is a search criterion on a varchar(N) column in the database. However, you submit the string parameter in the query as Unicode (nvarchar(N)). This results in a full table scan and a conversion of every single field value to Unicode for proper comparison, to avoid potential data loss (if SQL Server converted the input parameter to non-Unicode instead, it might lose information).
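The same pitfall can be avoided from the client side in ADO.NET. As a hedged sketch (the table, column name, and length are placeholders), declare the parameter explicitly as varchar so it matches the column type instead of letting the driver infer nvarchar from the .NET string:

using System.Data;
using System.Data.SqlClient;

class CustomerSearch
{
    public static SqlCommand Build(SqlConnection conn, string searchTerm)
    {
        var cmd = new SqlCommand(
            "SELECT * FROM Customers WHERE LastName = @LastName", conn);

        // cmd.Parameters.AddWithValue("@LastName", searchTerm); // infers nvarchar -> table scan
        cmd.Parameters.Add("@LastName", SqlDbType.VarChar, 50).Value = searchTerm; // matches varchar(50)

        return cmd;
    }
}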
Simple test: run the query twice (for the sake of simplicity, I'm assuming it's an SP):
exec spWhatever 'input'
exec spWhatever N'input'
See how they behave. Also, you may want to take a look at the Recent Expensive Queries section on the Activity Monitor in SSMS and ask for the execution plan, to clarify the situation.
Cheers,
Erik
Hi, I've got a web application (ASP.NET) where we just started getting "System.InvalidOperationException: Timeout expired." when trying to get a new SQL connection.
So my guess is that somewhere in the code a connection is created but never disposed, but how would I go about finding where this happens? Sadly, most of the database communication does not use a data layer; it works directly with the SQL data types...
Is there anything I could enable in the web.config to trace which connections are open for longer than x seconds and where they were opened?
Find everywhere your SqlConnection is used. Ensure it is in a using() block to automatically dispose of it. There is nothing built in to the web.config for this, unfortunately. You may be able to try out ANTS Memory Profiler to help track this down. For example:
http://www.developerfusion.com/review/59466/ants-memory-profiler-51/
OK, I found a way to track them down by using:
EXEC SP_WHO2
DBCC INPUTBUFFER(SPID)
SP_WHO2 gives you information about the connections, and by using DBCC INPUTBUFFER you can find out the last command each one ran.
The .NET Framework will not solve your architecture and logging issues, so you will have to find the problem yourself. What can help you:
Performance counters can show you the utilization of the connection pool (see the sketch after this list).
SQL Server Management Studio can show you connections in use, their activity and last executed statement. In Object Explorer select the server and in context menu go to Reports > Standard Reports > Activity xxx
Another approach is turning off the pool and using a profiler to collect information about your application and track connections that are never disposed.
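As a rough sketch of reading those counters from code, the snippet below lists the standard SqlClient pool counters for each provider instance it finds; counter availability depends on your runtime, so treat the category and counter names as assumptions to verify on your machine:

using System;
using System.Diagnostics;

class PoolCounters
{
    static void Main()
    {
        // ".NET Data Provider for SqlServer" is the standard ADO.NET SqlClient counter category.
        var category = new PerformanceCounterCategory(".NET Data Provider for SqlServer");

        foreach (var instance in category.GetInstanceNames())
        {
            using (var pooled = new PerformanceCounter(
                       category.CategoryName, "NumberOfPooledConnections", instance, readOnly: true))
            {
                Console.WriteLine($"{instance}: {pooled.NextValue()} pooled connections");
            }
        }
    }
}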
This is quite a big, quite badly coded ASP.NET website that I am currently tasked with maintaining. One issue that I'm not sure how to solve is that at seemingly random times, the live web site locks up completely; page access is OK, but anything that touches the database causes the application to hang indefinitely.
The cause seems to be many more open connections to the database than you would expect of a lowish level traffic web site. Activity monitor shows 150+ open connections, most with an Application value of '.NET SqlClient Data Provider', with the network service login.
A quick solution is to restart the SQL Server service (I've recycled the ASP.NET app pool also just to ensure that the application lets go of anything, and stops any code from reattempting to open connections if there was some sort of loop process that I'm unaware of). This doesn't however help me solve the problem.
The application uses a Microsoft SQLHelper class, which is a pretty common library, so I'm fairly confident that the code that uses this class will have connections closed where required.
I have, however, spotted a few DataReaders that are not closed properly. I think I'm right in saying that an unclosed DataReader can keep the underlying connection open, because it is a connected class (correct me if I'm wrong).
Something that is peculiar is that one of the admins restarted the server (not the database server, the actual web server) and immediately the site hung again. The culprit was again 150+ open database connections.
Does anybody have any diagnostic technique that they can share with me for working out where this is happening?
Update: SQL Server Log files show many entries like this (30+)
2010-10-15 13:28:53.15 spid54 Starting up database 'test_db'.
I'm wondering if the server is getting hit by an attacker. That would explain the many connections right after boot, and at seemingly random times.
Update: Have changed the AutoClose property, though still hunting for a solution!
Update 2: See my answer to this question for the solution!
Update:
Lots and lots of "Starting up database" messages: set the AutoClose property to false (REF).
You are correct about your DataReaders: make sure you close them. However, I have experienced many problems with connections spawning out of control even when connections were closed properly. Connection pooling didn't seem to be working as expected, since each post-back created a new SqlConnection. To avoid this seemingly unneeded re-creation of the connection, I adopted a Singleton approach to my DAL. So I create a single DataAdapter and send all my data requests through it. Although I've been told that this is unwise, I have not received any support for that claim (and am still eager to read any documentation/opinion to this effect: I want to get better, not maintain the status quo). I have a DataAdapter class for you to consider if you like.
If you are in SQL 2005+, you should be able to use Activity Monitor to see the "Details" of each connection which sometimes gives you the last statement executed. Perhaps this will help you track the statement back to some place in code.
I would recommend downloading http://sites.google.com/site/sqlprofiler/ to see what queries are happening, and sort of work backwards from there. Good luck man!
Many types such as DbConnection, DbCommand and DbDataReader and their derived types (SqlConnection, SqlCommand and SqlDataReader) implement the IDisposable interface/pattern.
Without regurgitating what has already been written on the subject, you can take advantage of this mechanism via the using statement.
As a rule of thumb you should always try to wrap your DbConnection, DbCommand and DbDataReader objects with the using statement which will generate MSIL to call IDisposable's Dispose method. Usually in the Dispose method there is code to clean up unmanaged resources such as closing database connections.
For example:
using (SqlConnection cn = new SqlConnection(connString))
{
    using (SqlCommand cmd = new SqlCommand("SELECT * FROM MyTable", cn))
    {
        cmd.CommandType = CommandType.Text;
        cn.Open();

        using (SqlDataReader rdr = cmd.ExecuteReader())
        {
            while (rdr.Read())
            {
                // ... do stuff etc.
            }
        }
    }
}
This will ensure that the Dispose methods are called on all of the objects used, immediately after use. For example, the Dispose method of the SqlConnection closes the underlying database connection right away instead of leaving it around for the next garbage collection run.
Little changes like this improve your application's ability to scale under heavy load. As you already know, "acquire expensive resources late and release them early". The using statement is a nice bit of syntactic sugar to help you out.
If you're using VB.NET 2.0 or later it has the same construct:
Using Statement (Visual Basic) (MSDN)
using Statement (C# Reference) (MSDN)
This issue came up again today and I managed to solve it quite easily, so I thought I'd post back here.
In this case, you can assume that the connection spamming is coming from the same code block, so to find the code block, I opened Activity Monitor and checked the details for some of the many open connections (right click > Details).
This showed me the offending SQL, which you can search for in your application code / stored procedures.
Once I'd found it, it was, as I suspected, an unclosed data reader. The problem is now solved.