Program not closing connections to DB2/400 / AS/400 - ASP.NET

I've been programming for just a few years, and we have a default DLL used for data access. It seems like there has been some data mining or site scraping going on here lately, and although there are no issues with our SQL database connections, many of the programs that access the AS/400 are keeping connections open and idle for long periods of time. I looked through our default data access DLL and added code to close the connection after each function, but that didn't help. I have little experience with DB2 / AS/400 ... how do I close all of these open / idle connections from the code?

If you're using connection pools, that's working as designed.
Are you sure the connection is actually open? How are you determining that?
If you're just seeing locks held by the QZDASOINIT job on the IBM i, then that's also by design. The system will hard close tables (cursors) after the first use. When they are used again by the same job, the system will only pseudo-close them, in order to provide faster response when they are re-used.
If an operation needing exclusive access is attempted, the system will hard close the pseudo-closed cursor.
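For reference, here is a minimal sketch of the close-after-use pattern, assuming a plain ADO.NET provider (swap in the IBM iSeries / DB2 provider classes and the connection string your data access DLL actually uses):

    using System.Data;
    using System.Data.Odbc;

    public static class As400Data
    {
        // Placeholder connection string; use whatever your data access DLL already uses.
        private const string ConnString = "DSN=MyAs400;UID=user;PWD=pass;";

        public static DataTable RunQuery(string sql)
        {
            // 'using' guarantees the connection is closed/disposed even if the query
            // throws, so it goes back to the pool instead of sitting open and idle.
            using (var conn = new OdbcConnection(ConnString))
            using (var cmd = new OdbcCommand(sql, conn))
            {
                conn.Open();
                var table = new DataTable();
                using (var adapter = new OdbcDataAdapter(cmd))
                {
                    adapter.Fill(table);
                }
                return table;
            }
        }
    }

Keep in mind that with pooling enabled, Close/Dispose returns the connection to the pool rather than tearing down the physical connection, so the AS/400 side can still show connections that look open and idle even when the code is correct.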

Related

What happens if you don't close an ODBC connection?

I've seen other posts about using PHP and ADO to access ODBC databases, but I don't think my question has been asked outside of PHP. I've recently taken over a project where a touchscreen interface is running Windows XP and using some proprietary European programming language that's extremely similar to Java to interface with PLCs and machinery.
We record information from various sensors at a regular interval, and then use the program to open a connection to an ODBC database and store the records. I've been tasked with tracking down a bug wherein data just stops recording for days at a time for no apparent reason, and I'm convinced it has something to do with either the ODBC database (fixable) or a version incompatibility between windows and the PLCs (not fixable). So I'm shooting for the fixable one first.
The program creates a new ActiveXObject and uses ADO to open a connection to the database, strings together a command, executes it, and then closes the connection. It does all this each time a record is created, and I'm trying to find out if there's a reason the original programmers did it this way instead of creating an adodb.Connection, opening it, making a transaction for each data record to write, and closing it only when the user quits the program.
The only thing I can think of is that they were worried about what would happen if the touchscreen lost power while a connection was open. What would that do? Nobody really knows anything about this almost-Java-language that we're using, so I can't say for sure what happens to ActiveXObjects when the program closes. Could something like this be causing these few-day-long lapses in recording, or am I totally barking up the wrong tree?
Opening and closing the connection each time it is needed would normally be considered the safer and less network-intensive approach. The only time it is inefficient is when many calls are being made to the database without much time elapsing between them.
Leaving database connections open is sometimes not recommended. In the case where you are using a file-based database such as Visual FoxPro or MS Access, a database file can actually become corrupt when a network connection is dropped, although normally for this to happen the connection would need to drop during a write of some kind.
Do you have any error control or debugging options? Could you write to a text file each time a call is attempted to the database?
I really don't think the language being used here is overly important since you are using ADO, ODBC, and I'm assuming some kind of standard database format. The failure probably lies with one of these technologies, unless there is an error somewhere in your code that is preventing the data logging routine from firing.
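To make the logging suggestion above concrete: the original program is in a proprietary language using ADO, but here is a rough C# sketch of the same open-write-close cycle that also appends a line to a text file on every attempt (connection string, table and column names are placeholders), so you can tell whether recording stops because the calls fail or because they stop being made:

    using System;
    using System.Data.Odbc;
    using System.IO;

    public static class SensorLogger
    {
        private const string ConnString = "DSN=SensorDb;UID=user;PWD=pass;";
        private const string LogPath = @"C:\logs\sensor-writes.log";

        public static void WriteRecord(string sensorId, double value, DateTime takenAt)
        {
            try
            {
                // Open, write one record, close -- the same pattern the original program uses.
                using (var conn = new OdbcConnection(ConnString))
                using (var cmd = new OdbcCommand(
                    "INSERT INTO readings (sensor_id, reading, taken_at) VALUES (?, ?, ?)", conn))
                {
                    cmd.Parameters.AddWithValue("@sensor", sensorId);
                    cmd.Parameters.AddWithValue("@value", value);
                    cmd.Parameters.AddWithValue("@takenAt", takenAt);
                    conn.Open();
                    cmd.ExecuteNonQuery();
                }
                File.AppendAllText(LogPath, DateTime.Now.ToString("o") + " OK " + sensorId + "=" + value + Environment.NewLine);
            }
            catch (Exception ex)
            {
                // If data silently stops being recorded, this file shows whether the
                // database calls started failing or simply stopped being attempted.
                File.AppendAllText(LogPath, DateTime.Now.ToString("o") + " FAIL " + ex.Message + Environment.NewLine);
            }
        }
    }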

asp.net runs fast then slow on restart

Why is it that every time the server goes down and ASP.NET restarts, the response time is SUPER FAST when it comes back up, at least for a few minutes? I assume it's because everyone is off the server and I'm one of the few (or only) people back on it so quickly?
I have discussed this with our developers and they say the response time is due to everyone being on the server normally (200+ desktops), and when you are the only person on there, it flies. Really? Then does that mean we need newer, faster web servers?
I am not a programmer, but I think there may be two possible answers: one, what the devs say above is true; two, the system is accumulating temp files of some sort that get cleared out when the server crashes and restarts.
How do we prove who might be right? Where does one start to look for ASP.NET bottlenecks?
Windows Server 2003
ASP.NET 3.0
IIS 6
12 GB RAM
SQL Server 2005 (the DB admin says there is no load issue on SQL...)
Some very basic steps you can follow to check whether your server is running at its limits:
First, download Process Explorer from Sysinternals and run it to check two things.
Is your server at its memory limit?
If yes, which program is eating the memory? SQL Server 2005 usually uses a lot of memory for database cache, and this builds up after it has been running for a while.
Is the server using all of its computing power? If yes, check which program is the one that needs all that computing power.
As the next step, download TCPView from Sysinternals, run it, and see how many connections are open, how quickly they are being made, and so on. There you can spot anomalies, or see whether the machine is also at its limit.
The final step is to defragment your disks.
Also keep in mind that the ASP.NET session takes an exclusive lock for each request that uses it, so requests from the same session are processed one at a time, and long-running requests also tie up worker threads. If you have one web application with many users, and some users make calls that take a long time to process, this can cause delays for other users.
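To illustrate that last point with a sketch (not from the original question; the names are made up): in ASP.NET, handlers and pages that only read session data can opt out of the exclusive session lock, so they are not queued behind a slow request from the same session. For .aspx pages the equivalent is EnableSessionState="ReadOnly" in the @ Page directive.

    using System.Web;
    using System.Web.SessionState;

    // Implementing IReadOnlySessionState instead of IRequiresSessionState means this
    // handler does not take the exclusive per-session lock while it runs.
    public class StatusHandler : IHttpHandler, IReadOnlySessionState
    {
        public bool IsReusable { get { return true; } }

        public void ProcessRequest(HttpContext context)
        {
            object user = context.Session != null ? context.Session["UserName"] : null;
            context.Response.ContentType = "text/plain";
            context.Response.Write("Hello " + (user ?? "guest"));
        }
    }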

Debugging MySQL: Too many connections

After we deployed the new version of our ASP.NET C# app with a MySQL DB we are having issues with the connections.
Yesterday I got the "Too many connections" error. I'm watching the open connections with SHOW FULL PROCESSLIST, and they keep increasing during the day.
Is there a good way to figure out where our bug could be? Like checking the last query that a sleeping connection made?
Make sure your app is closing its connections to the database properly. If your app is not closing connections, you will get the above error.
Connection pooling usually solves this problem. In your case, connections seem to stay open much too long, which means that in some branches of your software there is no definitive finally block that closes the connection after it has been used. It's especially useful to manage connections at a central point, because that point can keep an eye on the number of open connections at any given time and maybe alert somebody to take a dump for later analysis.
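As a sketch of what that looks like in the ASP.NET code, assuming the MySQL Connector/NET provider (class names differ with another driver, and the table name here is made up), every code path that opens a connection should dispose it, which a using block guarantees:

    using MySql.Data.MySqlClient;

    public static class OrderRepository
    {
        // Placeholder connection string; pooling is on by default in Connector/NET.
        private const string ConnString =
            "Server=dbhost;Database=app;Uid=appuser;Pwd=secret;";

        public static long CountOrders()
        {
            // Dispose (via 'using') closes the connection even if the query throws,
            // so it does not linger as a "Sleep" entry in SHOW FULL PROCESSLIST.
            using (var conn = new MySqlConnection(ConnString))
            using (var cmd = new MySqlCommand("SELECT COUNT(*) FROM orders", conn))
            {
                conn.Open();
                return (long)cmd.ExecuteScalar();
            }
        }
    }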
You could also increase the number of connections allowed to your MySQL instance. The setting in your my.cnf is max_connections.
Lastly, you can try decreasing the wait_timeout or interactive_timeout properties of your instance. These settings regulate the automatic closing of connections after a certain amount of idle time.

locked s3db-journal

A few days ago we had a strange error with SQLite. We use an SQLite database on a network share, with several computers accessing it. Our client reported that the database was gone. A quick look showed that the database was still there, but no computer could access it. There was also an s3db-journal file, indicating that someone was accessing the DB when something happened. The strange thing: the s3db-journal file was locked by the file system (we could not copy or delete it). After restarting all the applications, the locked file disappeared as it should.
How does this happen? We would like to work out how our client got into this situation. We know that there was faulty network cabling to one of the computers.
Thank you for your help.
Tobias
To clarify: several = up to 10 computers
From the "Appropriate uses for SQLite" page:
If you have many client programs accessing a common database over a network, you should consider using a client/server database engine instead of SQLite. SQLite will work over a network filesystem, but because of the latency associated with most network filesystems, performance will not be great. Also, the file locking logic of many network filesystem implementations contains bugs (on both Unix and Windows). If file locking does not work like it should, it might be possible for two or more client programs to modify the same part of the same database at the same time, resulting in database corruption. Because this problem results from bugs in the underlying filesystem implementation, there is nothing SQLite can do to prevent it.
A good rule of thumb is that you should avoid using SQLite in situations where the same database will be accessed simultaneously from many computers over a network filesystem.
It very well might be a bug in the network filesystem you're using. Either way, the SQLite developers explicitly recommend against using databases on network filesystems.
The issue is resolved. The database component (Zeos) threw an exception and we tried a rollback. Due to the way the component is designed, this is only allowed when you have started a transaction. If you haven't, you get the locked s3db-journal file.
In the end we learned two things: first, never roll back when you did not start a transaction; second, Zeos has an InTransaction function you can use to check for that.
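The original code is Delphi with the Zeos components, but the same guard applies to any data access layer. Here is a rough C# analog of the lesson using System.Data.SQLite (not the poster's actual code), i.e. only roll back a transaction you actually started:

    using System.Data.SQLite;

    public static class SafeWriter
    {
        public static void SaveBatch(SQLiteConnection conn, string[] statements)
        {
            SQLiteTransaction tx = null;
            try
            {
                tx = conn.BeginTransaction();
                foreach (var sql in statements)
                {
                    using (var cmd = new SQLiteCommand(sql, conn, tx))
                    {
                        cmd.ExecuteNonQuery();
                    }
                }
                tx.Commit();
            }
            catch
            {
                // Only roll back if we really opened a transaction -- the equivalent
                // of checking Zeos' InTransaction before calling Rollback.
                if (tx != null)
                {
                    tx.Rollback();
                }
                throw;
            }
            finally
            {
                if (tx != null)
                {
                    tx.Dispose();
                }
            }
        }
    }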

Slow BizTalk File Receive

I have an application with a file receive location. After the host instance has been running for a few hours, the receive location fails to pick up new files dropped into the folder it is monitoring. It doesn't forget about them altogether; it's just that performance grinds to a crawl. The receive location is configured to poll the target folder every 60 seconds, but after the host instance has been running for an hour or so, it seems the target folder is being polled only every thirty minutes. If I restart the host instance, the files waiting in the target folder are collected right away and performance is fine for the next hour or so.
The same application runs fine in a different environment.
There are no obvious entries in the event log related to the problem.
All the BizTalk SQL jobs are running fine except for Backup BizTalk Server (BizTalkMgmtDb).
Any suggestions gratefully received.
Thanks
Rob
Here are some additional tools which may help you identify and diagnose BizTalk database issues.
BizTalk MsgBox Viewer
Here is a tool to repair identified errors:
Terminator
Use at your own risk... read the blogs and docs. Start with the MsgBox Viewer and let us know your results.
Without more details, the biggest tell is that your Backup job is failing. If the backup job is failing, it may not be properly configured. If it is properly configured and still failing, then you've got other issues. Can you give us some more information about your BizTalk install?
What version are you running?
What are your database sizes?
What are your purge and archive settings like?
Are there any long-running blocks in your SQL Server DB coming from BizTalk?
Another thing to consider is the user accounts the send, receive and orchestration hosts are running under. You can check this in the BizTalk Administration Console. If they are all running under the same account, the orchestrations can sometimes starve the send and receive processes of CPU time. I believe priority is given to orchestrations, then receive, then send. Even if you are just developing, it is useful to use separate accounts for these. This also improves security.
The Wrox BizTalk Server 2006 book also offers tuning advice.
What other things are going on with the server? Is BizTalk pegged otherwise or is it idle?
You mention that the solution does not have any problems in another environment, so it's likely that there is a configuration problem.
Check the following:
** On SQL Server, set an upper memory limit for SQL Server. By default, SQL Server uses whatever memory it can get and then hangs onto it, so set a reasonable limit so that your system can operate without spending a lot of time paging memory to and from your hard drive(s).
** Ensure that you have available disk space - maybe you are running low; this can lead to all kinds of strange problems.
** Try to split the system's paging file among its physical drives (if you have more than one drive in the system). Also consider using a faster drive, or if you have lots of cash lying around, get a SAN.
** In BizTalk, is tracking enabled? If so, are you also tracking message bodies? Disable tracking or message body tracking and see if there is a difference.
** Start Performance Monitor and watch the following counters when running your solution:
Object: BizTalk Messaging
Instance: (select the receiving host) %%
Counter: Documents Received/Sec
Object: BizTalk Messaging
Instance: (select the transmitting host) %%
Counter: Documents Sent/Sec
Object: XLANG/s Orchestrations
Instance: (select the processing host) %%
Counter: Orchestrations Completed/Sec.
%% You may have only one host, so just use it. Since BizTalk configurations vary, I am using generic names for hosts.
The preceding counters monitor the most basic aspects of your server, but may help to narrow down places to look further. You can, of course, add CPU and Memory too. If you have time (days...maybe weeks) you could monitor for processes that allocate memory and never release it. Use the following counter...
Object: Memory
Counter: Pool Nonpaged Bytes
A slow, steady rise in this counter indicates that a process is not releasing memory, which affects everything on the system.
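If you want to capture that counter over days instead of watching Performance Monitor interactively, a small sketch like the following will log it to a file once a minute (the log path is a placeholder, and the BizTalk counters listed above can be added the same way once you confirm their exact category and instance names in perfmon):

    using System;
    using System.Diagnostics;
    using System.IO;
    using System.Threading;

    class CounterLogger
    {
        static void Main()
        {
            // "Memory \ Pool Nonpaged Bytes" is a standard Windows counter; a slow,
            // steady rise points at something that never releases memory.
            var nonPaged = new PerformanceCounter("Memory", "Pool Nonpaged Bytes");

            while (true)
            {
                string line = string.Format("{0:o}\t{1}", DateTime.Now, nonPaged.NextValue());
                File.AppendAllText(@"C:\logs\counters.tsv", line + Environment.NewLine);
                Thread.Sleep(TimeSpan.FromSeconds(60)); // one sample per minute
            }
        }
    }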
Let us know how things turn out!
I had the same problem: when my orchestration was idle for some time, it took a long time to process the first message. An article by EvYoung helped me solve this problem.
"This is caused by application domain unloading within the BizTalk host process. If an AppDomain is shutdown after idle, the next message that comes needs to wait for the Orchestration to compile again. Depending on the complexity of your design, this can be a noticeable wait. To prevent this in low latency requirement scenario, you can modify the BTSNTSVC.EXE.config file and set SecondsIdleBeforeShutdown property to -1. This will prevent AppDomain shutdown due to idle."
You can find the article in here:
http://blogs.msdn.com/b/biztalkcpr/archive/2008/05/08/thoughts-on-orchestration-performance.aspx
It took me too long to respond, but I thought it might help someone. Cheers :)
Some good suggestions from others. I will add:
Do you have any custom receive pipeline components on the receive location? If so, perhaps one is leaking memory, or calling some external component (e.g. a database) that is taking a long time?
How big are the files you are receiving?
In the File transport properties of your receive location, turn "file renaming" on; do the files get renamed within 60 seconds?
