What happens if you don't close an ODBC connection? - odbc

I've seen other posts about using PHP and ADO to access ODBC databases, but I don't think my question has been asked outside of PHP. I've recently taken over a project where a touchscreen interface running Windows XP uses a proprietary European programming language, extremely similar to Java, to interface with PLCs and machinery.
We record information from various sensors at a regular interval, then use the program to open a connection to an ODBC database and store the records. I've been tasked with tracking down a bug wherein data just stops recording for days at a time for no apparent reason, and I'm convinced it has something to do with either the ODBC database (fixable) or a version incompatibility between Windows and the PLCs (not fixable). So I'm shooting for the fixable one first.
The program creates a new ActiveXObject and uses ADO to open a connection to the database, strings together a command, executes it, and then closes the connection. It does all this each time a record is created, and I'm trying to find out whether there's a reason the original programmers did it this way instead of creating an adodb.Connection, opening it, making a transaction for each data record to write, and closing it only when the user quits the program.
The only thing I can think of is that they were worried about what would happen if the touchscreen lost power while a connection was open. What would that do? Nobody really knows anything about this almost-Java-language that we're using, so I can't say for sure what happens to ActiveXObjects when the program closes. Could something like this be causing these few-day-long lapses in recording, or am I totally barking up the wrong tree?

Opening and closing the connection each time it is needed would normally be considered the safer and less network-intensive approach. The only time it is inefficient is when many calls are made to the database in rapid succession, without much time elapsing between them.
Leaving database connections open is sometimes not recommended. With a file-based database such as Visual FoxPro or MS Access, the database file can actually become corrupt when a network connection is dropped, although normally for this to happen the connection would need to drop during a write of some kind.
Do you have any error control or debugging options? Could you write to a text file each time a call is attempted to the database?
I really don't think the language being used here is overly important since you are using ADO, ODBC, and I'm assuming some kind of standard database format. The failure probably lies with one of these technologies, unless there is an error somewhere in your code that is preventing the data logging routine from firing.
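To make the "open, write, close, and log every attempt" advice concrete, here is a minimal C# sketch of that pattern using System.Data.Odbc. This is not the original project's code (which is in a Java-like language driving ADO through ActiveXObject); the DSN name, table, columns, and log path are all placeholders.

```csharp
// Per-record open/execute/close, with a plain-text fallback log so gaps in
// recording leave a trace even when the database call fails.
using System;
using System.Data.Odbc;
using System.IO;

class SensorLogger
{
    const string ConnStr = "DSN=PlantData;";           // assumed DSN
    const string LogFile = @"C:\logs\db_failures.txt"; // assumed log path

    public static void WriteRecord(DateTime timestamp, double value)
    {
        try
        {
            using (var conn = new OdbcConnection(ConnStr))
            using (var cmd = new OdbcCommand(
                "INSERT INTO SensorLog (ReadAt, Reading) VALUES (?, ?)", conn))
            {
                cmd.Parameters.AddWithValue("@p1", timestamp);
                cmd.Parameters.AddWithValue("@p2", value);
                conn.Open();                  // open just long enough to write
                cmd.ExecuteNonQuery();
            }                                 // Dispose() closes even on error
        }
        catch (Exception ex)
        {
            // If the database is unreachable, at least record the attempt.
            File.AppendAllText(LogFile,
                $"{DateTime.Now:O} insert failed: {ex.Message}{Environment.NewLine}");
        }
    }
}
```

The text log is the cheap diagnostic: if entries keep appearing while the table stops growing, the problem is on the database side rather than in the logging routine.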

Related

Program not closing connections to db2400 / as/400

I've been programming for just a few years, and we have a default dll used for data access. It seems like there has been some data-mining or site scraping going on here lately, and although there are no issues with our SQL database connections, many of the programs that access the as/400 are keeping connections open and idle for long periods of time. I looked through our default data access dll and added code to close the connection after each function, but that didn't help. I have little experience with db2 / as/400 ... how do I close all of these open / idle connections from the code?
If you're using connection pools, that's working as designed.
Are you sure the connection is actually open? How are you determining that?
If you're just seeing locks held by the QZDASOINIT job on the IBM i, then that's also by design. The system will hard close tables (cursors) after the first use. When they are used again by the same job, the system will only pseudo-close them, in order to provide a faster response when they are re-used.
If an operation needing exclusive access is attempted, the system will hard close the pseudo closed cursor.
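A short sketch of the "close returns it to the pool" behaviour mentioned above, illustrated with System.Data.SqlClient because its pooling API is well documented; the IBM i provider exposes analogous pooling options in its connection string. The connection string here is a placeholder.

```csharp
using System.Data.SqlClient;

class PoolDemo
{
    const string ConnStr = "Server=myhost;Database=mydb;Integrated Security=true;";

    static void RunQuery()
    {
        using (var conn = new SqlConnection(ConnStr))
        using (var cmd = new SqlCommand("SELECT 1", conn))
        {
            conn.Open();          // taken from the pool if an idle one exists
            cmd.ExecuteScalar();
        }                         // Close/Dispose returns it to the pool; the
                                  // physical connection stays open, which is
                                  // why the host still shows it as connected.
    }

    static void ReallyClose()
    {
        // Only needed if you genuinely want the physical connections gone.
        SqlConnection.ClearAllPools();
    }
}
```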

SQL Server, connection pools vs static connection for special cases

I sort of know the answer to this, but cannot really grasp the underlying concept. I know you are always instructed to use connection pooling now. But imagine this scenario.
I need to read data from one database, and one table, multiple times.
Connection pooling is going to inject microseconds of overhead, but why not eliminate that by using a single connection for everything and locking around that?
Since it is one database with one table, isn't it pretty unlikely that we will get any performance boost from multithreaded connection pools?
Just hoping for some clarity here. And maybe some simple resources which would explain WHY connection pooling is ALWAYS better.
Thanks. I know this is not the greatest question, and I appreciate your time. I am specifically in the .NET environment, but this is a basic concept across programming, correct?
With one global connection you need to be prepared to handle spurious connection failures. Those can always happen (network hiccup, ...).
You absolutely do get concurrency when using multiple concurrent statements against a single table. SQL Server does not usually lock tables exclusively (that is exceedingly rare).
You will forget to use the synchronization protocol somewhere (lock everywhere). You will get it wrong eventually and have to fight races.
If you have a slow runaway query, it will block the entire app, which will appear "hung" to browsers.
You serialize all HTTP requests on the global lock. You only use one CPU. You won't scale at all. Your app will not handle bursts well.
Having a single global connection is really a bad idea. Why not just use pooling? That saves you the development work of using synchronization. It is even less work.
Of course, pooling is not always better. You can construct pathological cases where it isn't. I never encountered a case where I needed to keep a connection open for longer than the current HTTP request, though.
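To illustrate the contrast in the answer above, here is a small C# sketch. The connection string, table, and query are illustrative only. With the pooled version, concurrent requests each grab their own connection and run in parallel; with the shared connection, every caller queues behind one lock.

```csharp
using System.Data.SqlClient;

class Queries
{
    const string ConnStr = "Server=.;Database=Shop;Integrated Security=true;";

    // Recommended: let the pool hand out a connection per call.
    public static int CountOrdersPooled()
    {
        using (var conn = new SqlConnection(ConnStr))
        using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Orders", conn))
        {
            conn.Open();
            return (int)cmd.ExecuteScalar();
        }
    }

    // Anti-pattern: one shared connection guarded by a lock.
    static readonly object Gate = new object();
    static readonly SqlConnection Shared = new SqlConnection(ConnStr);

    public static int CountOrdersGlobal()
    {
        lock (Gate)   // serializes every caller; one slow query stalls them all
        {
            if (Shared.State != System.Data.ConnectionState.Open)
                Shared.Open();   // and dropped connections must be handled here too
            using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Orders", Shared))
                return (int)cmd.ExecuteScalar();
        }
    }
}
```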

locked s3db-journal

A few days ago we had a strange error with SQLite. We use an SQLite database on a network share with several computers accessing it. Our client reported that the database was gone. A quick check showed that the database was still there, but no computer could access it. There was also an s3db-journal file indicating that someone was accessing the db when something happened. The strange thing is that the s3db-journal file was locked by the file system (we could not copy or delete it). After restarting all applications, the locked file disappeared as it should.
How does this happen? We would like to deduce somehow how our client got into this situation. We know that there was corrupt network cabling to one of the computers.
Thank you for your help.
Tobias
To clarify: several = up to 10 computers
From the "Appropriate uses for SQLite" page:
If you have many client programs accessing a common database over a network, you should consider using a client/server database engine instead of SQLite. SQLite will work over a network filesystem, but because of the latency associated with most network filesystems, performance will not be great. Also, the file locking logic of many network filesystem implementations contains bugs (on both Unix and Windows). If file locking does not work like it should, it might be possible for two or more client programs to modify the same part of the same database at the same time, resulting in database corruption. Because this problem results from bugs in the underlying filesystem implementation, there is nothing SQLite can do to prevent it.
A good rule of thumb is that you should avoid using SQLite in situations where the same database will be accessed simultaneously from many computers over a network filesystem.
It very well might be a bug in the network filesystem you're using. Either way, the SQLite developers explicitly recommend against using databases on network filesystems.
The issue is resolved. The database component (Zeos) threw an exception and we tried a rollback. Due to the way the component was designed, this is only allowed when you have started a transaction; if you haven't, you get the locked s3db-journal file.
In the end we learned two things: first, never roll back when you did not start a transaction; second, Zeos has an InTransaction function for checking exactly that.
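The fix above is Zeos/Delphi-specific, but the underlying rule is general: only roll back a transaction you actually began. A minimal C# sketch of the same discipline, using Microsoft.Data.Sqlite as an assumed stand-in for the component in the question:

```csharp
using Microsoft.Data.Sqlite;

class JournalSafeWrite
{
    public static void Write(string dbPath, string sql)
    {
        using (var conn = new SqliteConnection($"Data Source={dbPath}"))
        {
            conn.Open();
            SqliteTransaction tx = null;
            try
            {
                tx = conn.BeginTransaction();      // we know a transaction exists
                using (var cmd = conn.CreateCommand())
                {
                    cmd.Transaction = tx;
                    cmd.CommandText = sql;
                    cmd.ExecuteNonQuery();
                }
                tx.Commit();
            }
            catch
            {
                tx?.Rollback();   // roll back only the transaction we started
                throw;
            }
        }
    }
}
```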

What Causes "Internal connection fatal errors"

I've got a number of ASP.Net websites (.Net v3.5) running on a server with a SQL 2000 database backend. For several months, I've been receiving seemingly random InvalidOperationExceptions with the message "Internal connection fatal error". Sometimes there's a few days in between, while other times there are multiple errors per day.
The exception is not limited to one site in particular, though they share business and data access assemblies. The error seems to always be thrown from SqlClient.TdsParser.Run(). It sometimes is thrown from old-school direct SqlCommand.Execute() calls, while other times it is thrown from Linq2Sql code.
I've been assured by the network guys that there are no errors or packets lost on their end. Has anyone else experienced this? Could it be a driver problem? We have been unable as of yet to pinpoint a specific trigger for this exception.
We're running IIS 6 on Windows Server 2003.
After a few months of ignoring this issue, it started to reach a critical mass as traffic gradually increased. Under heavy load, including some crawlers, things got crazy and these errors poured in nonstop.
Through trial and error, we eventually tracked down a handful of SqlCommand or LINQ queries whose SqlConnection wasn't closed immediately after use. Instead, through some sloppy programming originating from a misunderstanding of LINQ connections, the DataContext objects were disposed (and connections closed) only at the end of a request rather than immediately.
Once we refactored these methods to immediately close the connection with a C# "using" block (freeing up that pooled connection for the next request), we received no more errors. While we still don't know the underlying reason a connection pool would get so mixed up, we were able to stop all errors of this type. This problem was resolved in conjunction with another similar error I posted, found here: Why is my SqlCommand returning a string when it should be an int?
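A sketch of the refactor described above: scope the DataContext (and therefore its SqlConnection) to the operation with a "using" block, so the connection goes back to the pool immediately instead of lingering until the end of the request. "OrdersDataContext" and the query are placeholders for the real generated LINQ to SQL context, not the poster's code.

```csharp
using System.Linq;

public static class OrderQueries
{
    public static int GetOpenOrderCount(string connectionString)
    {
        // OrdersDataContext is a hypothetical generated DataContext.
        using (var db = new OrdersDataContext(connectionString))
        {
            // Force execution inside the using block; the connection is
            // closed and returned to the pool as soon as we leave it.
            return db.Orders.Count(o => o.Status == "Open");
        }
    }
}
```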
Sounds like the database connection is getting dropped or timing out.
We recently had similar issues moving from IIS 5 to IIS 6 connecting to SQL 2000. Our issue was solved by increasing the number of ephemeral ports available.
Look at the usage of ephemeral ports by the IIS server. The default maximum number of ports available is normally 4000. You might want to consider increasing this if the sites on your server are particularly busy or your application is making a lot of database calls.
You can monitor these first to see if you are going over the maximum limit.
Search the Microsoft Knowledge Base for "MaxUserPort" and "TcpTimedWaitDelay" and make the necessary registry changes. Make sure you back up the registry or snapshot the server before making the changes. You will need to reboot for the changes to take effect.
You should double-check that your database and recordset connections are being closed after use. Not closing them will use up this port range unnecessarily.
Check the efficiency of your stored procedures anyway, as they might be taking longer than they need to.
"If you rapidly open and close 4000 sockets in less than four minutes, you will reach the default maximum setting for client anonymous ports, and new socket connection attempts fail until the existing set of TIME_WAIT sockets times out." - from http://support.microsoft.com/kb/328476
Check your server's LOG folder (\program files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG or similar) for files named SqlDump*.mdmp and SqlDump*.txt. If you do find any you'll have to take it to Product Support.
I was creating a new EF Core project and trying to create the database on an external Linux server instead of a Windows server or a local one. After hours of searching I found out that I was using MySQL instead of Microsoft SQL Server.
I found it weird that everyone was using 1433 instead of the usual 3306. So to fix my 'Internal connection fatal error' I had to set up a Docker instance of SQL Server bound to its default port of 1433.
It literally was that simple. In the Docker repo look for "microsoft-mssql-server" and run the image as described in its documentation. Everything works now and I am able to push my database from my EF Core project to an external server.
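For context, here is roughly what that fix amounts to in EF Core: make sure the provider and the port match the server you are actually running. The host, credentials, and context name below are placeholders, not the poster's configuration.

```csharp
using Microsoft.EntityFrameworkCore;

public class AppDbContext : DbContext
{
    protected override void OnConfiguring(DbContextOptionsBuilder options)
    {
        // SQL Server listens on 1433 by default ("Server=host,port" syntax).
        // Pointing the SQL Server provider at a MySQL instance on 3306 is the
        // kind of mismatch that produces "Internal connection fatal error".
        options.UseSqlServer(
            "Server=my-linux-host,1433;Database=AppDb;User Id=sa;Password=<password>;");
    }
}
```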

Sqlite over a network share [closed]

Does anyone have real world experience running a Sqlite database on an SMB share on a LAN (Windows or Linux)?
It's clear from the documentation that this is not really the fastest way to share a SQLite database.
The obvious caveats are that it may be slow, and SQLite only supports a single thread writing to the DB at a time. So you become a lot less concurrent, because your DB updates will now block the DB for longer (the DB will be locked while data is in transit over the network).
For my application the amount of data that I would like to share is fairly small and writes are not too frequent (a few writes every few seconds at most).
What should I watch out for? Can this work?
I know this is not what SQLite was designed for. I am less interested in a Postgres/MySQL/SQL Server based solution, as I am trying to keep my app as light as possible with a minimal number of dependencies.
Related Links:
From the SQLite mailing list; I guess one big question is how unreliable the file-lock APIs are over SMB (Windows or Linux).
My experience of file-based databases (i.e. those without a database server process), which goes back over twenty years, is that if you try to share them, they will inevitably get corrupted eventually. I'd strongly suggest you look at MySQL again.
And please note, I am not picking on SQLite - I use it myself, just not as a shared database.
You asked for real-world experience. Here's some:
SQLite locking is robust, ASSUMING the underlying (networked) file system is also robust. Historically, that's been a poor assumption. Recent operating systems get it much better.
If you play by the rules, your biggest problem will be cases where the database stays "locked" for many minutes at a stretch. For example, if the network drops an "unlock" request from a reader, you might be unable to write until the lock expires. If an "unlock" from a writer goes missing, you'll be unable to read. (To be fair, you can experience the same problems with ordinary documents.)
You'll get fewer problems on a good reliable network with "opportunistic locking" (client-level file caching) disabled for the database.
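A small C# sketch of the "play by the rules" advice above, using Microsoft.Data.Sqlite (an assumption; the question doesn't name a client library): keep write transactions short so the locked window is brief, and set a busy timeout so a momentarily locked database is retried rather than failing outright. The share path and table are placeholders.

```csharp
using Microsoft.Data.Sqlite;

class SharedDbWriter
{
    const string ConnStr = @"Data Source=\\fileserver\share\app.s3db";

    public static void Insert(string name)
    {
        using (var conn = new SqliteConnection(ConnStr))
        {
            conn.Open();
            using (var pragma = conn.CreateCommand())
            {
                pragma.CommandText = "PRAGMA busy_timeout = 5000;"; // wait up to 5 s on locks
                pragma.ExecuteNonQuery();
            }
            using (var tx = conn.BeginTransaction())   // keep the locked window short
            using (var cmd = conn.CreateCommand())
            {
                cmd.Transaction = tx;
                cmd.CommandText = "INSERT INTO Items (Name) VALUES ($name)";
                cmd.Parameters.AddWithValue("$name", name);
                cmd.ExecuteNonQuery();
                tx.Commit();
            }
        }
    }
}
```

None of this protects against the buggy network file locking discussed in this thread; it only shortens the windows in which those bugs can bite.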
Well, I am not a great SQLite expert, but I believe the locking of records/tables may not work correctly and may corrupt the database. Since there is no single server that maintains central locking, two SQLite DLL instances on different machines sharing the same file over the network may not work correctly at all. If the database is opened on the same machine, SQLite can use the file-level locking offered by the OS to maintain integrity, but I doubt it works correctly over a network share.
"If you have many client programs accessing a common database over a
network, you should consider using a client/server database engine
instead of SQLite. SQLite will work over a network filesystem, but
because of the latency associated with most network filesystems,
performance will not be great. Also, the file locking logic of many
network filesystems implementation contains bugs (on both Unix and
Windows). If file locking does not work like it should, it might be
possible for two or more client programs to modify the same part of
the same database at the same time, resulting in database corruption.
Because this problem results from bugs in the underlying filesystem
implementation, there is nothing SQLite can do to prevent it."
from https://www.sqlite.org/whentouse.html
that also applies for any kind of file-based databases like Microsoft Access
