SQLite throws the following exception when one transactional (read/write) connection and one non-transactional read/write operation are used at the same time: "Database is locked".
The SQLite default isolation level is serializable and should, as far as I understand, lock all pages affected by an insert or update operation (hence a select on those data sets should still be possible).
The exception occurs in the following scenario: a connection with a (serializable) transaction creates temp tables and fills them with data. A second connection (with no transaction applied) creates a table. Already at this point the exception is thrown. However, the non-transactional connection neither reads from nor writes to those temp tables and shouldn't interfere at all.
After some online searching, I found the WAL journal mode, introduced with version 3.7. Even though the journal_mode has been changed to WAL, the exception still occurs.
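For reference, the mode was switched roughly like this (a sketch; the second pragma just verifies the setting):

    PRAGMA journal_mode=WAL;   -- returns 'wal' on success; the mode persists in the database file
    PRAGMA journal_mode;       -- query the current journal mode to verify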
Why does the exception occur? The connections shouldn't be acquiring locks on the same pages. Furthermore, is there any solution for reading/writing different tables within different transactions?
I am trying to debug a performance issue in an ASP.NET application using .NET 4.5, EF5 (with a 2nd-level cache and lazy-loaded navigation properties) and SQL Server 2014. We are experiencing a number of lock waits on the SQL Server. When I look at the blocking transactions, they contain a very quick UPDATE followed by a very large SELECT. The UPDATE is ostensibly a necessary one, but I am confused as to why the SELECT is being run in the same transaction (and why anything is being selected at all). The fundamental issue is that the table referenced in the UPDATE statement is locked for the duration of the SELECT statement.
We use the repository pattern for getting data from the DB, and DbContext.SaveChanges() for committing changes. I cannot figure out how it is possible that EF produces a transaction containing both a write and a read, and I am not getting relevant results when I search Google.
We have a number of interfaces into the system, and a couple of console applications working on the database as well, but they all go through the same setup/versions of .NET and EF.
I figure that it must be through SaveChanges, since this is (AFAIK) the only time that things are written to the database.
Does anyone here have a hint as to how these locking transactions might be produced?
"The fundamental issue is that the table referenced in the UPDATE statement is locked for the duration of the SELECT statement."
The answer is in your question:
"the SELECT is being run in the same transaction"
An X (exclusive) lock is always held until the end of the transaction, i.e. until it commits or rolls back. So if a long SELECT follows your quick UPDATE in the same transaction, everything the UPDATE locked in your table remains locked until the SELECT ends.
You can separate the UPDATE and the SELECT if your business rules permit, add an appropriate index on the updated table so that only some rows are locked rather than the whole table, or optimize the SELECT to execute faster.
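A minimal T-SQL sketch of the first option, committing the quick UPDATE before the long SELECT runs (table and column names are invented):

    DECLARE @OrderId int = 42;

    BEGIN TRANSACTION;
    UPDATE Orders SET Status = 'Processed' WHERE OrderId = @OrderId;
    COMMIT;  -- the X locks are released here

    -- the long-running SELECT now executes outside that transaction
    SELECT o.OrderId, o.Status, c.Name
    FROM Orders AS o
    JOIN Customers AS c ON c.CustomerId = o.CustomerId;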
I issued the commands to CREATE TABLE and INSERT values into the table, but the changes are not saved. The next time I log in, the table is there but the inserted data is not; "no rows selected" is the message shown on screen.
Please help me save the changes in Oracle 11g.
Solution
First, set your Oracle Database client application to always implicitly commit or roll back transactions before termination (the behaviour recommended by Oracle). Then learn about database transactions, and either set your client application to autocommit mode or explicitly commit your D.M.L. transactions (with the COMMIT statement), because you should not rely on the commit behaviour of a client application.
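For the immediate problem, a minimal sketch (table and column names invented):

    CREATE TABLE employees (id NUMBER PRIMARY KEY, name VARCHAR2(50)); -- D.D.L.: implicitly committed
    INSERT INTO employees (id, name) VALUES (1, 'Alice');              -- D.M.L.: transaction still open
    COMMIT;                                                            -- makes the insert permanent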
Summary of database transactions
Database transactions are a fundamental concept of database management systems. The essential property of a transaction is its atomicity: a transaction is either
committed (database changes are all made permanent, that is stored on disk); or
rolled back (database changes are all discarded).
Lexically, a transaction consists of one or more S.Q.L. statements of these kinds:
data definition language (D.D.L.) statements (CREATE, DROP, ALTER, GRANT, REVOKE, …) that change the structure of the database;
data manipulation language (D.M.L.) statements (INSERT, DELETE, UPDATE, SELECT, …) that retrieve or change the contents of the database.
Database client applications can operate in two different modes:
autocommit mode, where implicit transactions are implicitly started before D.D.L. and D.M.L. statements and implicitly committed on success and rolled back on failure after D.D.L. and D.M.L. statements, and explicit transactions are explicitly started at BEGIN statements and explicitly committed at COMMIT statements and rolled back at ROLLBACK statements;
non-autocommit mode, where mixed transactions are implicitly started before D.D.L. or D.M.L. statements and explicitly committed at COMMIT statements and rolled back at ROLLBACK statements.
In both modes, transactions are implicitly committed before the database client application terminates normally and implicitly rolled back before the database client terminates abnormally. For database management systems that do not support rolling back D.D.L. statements (Maria D.B., Oracle Database, S.Q.L. Server), transactions are also implicitly committed after D.D.L. statements.
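For illustration, a generic explicit transaction (table invented; the exact statement that starts a transaction varies by database management system):

    BEGIN;                                                    -- start an explicit transaction
    UPDATE accounts SET balance = balance - 100 WHERE id = 1;
    UPDATE accounts SET balance = balance + 100 WHERE id = 2;
    COMMIT;                                                   -- or ROLLBACK; to discard both updates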
Explanation
When you issued your CREATE statement, since it is a D.D.L. statement and Oracle Database does not support rolling back D.D.L. statements, the transaction that created the table was implicitly committed. That is why the table structure was still there after you started a new session. But when you issued an INSERT statement, since it is a D.M.L. statement and you were not in autocommit mode, the transaction that populated the table was not implicitly committed. And as your Oracle Database client application was not set to implicitly commit transactions before termination, the table contents were gone after you started a new session.
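To make this concrete, a sketch of what happened (object names invented), assuming a client that is not in autocommit mode and does not commit on exit:

    -- Session 1
    CREATE TABLE t (x NUMBER);     -- D.D.L.: implicitly committed, survives the session
    INSERT INTO t (x) VALUES (1);  -- D.M.L.: left in an open transaction
    -- session ends without COMMIT: the open transaction is rolled back

    -- Session 2
    SELECT * FROM t;               -- the table exists, but: no rows selected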
I am having a recurring problem with Lasso 9.2.6 where the instance slows to a crawl and throws these errors to the log:
Failure in sqlite_session_driver active_tick: Error from SQLite
database "lasso_session": 19 constraint failed
Restarting the instance solves the performance problem temporarily, but errors continue to appear.
Any recommendations for cleaning this up or resetting the session database to clear out invalid data?
Depending on traffic or logging volume, you may be overloading the SQLite tables. It's hard to say exactly what the cause is without checking, but I'd look at the session settings. Consider setting the sessions to use either the memory or the MySQL driver (I recommend a MEMORY table if using MySQL).
Have a look at the size of the tables and check whether any are excessively large. You can just run ls -l /var/lasso/instances/default/SQLiteDBs/ or use an SQLite tool. The logbook and email tables are also likely suspects.
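If you go the SQLite-tool route, something like this against the session database shows where the bulk is (the table name in the second query is a guess; use the names returned by the first):

    -- opened with: sqlite3 /var/lasso/instances/default/SQLiteDBs/lasso_session
    SELECT name FROM sqlite_master WHERE type = 'table';  -- list the tables
    SELECT count(*) FROM sessions;                        -- row count of a suspect table (name assumed)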
I'm working with IronPython 2.6 for .NET 4 and the sqlite3 module from IronPython.SQLite.
I have written a GUI program that runs in four frames of an MDI window. Each of the four programs receives data from a serial port and stores this data in an SQLite database, one database per program.
In between inserting the received data into the database, the program queries the database every 100 ms for the latest data items.
I'm already wrapping the cursor.execute() calls in a mutex to prevent problems with simultaneous commands (insert or select).
At runtime the program sporadically runs into an exception, either when trying to query data:
System.Exception: database disk image is malformed
or when trying to insert data:
System.Exception: database or disk is full
Is it possible that a database query shortly after a database insert (or the other way around) could cause such exceptions and corrupt the database?
I would appreciate any advice on how to solve this issue.
I have a live web site up and running. I am using SubSonic to handle the database connections etc.
I am getting a "timeout expired" error while updating a table (say Employee). When I check sp_who2, I see the updating connection suspended, blocked by another SPID. So I ran the profiler and found that whenever this suspension occurs, the blocking SPID is running a SELECT statement on a view (say ActiveEmployees, which is the same as the table but with some WHERE conditions).
Does anyone know why a SELECT statement on a view could cause an UPDATE to fail? The other way around (the SELECT failing because of the UPDATE) would seem more reasonable.
Is there any way for me to run a SELECT on a view without locking the underlying table?
PS: I am using SQL Server 2005 and SubSonic 2.2.
You might add the WITH (NOLOCK) hint to the SELECT statement in the view if you don't care about the accuracy of the returned data (it may return uncommitted rows).
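A sketch of what that could look like (view and table names from the question, columns invented):

    ALTER VIEW ActiveEmployees AS
    SELECT e.EmployeeId, e.Name
    FROM Employee AS e WITH (NOLOCK)  -- dirty reads: uncommitted rows may be returned
    WHERE e.IsActive = 1;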
We also encountered timeouts when SELECT statements were scanning a table that another thread was inserting into. I resolved the issue by adding an appropriate index that is used by our SELECT.
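For example, assuming the view filters on an IsActive column (hypothetical), a covering index could let the SELECT seek just the rows it needs instead of scanning the whole table:

    CREATE NONCLUSTERED INDEX IX_Employee_IsActive
        ON Employee (IsActive)
        INCLUDE (EmployeeId, Name);  -- covers the columns the view selects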