How to prevent deadlock in a MarkLogic trigger?
The triggers use a collection.
mblakele is right. However, deadlocks normally happen because you are writing to a file that needs to be read, or you are reading a file that has a write lock on it.
So I would start by looking at how you are saving your files. You might want to isolate your saves from your reads.
I'm developing a program using an SQLite database which I access via QSqlDatabase. I'd like to handle the (hopefully rare) case when changes are made to the database that are not caused by the program while it's running (e.g. the user could remove write access, move or delete the file, or modify it manually).
I tried to use a QFileSystemWatcher. I let it watch the database file, and in all functions writing something to it, I blocked its signals, so that only "external" changes would trigger the changed signal.
The problem is that the QFileSystemWatcher's check and/or the actual disk write of QSqlDatabase::commit() does not seem to happen at the exact moment I call commit(). So in effect, first the QFileSystemWatcher's signals are blocked, then I change some stuff, then I unblock them, and only then does it report the file as changed.
I then tried setting a bool variable (m_writeInProgress) to true each time a function requests a change. The "changed" slot then checks whether a write action has been requested and, if so, sets m_writeInProgress to false again and exits. This way, it would only handle "external" changes.
The problem remains that if the change happens at the exact moment the actual writing is going on, it isn't caught.
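For reference, a minimal sketch of the setup described above (class, slot, and member names such as DbMonitor are just illustrative; m_writeInProgress is from my description); the race remains because the watcher may fire only after commit() has returned and the flag was already reset:

```cpp
// Sketch of the watcher-plus-flag approach. The race: QFileSystemWatcher
// may report the change after commit() returns and the flag was reset.
#include <QFileSystemWatcher>
#include <QObject>
#include <QSqlDatabase>

class DbMonitor : public QObject {
    Q_OBJECT
public:
    explicit DbMonitor(const QString &path, QObject *parent = nullptr)
        : QObject(parent), m_db(QSqlDatabase::addDatabase("QSQLITE")) {
        m_db.setDatabaseName(path);
        m_db.open();
        m_watcher.addPath(path);
        connect(&m_watcher, &QFileSystemWatcher::fileChanged,
                this, &DbMonitor::onFileChanged);
    }

    void writeSomething() {
        m_writeInProgress = true;  // mark this as one of our own writes
        m_db.transaction();
        // ... run INSERT/UPDATE statements via QSqlQuery ...
        m_db.commit();             // disk write (and watcher signal) may lag
    }

private slots:
    void onFileChanged(const QString &) {
        if (m_writeInProgress) {   // our own write: swallow the signal
            m_writeInProgress = false;
            return;
        }
        // otherwise: an "external" change happened
    }

private:
    QFileSystemWatcher m_watcher;
    QSqlDatabase m_db;
    bool m_writeInProgress = false;
};
```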
So possibly, using a QFileSystemWatcher is the wrong way to implement this.
How could this be done in a safe way?
Thanks for any help!
Edit:
I found a way to solve part of the problem. Taking an exclusive lock on the database file prevents other connections from changing it. It's quite simple; I just have to execute
PRAGMA locking_mode = EXCLUSIVE
BEGIN EXCLUSIVE
COMMIT
and handle the error that emerges if another instance of my program tries to access the database.
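In Qt, that could look roughly like this (a sketch, assuming an already-open QSqlDatabase named db; the function name is just a placeholder):

```cpp
// Take the exclusive lock; if another instance holds it, BEGIN EXCLUSIVE
// fails and lastError() reports that the database is locked.
#include <QSqlDatabase>
#include <QSqlError>
#include <QSqlQuery>
#include <QDebug>

bool lockDatabaseExclusively(QSqlDatabase &db) {
    QSqlQuery q(db);
    if (!q.exec("PRAGMA locking_mode = EXCLUSIVE"))
        return false;
    // The exclusive lock is actually acquired by the first write transaction
    // and, in this locking mode, is kept until the connection is closed.
    if (!q.exec("BEGIN EXCLUSIVE")) {
        qWarning() << "Database already locked:" << q.lastError().text();
        return false;
    }
    return q.exec("COMMIT");
}
```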
What's left is to know if the user (accidentally) deleted the file during runtime ...
First of all, there's no SQLite support for this: SQLite only supports monitoring changes created over a database connection within your direct control. Whatever happens in a separate process concurrently with your process, or when your process is not running, is by design completely out of your control.
The canonical solution to this problem is to encrypt the database with a key specific to your application (and perhaps user, etc.). Then, no third-party process can modify the database using SQLite. Of course any process can still corrupt your database, or get rid of it -- that's too bad. You can detect corruption trivially by using cryptographic signatures, perhaps even error-correcting codes so as to be able to restore the data should a certain amount of corruption happen. You don't need notifications of someone moving or deleting the database file: you will know when you attempt to open the database and the "file not found" error is given back to you.
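For the deleted-or-moved case, a minimal sketch of that detection with the SQLite C API (the function name openExisting is just a placeholder): without SQLITE_OPEN_CREATE, sqlite3_open_v2() refuses to create a fresh file, so a missing database surfaces as an error at open time.

```cpp
// Open an existing database only; a deleted or moved file is reported
// as SQLITE_CANTOPEN instead of being silently re-created.
#include <sqlite3.h>
#include <cstdio>

sqlite3 *openExisting(const char *path) {
    sqlite3 *db = nullptr;
    int rc = sqlite3_open_v2(path, &db, SQLITE_OPEN_READWRITE, nullptr);
    if (rc != SQLITE_OK) {
        std::fprintf(stderr, "open failed (%d): %s\n", rc, sqlite3_errmsg(db));
        sqlite3_close(db);  // a handle is allocated even on failure
        return nullptr;
    }
    return db;
}
```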
Of course all of the above requires a custom VFS implementation. That's very much par for the course.
I want to immediately copy an SQLite database file after closing the db. Is this safe, or is it asynchronous in the sense that the close function does not wait until everything is complete? I.e., is there a risk of corruption if you copy a db file using OS file operations immediately after the db is closed?
My guess would be that there is no issue and that closing the db ensures it is now safe to make a copy of the file on the drive... But I want to make sure, as a corrupt copy of a database would be a huge headache, and likely an intermittent bug that doesn't always occur.
The close call to the db will not be in a separate thread in my program.
The SQLite library runs in your process, and does not use threads (except for sorting).
So it is guaranteed that after closing, any operations done through this connection are complete.
However, this does not prevent other processes from accessing the database file. Better to use the online backup API to make a copy.
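A minimal sketch of that with the backup API (https://www.sqlite.org/backup.html); error handling kept minimal, paths are placeholders:

```cpp
// Copy a live database safely using SQLite's online backup API.
#include <sqlite3.h>

int copyDatabase(const char *srcPath, const char *dstPath) {
    sqlite3 *src = nullptr, *dst = nullptr;
    int rc = sqlite3_open_v2(srcPath, &src, SQLITE_OPEN_READONLY, nullptr);
    if (rc == SQLITE_OK)
        rc = sqlite3_open(dstPath, &dst);
    if (rc == SQLITE_OK) {
        sqlite3_backup *b = sqlite3_backup_init(dst, "main", src, "main");
        if (b) {
            sqlite3_backup_step(b, -1);  // -1 = copy all pages in one pass
            sqlite3_backup_finish(b);
        }
        rc = sqlite3_errcode(dst);       // SQLITE_OK on success
    }
    sqlite3_close(dst);
    sqlite3_close(src);
    return rc;
}
```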
I am trying to figure out how to best configure sqlite3. I need writes to be very fast but I can't risk the entire database getting corrupt in the event of a power failure. I don't care if the last write or last few writes are lost in the event of a power failure. I just don't want all the data to be lost. What would be the best settings to use to achieve this?
What you are looking for is the write-ahead log (WAL) journaling mode. Otherwise, there is also the asynchronous I/O module. You will find information about it here: An Asynchronous I/O Module For SQLite.
It saves writes to a queue which is dispatched to the filesystem in a background thread. The transactional guarantees still apply so as long as your transactions are composed correctly, there's no danger of corrupting the database.
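For the WAL route, a minimal sketch of the configuration, assuming an open sqlite3 * handle: in WAL mode with synchronous=NORMAL, a power failure can lose the last few committed transactions but cannot corrupt the database file, which matches your requirements.

```cpp
// Configure a connection for fast writes that survive power loss.
#include <sqlite3.h>

int configureFastButSafe(sqlite3 *db) {
    int rc = sqlite3_exec(db, "PRAGMA journal_mode=WAL;",
                          nullptr, nullptr, nullptr);
    if (rc != SQLITE_OK)
        return rc;
    return sqlite3_exec(db, "PRAGMA synchronous=NORMAL;",
                        nullptr, nullptr, nullptr);
}
```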
In my application, there is a thread that is constantly receiving and writing data to a SQLite database inside a transaction, then committing the transaction when it's done.
At the same time, when the application runs a long running query, the write thread seems to get blocked and no data gets written. Each method uses the same connection object.
Is there way to do an equivalent of a SQL (nolock) query, or some other way to have my reads not lock up any of my tables?
Thanks!
You have a conceptual error. SQLite works this way:
The traditional File/Open operation does an sqlite3_open() and executes a BEGIN TRANSACTION to get exclusive access to the content. File/Save does a COMMIT followed by another BEGIN TRANSACTION. The use of transactions guarantees that updates to the application file are atomic, durable, isolated, and consistent.
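For illustration, a sketch of the quoted File/Open pattern in terms of the SQLite C API (the function names fileOpen/fileSave are just placeholders):

```cpp
#include <sqlite3.h>

sqlite3 *fileOpen(const char *path) {
    sqlite3 *db = nullptr;
    if (sqlite3_open(path, &db) != SQLITE_OK) {
        sqlite3_close(db);
        return nullptr;
    }
    // Hold a transaction for the lifetime of the "document".
    sqlite3_exec(db, "BEGIN TRANSACTION;", nullptr, nullptr, nullptr);
    return db;
}

void fileSave(sqlite3 *db) {
    // File/Save: make pending changes durable, then reopen a transaction.
    sqlite3_exec(db, "COMMIT;", nullptr, nullptr, nullptr);
    sqlite3_exec(db, "BEGIN TRANSACTION;", nullptr, nullptr, nullptr);
}
```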
So you can't work that way, because there is no real need to work that way. I think you must rethink the algorithm to work with SQLite. That's the reason your connection is blocked.
More information:
When to use it
FAQ
Using threads on SQLite: avoid them!
I'm investigating SQLite as a storage engine, and am curious to know whether SQLite locks the database file on reads.
I am concerned about read performance as my planned project will have few writes, but many reads. If the database does lock, are there measures that can be taken (such as memory caching) to mitigate this?
You can avoid locks when reading if you set the database journal mode to Write-Ahead Logging (see: http://www.sqlite.org/wal.html).
From its Wikipedia page:
Several computer processes or threads may access the same database without problems. Several read accesses can be satisfied in parallel.
More precisely, from its FAQ:
Multiple processes can have the same database open at the same time. Multiple processes can be doing a SELECT at the same time. But only one process can be making changes to the database at any moment in time, however.
A single write to the database, however, does lock the database for a short time, so nothing can access it at all (not even for reading). Details may be found in File Locking And Concurrency In SQLite Version 3. Basically, reading the database is no problem unless someone wants to write to it at the same moment. In that case the DB is locked exclusively for the time it takes to execute that transaction, and the lock is released afterwards. However, details are scarce on what exactly happens to read operations on the database during a PENDING or EXCLUSIVE lock. My guess is that they either return SQLITE_BUSY or block until they can read. In the first case, it shouldn't be too hard to just try again, especially if you are expecting few writes.
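One way to handle that SQLITE_BUSY case, as a sketch with the SQLite C API: install a busy timeout so blocked operations retry internally instead of failing immediately (the function name and the 250 ms value are just illustrative).

```cpp
#include <sqlite3.h>

void setRetryOnBusy(sqlite3 *db) {
    // Sleeps and retries for up to 250 ms before returning SQLITE_BUSY.
    sqlite3_busy_timeout(db, 250);
}
```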
Adding more info for this answer:
Q: Does SQLite lock the database file on reads?
A: No and Yes
Ref: https://www.sqlite.org/atomiccommit.html#_acquiring_a_read_lock
The first step toward reading from the database file is obtaining a shared lock on the database file. A "shared" lock allows two or more database connections to read from the database file at the same time. But a shared lock prevents another database connection from writing to the database file while we are reading it.