How do I prevent SQLite database locks?

From the SQLite FAQ I've learned that:
Multiple processes can have the same database open at the same time.
Multiple processes can be doing a SELECT at the same time. But only
one process can be making changes to the database at any moment in
time, however.
So, as far as I understand I can:
1) Read db from multiple threads (SELECT)
2) Read db from multiple threads (SELECT) and write from single thread (CREATE, INSERT, DELETE)
But, I read about Write-Ahead Logging that provides more concurrency as readers do not block writers and a writer does not block readers. Reading and writing can proceed concurrently.
Finally, I got completely muddled when I found this, which says:
Here are other reasons for getting an SQLITE_LOCKED error:
Trying to CREATE or DROP a table or index while a SELECT statement is
still pending.
Trying to write to a table while a SELECT is active on that same table.
Trying to do two SELECT on the same table at the same time in a
multithread application, if sqlite is not set to do so.
fcntl(3,F_SETLK) call on DB file fails. This could be caused by an NFS locking
issue, for example. One solution for this issue is to mv the DB away,
and copy it back so that it has a new Inode value
So, I would like to clarify for myself: when do I need to worry about locks? Can I read and write at the same time from two different threads? Thanks.

For those who are working with Android API:
Locking in SQLite is done on the file level, which guarantees locking
of changes from different threads and connections. Thus multiple
threads can read the database, but only one can write to it at a time.
More on locking in SQLite can be read in the SQLite documentation, but we are most interested in the API provided by Android.
Writing from two concurrent threads can be attempted either from a single database connection or from multiple connections. Since only one thread can write to the database at a time, there are two cases:
If you write from two threads over one connection, then one thread will
wait for the other to finish writing.
If you write from two threads over different connections, then an error
will occur – not all of your data will be written to the database and
the application will be interrupted with a
SQLiteDatabaseLockedException. It becomes evident that the
application should always have only one copy of
SQLiteOpenHelper (i.e. a single open connection), otherwise
SQLiteDatabaseLockedException can occur at any moment.
Different Connections At a Single SQLiteOpenHelper
Everyone is aware that SQLiteOpenHelper has two methods providing access to the database, getReadableDatabase() and getWritableDatabase(), to read and write data respectively. However, in most cases there is one real connection. Moreover, it is one and the same object:
SQLiteOpenHelper.getReadableDatabase()==SQLiteOpenHelper.getWritableDatabase()
This means that, as far as reading data goes, there is no difference between the two methods. However, there is another, more important undocumented issue: the class SQLiteDatabase has its own internal lock – the mLock field. Writes are locked at the level of the SQLiteDatabase object, and since there is only one SQLiteDatabase instance for both reading and writing, reads are blocked as well. This is most noticeable when writing a large volume of data in a transaction.
Let's consider an example of such an application: it has to download a large volume of data (approx. 7000 rows containing BLOBs) in the background on first launch and save it to the database. If the data is saved inside a single transaction, saving takes approx. 45 seconds, but the user cannot use the application because all reading queries are blocked. If the data is saved in small portions, the update process drags out for a rather long time (10–15 minutes), but the user can use the application without any restrictions or inconvenience. A double-edged sword – either fast or convenient.
Google has already fixed part of the issues related to SQLiteDatabase functionality by adding the following methods:
beginTransactionNonExclusive() – creates a transaction in "IMMEDIATE mode".
yieldIfContendedSafely() – temporarily releases the transaction in order to allow other threads to complete their tasks.
isDatabaseIntegrityOk() – checks the database integrity.
Please read more details in the documentation.
However, this functionality is needed for older versions of Android as well.
The Solution
First, internal locking should be turned off so that data can be read in any situation.
SQLiteDatabase.setLockingEnabled(false);
This disables the internal query locking at the level of the Java class (it is not related to locking in terms of SQLite itself).
SQLiteDatabase.execSQL("PRAGMA read_uncommitted = true;");
This allows reading data from the cache; in fact, it changes the isolation level. The parameter has to be set anew for each connection: if there are several connections, it affects only the connection that issues the command.
SQLiteDatabase.execSQL("PRAGMA synchronous=OFF");
This changes how writes reach the database – without "synchronization". With this option the database can be damaged if the system unexpectedly fails or power is lost. However, according to the SQLite documentation, some operations run up to 50 times faster when synchronous writes are turned off.
Unfortunately, not all PRAGMAs are supported on Android: for example, "PRAGMA locking_mode = NORMAL" and "PRAGMA journal_mode = OFF" (among others) are not supported, and attempting to call them makes the application fail.
The documentation for setLockingEnabled says this method is recommended only if you are sure that all work with the database is done from a single thread. We should guarantee that only one transaction is held at a time. Also, instead of the default (exclusive) transactions, immediate transactions should be used. In older versions of Android (below API 11) there is no way to create an immediate transaction through the Java wrapper, but SQLite itself supports this functionality. To start a transaction in immediate mode, the following SQLite statement should be executed directly against the database, for example through execSQL:
SQLiteDatabase.execSQL("begin immediate transaction");
Since the transaction is started with a direct query, it has to be finished the same way:
SQLiteDatabase.execSQL("commit transaction");
The only thing left to implement is a TransactionManager that starts and finishes transactions of the required type. The purpose of the TransactionManager is to guarantee that all modifying queries (insert, update, delete, DDL) originate from the same thread.
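The article doesn't show the TransactionManager itself; as a language-neutral sketch of the underlying idea (all writes serialized through one lock, each wrapped in an immediate transaction), here is roughly what it looks like using Python's built-in sqlite3 module. The class shape and the example table are my own illustration, not the article's code:

import sqlite3
import threading

class TransactionManager:
    """Serializes all modifying queries and runs each inside BEGIN IMMEDIATE."""

    def __init__(self, path):
        # isolation_level=None means autocommit mode, so we issue BEGIN/COMMIT ourselves.
        self._conn = sqlite3.connect(path, isolation_level=None, check_same_thread=False)
        self._write_lock = threading.Lock()

    def run_in_transaction(self, work):
        """work(conn) performs the writes; only one transaction is active at a time."""
        with self._write_lock:
            self._conn.execute("BEGIN IMMEDIATE")
            try:
                work(self._conn)
                self._conn.execute("COMMIT")
            except Exception:
                self._conn.execute("ROLLBACK")
                raise

# tm = TransactionManager("app.db")
# tm.run_in_transaction(lambda c: c.execute("INSERT INTO items(name) VALUES (?)", ("x",)))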
Hope this helps future visitors!

Not specific to SQLite:
1) Write your code to gracefully handle the situation where you get a locking conflict at the application level, even if you wrote your code so that this is 'impossible'. Use transactional retries (i.e. SQLITE_LOCKED could be one of many codes that you interpret as "try again" or "wait and try again"), and coordinate this with application-level code (see the sketch after this list). If you think about it, getting a SQLITE_LOCKED is better than simply having the attempt hang because it's locked – you can go do something else.
2) Acquire locks. But you have to be careful if you need to acquire more than one. For each transaction at the application level, acquire all of the resources (locks) you will need in a consistent (ie: alphabetical?) order to prevent deadlocks when locks get acquired in the database. Sometimes you can ignore this if the database will reliably and quickly detect the deadlocks and throw exceptions; in other systems it may just hang without detecting the deadlock - making it absolutely necessary to take the effort to acquire the locks correctly.
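Here is the retry idea from point 1 as a rough sketch, using Python's built-in sqlite3 module; the retry count, back-off and table name are arbitrary choices, not anything prescribed by SQLite:

import sqlite3
import time

def execute_with_retry(conn, sql, params=(), attempts=5, delay=0.05):
    # Retry a statement when SQLite reports the database as busy/locked.
    for attempt in range(attempts):
        try:
            with conn:                                  # wraps the statement in a transaction
                return conn.execute(sql, params)
        except sqlite3.OperationalError as e:
            if "locked" not in str(e) and "busy" not in str(e):
                raise                                   # not a locking conflict, don't retry
            time.sleep(delay * (attempt + 1))           # back off and try again
    raise RuntimeError("gave up after repeated lock conflicts")

# conn = sqlite3.connect("app.db")
# execute_with_retry(conn, "INSERT INTO log(msg) VALUES (?)", ("hello",))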
Besides the facts of life with locking, you should try to design the data and in-memory structures with concurrent merging and rolling back planned in from the beginning. If you can design the data such that the outcome of a data race gives a good result for all orderings, then you don't have to deal with locks in that case. A good example is incrementing a counter without knowing its current value, rather than reading the value and submitting a new value in the update. It's similar for appending to a set (i.e. adding a row, such that it doesn't matter in which order the inserts happened).
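For instance, the counter case can be expressed so that the database applies the increment itself, making concurrent increments order-independent (a sketch with Python's sqlite3; the counters table is hypothetical):

import sqlite3

conn = sqlite3.connect("app.db")

# Race-prone: read the value, then write back a computed one. Two threads can
# both read 5 and both write 6, losing one increment.
# value = conn.execute("SELECT value FROM counters WHERE id = 1").fetchone()[0]
# conn.execute("UPDATE counters SET value = ? WHERE id = 1", (value + 1,))

# Order-independent: let the database do the arithmetic in a single statement,
# so it doesn't matter which concurrent increment is applied first.
with conn:
    conn.execute("UPDATE counters SET value = value + 1 WHERE id = 1")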
A good system is supposed to transactionally move from one valid state to the next, and you can think of exceptions (even in in-memory code) as aborting an attempt to move to the next state; with the option to ignore or retry.

You're fine with multithreading. The page you link lists what you cannot do while you're looping on the results of your SELECT (i.e. your select is active/pending) in the same thread.

Related

DDD: persisting domain objects into two databases. How many repositories should I use?

I need to persist my domain objects into two different databases. This use case is purely write-only. I don't need to read back from the databases.
Following Domain Driven Design, I typically create a repository for each aggregate root.
I see two alternatives. I can create one single repository for my AR, and implement it so that it persists the domain object into the two databases.
The second alternative is to create two repositories, one for each database.
From a domain driven design perspective, which alternative is correct?
My requirement is that it must persist the AR in both databases – all or nothing. So if the first one goes through and the second fails, I would need to remove the AR from the first one.
If you had a transaction manager that were to span across those two databases, you would use that manager to automatically roll back all of the transactions if one of them fails. A transaction manager like that would necessarily add overhead to your writes, as it would have to ensure that all transactions succeeded, and while doing so, maintain a lock on the tables being written to.
If you consider what the transaction manager is doing, it is effectively writing to one database and ensuring that write is successful, writing to the next, and then committing those transactions. You could implement the same type of process using a two-phase commit process. Unfortunately, this can be complicated because the process of keeping two databases in sync is inherently complex.
You would use a process manager or saga to manage the process of ensuring that the databases are consistent:
Write to the first database, leaving the record in a PENDING status (not visible to user reads).
Make a request to the second database to write the record in a PENDING status.
Make a request to the first database to set the record to a VALID status (visible to user reads).
Make a request to the second database to set the record to a VALID status.
The issue with this approach is that the process can fail at any point. In this case, you would need to account for those failures. For example,
You can have a process that comes through and finds records in PENDING status that are older than X minutes and continues pushing them through the workflow.
You can have a process that cleans up any PENDING records after X minutes and purges them from the database.
Ideally, you are using something like a queue based workflow that allows you to fire and forget these commands and a saga or process manager to identify and react to failures.
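A minimal sketch of that PENDING/VALID flow, using two local SQLite files purely for illustration — the schema, statuses and function names are my assumptions, and a real system would drive these steps from a queue with retries rather than a straight-line function:

import sqlite3

def write_pending(conn, record_id, payload):
    with conn:
        conn.execute(
            "INSERT INTO records(id, payload, status) VALUES (?, ?, 'PENDING')",
            (record_id, payload),
        )

def mark_valid(conn, record_id):
    with conn:
        conn.execute("UPDATE records SET status = 'VALID' WHERE id = ?", (record_id,))

def persist_everywhere(db1, db2, record_id, payload):
    # Each step is individually durable; a crash between steps leaves PENDING
    # rows behind for a background sweeper to re-drive or purge.
    write_pending(db1, record_id, payload)   # step 1
    write_pending(db2, record_id, payload)   # step 2
    mark_valid(db1, record_id)               # step 3: now visible to readers of db1
    mark_valid(db2, record_id)               # step 4: now visible to readers of db2

# db1 = sqlite3.connect("first.db"); db2 = sqlite3.connect("second.db")
# persist_everywhere(db1, db2, 42, "some payload")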
The second alternative is to create two repositories, one for each database.
Based on the above, hopefully you can understand why this is the correct option.
If you don't need to read back, why not build some sort of command log?
The log acts as a queue: you write the operation into it, and two processes pull new commands from it, each one updating its own database. If you can accept that in the worst-case scenario the two DBs may temporarily hold different versions of the data, with the guarantee that they will eventually be consistent, this seems much easier to me than transactions spanning two different DBs.
I'm not sure how much DDD applies to your use case: if you don't need to read back, you don't have any state to manage, so there is no need for entities/aggregates.

Concurrency issue with two SQLite connections in two different Threads

I am using System.Data.SQLite with C#
I have
Thread 1 (UI) - writes to table1
Thread 2 (worker) - writes to table1
So I have two threads writing concurrently to the same database. I write my data in batches, and each batch has a transaction. Batches are needed to avoid locking the database for too long, so that the other thread can get write access to the database.
But that doesn't work. I hoped thread1 would be able to write to the database between the transaction batches of thread2, but that doesn't happen unless I put Thread.Sleep(100) between batches. Note that a smaller value such as Thread.Sleep(10) doesn't work either. I understand this is related to thread context switching, but I don't understand why small amounts of Thread.Sleep don't do the job.
Is there a way to control the priority of who acquires the database lock, because using Thread.Sleep is bad?
P.S. It seems this is a problem even without transactions. If I have a loop with many insert statements, the other thread cannot execute anything between the insert statements unless there is a Thread.Sleep.
I don't think SQLite supports true concurrent write transactions... I've used "non-exclusive" transactions on Android and they're documented (I'm writing in my own words from memory) as allowing concurrent reads while there may be a write going on.
Now more to the point... Looking at SQLite transaction docs:
https://sqlite.org/lang_transaction.html
Thus with a deferred transaction, the BEGIN statement itself does
nothing to the filesystem. Locks are not acquired until the first read
or write operation. The first read operation against a database
creates a SHARED lock and the first write operation creates a RESERVED
lock. Because the acquisition of locks is deferred until they are
needed, it is possible that another thread or process could create a
separate transaction and write to the database after the BEGIN on the
current thread has executed. If the transaction is immediate, then
RESERVED locks are acquired on all databases as soon as the BEGIN
command is executed, without waiting for the database to be used.
(emphasis mine)
OK, so now we've learned that writing to a database requires a RESERVED lock.
Let's see what that is here:
https://sqlite.org/lockingv3.html#reserved_lock
A RESERVED lock means that the process is planning on writing to the
database file at some point in the future but that it is currently
just reading from the file. Only a single RESERVED lock may be active
at one time, though multiple SHARED locks can coexist with a single
RESERVED lock. RESERVED differs from PENDING in that new SHARED locks
can be acquired while there is a RESERVED lock.
OK, so this confirms that SQLite requires a RESERVED lock for writing to the database, and also tells us that only a single RESERVED lock may exist at a time -> only one transaction is allowed to have write access, others will wait.
Now if you're trying to interleave writing transactions from your two threads, each thread executing multiple (granular) transactions - then here is an idea:
Replacing Thread.Sleep with Thread.Yield
https://learn.microsoft.com/en-us/dotnet/api/system.threading.thread.yield
This may help with the issue of "the current writing thread's Sleep didn't cause a context switch to the thread we want".
Even with Yield there are still no guarantees that the OS / runtime will switch to the thread you want, but... perhaps worth a try, and at least you won't be artificially making your code run slower.
Given what we know about SQLite's "only one transaction is allowed to write", I'd consider the following pattern:
1 - Make a new thread whose job is to process database writes
2 - Queue up write operations to this thread from your current two threads
3 - Have the "operations" be self-contained / self-sufficient objects, containing all the data they intend to write
4 - Finally, use a callback with a latch (in C# it's CountdownEvent, I believe) to know when an operation is done, so your current threads can await completion
Then you'd only have one writing thread (as far as SQLite is concerned) and still have concurrency between your two current threads.
Pseudo code:
// Write thread
while (item = blockingQueue.getNextItemToWrite()) {
item.executeWrite(database)
item.signalCompletion()
}
// Thread 1
item = new WriteItem(some data that needs to be written)
WriteThread.enqueue(item)
item.awaitCompletion()
// Thread 2 - same as Thread 1
Where the WriteItem base class has a CountdownEvent which is 1) awaited on by awaitCompletion and 2) signaled by signalCompletion.
I'm sure there is a way to wrap this into more elegant helper classes and maybe use async / await.
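The pseudo code above is C#-flavored; as a concrete analogue of the same pattern (one dedicated writer thread, callers block on a per-item latch), here is a sketch using Python's standard library — the item shape and the table are invented for the example:

import queue
import sqlite3
import threading

class WriteItem:
    def __init__(self, sql, params=()):
        self.sql, self.params = sql, params
        self.done = threading.Event()            # the "latch" the caller waits on

    def execute(self, conn):
        with conn:                               # one transaction per item
            conn.execute(self.sql, self.params)

write_queue = queue.Queue()

def writer_thread(db_path):
    conn = sqlite3.connect(db_path)
    while True:
        item = write_queue.get()                 # blocks until work arrives
        try:
            item.execute(conn)
        finally:
            item.done.set()                      # signal completion to the caller

def write_and_wait(sql, params=()):
    item = WriteItem(sql, params)
    write_queue.put(item)
    item.done.wait()                             # caller blocks until the write finished

# threading.Thread(target=writer_thread, args=("app.db",), daemon=True).start()
# write_and_wait("INSERT INTO table1(value) VALUES (?)", (42,))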
Take a look at busy_timeout. In an ideal world, both of your threads (assuming they do not share the connection) should be allowed to read and write at their convenience. Why bother with timing via Sleep if you can avoid it?
Next, you are correctly using transactions. Have you looked into the three different behaviours for transactions? https://sqlite.org/lang_transaction.html
That, however, does not solve the issue that thread 1 might try to get a lock while thread 2 is holding one. For this, see PRAGMA busy_timeout. Simply set the pragma on each connection to, say, 1000 (ms). If thread 2 has locked the database and thread 1 tries to get a lock, thread 1 will simply wait for up to 1000 ms before failing with a timeout error. (https://sqlite.org/pragma.html#pragma_busy_timeout)
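For illustration, here is what that looks like with Python's built-in sqlite3 module (the same PRAGMA works from System.Data.SQLite; the 1000 ms value is just the example figure from above):

import sqlite3

conn = sqlite3.connect("app.db")

# Wait up to 1000 ms for a competing lock to clear before raising
# "database is locked", instead of failing immediately.
conn.execute("PRAGMA busy_timeout = 1000")

with conn:
    conn.execute("INSERT INTO table1(value) VALUES (?)", (42,))

Python's sqlite3.connect() also exposes the same wait via its timeout parameter (it defaults to 5 seconds).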

sqlite setup for immediate writes but can't corrupt old data

I am trying to figure out how to best configure sqlite3. I need writes to be very fast but I can't risk the entire database getting corrupt in the event of a power failure. I don't care if the last write or last few writes are lost in the event of a power failure. I just don't want all the data to be lost. What would be the best settings to use to achieve this?
What you are looking for is the write-ahead log, or WAL, journaling mode. Otherwise, there is also the asynchronous I/O module. You will find information about it here: An Asynchronous I/O Module For SQLite.
It saves writes to a queue which is dispatched to the filesystem in a background thread. The transactional guarantees still apply so as long as your transactions are composed correctly, there's no danger of corrupting the database.
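As a sketch of that setup with Python's sqlite3 (the journal_mode PRAGMA is the WAL switch mentioned above; pairing it with synchronous=NORMAL is a common choice, assumed here rather than taken from the answer):

import sqlite3

conn = sqlite3.connect("data.db")

# Write-ahead logging: commits append to the -wal file instead of rewriting pages in place.
conn.execute("PRAGMA journal_mode = WAL")

# With WAL, synchronous=NORMAL fsyncs only at checkpoints: a power cut may lose
# the last few commits, but it should not corrupt the database file.
conn.execute("PRAGMA synchronous = NORMAL")

with conn:
    conn.execute("CREATE TABLE IF NOT EXISTS samples(t REAL, v REAL)")
    conn.execute("INSERT INTO samples VALUES (1.0, 2.0)")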

SQLite Concurrent Access

Does SQLite3 safely handle concurrent access by multiple processes
reading/writing from the same DB? Are there any platform exceptions to that?
If most of those concurrent accesses are reads (e.g. SELECT), SQLite can handle them very well. But if you start writing concurrently, lock contention could become an issue. A lot would then depend on how fast your filesystem is, since the SQLite engine itself is extremely fast and has many clever optimizations to minimize contention. Especially SQLite 3.
For most desktop/laptop/tablet/phone applications, SQLite is fast enough as there's not enough concurrency. (Firefox uses SQLite extensively for bookmarks, history, etc.)
For server applications, somebody some time ago said that anything less than 100K page views a day could be handled perfectly by a SQLite database in typical scenarios (e.g. blogs, forums), and I have yet to see any evidence to the contrary. In fact, with modern disks and processors, 95% of web sites and web services would work just fine with SQLite.
If you want really fast read/write access, use an in-memory SQLite database. RAM is several orders of magnitude faster than disk.
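For reference, an in-memory database is just a special filename (Python's sqlite3 shown; the table is arbitrary):

import sqlite3

# ":memory:" creates a database that lives only in RAM and disappears
# when the connection is closed.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cache(k TEXT PRIMARY KEY, v TEXT)")
conn.execute("INSERT INTO cache VALUES ('a', '1')")
print(conn.execute("SELECT v FROM cache WHERE k = 'a'").fetchone())  # ('1',)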
Yes it does.
Let's figure out why.
SQLite is transactional
All changes within a single transaction in SQLite either occur
completely or not at all
Such ACID support, as well as concurrent reads/writes, is provided in two ways: using so-called rollback journaling (let's call it the "old way") or write-ahead logging (let's call it the "new way").
Journaling (Old Way)
In this mode SQLite uses DATABASE-LEVEL locking.
This is the crucial point to understand.
That means whenever it needs to read/write something it first acquires a lock on the ENTIRE database file.
Multiple readers can co-exist and read something in parallel.
During writing it makes sure an exclusive lock is acquired and no other process is reading/writing simultaneously and hence writes are safe.
(This is known as a multiple-readers, single-writer or MRSW lock.)
This is why they say here that SQLite implements serializable transactions.
Troubles
Since it needs to lock the entire database every time, and everybody waits for whichever process is currently writing, concurrency suffers and such concurrent writes/reads have fairly low performance.
Rollbacks/outages
Prior to writing something to the database file, SQLite first saves the chunk about to be changed in a temporary file. If something crashes in the middle of writing to the database file, it picks up this temporary file and reverts the changes from it.
Write-Ahead Logging or WAL (New Way)
In this case all writes are appended to a temporary file (write-ahead log) and this file is periodically merged with the original database.
When SQLite is searching for something, it first checks this temporary file, and if nothing is found there it proceeds to the main database file.
As a result, readers don’t compete with writers and performance is much better compared to the Old Way.
Caveats
SQLite heavily depends on the underlying filesystem's locking functionality, so it should be used with caution; more details here.
You're also likely to run into the "database is locked" error, especially in the journaled mode, so your app needs to be designed with this error in mind.
Yes, SQLite handles concurrency well, but it isn't the best from a performance angle. From what I can tell, there are no exceptions to that. The details are on SQLite's site: https://www.sqlite.org/lockingv3.html
This statement is of interest: "The pager module makes sure changes happen all at once, that either all changes occur or none of them do, that two or more processes do not try to access the database in incompatible ways at the same time"
Nobody seems to have mentioned WAL (write-ahead log) mode. Make sure the transactions are properly organised and, with WAL mode turned on, there is no need to keep the database locked while people are reading things while an update is going on.
The only issue is that at some point the WAL needs to be re-incorporated into the main database, and this happens when the last connection to the database closes. With a very busy site you might find it takes a few seconds for all connections to close, but 100K hits per day should not be a problem.
In 2019, there are two new concurrent write options not released yet but available in separate branches.
"PRAGMA journal_mode = wal2"
The advantage of this journal mode over regular "wal" mode is that writers may continue writing to one wal file while the other is checkpointed.
BEGIN CONCURRENT - link to detailed doc
The BEGIN CONCURRENT enhancement allows multiple writers to process write transactions simultaneously if the database is in "wal" or "wal2" mode, although the system still serializes COMMIT commands.
When a write-transaction is opened with "BEGIN CONCURRENT", actually locking the database is deferred until a COMMIT is executed. This means that any number of transactions started with BEGIN CONCURRENT may proceed concurrently. The system uses optimistic page-level-locking to prevent conflicting concurrent transactions from being committed.
Together they are present in the begin-concurrent-wal2 branch, or each in its own separate branch.
SQLite has a readers-writer lock on the database level. Multiple connections (possibly owned by different processes) can read data from the same database at the same time, but only one can write to the database.
SQLite supports an unlimited number of simultaneous readers, but it will only allow one writer at any instant in time. For many situations, this is not a problem. Writers queue up. Each application does its database work quickly and moves on, and no lock lasts for more than a few dozen milliseconds. But there are some applications that require more concurrency, and those applications may need to seek a different solution. -- Appropriate Uses For SQLite # SQLite.org
The readers-writer lock enables independent transaction processing and it is implemented using exclusive and shared locks on the database level.
An exclusive lock must be obtained before a connection performs a write operation on a database. After the exclusive lock is obtained, both read and write operations from other connections are blocked till the lock is released again.
Implementation details for the case of concurrent writes
SQLite has a lock table that helps locking the database as late as possible during a write operation to ensure maximum concurrency.
The initial state is UNLOCKED, and in this state the connection has not accessed the database yet. Even when a process has connected to a database and started a transaction with BEGIN, the connection is still in the UNLOCKED state.
After the UNLOCKED state, the next state is the SHARED state. In order to be able to read (not write) data from the database, the connection must first enter the SHARED state, by getting a SHARED lock.
Multiple connections can obtain and maintain SHARED locks at the same time, so multiple connections can read data from the same database at the same time. But as long as even only one SHARED lock remains unreleased, no connection can successfully complete a write to the database.
If a connection wants to write to the database, it must first get a RESERVED lock.
Only a single RESERVED lock may be active at one time, though multiple SHARED locks can coexist with a single RESERVED lock. RESERVED differs from PENDING in that new SHARED locks can be acquired while there is a RESERVED lock. -- File Locking And Concurrency In SQLite Version 3 # SQLite.org
Once a connection obtains a RESERVED lock, it can start processing database modification operations, though these modifications are only made in the in-memory buffer rather than actually written to disk; the modifications to the pages that were read are kept in the memory buffer.
When a connection wants to commit a modification (or transaction), it is necessary to upgrade the RESERVED lock to an EXCLUSIVE lock. To get that lock, the RESERVED lock must first be raised to a PENDING lock.
A PENDING lock means that the process holding the lock wants to write to the database as soon as possible and is just waiting on all current SHARED locks to clear so that it can get an EXCLUSIVE lock. No new SHARED locks are permitted against the database if a PENDING lock is active, though existing SHARED locks are allowed to continue.
An EXCLUSIVE lock is needed in order to write to the database file. Only one EXCLUSIVE lock is allowed on the file and no other locks of any kind are allowed to coexist with an EXCLUSIVE lock. In order to maximize concurrency, SQLite works to minimize the amount of time that EXCLUSIVE locks are held.
-- File Locking And Concurrency In SQLite Version 3 # SQLite.org
So you might say SQLite safely handles concurrent access by multiple processes writing to the same DB simply because it doesn't support it! You will get SQLITE_BUSY or SQLITE_LOCKED for the second writer when it hits the retry limit.
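That behaviour is easy to reproduce; a small sketch with Python's sqlite3 (timeout=0 disables the built-in busy wait so the conflict surfaces immediately):

import sqlite3

# Two separate connections to the same database file.
a = sqlite3.connect("demo.db", isolation_level=None, timeout=0)
b = sqlite3.connect("demo.db", isolation_level=None, timeout=0)

a.execute("CREATE TABLE IF NOT EXISTS t(x)")

a.execute("BEGIN IMMEDIATE")         # connection a takes the RESERVED (write) lock
a.execute("INSERT INTO t VALUES (1)")

try:
    b.execute("BEGIN IMMEDIATE")     # connection b cannot also become a writer
except sqlite3.OperationalError as e:
    print("second writer:", e)       # "database is locked"

a.execute("COMMIT")
b.execute("BEGIN IMMEDIATE")         # succeeds once a has committed
b.execute("ROLLBACK")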
This thread is old, but I think it would be good to share the results of my tests done on SQLite:
I ran 2 instances of a Python program (different processes, same program) executing SELECT and UPDATE SQL commands within a transaction with an EXCLUSIVE lock and a 10-second timeout to get a lock, and the results were frustrating. Every instance ran a 10,000-step loop:
connect to db with exclusive lock
select on one row to read counter
update the row with new value equal to counter incremented by 1
close connection to db
Even though SQLite granted an exclusive lock on the transaction, the total number of actually executed cycles was not 20,000 but fewer (counting the iterations over the single counter for both processes together).
The Python program threw almost no exceptions (only one, during a SELECT, across 20 executions).
The SQLite revision at the moment of the test was 3.6.20, with Python 3.3 on CentOS 6.5.
In my opinion it is better to find a more reliable product for this kind of job, or to restrict writes to SQLite to a single unique process/thread.
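For reference, here is a rough reconstruction of the loop described above (not the poster's original code; the table and column names are guesses), along with one plausible explanation for the lost increments:

import sqlite3

for _ in range(10000):
    # connect to db with "exclusive" transactions and a 10 s lock timeout
    conn = sqlite3.connect("counter.db", isolation_level="EXCLUSIVE", timeout=10)
    cur = conn.cursor()
    # Pitfall: with the sqlite3 module's implicit transaction handling, a plain
    # SELECT does not open the transaction -- only the UPDATE below triggers the
    # implicit BEGIN EXCLUSIVE. Two processes can therefore both read the same
    # value, and one increment is lost, which would match the result above.
    (value,) = cur.execute("SELECT value FROM counter WHERE id = 1").fetchone()
    cur.execute("UPDATE counter SET value = ? WHERE id = 1", (value + 1,))
    conn.commit()
    conn.close()

Issuing BEGIN EXCLUSIVE explicitly before the SELECT (with isolation_level=None) keeps the read and the write in the same transaction and avoids that particular race.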
It is natural that when you specify a name for the db (or even use an in-memory db) and have concurrent access (especially writes), you will get this.
In my case, I am using SQLite for testing, and it happens because there are several tests in the same solution.
You can make two improvements:
Delete the database before creating it: db.Database.EnsureDeletedAsync();
Use an empty string for the connection string; in this case it will create a random name on each call:
{
  "ConnectionStrings": {
    "ConnectionType": "sqlite",
    "ConnectionString": ""
  }
}

Optimizing Put Performance in Berkeley DB

I just started playing with Berkeley DB a few days ago so I'm trying to see if there's something I've been missing when it comes to storing data as fast as possible.
Here's some info about the data:
- it comes in 512 byte chunks
- chunks come in order
- chunks will be deleted in FIFO order
- if i lose some data off the end because of power failure that's ok as long as the whole db isn't broken
After reading a bunch of the documentation, it seemed like a Queue db was exactly what I wanted.
However, after trying some test code my fastest results were about 1MByte per second just looping through a DB->put with DB_APPEND set. I also tried using transactions and bulk puts but both of these slowed things down considerably so I didn't pursue them for much time. I was inserting into a fresh db created on a NANDFlash chip on my Freescale i.MX35 dev board.
Since we're looking to get at least 2MBytes per second write speeds, I was wondering if there's something I missed that can improve my speeds since I know that my hardware can write faster than this.
Try putting this into your DB_CONFIG:
set_flags DB_TXN_WRITE_NOSYNC
set_flags DB_TXN_NOSYNC
From my experience, these increase write performance a lot.
DB_TXN_NOSYNC
If set, Berkeley DB will not write or synchronously flush the log on transaction commit or prepare. This means that transactions exhibit the ACI (atomicity, consistency, and isolation) properties, but not D (durability); that is, database integrity will be maintained, but if the application or system fails, it is possible some number of the most recently committed transactions may be undone during recovery. The number of transactions at risk is governed by how many log updates can fit into the log buffer, how often the operating system flushes dirty buffers to disk, and how often the log is checkpointed
Calling DB_ENV->set_flags with the DB_TXN_NOSYNC flag only affects the specified DB_ENV handle (and any other Berkeley DB handles opened within the scope of that handle). For consistent behavior across the environment, all DB_ENV handles opened in the environment must either set the DB_TXN_NOSYNC flag or the flag should be specified in the DB_CONFIG configuration file.
The DB_TXN_NOSYNC flag may be used to configure Berkeley DB at any time during the life of the application.
DB_TXN_WRITE_NOSYNC
If set, Berkeley DB will write, but will not synchronously flush, the log on transaction commit or prepare. This means that transactions exhibit the ACI (atomicity, consistency, and isolation) properties, but not D (durability); that is, database integrity will be maintained, but if the system fails, it is possible some number of the most recently committed transactions may be undone during recovery. The number of transactions at risk is governed by how often the system flushes dirty buffers to disk and how often the log is checkpointed.
Calling DB_ENV->set_flags with the DB_TXN_WRITE_NOSYNC flag only affects the specified DB_ENV handle (and any other Berkeley DB handles opened within the scope of that handle). For consistent behavior across the environment, all DB_ENV handles opened in the environment must either set the DB_TXN_WRITE_NOSYNC flag or the flag should be specified in the DB_CONFIG configuration file.
The DB_TXN_WRITE_NOSYNC flag may be used to configure Berkeley DB at any time during the life of the application.
See http://www.mathematik.uni-ulm.de/help/BerkeleyDB/api_c/env_set_flags.html for more details.
I suggest you use transactions / the TDS data store if, as you mention, you cannot recreate the database (i.e. it isn't just a local cache) should it get corrupted. If you don't care about losing a few items in the event of a crash/power outage, then DB_TXN_WRITE_NOSYNC will improve TDS performance; your database will still be intact and recoverable.
If you store using BTREE and a numeric index (if you have no natural key), and watch out for endianness issues so that you get good key locality and high page utilization, then you should be able to get way more than 2000 inserts a second, especially to SSD, especially if you use DbMultipleKeyDataBuilder to do bulk inserts.
