Multiple users accessing the same MariaDB database simultaneously

I have a MariaDB database, and I want multiple database users from different systems to be able to access it at the same time to read and write data.
Is it possible to accomplish that goal?
Thank you in advance

Concurrent access is the only way to access MySQL/MariaDB. You would have to do something extra to prevent it.
That said, there are certain concurrency issues to be aware of.
Multiple connections can SELECT without any interference (other than resource contention, such as CPU).
Multiple writes (UPDATE, DELETE, INSERT) lead to contention among themselves, or sometimes even with SELECT. In some apps, this contention is not a problem. In others, you need to be careful to avoid "wrong" results. A classic problem is transferring money from one account to another. It should be performed by a "transaction" like this:
BEGIN;
UPDATE accounts SET balance = balance - 1000 WHERE id = 234;
UPDATE accounts SET balance = balance + 1000 WHERE id = 765;
if ... then COMMIT else ROLLBACK
You don't want any other connection messing with either of those rows while both UPDATEs are being performed, hence the BEGIN and COMMIT.
If you find something amiss, such as "insufficient funds", or if the system crashes, you want a ROLLBACK so that these two rows are reverted to their state before the BEGIN.
Contention is handled in one of 3 ways:
Allow simultaneous access (e.g., just SELECTs)
"Wait" -- Some conflicts can be resolved by one connection waiting for another to release the row(s).
"Deadlock" -- Some conflicts cannot be resolved simply by waiting. In this case, one of the transactions is ROLLBACK'd for you.
See also SELECT ... FOR UPDATE

Related

DDD: persisting domain objects into two databases. How many repositories should I use?

I need to persist my domain objects into two different databases. This use case is purely write-only. I don't need to read back from the databases.
Following Domain Driven Design, I typically create a repository for each aggregate root.
I see two alternatives. I can create one single repository for my AR, and implement it so that it persists the domain object into the two databases.
The second alternative is to create two repositories, one for each database.
From a domain driven design perspective, which alternative is correct?
My requirement is that it must persist the AR in both databases, all or nothing. So if the first one goes through and the second fails, I would need to remove the AR from the first one.
If you had a transaction manager that were to span across those two databases, you would use that manager to automatically roll back all of the transactions if one of them fails. A transaction manager like that would necessarily add overhead to your writes, as it would have to ensure that all transactions succeeded, and while doing so, maintain a lock on the tables being written to.
If you consider what the transaction manager is doing, it is effectively writing to one database and ensuring that write is successful, writing to the next, and then committing those transactions. You could implement the same type of process using a two-phase commit process. Unfortunately, this can be complicated because the process of keeping two databases in sync is inherently complex.
You would use a process manager or saga to manage the process of ensuring that the databases are consistent:
1) Write to the first database and leave the record in a PENDING status (not visible to user reads).
2) Make a request to the second database to write the record in a PENDING status.
3) Make a request to the first database to move the record to a VALID status (visible to user reads).
4) Make a request to the second database to move the record to a VALID status.
The issue with this approach is that the process can fail at any point. In this case, you would need to account for those failures. For example,
You can have a process that comes through and finds records in PENDING status that are older than X minutes and continues pushing them through the workflow.
You can have a process that cleans up any PENDING records after X minutes and purges them from the database.
Ideally, you are using something like a queue based workflow that allows you to fire and forget these commands and a saga or process manager to identify and react to failures.
The second alternative is to create two repositories, one for each database.
Based on the above, hopefully you can understand why this is the correct option.
If you don't need to read, why not build some sort of command log?
The log acts as a queue: you write each operation into it, and two processes pull new commands from it, each one updating its own database. If you can accept that, in the worst case, the two DBs may temporarily hold different versions of the data, with the guarantee that they will eventually be consistent, this seems much easier to me than transactions spanning two different DBs.
I'm not sure how well DDD fits your use case: if you don't need to read back, you don't have any state to manage, so there is no need for entities/aggregates.

How do I prevent SQLite database locks?

From sqlite FAQ I've known that:
Multiple processes can have the same database open at the same time.
Multiple processes can be doing a SELECT at the same time. But only
one process can be making changes to the database at any moment in
time, however.
So, as far as I understand I can:
1) Read db from multiple threads (SELECT)
2) Read db from multiple threads (SELECT) and write from single thread (CREATE, INSERT, DELETE)
But, I read about Write-Ahead Logging that provides more concurrency as readers do not block writers and a writer does not block readers. Reading and writing can proceed concurrently.
Finally, I got completely muddled when I found the following, where it is specified:
Here are other reasons for getting an SQLITE_LOCKED error:
Trying to CREATE or DROP a table or index while a SELECT statement is
still pending.
Trying to write to a table while a SELECT is active on that same table.
Trying to do two SELECT on the same table at the same time in a
multithread application, if sqlite is not set to do so.
fcntl(3,F_SETLK call on DB file fails. This could be caused by an NFS locking
issue, for example. One solution for this issue, is to mv the DB away,
and copy it back so that it has a new Inode value
So, I would like to clarify for myself: when should I avoid locks? Can I read and write at the same time from two different threads? Thanks.
For those who are working with Android API:
Locking in SQLite is done on the file level which guarantees locking
of changes from different threads and connections. Thus multiple
threads can read the database; however, only one can write to it.
More on locking in SQLite can be read at SQLite documentation but we are most interested in the API provided by OS Android.
Writing from two concurrent threads can be attempted both from a single database connection and from multiple connections. Since only one thread can write to the database at a time, there are two variants:
If you write from two threads of one connection, then one thread will
wait for the other to finish writing.
If you write from two threads over different connections, then an error
occurs: not all of your data will be written to the database, and
the application will be interrupted with
SQLiteDatabaseLockedException. It becomes evident that the
application should always have only one copy of
SQLiteOpenHelper (i.e., just one open connection); otherwise
SQLiteDatabaseLockedException can occur at any moment.
Different Connections At a Single SQLiteOpenHelper
Everyone is aware that SQLiteOpenHelper has two methods providing access to the database, getReadableDatabase() and getWritableDatabase(), to read and write data respectively. However, in most cases there is one real connection. Moreover, it is one and the same object:
SQLiteOpenHelper.getReadableDatabase()==SQLiteOpenHelper.getWritableDatabase()
This means there is no difference between the methods as far as reading data is concerned. However, there is another, more important, undocumented issue: the class SQLiteDatabase contains its own locks (the variable mLock). Writes lock at the level of the SQLiteDatabase object, and since there is only one copy of SQLiteDatabase for both reading and writing, reads are blocked as well. This is most visible when writing a large volume of data in a transaction.
Consider an example of an application that should download a large volume of data (approx. 7000 rows containing BLOBs) in the background on first launch and save it to the database. If the data is saved inside one transaction, then saving takes approx. 45 seconds, but the user cannot use the application since all reading queries are blocked. If the data is saved in small portions, then the update process drags out for a rather lengthy period (10-15 minutes), but the user can use the application without restrictions or inconvenience. A double-edged sword: either fast or convenient.
Google has already fixed some of the issues related to SQLiteDatabase functionality; the following methods have been added:
beginTransactionNonExclusive() – creates a transaction in "IMMEDIATE" mode.
yieldIfContendedSafely() – temporarily releases the transaction in order to allow other threads to complete their tasks.
isDatabaseIntegrityOk() – checks database integrity.
Please read in more details in the documentation.
However, older versions of Android need this functionality as well.
The Solution
First, locking should be turned off so that data can be read in any situation.
SQLiteDatabase.setLockingEnabled(false);
This cancels the internal query locking at the level of the Java class (it is not related to locking in terms of SQLite).
SQLiteDatabase.execSQL("PRAGMA read_uncommitted = true;");
This allows reading data from the cache; in fact, it changes the isolation level. The parameter has to be set anew for each connection; if there are several connections, it affects only the connection that issues the command.
SQLiteDatabase.execSQL("PRAGMA synchronous=OFF");
This changes how writes are flushed to the database: without "synchronization". With this option, the database can be damaged if the system unexpectedly fails or power is lost. However, according to the SQLite documentation, some operations are executed up to 50 times faster with synchronization off.
Unfortunately, not all PRAGMAs are supported on Android; e.g., "PRAGMA locking_mode = NORMAL" and "PRAGMA journal_mode = OFF", among others, are not supported, and attempting to call them makes the application fail.
The documentation for the method setLockingEnabled says it is recommended only when you are sure that all work with the database is done from a single thread. We should also guarantee that only one transaction is held at a time, and the immediate transaction should be used instead of the default (exclusive) transaction. In older versions of Android (below API 11) there is no way to create an immediate transaction through the Java wrapper, but SQLite itself supports this functionality. To start a transaction in immediate mode, the following SQLite query should be executed directly against the database, for example through the method execSQL:
SQLiteDatabase.execSQL("begin immediate transaction");
Since the transaction is started by a direct query, it should be finished the same way:
SQLiteDatabase.execSQL("commit transaction");
The only thing left to implement is a TransactionManager that starts and finishes transactions of the required type. Its purpose is to guarantee that all change queries (INSERT, UPDATE, DELETE, and DDL) originate from the same thread; a sketch of this idea follows below.
Hope this helps the future visitors!!!
Not specific to SQLite:
1) Write your code to gracefully handle the situation where you get a locking conflict at the application level, even if you wrote your code so that this is 'impossible'. Use transactional retries (i.e., SQLITE_LOCKED could be one of many codes that you interpret as "try again" or "wait and try again"), and coordinate this with application-level code; a sketch follows after these two points. If you think about it, getting a SQLITE_LOCKED is better than simply having the attempt hang because it's locked, because you can go do something else.
2) Acquire locks. But you have to be careful if you need to acquire more than one. For each transaction at the application level, acquire all of the resources (locks) you will need in a consistent (e.g., alphabetical) order to prevent deadlocks when locks get acquired in the database. Sometimes you can ignore this if the database will reliably and quickly detect the deadlocks and throw exceptions; in other systems it may just hang without detecting the deadlock, making it absolutely necessary to take the effort to acquire the locks correctly.
Besides these facts of life with locking, you should try to design the data and in-memory structures with concurrent merging and rolling back planned in from the beginning. If you can design the data such that the outcome of a data race gives a good result for all orders, then you don't have to deal with locks in that case. A good example is incrementing a counter without knowing its current value, rather than reading the value and submitting a new value to update (see the sketch below). It's similar for appending to a set (i.e., adding a row such that it doesn't matter in which order the inserts happened).
A good system is supposed to transactionally move from one valid state to the next, and you can think of exceptions (even in in-memory code) as aborting an attempt to move to the next state; with the option to ignore or retry.
You're fine with multithreading. The page you link lists what you cannot do while you're looping on the results of your SELECT (i.e. your select is active/pending) in the same thread.

Trigger on execution plan: Cartesian product

The idea is that when a user runs a query with a bad Cartesian product whose cost is above a certain threshold, Oracle emails it to me and the user. I have tried a few things, but they don't work at run time. If Toad and SQL Developer can see the execution plan, then I believe the information is there; I just need to find it. Or I may have to adopt another logic.
In general, this is probably not possible.
In theory, if you were really determined, you could generate fine-grained auditing (FGA) triggers for every table in your system that fire for every SELECT, INSERT, UPDATE, and DELETE, get the SQL_ID from V$SESSION, join to V$SQL_PLAN, and implement whatever logic you want. That's technically possible, but it would involve quite a bit of code and you'd be adding a potentially decent amount of overhead to every query in your system. This is probably not practical.
Rather than trying to use a trigger, you could write a procedure, scheduled to run every few minutes via the DBMS_JOB or DBMS_SCHEDULER packages, that queries V$SESSION for all active sessions, joins to V$SQL_PLAN, and implements whatever logic you want. That eliminates the overhead of trying to run a trigger every time any user executes any statement, but it still involves a decent amount of code.
Rather than writing any code, depending on the business problem you're trying to solve, it may be easier to create resource limits on the user's profile to let Oracle enforce limits on the amount of resources any single SQL statement can consume. For example, you could set the user's CPU_PER_CALL, LOGICAL_READS_PER_CALL, or COMPOSITE_LIMIT to limit the amount of CPU, the amount of logical I/O, or the composite limit of CPU and logical I/O that a single statement can do before Oracle kills it.
If you want even more control, you could use Oracle Resource Manager. That could allow you to do anything from preventing Oracle from running queries from certain users if they are estimated to run too long, to throttling the resources a group of users can consume when there is contention over those resources. Oracle can automatically move long-running queries from specific users to lower-priority groups, kill long-running queries automatically, prevent them from running in the first place, or any combination of those things.

SQLite Concurrent Access

Does SQLite3 safely handle concurrent access by multiple processes
reading/writing from the same DB? Are there any platform exceptions to that?
If most of those concurrent accesses are reads (e.g. SELECT), SQLite can handle them very well. But if you start writing concurrently, lock contention could become an issue. A lot would then depend on how fast your filesystem is, since the SQLite engine itself is extremely fast and has many clever optimizations to minimize contention. Especially SQLite 3.
For most desktop/laptop/tablet/phone applications, SQLite is fast enough as there's not enough concurrency. (Firefox uses SQLite extensively for bookmarks, history, etc.)
For server applications, somebody some time ago said that anything less than 100K page views a day could be handled perfectly by a SQLite database in typical scenarios (e.g. blogs, forums), and I have yet to see any evidence to the contrary. In fact, with modern disks and processors, 95% of web sites and web services would work just fine with SQLite.
If you want really fast read/write access, use an in-memory SQLite database. RAM is several orders of magnitude faster than disk.
Yes it does.
Let's figure out why.
SQLite is transactional
All changes within a single transaction in SQLite either occur
completely or not at all
Such ACID support, as well as concurrent reads/writes, is provided in two ways: using so-called journaling (let's call it the "old way") or write-ahead logging (let's call it the "new way").
Journaling (Old Way)
In this mode SQLite uses DATABASE-LEVEL locking.
This is the crucial point to understand.
That means whenever it needs to read/write something it first acquires a lock on the ENTIRE database file.
Multiple readers can co-exist and read something in parallel.
During writing it makes sure an exclusive lock is acquired and no other process is reading/writing simultaneously and hence writes are safe.
(This is known as a multiple-readers-single-writer, or MRSW, lock.)
This is why they say that SQLite implements serializable transactions.
Troubles
Since it needs to lock the entire database every time, and everybody waits for the process that is writing, concurrency suffers and such concurrent writes/reads have fairly low performance.
Rollbacks/outages
Prior to writing something to the database file, SQLite first saves the chunk to be changed in a temporary file. If something crashes in the middle of writing to the database file, it picks up this temporary file and reverts the changes from it.
Write-Ahead Logging or WAL (New Way)
In this case all writes are appended to a temporary file (write-ahead log) and this file is periodically merged with the original database.
When SQLite is searching for something it would first check this temporary file and if nothing is found proceed with the main database file.
As a result, readers don’t compete with writers and performance is much better compared to the Old Way.
Caveats
SQLite heavily depends on the underlying filesystem's locking functionality, so it should be used with caution; more details here.
You're also likely to run into the "database is locked" error, especially in the journaled mode, so your app needs to be designed with this error in mind.
Yes, SQLite handles concurrency well, but it isn't the best from a performance angle. From what I can tell, there are no exceptions to that. The details are on SQLite's site: https://www.sqlite.org/lockingv3.html
This statement is of interest: "The pager module makes sure changes happen all at once, that either all changes occur or none of them do, that two or more processes do not try to access the database in incompatible ways at the same time"
Nobody seems to have mentioned WAL (Write Ahead Log) mode. Make sure the transactions are properly organised and with WAL mode set on, there is no need to keep the database locked whilst people are reading things whilst an update is going on.
The only issue is that at some point the WAL needs to be re-incorporated into the main database, and it does this when the last connection to the database closes. With a very busy site you might find it takes a few seconds for all connections to close, but 100K hits per day should not be a problem.
In 2019, there are two new concurrent-write options, not yet released but available in separate branches.
"PRAGMA journal_mode = wal2"
The advantage of this journal mode over regular "wal" mode is that writers may continue writing to one wal file while the other is checkpointed.
BEGIN CONCURRENT - link to detailed doc
The BEGIN CONCURRENT enhancement allows multiple writers to process write transactions simultaneously if the database is in "wal" or "wal2" mode, although the system still serializes COMMIT commands.
When a write-transaction is opened with "BEGIN CONCURRENT", actually locking the database is deferred until a COMMIT is executed. This means that any number of transactions started with BEGIN CONCURRENT may proceed concurrently. The system uses optimistic page-level-locking to prevent conflicting concurrent transactions from being committed.
Together they are present in the begin-concurrent-wal2 branch, or each in its own separate branch.
SQLite has a readers-writer lock on the database level. Multiple connections (possibly owned by different processes) can read data from the same database at the same time, but only one can write to the database.
SQLite supports an unlimited number of simultaneous readers, but it will only allow one writer at any instant in time. For many situations, this is not a problem. Writers queue up. Each application does its database work quickly and moves on, and no lock lasts for more than a few dozen milliseconds. But there are some applications that require more concurrency, and those applications may need to seek a different solution. -- Appropriate Uses For SQLite # SQLite.org
The readers-writer lock enables independent transaction processing and it is implemented using exclusive and shared locks on the database level.
An exclusive lock must be obtained before a connection performs a write operation on a database. After the exclusive lock is obtained, both read and write operations from other connections are blocked till the lock is released again.
Implementation details for the case of concurrent writes
SQLite has a lock table that helps it lock the database as late as possible during a write operation to ensure maximum concurrency.
The initial state is UNLOCKED; in this state, the connection has not accessed the database yet. Even when a process has connected to a database and started a transaction with BEGIN, the connection is still in the UNLOCKED state.
After the UNLOCKED state, the next state is the SHARED state. In order to be able to read (not write) data from the database, the connection must first enter the SHARED state, by getting a SHARED lock.
Multiple connections can obtain and maintain SHARED locks at the same time, so multiple connections can read data from the same database at the same time. But as long as even only one SHARED lock remains unreleased, no connection can successfully complete a write to the database.
If a connection wants to write to the database, it must first get a RESERVED lock.
Only a single RESERVED lock may be active at one time, though multiple SHARED locks can coexist with a single RESERVED lock. RESERVED differs from PENDING in that new SHARED locks can be acquired while there is a RESERVED lock. -- File Locking And Concurrency In SQLite Version 3 # SQLite.org
Once a connection obtains a RESERVED lock, it can start processing database modification operations, though these modifications can only be made in the memory buffer rather than actually written to disk.
When a connection wants to commit a modification (or transaction), it is necessary to upgrade the RESERVED lock to an EXCLUSIVE lock. In order to get that lock, it must first promote the RESERVED lock to a PENDING lock.
A PENDING lock means that the process holding the lock wants to write to the database as soon as possible and is just waiting on all current SHARED locks to clear so that it can get an EXCLUSIVE lock. No new SHARED locks are permitted against the database if a PENDING lock is active, though existing SHARED locks are allowed to continue.
An EXCLUSIVE lock is needed in order to write to the database file. Only one EXCLUSIVE lock is allowed on the file and no other locks of any kind are allowed to coexist with an EXCLUSIVE lock. In order to maximize concurrency, SQLite works to minimize the amount of time that EXCLUSIVE locks are held.
-- File Locking And Concurrency In SQLite Version 3 # SQLite.org
So you might say SQLite safely handles concurrent access by multiple processes writing to the same DB simply because it doesn't support it! The second writer will get SQLITE_BUSY or SQLITE_LOCKED when it hits the retry limit; a small demonstration follows.
This thread is old, but I think it would be good to share the results of my tests on SQLite:
I ran two instances of a Python program (different processes, same program) executing SELECT and UPDATE SQL statements within a transaction with an EXCLUSIVE lock and a timeout of 10 seconds to get the lock, and the results were frustrating. Each instance did, in a 10,000-step loop:
connect to db with exclusive lock
select on one row to read counter
update the row with new value equal to counter incremented by 1
close connection to db
Even though SQLite granted an exclusive lock on the transaction, the total number of actually executed cycles was less than the expected 20,000 (counting the iterations over the single counter for both processes).
The Python program threw almost no exceptions (only once, during a SELECT, across 20 executions).
The SQLite revision at the moment of the test was 3.6.20, with Python 3.3 on CentOS 6.5.
In my opinion, it is better to find a more reliable product for this kind of job, or to restrict writes to SQLite to a single unique process/thread.
It is natural that when you specify a name for the db, or even use an in-memory db, you will get this if you have concurrent access (especially writes).
In my case, I am using SQLite for testing, and it happens because several tests run in the same solution.
You can have two improvements:
Delete the database before creating it: db.Database.EnsureDeletedAsync();
Use an empty string for the connection string; in this case it will create a random name on each call:
{
  "ConnectionStrings": {
    "ConnectionType": "sqlite",
    "ConnectionString": ""
  }
}

SQL Server deadlock while inserting rows in a table

The topic of SQL Server deadlocks has been discussed many times; however, I was not aware that even two simultaneous inserts on a table can end up in a deadlock situation.
Scenario:
While testing our application (SQL Server 2005 as backend, ASP.NET 3.5) we inserted records into a table simultaneously (simplified overview), and that resulted in a deadlock for more than 70% of users.
I could not get the hang of how an insert could deadlock, as this is not a case of multiple resources. After a detailed analysis (reproducing the bug with two users) I found that both processes were holding a RangeS-S lock on the primary key index of the table and were trying to convert it into a RangeI-N lock, which resulted in a deadlock with one transaction being killed.
Question:
Can we avoid or reduce these kinds of deadlocks, given that this is not a case of resources being accessed in a different order? Can't we force the transaction to take an exclusive lock initially so that it blocks the other process and avoids the deadlock? What (adverse) effects might that have?
Also, could someone explain more about the RangeI-N lock?
Isolation Level for this was "Serializable".
Any suggestion will be helpful.
Thanks,
Gaurav
Change your ADO isolation level. Unless you have clear requirements for Serializable, you shouldn't use it. If you do use it, then you must clearly understand the consequences, and frequent deadlocks due to range locks are one of these consequences.
The isolation level of System.Transactions is controlled by the IsolationLevel property.
Use sp_getapplock to acquire a custom exclusive lock
