I am using Flyway with Postgres, and I have noticed that if my Tomcat server is running and I try to execute DROP SCHEMA foo, it does not work until Tomcat shuts down. I am assuming that Flyway has some mechanism that blocks modifications to the schema after it runs. How does Flyway block other clients from modifying the schema?
Flyway doesn't lock the schema.
When it starts applying a migration, it begins a transaction. It then acquires a lock on the metadata table using SELECT * FROM metadatatable FOR UPDATE. This lock is released automatically after the migration completes, when the transaction is committed or rolled back.
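A minimal JDBC sketch of that locking pattern, assuming PostgreSQL and a metadata table named flyway_schema_history (the actual table name depends on the Flyway version and configuration; this is an illustration, not Flyway's own code):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class MigrationLockSketch {
    public static void main(String[] args) throws Exception {
        // Connection details are placeholders for illustration.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb", "user", "password")) {
            conn.setAutoCommit(false); // begin an explicit transaction
            try (Statement stmt = conn.createStatement()) {
                // Lock the metadata table; a concurrent migration blocks here.
                stmt.executeQuery("SELECT * FROM flyway_schema_history FOR UPDATE");
                // ... apply the migration's SQL statements here ...
                conn.commit(); // releases the lock
            } catch (Exception e) {
                conn.rollback(); // also releases the lock
                throw e;
            }
        }
    }
}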
I'm considering MariaDB master-master configuration for a webapp database.
My application has some index locking, something like:
SELECT COUNT(*) FROM person WHERE event=? FOR UPDATE;
The transaction checks the number of subscribers to the event and inserts the new person only if the capacity has not been reached; the lock is then released with the COMMIT command.
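The full flow is roughly the following (a simplified JDBC sketch; the connection details, column names, and capacity value are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class CapacityCheckSketch {
    public static void main(String[] args) throws Exception {
        int eventId = 42;    // placeholder values
        int capacity = 100;
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mariadb://localhost:3306/mydb", "user", "password")) {
            conn.setAutoCommit(false); // start the transaction
            try (PreparedStatement count = conn.prepareStatement(
                    "SELECT COUNT(*) FROM person WHERE event = ? FOR UPDATE")) {
                count.setInt(1, eventId);
                try (ResultSet rs = count.executeQuery()) {
                    rs.next();
                    if (rs.getInt(1) < capacity) {
                        try (PreparedStatement insert = conn.prepareStatement(
                                "INSERT INTO person (event, name) VALUES (?, ?)")) {
                            insert.setInt(1, eventId);
                            insert.setString(2, "new subscriber");
                            insert.executeUpdate();
                        }
                    }
                }
            }
            conn.commit(); // releases the lock
        }
    }
}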
I was wondering what happens if I enable master-master replication across two servers; in a first test it looks like the lock is completely ignored.
Can you confirm that the index/table lock does not work with this configuration?
How is this kind of problem usually resolved when someone needs a multi-database-server environment?
I gave the commands to CREATE TABLE and INSERT the values into the table, but the data is not saved. The next time I log in, the table is there but the inserted data is not; "no rows selected" is the message shown on screen.
Please help me save the changes in Oracle 11g.
Solution
First, set your Oracle Database client application to always implicitly commit or roll back transactions before termination (the behaviour recommended by Oracle). Then learn about database transactions, and either set your Oracle Database client application to autocommit mode or explicitly commit your D.M.L. transactions with the COMMIT statement, because you should not rely on the commit behaviour of a client application.
Summary on database transactions
Database transactions are a fundamental concept of database management systems. The essential property of a transaction is its atomicity: a transaction is either
committed (database changes are all made permanent, that is stored on disk); or
rolled back (database changes are all discarded).
Lexically, a transaction consists of one or more S.Q.L. statements of these kinds:
data definition language (D.D.L.) statements (CREATE, DROP, ALTER, GRANT, REVOKE, …) that change the structure of the database;
data manipulation language (D.M.L.) statements (INSERT, DELETE, UPDATE, SELECT, …) that retrieve or change the contents of the database.
Database client applications can operate in two different modes:
autocommit mode, where implicit transactions are implicitly started before D.D.L. and D.M.L. statements and implicitly committed on success or rolled back on failure after those statements, and explicit transactions are explicitly started with BEGIN statements, explicitly committed with COMMIT statements, and rolled back with ROLLBACK statements;
non-autocommit mode, where mixed transactions are implicitly started before D.D.L. or D.M.L. statements, explicitly committed with COMMIT statements, and rolled back with ROLLBACK statements.
In both modes, transactions are implicitly committed before the database client application terminates normally and implicitly rolled back before the database client terminates abnormally. For database management systems that do not support rolling back D.D.L. statements (Maria D.B., Oracle Database, S.Q.L. Server), transactions are also implicitly committed after D.D.L. statements.
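For illustration, here is a minimal JDBC sketch of the two modes (the connection URL, credentials, and table name are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CommitModesSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//localhost:1521/XEPDB1", "user", "password")) {

            // Autocommit mode: each D.M.L. statement is committed on success.
            conn.setAutoCommit(true);
            try (Statement stmt = conn.createStatement()) {
                stmt.executeUpdate("INSERT INTO t (id) VALUES (1)"); // already permanent
            }

            // Non-autocommit mode: changes must be committed explicitly.
            conn.setAutoCommit(false);
            try (Statement stmt = conn.createStatement()) {
                stmt.executeUpdate("INSERT INTO t (id) VALUES (2)");
                conn.commit(); // without this, the row is lost when the session ends
            }
        }
    }
}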
Explanation
When you issued your CREATE statement, since it is a D.D.L. statement and Oracle Database does not support rolling back D.D.L. statements, the transaction that created the table was implicitly committed. That is why the table structure was still there after you started a new session. But when you issued your INSERT statement, since it is a D.M.L. statement and you were not in autocommit mode, the transaction that populated the table was not implicitly committed. And because your Oracle Database client application was not set to implicitly commit transactions before termination, the table contents were gone after you started a new session.
I have a C application that inserts data directly into the database of my Meteor application. The app works fine (without delays) when I run it in development mode (with "meteor"). However, if I run the app as a bundled node app with an external MongoDB, there is an annoying delay in screen updates (5-10 s).
I have noticed some previous discussions about this:
Meteor: server-side db insert delays
Using node ddp-client to insert into a meteor collection from Node
Questions:
Is there any other way than building a server-side API for doing the db inserts through Meteor?
Why does the delay occur only when using an external MongoDB?
Is there a way in Meteor to shorten this database polling interval?
You need to enable oplog tailing. Without oplog tailing, when your C program makes a database write, the Meteor server doesn't realise anything has changed until it polls MongoDB again. With oplog tailing, it can pick up the changes much more quickly and efficiently. In development mode, oplog tailing is enabled automatically, but for production it needs some additional setup.
Your MongoDB must be set up as a replica set (a replica set of one node does work).
You have to pass in a mongo URL for the replica set's local database with the environment variable MONGO_OPLOG_URL.
For more information, see this article.
I have an ASP.NET web application which makes some changes to a table in SQL Server 2008 R2.
On this table there is a trigger that updates another table in another database on the same database server.
When saving the changes I get the following error:
- The error message: The underlying provider failed on Commit.
- InnerException: This SqlTransaction has completed; it is no longer usable.
Also, allowing the database user to connect to the other database didn't work.
Does somebody know how I can make this work?
The trigger is not related to Entity Framework.
The trigger fires when changes are made to the table irrespective of where that change came from.
This is probably a rights issue: the system is attempting to make the change to the second database with the security context that was used to connect to the first database. If the change caused by the trigger fails, then everything in the same transaction will fail.
Since you are accessing two databases in one transaction, you are using MSDTC; make sure that it is started and that you have rights to it.
I'm working with SQLite for my Android application, and after some research I figured out how to do multiple inserts in a single statement using UNION.
But this is quite inefficient. From what I see at http://www.sqlite.org/speed.html, and in a lot of other forums, I can speed up the process using BEGIN and COMMIT statements. But when I use them I get this error:
Cannot start a transaction within a transaction.
Why? What is the most efficient way of doing multiple inserts?
Which JDBC driver are you using? Is there only one that's built into the Android distribution?
The problem is most likely with java.sql.Connection#setAutoCommit(). If the connection already has auto-commit enabled—which you can check with Connection#getAutoCommit()—then your JDBC driver is already issuing the SQL commands to start a transaction before your manual attempt to do so, which renders your manual command redundant and invalid.
If you're looking to control transaction extent, you need to disable auto-commit mode for the Connection by calling
connection.setAutoCommit(false);
and then later, after your individual DML statements have all been issued, either commit or roll back the active transaction via Connection#commit() or Connection#rollback().
I have noticed that some JDBC drivers have a hard time coordinating auto-commit mode with PreparedStatement's batch-related methods. In particular, the Xerial JDBC driver and the Zentus driver on which it's based both fight against a user controlling the auto-commit mode with batch statement execution.
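A rough sketch of that pattern, assuming a JDBC SQLite driver such as Xerial's and a placeholder table and column:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BatchInsertSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:app.db")) {
            conn.setAutoCommit(false); // wrap the whole batch in one transaction
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO item (name) VALUES (?)")) {
                for (int i = 0; i < 1000; i++) {
                    ps.setString(1, "name-" + i);
                    ps.addBatch();
                }
                ps.executeBatch();
                conn.commit(); // make all inserts permanent at once
            } catch (Exception e) {
                conn.rollback(); // discard the whole batch on failure
                throw e;
            }
        }
    }
}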