How to maintain transactions in two different databases - asp.net

I have to maintain transactions across two different databases: if any error occurs in Database1, then all changes in Database2 should be rolled back as well.
I have two connection strings in my web.config file.

The answer depends on whether you need distributed transactions between two database server instances, or transactions between two databases in a single instance. In the first case you'll need a transaction manager like MSDTC; in the second case the database server should be able to do the job by itself.
TransactionScope will escalate the transaction to MSDTC if necessary. The rules for this are somewhat subtle. If the two databases are on a single SQL Server 2008 instance, you shouldn't need MSDTC.
See also:
TransactionScope automatically escalating to MSDTC on some machines?
Common Gotchas when using TransactionScope and MS DTC

You could use the TransactionScope class:
using System.Transactions;

using (var tx = new TransactionScope())
{
    // TODO: your DB queries here; any connection opened inside the scope
    // is automatically enlisted in the (possibly distributed) transaction
    tx.Complete(); // marks the scope complete; if this line is never reached, everything rolls back
}
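
For the two-database case from the question, a minimal sketch might look like this (the connection string names Database1 and Database2 are assumptions standing in for the two entries in web.config):

using System.Configuration;
using System.Data.SqlClient;
using System.Transactions;

using (var tx = new TransactionScope())
{
    // Both connections enlist in the same ambient transaction; if they
    // point at separate server instances, the transaction escalates to MSDTC
    using (var conn1 = new SqlConnection(
        ConfigurationManager.ConnectionStrings["Database1"].ConnectionString))
    {
        conn1.Open();
        // ... execute commands against Database1
    }
    using (var conn2 = new SqlConnection(
        ConfigurationManager.ConnectionStrings["Database2"].ConnectionString))
    {
        conn2.Open();
        // ... execute commands against Database2
    }
    // If an exception is thrown before this line, both databases roll back
    tx.Complete();
}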

Related

DB SQL for Vault in corda 3

In corda open documentation I read the following:
The ORM mapping is specified using the Java Persistence API (JPA) as annotations and is converted to database table rows by the node automatically every time a state is recorded in the node’s local vault as part of a transaction.
Presently the node includes an instance of the H2 database but any database that supports JDBC is a candidate and the node will in the future support a range of database implementations via their JDBC drivers. Much of the node internal state is also persisted there.
Can I replace the H2 DB with a SQL one using JDBC?
As I understand it, FinalityFlow is used to record the transaction in the local vault using the H2 DB.
If I implement a custom flow to record in a SQL DB, do I have to avoid the FinalityFlow call?
Yes, it is possible to run a node with a SQL database other than H2. In fact, support for PostgreSQL and SQLServer has been contributed by the open-source community. See the set-up instructions here. However, be aware that the Corda continuous integration pipeline does not run unit tests or integration tests against these databases, so you use them at your own risk.
Note that in both cases, you configure the node to use the alternative database via the node's configuration file, and the node stores all its data in this alternative database (transactions, states, identities, etc.). You are not expected to access the database directly from a flow; you can rely upon the standard ServiceHub operations and standard flows like FinalityFlow.
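
For illustration, a node.conf fragment for PostgreSQL might look roughly like this (a sketch; the exact keys, driver class, URL, and credentials are assumptions to check against the set-up instructions linked above):

dataSourceProperties = {
    dataSourceClassName = "org.postgresql.ds.PGSimpleDataSource"
    dataSource.url = "jdbc:postgresql://localhost:5432/corda"
    dataSource.user = "corda"
    dataSource.password = "corda_password"
}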

How to troubleshoot old Sql connections with open_tran > 0?

We have an ASP.NET API web site which connects to SQL Server using NHibernate.
The problem we are experiencing is that gradually throughout the day, the number of connections to the SQL server creeps up, and there are many connections that do not appear to be returned to the pool. By this, I mean that if I run the following query:
select * from master..sysprocesses s where datediff(minute, s.last_batch, getdate())>10
the number of rows returned just keeps climbing. Nothing in the API should be taking 10 minutes to complete. And there are connections in there from hours ago.
Here's another clue: the open_tran column of all these rows has a value of 1. So it seems to me that somewhere inside the API call, we're creating a transaction boundary, and that transaction is never being closed. Perhaps DTC may have a hand in this (we sometimes do connect to more than one database in a call).
The thing is, I haven't a clue how to troubleshoot this further. I've tried running DBCC INPUTBUFFER on the rogue spids, and there's nothing consistent between them.
What are some of the anti-patterns/other possible causes that might lead to this behavior?
Update: here's how the DB connection is being created. We're using StructureMap for Dependency Injection. We create two DB connections for each unit of work: one "normal" connection for regular read/write access, and an "uncommitted" connection that runs in a transaction at "ReadUncommitted" isolation (we were having a problem with table locking when reading from large tables).
Here's the code from the DI Registry:
// Each unit of work gets its own transient NHibernate session...
For<ISession>().Transient().Use(context => context.GetInstance<ISessionFactory>().OpenSession());
// ...plus a second session, wrapped so it can run ReadUncommitted transactions
For<ISessionUncommittedWrapper>().Transient().Use(context => new SessionUncommittedWrapper { Session = context.GetInstance<ISessionFactory>().OpenSession() });
Then, inside the unit of work middleware, we create a UnitOfWork (with a using block, of course), which takes an ISession and an ISessionUncommittedWrapper in the constructor. In the Begin() method, we have:
_uncommittedTransaction = SessionUncommittedWrapper.Session.BeginTransaction(IsolationLevel.ReadUncommitted);
which gets disposed (along with the ISession and ISessionUncommittedWrapper) in the UnitOfWork's Dispose() method.
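
For reference, a minimal sketch of the unit of work described above (the type names come from the question; the exact shape of the classes is an assumption):

using System;
using System.Data;
using NHibernate;

// The wrapper type from the question, reconstructed as a sketch
public interface ISessionUncommittedWrapper
{
    ISession Session { get; set; }
}

public class SessionUncommittedWrapper : ISessionUncommittedWrapper
{
    public ISession Session { get; set; }
}

public sealed class UnitOfWork : IDisposable
{
    private readonly ISession _session;
    private ITransaction _uncommittedTransaction;

    public ISessionUncommittedWrapper SessionUncommittedWrapper { get; }

    public UnitOfWork(ISession session, ISessionUncommittedWrapper sessionUncommittedWrapper)
    {
        _session = session;
        SessionUncommittedWrapper = sessionUncommittedWrapper;
    }

    public void Begin()
    {
        // The line quoted in the question
        _uncommittedTransaction =
            SessionUncommittedWrapper.Session.BeginTransaction(IsolationLevel.ReadUncommitted);
    }

    public void Dispose()
    {
        // Dispose the transaction and both sessions so the underlying
        // connections are returned to the pool
        _uncommittedTransaction?.Dispose();
        SessionUncommittedWrapper.Session?.Dispose();
        _session?.Dispose();
    }
}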
I eventually found the problem.
The way I found the problem was by creating a logging table that tracked the creation and disposal of Sessions, along with the URI of the endpoint called. By querying all the undisposed connections, I found that in every case where the connection was not disposed, the path began with "/signalr".
<facepalm>D'oh!</facepalm>
Since the OWIN middleware was proactively creating the SQL connections, it was also doing so for SignalR requests, which by their nature stay open, keeping the transaction open too! So every client that logged in with SignalR was hogging two SQL connections.
I made the appropriate changes to exclude SignalR connections from the middleware, and now we have no more hanging Sql connections.
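
The exclusion might look something like this in the OWIN middleware (a sketch; CreateUnitOfWork is a hypothetical stand-in for however the middleware builds the unit of work):

using System.Threading.Tasks;
using Microsoft.Owin;

public class UnitOfWorkMiddleware : OwinMiddleware
{
    public UnitOfWorkMiddleware(OwinMiddleware next) : base(next) { }

    public override async Task Invoke(IOwinContext context)
    {
        // SignalR requests are long-lived; giving them a session and an
        // open transaction would hold connections for hours
        if (context.Request.Path.StartsWithSegments(new PathString("/signalr")))
        {
            await Next.Invoke(context);
            return;
        }

        using (var unitOfWork = CreateUnitOfWork(context))
        {
            unitOfWork.Begin();
            await Next.Invoke(context);
        }
    }

    // Hypothetical helper: resolve the unit of work however your
    // container is wired up (e.g. from StructureMap)
    private UnitOfWork CreateUnitOfWork(IOwinContext context)
    {
        throw new System.NotImplementedException();
    }
}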

System.Data.SQLite in-memory database multi-threading

I am creating a System.Data.SQLite in-memory database using the connection string
"Data Source=:memory:",
and I want to access this database from multiple threads.
Currently I clone the SQLiteConnection object and pass the copy to worker threads.
But I found that different threads actually get individual instances of the in-memory database, not a shared one. How can I share one in-memory database among threads?
Thanks!
Based on the SQLite documentation for in-memory databases, I would try a data source named with the URI filename convention, file::memory:?cache=shared or the like, instead of :memory: (note specifically the shared cache that all connections are being told to use). As explained on that page, every instance of a :memory: database is distinct from every other, exactly as you found.
Note that you may also have to enable shared-cache mode before making the connections to the in-memory database (as specified in the shared-cache documentation, with a call to sqlite3_enable_shared_cache(int)).
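
With System.Data.SQLite, that might look roughly like this (a sketch; the FullUri connection-string keyword is the usual way to pass a URI filename to this provider, but verify it against your provider version):

using System.Data.SQLite;

// Every connection opened with this string should see the same shared
// in-memory database, instead of each thread getting a private copy
const string connectionString = "FullUri=file::memory:?cache=shared";

using (var connection = new SQLiteConnection(connectionString))
{
    connection.Open();
    // ... create tables and insert data here; other threads opening
    // connections with the same string will see the same data
}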

Difference between transaction in SQL Server and ADO.NET?

What is the difference between a transaction in SQL Server and a transaction in ADO.NET?
Please reply with proper logic. I just want to know in terms of performance.
I just want to know: if I am running a similar set of queries with a T-SQL transaction (BEGIN TRAN ... COMMIT/ROLLBACK) versus the SqlTransaction class in ADO.NET, which is better to use?
There is no difference between an ADO.NET transaction and a SQL Server transaction as far as transaction handling goes. Personally, I prefer initiating transactions at the higher level that ADO.NET offers, because it normally gives me greater flexibility in setting the scope of the transaction.
I use SQL Server-level transactions only when I need to update multiple tables together; for example, when I have a Master table and a Detail table and want to update both.
I have used an ADO.NET (.NET-level) transaction in only one project. It was a SQL Server 2008 project with a requirement to save DOC files in the database using a SQL Server 2008 feature called FILESTREAM. Enabling FILESTREAM creates a shared folder on the server and saves all the file data into that folder, which can only be read by SQL Server.
An ADO.NET transaction is more convenient if you are making changes to multiple databases within a transaction and want to roll all of them back in case of an error.
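
As a sketch, the client-side equivalent of BEGIN TRAN ... COMMIT/ROLLBACK using SqlTransaction might look like this (the connection string and SQL text are placeholders):

using System.Data.SqlClient;

using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    using (var transaction = connection.BeginTransaction())
    {
        try
        {
            // Every command in the transaction must be given the
            // SqlTransaction object explicitly
            using (var command = new SqlCommand(
                "UPDATE Master SET Total = Total + 1", connection, transaction))
            {
                command.ExecuteNonQuery();
            }
            transaction.Commit();
        }
        catch
        {
            transaction.Rollback();
            throw;
        }
    }
}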

Why isn't TransactionScope rolling back distributed transactions?

I am using an object persistence framework called ECO to update data in SQL Server. I've noticed that if I create a TransactionScope and then deliberately throw an exception after my first transaction has committed but before my second has committed, the first database is updated and the second is not.
I thought that simply creating the TransactionScope around the numerous updates was all I had to do, once the distributed transaction coordinator is running on the main DB?
Can anyone think of any reason why this would permit a scenario where the first DB is updated but not the second?
Got it!
ECO supports the following databases...
BlackFish
DB2
FireBird
Mimer
MySql
NexusDB
Oracle
SQLite
SQLServer
Sybase
Borland data provider
Borland database eXpress (DBX)
I remembered this morning that some of these don't support connection pooling, so ECO has implemented its own connection pooling on an abstract PersistenceMapper class. This is what was happening:
App starts
I have opted to store my object mapping info in the DB itself, so ECO gets a connection and reads that mapping info
ECO returns the connection to the pool, but its OWN pool
I later start a distributed transaction
I update my objects to the database
ECO retrieves a connection from its own pool
As a consequence, the connection is not enlisted in the current distributed transaction. Considering SqlConnection does its own pooling, it was acceptable to set PersistenceMapperSqlServer.MaxPoolSize to zero.
Now it uses the SqlConnection component to deal with the creation/disposal of Connections, not only does that component pool the connections but it also deals with distributed transactions properly too!
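
In code, the workaround amounts to something like this (a sketch; persistenceMapper stands in for however your application reaches the PersistenceMapperSqlServer instance):

// Disable ECO's internal pool so every operation opens a fresh
// SqlConnection; fresh connections enlist in the ambient distributed
// transaction, and SqlConnection pools at the ADO.NET level anyway
persistenceMapper.MaxPoolSize = 0;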
I've written to the developers to let them know that they should mark this property obsolete.
