I am following the EJB standard for CMP as specified in the specification, but changes are not rolled back in the database. When I comment out Connection.close() (the Connection is retrieved from a DataSource), the rollback succeeds.
Is it recommended for WebLogic to not close a connection received from a data source?
Is it recommended for WebLogic to not close a connection received from a data source?
There is a rule that, inside a container-managed transaction, you should never call any method that manually or natively controls the transaction on a transactional resource (for example Connection.commit(), Connection.rollback() or Connection.setAutoCommit()).
But Connection.close() is not such a method. Even though connections are managed, when you obtain one from an injected data source you do have to close it. Note that in most cases this does not actually close the physical connection; with a transaction in progress, the transaction manager will typically keep the connection on hold so it can commit or roll back on it when the overall transaction commits or rolls back.
Note that a connection is closed automatically when you use a try-with-resources construct. Otherwise you have to call close() yourself, wrapped in a finally block.
This is a fairly standard pattern:
import java.sql.Connection;
import java.sql.SQLException;
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.sql.DataSource;

@Stateless
public class MyEJB {

    @Resource(lookup = "java:app/dataSource")
    private DataSource dataSource;

    public void doStuff() {
        try (Connection connection = dataSource.getConnection()) {
            // do work on connection
        } catch (SQLException e) {
            // handle exception
        }
    }
}
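If you are not on Java 7 or newer (or simply prefer not to use try-with-resources), a roughly equivalent sketch with the explicit finally block mentioned above looks like this:
public void doStuff() {
    Connection connection = null;
    try {
        connection = dataSource.getConnection();
        // do work on connection
    } catch (SQLException e) {
        // handle exception
    } finally {
        if (connection != null) {
            try {
                connection.close();
            } catch (SQLException e) {
                // nothing more can be done here; log if desired
            }
        }
    }
}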
See this link for some more discussion on this topic: http://www.coderanch.com/t/485357/EJB-JEE/java/releasing-connection-CMT
We have a Java class that listens to a database (Oracle) queue table and processes records placed in that queue. It worked normally in the UAT and development environments. After deployment to production, there are times when it cannot read a record from the queue: when a record is inserted, it is not detected and the records remain in the queue. This seldom happens, but it happens. To give a statistic, out of 30 records queued in a day, about 8 don't make it. We would need to restart the whole app for it to be able to read the records.
Here is a code snippet of my class:
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

public class SomeListener implements MessageListener {

    // logger is declared elsewhere in the class
    @Override
    public void onMessage(Message msg) {
        InputStream input = null;
        try {
            TextMessage txtMsg = (TextMessage) msg;
            String text = txtMsg.getText();
            input = new ByteArrayInputStream(text.getBytes());
        } catch (Exception e1) {
            logger.error("Parsing from the queue.... failed", e1);
            e1.printStackTrace();
        }
        // process text message
    }
}
The weird thing is that we can't find any trace of exceptions in the logs.
Can anyone help? By the way, we set the receiveTimeout to 10 seconds.
We would need to restart the whole app for it to be able to read the records.
The most common reason for this is that the listener thread is "stuck" in user code (the //process text message part). You can take a thread dump with jstack, jvisualvm or a similar tool to see what that thread is doing.
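If attaching an external tool is not practical in production, here is a minimal sketch that dumps all thread stacks from inside the JVM using the standard java.lang.management API (the ThreadDumper class name is just for illustration):
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadDumper {

    // Print the stack trace of every live thread, including lock information,
    // so you can see where the listener thread is stuck.
    public static void dumpAllThreads() {
        ThreadMXBean threadBean = ManagementFactory.getThreadMXBean();
        for (ThreadInfo info : threadBean.dumpAllThreads(true, true)) {
            System.out.print(info);
        }
    }
}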
Another possibility (with low-volume apps like this) is that the network (most likely a router somewhere along the path) silently closes an idle socket because it has not been used for some time. If the container (actually the broker's JMS client library) doesn't know the socket is dead, it will never receive any more messages.
The solution to the first is to fix the code; the solution to the second is to enable some kind of heartbeat or keepalives on the connection so that the network/router does not close the socket when it has no "real" traffic on it.
You would need to consult your broker's documentation about configuring heartbeats/keepalives.
I want to use Cloud Datastore from Compute Engine through Java, and I am following Getting started with Google Cloud Datastore.
My use case is quite standard: read one entity (lookup), modify it, and save the new version. I want to do this in a transaction so that if two processes do it concurrently, the second one won't overwrite the changes made by the first one.
I managed to issue a transaction and it works. However, I don't know what happens if the transaction fails:
How to identify a failed transaction? Probably a DatastoreException with some specific code or name will be thrown?
Should I issue a rollback explicitly? Can I assume that if a transaction fails, nothing from it will be written?
Should I retry?
Is there any documentation on that?
How to identify a failed transaction? Probably a DatastoreException with some specific code or name will be thrown?
Your code should always ensure that a transaction is either successfully committed or rolled back. Here's an example:
// Begin the transaction.
BeginTransactionRequest begin = BeginTransactionRequest.newBuilder()
    .build();
ByteString txn = datastore.beginTransaction(begin)
    .getTransaction();
try {
  // Zero or more transactional lookup()s or runQuery()s.
  // ...

  // Followed by a commit().
  CommitRequest commit = CommitRequest.newBuilder()
      .setTransaction(txn)
      .addMutation(...)
      .build();
  datastore.commit(commit);
} catch (Exception e) {
  // If a transactional operation fails for any reason,
  // attempt to roll back.
  RollbackRequest rollback = RollbackRequest.newBuilder()
      .setTransaction(txn)
      .build();
  try {
    datastore.rollback(rollback);
  } catch (DatastoreException de) {
    // Rollback may fail due to a transient error or if
    // the transaction was already committed.
  }
  // Propagate the original exception.
  throw e;
}
An exception might be thrown by commit() or by another lookup() or runQuery() call inside the try block. In each case, it's important to clean up the transaction.
Should I issue a rollback explicitly? Can I assume that if a transaction fails, nothing from it will be written?
Unless you're sure that the commit() succeeded, you should explicitly issue a rollback() request. However, a failed commit() does not necessarily mean that no data was written. See the note about this in the transactions documentation linked below.
Should I retry?
You can retry using exponential backoff. However, frequent transaction failures may indicate that you are attempting to write too frequently to an entity group.
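As a rough illustration, a minimal retry loop with exponential backoff might look like the sketch below. Here runWithRetries() and runInTransaction() are placeholder names (runInTransaction() stands for the begin/lookup/commit block shown above), and the attempt count and delays are arbitrary:
// A sketch of retrying the transactional block with exponential backoff.
// runInTransaction() is a placeholder for the begin/lookup/commit code above.
void runWithRetries() throws DatastoreException, InterruptedException {
  int maxAttempts = 5;
  long backoffMillis = 100;
  for (int attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      runInTransaction();          // begin + lookups + commit, as shown above
      return;                      // success, stop retrying
    } catch (DatastoreException e) {
      if (attempt == maxAttempts) {
        throw e;                   // give up after the last attempt
      }
      Thread.sleep(backoffMillis); // wait before the next attempt
      backoffMillis *= 2;          // and double the wait each time
    }
  }
}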
Is there any documentation on that?
https://cloud.google.com/datastore/docs/concepts/transactions
I need to execute two things on update():
commit entity to the database
send the entity through JMS
Because the object is quite large, the send through JMS has to happen outside the database transaction. The problem is that Seam demarcates the transaction based on JSF phases, so the database transaction is already active by the time my own overridden update() is called.
A callback on the update, such as afterUpdate(), would be nice, but this does not seem to be possible.
Question:
How can I commit the entity and after that execute code outside the transaction?
Thanks for any help!
I found that the transaction used is a Spring transaction. That allows for something like:
TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronizationAdapter() {

    @Override
    public void afterCompletion(int status) {
        switch (status) {
            case STATUS_COMMITTED:
                LOGGER.debug("update::afterCompletion");
                afterCompletionCallback();
                break;
            case STATUS_ROLLED_BACK:
                break;
            case STATUS_UNKNOWN:
            default:
                break;
        }
    }
});
The transaction is then still available, but the database is no longer locked and any timeouts don't affect the Seam transaction.
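For context, a minimal sketch of how this could be wired into the overridden update() (assuming the bean extends Seam's EntityHome and sendViaJms() is your own helper; afterCommit() is Spring's shorthand for the STATUS_COMMITTED branch above):
@Override
public String update() {
    // Register the callback while the transaction is still active; it will
    // run after a successful commit, outside the database transaction.
    TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronizationAdapter() {
        @Override
        public void afterCommit() {
            sendViaJms(getInstance());   // the JMS send happens only after commit
        }
    });
    return super.update();
}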
I am executing a Submit routine in ASP.NET. The problem is that, while debugging the code in a try-catch block, if I (or a user) hit an error, the SQL transaction never rolls back.
SQL Server 2008 hangs completely if I break out of this submit routine partway through. I am unable to do SELECT/INSERT operations even from SSMS. In the end, I have to restart SQL Server in order to roll back the transactions.
Code for submit:
SqlConnection conn = Db.getConn();
if (conn.State == ConnectionState.Closed) conn.Open();

SqlTransaction trn = conn.BeginTransaction();
SqlCommand sqlCmd = new SqlCommand("", conn);
sqlCmd.Transaction = trn;

try
{
    string query = GetQuery(); // works fine
    sqlCmd.CommandText = query;
    sqlCmd.ExecuteNonQuery();

    using (SqlBulkCopy bcp = new SqlBulkCopy(conn, SqlBulkCopyOptions.Default, trn))
    {
        bcp.ColumnMappings.Add("FaYear", "FaYear");
        bcp.ColumnMappings.Add("CostCode", "CostCode");
        bcp.ColumnMappings.Add("TokenNo", "TokenNo");
        bcp.DestinationTableName = "ProcessTokenAddress";
        bcp.WriteToServer(globaltblAddress.DefaultView.ToTable());
    }

    trn.Commit();
}
catch (SqlException ex)
{
    trn.Rollback();
}
NOTE: Just while writing the code here, I realized I have caught SqlException and not Exception. Is that what is causing the error?
IMPORTANT: Do I need to roll back the transaction in Page_Unload or some other event handler that could handle unexpected situations (e.g. the user closes the browser while the transaction is in progress, the user hits the back button, etc.)?
First, in .Net you shouldn't be maintaining a single open connection which you reuse over and over. The result of this is actually exactly what you're experiencing—in a situation where a connection should be closed, it isn't.
Second, connections implement IDisposable. This means they should be created, and used, within a using statement, or a try-catch block with a finally that explicitly closes the connection. You can break this rule if you have a class that itself implements IDisposable and holds onto the connection for its own lifetime, then closes the connection when it is disposed.
You might be tempted to think that you're increasing efficiency by not opening and closing connections all the time. In fact, you'd be mistaken, because .Net deals with connection pooling for you. The standard practice is for you to pass around connection strings, not open connection objects. You can wrap a connection string in a class that will return a new connection for you, but you shouldn't maintain open connections. Doing so can lead to exactly the kind of error you have experienced.
So instead, do these things:
Use a using statement. This will properly clean up your connections after creating and using them.
using (SqlConnection conn = Db.getConn()) {
    conn.Open();
    // your code here
}
The fact that you have to check whether the connection is open or not points to the problem. Don't do this. Instead, change the code in your Db class to hand out a newly created connection every time. Then you can be certain that the state will be closed, and you can open it confidently. Alternatively, open the connection in your Db class, but name your method to indicate the connection will be open, such as GetNewOpenConnection. (Try to avoid abbreviations in method and class names.)
I would recommend that you throw the error. While logging it and not throwing it is a possible option, in any context where a user is sitting at a computer expecting a result, simply swallowing the error will not be the correct action, because then how will your later code know that an error occurred and let the user know? Some means of passing the exception information on to the user is necessary. It's better to not handle the exception at all than to swallow it silently.
Finally, a little style note is that getConn() does not follow the normal capitalization practices found in the C# community and recommended by Microsoft. Public methods in classes should start with a capital letter: GetConn().
I have one piece of code that gets run on Application_Start for seeding demo data into my database, but I'm getting an exception saying:
The ObjectContext instance has been disposed and can no longer be used for operations that require a connection
while trying to enumerate one of my entities: DB.ENTITY.SELECT(x => x.Id == value);
I've checked my code and I'm not disposing my context before the operation. Below is an outline of my current implementation:
protected void Application_Start()
{
    SeedDemoData();
}

public static void SeedDemoData()
{
    using (var context = new DBContext())
    {
        // my code is run here.
    }
}
So I was wondering if Application_Start is timing out and forcing my db context to close its connection before it completes.
Note: I trust the code because I'm using it in a different place, where it is unit tested and works without any issues.
Any ideas what the issue could be here, or what I'm missing?
After a few hours of investigating the issue, I found that it is caused by the data context having pending changes on a different thread. Our current implementation for database upgrades/migrations runs on a thread parallel to our Application_Start method, so the entity I'm trying to enumerate is being altered at the same time. Even though the two threads use different data contexts, EF notices that something is wrong while accessing the entity and returns a misleading error message saying that the data context is disposed, while the actual problem is that the entity's state is modified but not saved.
The actual solution to my issue was to move all the seed-data functions into the database upgrade/migration scripts, so that the entities are only modified in one place at a time.