I need to execute two things on update():
1) commit the entity to the database
2) send the entity through JMS
Because the object is quite large, the JMS send has to happen outside the database transaction. The problem is that Seam demarcates the transaction based on JSF phases, so the database transaction is already active by the time my overridden update() is called.
Adding a callback to the update, such as afterUpdate(), would be nice, but that does not seem to be possible.
Question:
How can I commit the entity and after that execute code outside the transaction?
Thanks for any help!
I found that the transaction used is a Spring transaction. That allows for something like:
TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronizationAdapter() {
    @Override
    public void afterCompletion(int status) {
        switch (status) {
            case STATUS_COMMITTED:
                LOGGER.debug("update::afterCompletion");
                afterCompletionCallback();
                break;
            case STATUS_ROLLED_BACK:
                break;
            case STATUS_UNKNOWN:
            default:
                break;
        }
    }
});
The transaction is then still available, but the database is no longer locked and any timeouts don't affect the Seam transaction.
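To tie this back to the original question, here is a minimal sketch of how the registration could be wired into the overridden update(). The EntityHome-style super.update() and getInstance() calls and the sendEntityThroughJms() helper are assumptions for illustration, not the exact code:

@Override
public String update() {
    TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronizationAdapter() {
        @Override
        public void afterCommit() {
            // Runs only after a successful commit, outside the database
            // transaction, so the (large) JMS send no longer holds it open.
            sendEntityThroughJms(getInstance()); // hypothetical JMS helper
        }
    });
    return super.update(); // Seam commits as part of its JSF phase handling
}

Overriding afterCommit() instead of afterCompletion(int) saves the status switch when only the committed case matters.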
So I am attempting to use the Cloud Firestore offline cache ONLY as an API for my instrumentation tests, to avoid having to read from and write to the server database during my integration tests.
First, in my test setup method, I call this method
protected fun setFirestoreToOfflineMode() {
    Tasks.await(FirebaseFirestore.getInstance().disableNetwork())
}
Then, at the beginning of each relevant test, I use
fun givenHasTrips(vararg trips: Trip) {
    GlobalScope.launch(Dispatchers.Default) {
        trips.forEach {
            firestoreTripApi.put(it)
        }
    }
}
In that put method, I have the following code:
try {
    Tasks.await(tripCollection().document(tripData.id).set(tripData),
        firestoreApiTimeoutSeconds, TimeUnit.SECONDS)
    Either.Right(Unit)
} catch (e: Throwable) {
    Either.Left(Failure.ServerError)
}
I call set() and wait for a successful result so that I can report the operation as a success and update my UI afterward.
What happens is that the cache DB is written correctly, BUT the set() call times out because the database is in offline mode. I have read that Firestore only confirms a success once the server DB has been written. If that is the case, I don't know whether it is possible to keep this call from timing out when operating strictly in offline-cache mode.
Is there a solution to have Firestore act as if the local cache database was the source of truth and return successes if placed in offline mode, just for tests?
The Task returned by the methods that modify the database (set, update, delete) only issues a callback when the data is fully committed to the cloud. There is no way to change this behavior.
What you can do instead is set up a listener to the document(s) that are expected to change, and wait for the listener to trigger. The listener will trigger even while offline.
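As a minimal sketch of that approach, in Java (the same Firestore Android API is callable from Kotlin); the "trips" collection name, the 5-second timeout, and the method name are assumptions for illustration:

import com.google.firebase.firestore.DocumentReference;
import com.google.firebase.firestore.FirebaseFirestore;
import com.google.firebase.firestore.ListenerRegistration;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

static boolean putAndAwaitCacheWrite(String tripId, Object tripData)
        throws InterruptedException {
    DocumentReference doc = FirebaseFirestore.getInstance()
            .collection("trips").document(tripId);
    CountDownLatch latch = new CountDownLatch(1);
    ListenerRegistration registration = doc.addSnapshotListener((snapshot, error) -> {
        if (snapshot != null && snapshot.exists()) {
            latch.countDown(); // fires from the local cache, even while offline
        }
    });
    doc.set(tripData); // do not await this Task; it won't complete while offline
    // The listener is delivered on the main thread; await on the test thread.
    boolean written = latch.await(5, TimeUnit.SECONDS);
    registration.remove();
    return written;
}

The Task returned by set() can still be kept around for error handling; the listener is only used as the signal that the local write landed.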
All,
I am using the Change Feed Processor Library and want to know the best way to handle service failures, along with exception/error scenarios, in the ProcessChangesAsync method. Below are the events I am referring to.
1) Service failure - The service hosting the processor library crashes in the middle of an operation. How do I restart processing from the same document (the one in flight at the failure)? Is there any built-in mechanism whereby the change feed starts with the last failed documents? E.g., assume the current batch has 10 docs; 5 are processed successfully and then the service breaks because of a network failure or some other reason. Will my process start with the 6th document once the service is restarted? How do I achieve this?
2) Exceptions and errors - Any errors in the ProcessChangesAsync method can be handled using a try/catch at the global level, but how do I persist those failed records and make them available for the next batch? Again, I'm looking for any built-in mechanism in the change feed processor.
1) The Processor Library, by default, checkpoints after a successful run of ProcessChangesAsync. In the latest library version you can customize the checkpointer to do manual checkpoints if you need to. If for some reason the processor shuts down before checkpointing, it will resume processing from the last successful checkpoint stored in the Leases collection. In your case, it will start with the first document of the batch again, so you will never lose a change, but you may experience double processing (this is an "at least once" model).
2) There is no built-in mechanism you can leverage; handling exceptions within ProcessChangesAsync is your responsibility. You can not only add a global try/catch but also, if you are looping over the documents, add a try/catch inside the loop to handle a failing document (perhaps sending it to a queue for later analysis/post-processing) without losing the batch. If you need logging for those errors (I'm assuming that's what you mean by persisting errors?), the latest version is compatible with LibLog, so plugging in your own custom logging is as simple as:
using Microsoft.Azure.Documents.ChangeFeedProcessor.Logging;

var hostName = "SampleHost";
var tracelogProvider = new TraceLogProvider(); // You can use any provider supported by LibLog
using (tracelogProvider.OpenNestedContext(hostName))
{
    LogProvider.SetCurrentLogProvider(tracelogProvider);
    // After this, create the IChangeFeedProcessor instance and start/stop it.
}
Source
Extra info for the comments
To avoid exceptions halting the batch or causing a batch to be reprocessed, you can add handling like this:
public async Task ProcessChangesAsync(IChangeFeedObserverContext context, IReadOnlyList<Document> documents, CancellationToken cancellationToken)
{
    try
    {
        foreach (var document in documents)
        {
            try
            {
                // Do your work for the document
            }
            catch (Exception ex)
            {
                // Something happened with the current document: handle it, send it
                // to a queue or another store for analysis, log it. This catch lets
                // the loop continue with the next document.
            }
        }
    }
    catch (Exception ex)
    {
        // Something unhandled happened: log it and avoid rethrowing it so the
        // next batch gets processed.
    }
}
I want to use Datastore from Cloud Compute through Java and I am following Getting started with Google Cloud Datastore.
My use case is quite standard: read one entity (lookup), modify it, and save the new version. I want to do this in a transaction so that if two processes do it concurrently, the second one won't overwrite the changes made by the first.
I managed to issue a transaction and it works. However, I don't know what happens if the transaction fails:
How to identify a failed transaction? Probably a DatastoreException with some specific code or name will be thrown?
Should I issue a rollback explicitly? Can I assume that if a transaction fails, nothing from it will be written?
Should I retry?
Is there any documentation on that?
How to identify a failed transaction? Probably a DatastoreException with some specific code or name will be thrown?
Your code should always ensure that a transaction is either successfully committed or rolled back. Here's an example:
// Begin the transaction.
BeginTransactionRequest begin = BeginTransactionRequest.newBuilder()
    .build();
ByteString txn = datastore.beginTransaction(begin)
    .getTransaction();
try {
    // Zero or more transactional lookup()s or runQuery()s.
    // ...
    // Followed by a commit().
    CommitRequest commit = CommitRequest.newBuilder()
        .setTransaction(txn)
        .addMutation(...)
        .build();
    datastore.commit(commit);
} catch (Exception e) {
    // If a transactional operation fails for any reason,
    // attempt to roll back.
    RollbackRequest rollback = RollbackRequest.newBuilder()
        .setTransaction(txn)
        .build();
    try {
        datastore.rollback(rollback);
    } catch (DatastoreException de) {
        // Rollback may fail due to a transient error or if
        // the transaction was already committed.
    }
    // Propagate the original exception.
    throw e;
}
An exception might be thrown by commit() or by another lookup() or runQuery() call inside the try block. In each case, it's important to clean up the transaction.
Should I issue a rollback explicitly? Can I assume that if a transaction fails, nothing from it will be written?
Unless you're sure that the commit() succeeded, you should explicitly issue a rollback() request. However, a failed commit() does not necessarily mean that no data was written. See the note on this page.
Should I retry?
You can retry using exponential backoff. However, frequent transaction failures may indicate that you are attempting to write too frequently to an entity group.
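As a minimal sketch of that backoff (the attempt count, base delay, and the runInTransaction() helper wrapping the begin/commit/rollback pattern above are all illustrative assumptions):

static void saveWithRetry(Datastore datastore) throws InterruptedException {
    int maxAttempts = 5;
    long backoffMillis = 100;
    for (int attempt = 1; ; attempt++) {
        try {
            runInTransaction(datastore); // hypothetical helper: begin, mutate, commit or roll back
            return; // committed successfully
        } catch (DatastoreException e) {
            if (attempt == maxAttempts) {
                throw e; // out of attempts, surface the failure
            }
            // Back off exponentially with random jitter before retrying.
            Thread.sleep(backoffMillis
                + java.util.concurrent.ThreadLocalRandom.current().nextLong(backoffMillis));
            backoffMillis *= 2;
        }
    }
}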
Is there any documentation on that?
https://cloud.google.com/datastore/docs/concepts/transactions
I am following the EJB standard for CMP as specified in the specification, but it does not roll back changes in the database. When I comment out Connection.close() (the Connection is retrieved from a DataSource), it rolls back successfully.
Is it recommended for WebLogic to not close a connection received from a data source?
Is it recommended for WebLogic to not close a connection received from a data source?
There is a rule that, when inside a container-managed transaction, you should never call any method that manually or natively interacts with a transaction on a transactional resource.
But Connection.close() is not such a method. Even though connections are managed, when you obtain one from an injected data source you do indeed have to close it. Note that in most cases this will not actually close the physical connection; with a transaction in progress, the transaction manager will keep the connection on hold so it can issue the commit or rollback on it when the overall transaction commits or rolls back.
Note that a connection can be closed automatically by using a try-with-resources construct. Otherwise you indeed have to call close() on it yourself (wrapped in the appropriate finally clauses).
This is a fairly standard pattern:
@Stateless
public class MyEJB {

    @Resource(lookup = "java:app/dataSource")
    private DataSource dataSource;

    public void doStuff() {
        try (Connection connection = dataSource.getConnection()) {
            // do work on connection
        } catch (SQLException e) {
            // handle exception
        }
    }
}
See this link for some more discussion on this topic: http://www.coderanch.com/t/485357/EJB-JEE/java/releasing-connection-CMT
I have a piece of code that runs on Application_Start to seed demo data into my database, but I'm getting an exception saying:
The ObjectContext instance has been disposed and can no longer be used for operations that require a connection
while trying to enumerate one of my entities: DB.ENTITY.SELECT(x => x.Id == value);
I've checked my code and I'm not disposing my context before the operation. Below is an outline of my current implementation:
protected void Application_Start()
{
    SeedDemoData();
}

public static void SeedDemoData()
{
    using (var context = new DBContext())
    {
        // my code is run here.
    }
}
So I was wondering whether Application_Start is timing out and forcing my DB context to close its connection before it completes.
Note: I know the code works because I'm using it in a different place, where it is unit tested, and there it runs without any issues.
Any ideas what the issue could be here, or what I'm missing?
After a few hours investigating the issue, I found that it was caused by the data context having pending changes on a different thread. Our implementation of database upgrades/migrations runs on a thread parallel to our Application_Start method, and I noticed that the entity I was trying to enumerate was being altered at the same time. Even though the two threads use different data contexts, EF notices that something is wrong while accessing the entity and returns a misleading error message saying the data context is disposed, when the actual problem is that the entity's state is modified but not saved.
The actual solution to my issue was to move all the seed-data functions into the database upgrade/migration scripts, so that the entities are modified in only one place at a time.