EJB 3.0 - Sequence of transactionally independent EJB calls with CMT - asynchronous

There is an MDB and a sequence of stateless EJBs to do some work with a message (WAS 7.0, Java EE 5, EJB 3.0, JPA).
Sequence (using CMT):
MDB accepts message
MDB persists entity with the message details
MDB calls EJB1, passing the entity's ID
EJB1 does its piece of work with the message; depending on whether EJB1 succeeds or not, it calls
EJB2 or EJB5, passing the ID
EJB2 does its piece of ...
and so on till the last EJB (running time is several minutes).
All of this happens in one transaction, so if something is thrown in EJB4, everything that happened before in this transaction is rolled back.
I've tried to use REQUIRES_NEW for all the subsequent calls, but it seems that changes made in previous calls are not visible to subsequent calls.
Also, the transaction becomes too long and sometimes times out.
I'd like to have separate, independent transactions for:
A. receiving and persisting the message
B. processing in EJB1
C. processing in EJB2
....
so that if execution of EJB2 fails, the message remains in the DB and the result of EJB1's execution is persisted as well.
So the main question is: using CMT, is it possible to have short and independent transactions for the sequence?
Some more questions:
Can the transaction originated in the MDB be committed independently of the results of the call to EJB1? Moreover, committed before the call to EJB1?
Can changes made to an entity inside the MDB be visible inside the call to an EJB1 method with the REQUIRES_NEW attribute?
Is there a way other than BTM or WorkManager to achieve the goal?

You shouldn't have any issues with REQUIRES_NEW.
If you had this:
@EJB(..)
EJB1 ejb1;

@EJB(..)
EJB2 ejb2;

public void onMessage(Message message) {
    Thing thing = getThingFromMessage(message);
    persistThingStuff(thing);
    ejb1.doThingStuffWithRequiresNew(thing);
    ejb2.doThingStuffWithRequiresNew(thing);
}
That should Just Work(tm) with one caveat.
If ejb2 throws an exception, ejb1's work will be committed, but the work from persistThingStuff will be rolled back.
But if you do something like:
public void onMessage(Message message) {
    Thing thing = getThingFromMessage(message);
    persistThingStuff(thing);
    ejb1.doThingStuffWithRequiresNew(thing);
    try {
        ejb2.doThingStuffWithRequiresNew(thing);
    } catch (Throwable t) {
        youBetterLogThis();
    }
}
That should prevent any exceptions from blowing out the work of the MDB; however, if EJB2 is long-running, any work done by the MDB is still pending in an open transaction, waiting for it.
To get around a lot of these things, we have a utility EJB method:
@TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
public <T extends Runnable> T runInTransaction(T runner) {
    runner.run();
    return runner;
}
Then we don't necessarily have to annotate specific methods for this.
@EJB(...)
UtilEJB utilEjb;

public void onMessage(Message message) {
    final Thing thing = getThingFromMessage(message);
    utilEjb.runInTransaction(new Runnable() {
        public void run() {
            persistThingStuff(thing);
        }
    });
    ejb1.doThingStuffWithRequiresNew(thing);
    try {
        ejb2.doThingStuffWithRequiresNew(thing);
    } catch (Throwable t) {
        youBetterLogThis();
    }
}
This will commit the MDB work immediately, even before EJB1.
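If the goal from the question is a fully independent transaction per step, the same utility can wrap each downstream call too, so a failure in one step no longer rolls back the previous ones. A minimal sketch, assuming EJB1 exposes a plain CMT (REQUIRED) method called doThingStuff here purely for illustration:

public void onMessage(Message message) {
    final Thing thing = getThingFromMessage(message);

    // transaction A: persist the message details; committed as soon as this call returns
    utilEjb.runInTransaction(new Runnable() {
        public void run() {
            persistThingStuff(thing);
        }
    });

    try {
        // transaction B: EJB1's work joins the REQUIRES_NEW transaction started by runInTransaction
        utilEjb.runInTransaction(new Runnable() {
            public void run() {
                ejb1.doThingStuff(thing); // hypothetical method using the default REQUIRED attribute
            }
        });
    } catch (RuntimeException e) {
        // transaction B rolled back, but A stays committed; decide here whether to go on to EJB2
        youBetterLogThis();
    }
}

The same try/catch-wrapped pattern repeats for EJB2, EJB3 and so on, which gives the short, independent transactions A, B, C... asked about.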

Related

Spring Kafka - "ErrorHandler threw an exception" and lost some records

Having a Consumer polling 2 records at a time, i.e.:
@Bean
ConsumerFactory<String, String> consumerFactory() {
    Map<String, Object> config = Map.of(
            BOOTSTRAP_SERVERS_CONFIG, "localhost:9092",
            GROUP_ID_CONFIG, "my-consumers",
            AUTO_OFFSET_RESET_CONFIG, "earliest",
            MAX_POLL_RECORDS_CONFIG, 2);
    return new DefaultKafkaConsumerFactory<>(config, new StringDeserializer(), new StringDeserializer());
}
and an ErrorHandler which can fail while handling a faulty record:
class MyListenerErrorHandler implements ContainerAwareErrorHandler {

    @Override
    public void handle(Exception thrownException,
                       List<ConsumerRecord<?, ?>> records,
                       Consumer<?, ?> consumer,
                       MessageListenerContainer container) {
        simulateBugInErrorHandling(records.get(0));
        skipFailedRecord(); // seek offset+1, which never happens
    }

    private void simulateBugInErrorHandling(ConsumerRecord<?, ?> record) {
        throw new NullPointerException(
                "DB transaction failed when saving info about failure on offset = " + record.offset());
    }
}
Then the following scenario is possible:
Topic gets 3 records
Consumer polls 2 records at a time
MessageListener fails to process the first record due to faulty payload
ErrorHandler fails to process the failure and itself throws an exception, e.g. due to some temporary issue
Third record gets processed
Second record is never processed (never enters MessageListener)
How to ensure no record is left unprocessed when ErrorHandler throws an exception with above scenario?
My goal is to achieve stateful retry logic with delays, but for brevity I omitted code responsible for tracking failed records and delaying retry.
I'd expect that after ErrorHandler throws an exception, skipping an entire batch of records should not happen. But it does.
Is it correct behavior?
Should I rather deal with commits manually than use the Spring/Kafka defaults?
Should I use a different ErrorHandler or handle method? (I need access to the Container to call pause() for the delayed retry logic; I cannot use Thread.sleep().)
Somehow related issue: https://github.com/spring-projects/spring-kafka/issues/1265
Full code: https://github.com/ptomaszek/spring-kafka-error-handler
The consumer has to be re-positioned (using seeks) in order to re-fetch the records after the failed one.
Use a DefaultErrorHandler (2.8.x and later) or a SeekToCurrentErrorHandler with earlier versions.
You can add retry options and a recoverer to deal with the failed record; by default it is just logged.
https://docs.spring.io/spring-kafka/docs/current/reference/html/#default-eh
https://docs.spring.io/spring-kafka/docs/2.7.x/reference/html/#seek-to-current
You need to do the seeks first (or in a finally block), before any exceptions can be thrown; the container does not commit the offset if the error handler throws an exception.
Kafka maintains 2 offsets - the current committed offset and the current position (set to the committed offset when the consumer starts). The next poll always returns the next record after the last poll, unless a seek is performed.
The default error handlers catch any exceptions thrown by the recoverer and make sure that the current (and subsequent) records will be returned by the next poll. See SeekUtils.doSeeks().
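For illustration, a minimal sketch of the 2.8+ approach; the FixedBackOff values, the KafkaTemplate bean and the use of a dead-letter recoverer are assumptions, not part of the original setup:

@Bean
public DefaultErrorHandler errorHandler(KafkaTemplate<String, String> template) {
    // after retries are exhausted, publish the failed record to <topic>.DLT
    DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(template);
    // retry the failed delivery up to 3 more times, 1 second apart; the failed record and the
    // ones after it are re-seeked, so the rest of the batch is not skipped
    return new DefaultErrorHandler(recoverer, new FixedBackOff(1000L, 3L));
}

@Bean
ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
        ConsumerFactory<String, String> consumerFactory, DefaultErrorHandler errorHandler) {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    factory.setCommonErrorHandler(errorHandler);
    return factory;
}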

When using @StreamListener, are customizations to the KafkaListenerContainerFactory reflected in the generated KafkaMessageListenerContainer?

I am using spring-cloud-stream with the Kafka binder to consume messages from Kafka. The application basically consumes messages from Kafka and updates a database.
There are scenarios when the DB is down (which might last for hours) or there are other temporary technical issues. Since in these scenarios there is no point in retrying a message a limited number of times and then moving it to the DLQ, I am trying to achieve an infinite number of retries when certain types of exceptions occur (e.g. DBHostNotAvaialableException).
In order to achieve this I tried 2 approaches (facing issues in both):
In the first approach, I tried setting an error handler on the container properties while configuring the ConcurrentKafkaListenerContainerFactory bean, but the error handler is not getting triggered at all. While debugging the flow I realized that the KafkaMessageListenerContainers that are created have a null errorHandler field, hence they use the default LoggingErrorHandler. Below is my container factory bean configuration (the @StreamListener method for this approach is the same as in the second approach, except for the seek on the consumer):
@Bean
public ConcurrentKafkaListenerContainerFactory<String, Object>
        kafkaListenerContainerFactory(ConsumerFactory<String, Object> kafkaConsumerFactory) {
    ConcurrentKafkaListenerContainerFactory<String, Object> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(kafkaConsumerFactory);
    factory.getContainerProperties().setAckOnError(false);
    ContainerProperties containerProperties = factory.getContainerProperties();
    // even tried a custom implementation of RemainingRecordsErrorHandler but call never went in to the implementation
    factory.getContainerProperties().setErrorHandler(new SeekToCurrentErrorHandler());
    return factory;
}
Am I missing something while configuring the factory bean, or is this bean only relevant for @KafkaListener and not @StreamListener?
The second alternative was trying to achieve this using manual acknowledgement and seek. Inside a @StreamListener method I get the Acknowledgment and Consumer from the headers; when a retryable exception is received, I do a certain number of retries using a RetryTemplate, and when those are exhausted I trigger a consumer.seek(). Example code below:
@StreamListener(MySink.INPUT)
public void processInput(Message<String> msg) {
    MessageHeaders msgHeaders = msg.getHeaders();
    Acknowledgment ack = msgHeaders.get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);
    Consumer<?, ?> consumer = msgHeaders.get(KafkaHeaders.CONSUMER, Consumer.class);
    Integer partition = msgHeaders.get(KafkaHeaders.RECEIVED_PARTITION_ID, Integer.class);
    String topicName = msgHeaders.get(KafkaHeaders.RECEIVED_TOPIC, String.class);
    Long offset = msgHeaders.get(KafkaHeaders.OFFSET, Long.class);
    try {
        retryTemplate.execute(
            context -> {
                // this is a sample service call to update database which might throw retryable exceptions like DBHostNotAvaialableException
                consumeMessage(msg.getPayload());
                return null;
            }
        );
    }
    catch (DBHostNotAvaialableException ex) {
        // once retries as per retrytemplate are exhausted do a seek
        consumer.seek(new TopicPartition(topicName, partition), offset);
    }
    catch (Exception ex) {
        // if some other exception just log and put in dlq based on enableDlq property
        logger.warn("some other business exception hence putting in dlq ");
        throw ex;
    }
    if (ack != null) {
        ack.acknowledge();
    }
}
The problem with this approach: since I am doing consumer.seek() while there might be pending records from the last poll, those might be processed and committed if the DB comes up during that period (hence out of order). Is there a way to clear those records when a seek is performed?
PS - we are currently on version 2.0.3.RELEASE of Spring Boot and Finchley.RELEASE of the Spring Cloud dependencies (hence we cannot use features like negative acknowledgement either, and an upgrade is not possible at this moment).
Spring Cloud Stream does not use a container factory. I already explained that to you in this answer.
Version 2.1 introduced the ListenerContainerCustomizer; if you add a bean of that type, it will be called after the container is created.
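For example (a sketch, assuming spring-cloud-stream 2.1+ with the Kafka binder; the generic parameters and the choice of SeekToCurrentErrorHandler are assumptions to be adapted to your setup):

@Bean
public ListenerContainerCustomizer<AbstractMessageListenerContainer<byte[], byte[]>> containerCustomizer() {
    // called once per listener container the binder creates, after the binder has configured it
    return (container, destinationName, group) ->
            container.getContainerProperties().setErrorHandler(new SeekToCurrentErrorHandler());
}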
Spring Boot 2.0 went end-of-life over a year ago and is no longer supported.
The answer I referred you to shows how you can use reflection to add an error handler.
Doing the seek in the listener will only work if you have max.poll.records=1.
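For completeness, the poll size can be capped through the binder's consumer configuration; a sketch, assuming the binding is named input:

spring.cloud.stream.kafka.bindings.input.consumer.configuration.max.poll.records=1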

JavaFX - Handle Exceptions in one place

I am working on JavaFX application and I want to know if there is a way to handle exceptions in one place.
I am doing inserts into database. And when an insert fails, I get an SQLException.
So, is it possible to handle all SQLExceptions (for all inserts) in one place?
I'm aware of:
Thread.setDefaultUncaughtExceptionHandler(...);
But this is probably not the way to go?
It is bad practice to call any code that executes your SQL query (or any other business logic that may take a long time to execute) directly on the JavaFX Application Thread. (I've observed that under Windows, JavaFX applications crash without even printing a stack trace when an uncaught exception is thrown on the application thread.)
I would suggest calling your SQL-related code using a javafx.concurrent.Task.
Using the setOnFailed() method you can have code invoked whenever an exception is thrown. There you can look for the type of exception and call any method that handles your SQLException.
Task<SOME_TYPE> mySqlTask = new Task<>() {
    @Override
    protected SOME_TYPE call() throws Exception {
        ... // do sql stuff
        return mySqlResult; // or null if not needed
    }
};
mySqlTask.setOnFailed(event -> {
    Throwable exception = mySqlTask.getException();
    if (exception instanceof SQLException) {
        // call code that handles the sql exception
    }
});

// start the task in a separate thread (or better use an Executor)
new Thread(mySqlTask).start();
By the way, I don't think that using Thread.setDefaultUncaughtExceptionHandler(...) is the way to go either.
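To keep the handling truly in one place, the setOnFailed() wiring can live in a small helper that every database Task goes through; a minimal sketch, where submitDbTask and showDatabaseErrorDialog are hypothetical names:

private final ExecutorService dbExecutor = Executors.newSingleThreadExecutor();

// every DB-related Task is submitted through this method, so SQLExceptions
// from any insert end up in the same handler (onFailed runs on the FX thread)
<T> void submitDbTask(Task<T> task) {
    task.setOnFailed(event -> {
        Throwable ex = task.getException();
        if (ex instanceof SQLException) {
            showDatabaseErrorDialog((SQLException) ex); // hypothetical central handler
        } else {
            ex.printStackTrace();
        }
    });
    dbExecutor.submit(task);
}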

nhibernate transactions and unit testing

I've got a piece of code that looks like this:
public void Foo(int userId)
{
    try
    {
        using (var tran = NHibernateSession.Current.BeginTransaction())
        {
            var user = _userRepository.Get(userId);
            user.Address = "some new fake user address";
            _userRepository.Save(user);
            Validate();
            tran.Commit();
        }
    }
    catch (Exception)
    {
        logger.Error("log error and don't throw");
    }
}

private void Validate()
{
    throw new Exception();
}
And I'd like to unit test whether the validations were made correctly. I use NUnit and an SQLite database for testing. Here is the test code:
protected override void When()
{
    base.When();
    ownerOfFooMethod.Foo(1);
    Session.Flush();
    Session.Clear();
}

[Test]
public void FooTest()
{
    var fakeUser = userRepository.GetUserById(1);
    fakeUser.Address.ShouldNotEqual("some new fake user address");
}
My test fails.
While debugging I can see that the exception is thrown and Commit has not been called. But my user still has "some new fake user address" in the Address property, although I was expecting it to be rolled back.
When I look in NHibernate Profiler I can see the begin-transaction statement, but it is followed by neither a commit nor a rollback.
What is more, even if I put a try-catch block there and do a Rollback explicitly in the catch, my test still fails.
I assume that there is some problem in the testing environment, but everything seems fine to me.
Any ideas?
EDIT: I've added important try-catch block (at the beginning I've simplified code too much).
If the exception occurs before NH has flushed the change to the database, and if you then keep using that session without evicting/clearing the object and a flush occurs later for some reason, the change will still be persisted, since the object is still dirty according to NHibernate. When rolling back a transaction you should immediately close the session to avoid this kind of problem.
Another way to put it: A rollback will not rollback in-memory changes you've made to persistent entities.
Also, if the session is a regular session, that call to Save() isn't needed, since the instance is already tracked by NH.

NHibernate tries to flush an operation that already failed - how to avoid it

In my web application, somewhere during the request cycle I call the following method on a repository:
repository.Delete(objectToDelete);
and this is the NHibernate implementation:
public void Delete(T entity)
{
    if (!session.Transaction.IsActive)
    {
        using (ITransaction transaction = session.BeginTransaction())
        {
            session.Delete(entity);
            transaction.Commit();
        }
    }
    else
    {
        session.Delete(entity);
    }
}
And session.Delete(entity) (inside the using statement) fails - which is fine because I have some database constraints and this is what I expected. However, at the end of the request in Global.asax.cs I close the session with the following code:
protected void Application_EndRequest(object sender, EventArgs e)
{
    ISession session = ManagedWebSessionContext.Unbind(HttpContext.Current, sessionFactory);
    if (session != null)
    {
        if (session.Transaction != null && session.Transaction.IsActive)
        {
            session.Transaction.Rollback();
        }
        else
        {
            session.Flush();
        }
        session.Close();
    }
}
This is the same session that was used to delete the object. And now, when:
session.Flush();
is called, NHibernate tries to perform the same DELETE operation - that throws an exception and the application crashes. Not exactly what I wanted, as I have already handled the exception before (at the repository level) and I show a pretty UI message box.
How can I prevent NHibernate from trying to perform the DELETE (and, I guess, UPDATE in other scenarios) action once again when session.Flush is called? Basically, I didn't design that Application_EndRequest, so I'm not sure whether it's a good approach to Flush everything.
Thanks
The flush-mode of NHibernate is by default set to 'auto', which means (amongst other things) that committing the transaction will cause NHibernate to flush and the delete fails.
At the end of the request you manually flush the session again, telling NHibernate to do the delete again (since there is no active transaction).
The reason this is not working as expected is that your expectations are wrong. An NHibernate session is a unit of work, i.e. everything your application does in one request.
Transactions are completely unrelated. The fact that flushing the session fails the first time is also the reason it fails the second time.
If you want to prevent NHibernate from performing the delete a second time, you shouldn't flush twice, only once: either by committing the transaction or by doing it manually.
On a somewhat unrelated note: you are using NHibernate and transactions wrong. It will give you massive problems later on. There are some good resources online about how to use NHibernate in a web application.
