Can I use CompletableFuture.runAsync inside a spring kafka batch listener? - spring-kafka

Consider my listener:
@KafkaListener(...)
public void receive(
        List<ConsumerRecord<String, String>> records,
        Acknowledgment ack) {
    records.forEach(r -> CompletableFuture.runAsync(() -> consumerService.process(r)));
    ack.acknowledge();
}
What are the pitfalls? Is this good code?
My process method reposts to Kafka on failure, so can I commit the offsets regardless of whether I get an error?

You run the risk of losing messages, because you are committing the offsets before the async tasks complete. If there is a failure (server crash, power failure, etc.) before the tasks finish, the offsets are already committed and those records will never be redelivered.
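One way to avoid this is to wait for all the futures before acknowledging. A minimal sketch of that pattern, with plain Strings standing in for ConsumerRecords and a Runnable standing in for the Acknowledgment:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CopyOnWriteArrayList;

public class BatchAckSketch {
    // Stand-in for the real ConsumerService.process(record).
    static final List<String> processed = new CopyOnWriteArrayList<>();

    static void process(String record) {
        processed.add(record);
    }

    // Start all async tasks, then block until every one completes
    // before committing the offsets.
    static void receive(List<String> records, Runnable ack) {
        CompletableFuture<?>[] futures = records.stream()
                .map(r -> CompletableFuture.runAsync(() -> process(r)))
                .toArray(CompletableFuture[]::new);
        CompletableFuture.allOf(futures).join(); // waits for all, propagates failures
        ack.run(); // offsets are committed only after every record is processed
    }
}
```

Note that `allOf(...).join()` throws a CompletionException if any task failed, so the acknowledge is skipped on error and the batch is redelivered.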

Related

How to retry in Kafka with separate retry topics and exponential backoff

How to implement a retry architecture in Spring Kafka (Kafka version 2.3.6) with backoffs.
Requirement:
An event is first published to main-topic.
If processing fails, it is pushed to retry-topic-2s after 2 s. This retry topic has separate processing code from the main topic (and consequently a different listener method).
If processing fails in retry-topic-2s, it is pushed to retry-topic-6s after 6 s. This retry topic again has its own processing code, different from the previous retry topic.
Finally, if processing still fails, the event is pushed to a DLT (dead-letter topic).
Problem:
How can events be pushed to a custom topic with a delay (without using Thread.sleep)?
I have tried a ContainerCustomizer to direct messages to retry-topic-2s immediately, without any delay, and to apply retry backoffs within that topic, but this does not satisfy the requirement.
private ContainerCustomizer<String, String, ConcurrentMessageListenerContainer<String, String>> customizer(KafkaTemplate<String, String> template) {
    return container -> {
        ExponentialBackOff exponentialBackOff = new ExponentialBackOff(retryFixedBackoff, retryMultiplier);
        exponentialBackOff.setMaxElapsedTime(retryMaxElapsedTime);
        if (container.getContainerProperties().getTopics()[0].equals(callbackRetryTopic)) {
            container.setErrorHandler(new SeekToCurrentErrorHandler(
                    new DeadLetterPublishingRecoverer(template,
                            (cr, ex) -> new TopicPartition(callbackDeadLetterTopic, cr.partition())),
                    exponentialBackOff));
        }
    };
}
Can someone help me with this usecase?
This is non-trivial, but it is now supported natively by all supported versions of Spring for Apache Kafka. There is a lot of code involved in supporting this feature, so use the built-in support rather than rolling your own.
https://docs.spring.io/spring-kafka/docs/current/reference/html/#retry-topic
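As a rough configuration sketch, assuming a spring-kafka version with non-blocking retry support (2.7 or later), the requirement above maps onto @RetryableTopic; the topic name, delays, and attempt count here are illustrative:

```java
// With SUFFIX_WITH_DELAY_VALUE, retries go to auto-created topics suffixed
// with the delay (e.g. main-topic-retry-2000, main-topic-retry-6000),
// then to the DLT after the attempts are exhausted.
@RetryableTopic(
        attempts = "3",
        backoff = @Backoff(delay = 2000, multiplier = 3.0),
        topicSuffixingStrategy = TopicSuffixingStrategy.SUFFIX_WITH_DELAY_VALUE)
@KafkaListener(topics = "main-topic")
public void listen(String event) {
    // main processing; throwing sends the event to the first retry topic
}

@DltHandler
public void handleDlt(String event) {
    // event exhausted all retries
}
```

Note one difference from the stated requirement: with this annotation all retry topics share the one listener method, so fully separate processing code per retry topic would still need custom topic configuration.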

#KafkaListener skipping messages when using Acknowledgment.nack()

Spring Kafka version - 2.8.5
```
@KafkaListener(...)
public void consumeMessages(@Payload List<String> messages, Acknowledgment ack) {
    // process records in a separate thread and acknowledge;
    // if no thread is available, nack() after 2 seconds
}
```
Nacked records should be reprocessed after 2 seconds by the KafkaListener. However, the skipped records were not reprocessed; the missing messages are only consumed again after restarting the Spring Boot app.
You can't call nack() from another thread; it can only be called from the listener thread:
@Override
public void nack(long sleepMillis) {
    Assert.state(Thread.currentThread().equals(ListenerConsumer.this.consumerThread),
            "nack() can only be called on the consumer thread");
Even when called from the listener thread, nack() does not support "skipping" records; when using manual commits, it is the application's responsibility to commit the offset in order to skip a record (by calling acknowledge()).

How AppDynamics 4.4 to track async transaction

Consider below code:
public class Job {
    private final ExecutorService executorService;

    public void process() {
        executorService.submit(() -> {
            // do something slow
        });
    }
}
I can use an AppDynamics "Java POJO" rule to create a business transaction that tracks all calls to the Job.process() method, but the measured response time doesn't reflect the real cost of the async work started via java.util.concurrent.ExecutorService. This exact problem is also described in the AppDynamics document End-to-End Latency Performance:
The return of control stops the clock on the transaction in terms of measuring response time, but meanwhile the logical processing for the transaction continues.
The same AppDynamics document tries to offer a solution, but the instructions it provides are not very clear to me.
Could anyone give more executable guide on how to configure AppD to track async calls like the one shown above?
It seems that you should be able to define a custom Asynchronous Transaction Demarcator as described in https://docs.appdynamics.com/display/PRO44/Asynchronous+Transaction+Demarcators, pointing it at the last method of the Runnable that you pass to the Executor. Then, according to the documentation, all you need to do is attach the demarcator to your business transaction and it will include the asynchronous call in the measured latency.

How is the batch asynchronous approach with the intuit SDK any different to the batch synchronous approach?

I have looked at the documentation for both synchronous and asynchronous approaches for the QuickBooks Online API V3. They both allow the creation of a data object and the adding of requests to a batch operation followed by the execution of the batch. In both the documentations they state:
"Batch items are executed sequentially in the order specified in the
request..."
This confuses me because I don't understand how asynchronous processing is allowed if the batch process executes each batch operation sequentially.
The documentation for asynchronous processing states at the top:
"To asynchronously access multiple data objects in a single request..."
I don't understand how this can occur if batch operations are executed sequentially within a batch process request.
Would someone kindly clarify.
In an async call (from the devkit), the calling thread doesn't wait for the response from the service; instead, you associate a callback handler that takes care of it. For example:
// Assumes fields such as:
// private final CountDownLatch lock_add = new CountDownLatch(1);
// private CallbackMessage callbackMessageResult;
public void asyncAddAccount() throws FMSException, Exception {
    Account accountIn = accountHelper.getBankAccountFields();
    try {
        service.addAsync(accountIn, new CallbackHandler() {
            @Override
            public void execute(CallbackMessage callbackMessage) {
                callbackMessageResult = callbackMessage;
                lock_add.countDown();
            }
        });
    } catch (FMSException e) {
        Assert.assertTrue(false, e.getMessage());
    }
    lock_add.await();
    Account accountOut = (Account) callbackMessageResult.getEntity();
    Assert.assertNotNull(accountOut);
    accountHelper.verifyAccountFields(accountIn, accountOut);
}
The server always executes the requests sequentially: if you specify multiple operations in a batch, the server executes them sequentially (top-down). The asynchrony is on the client side only.
Thanks

ActiveMQ Override scheduled message

I am trying to implement a delayed queue, with overriding of messages, using ActiveMQ.
Each message is scheduled to be delivered with a delay of x (say 60 seconds).
If the same message is received again in the meantime, it should override the previous message, so even if I receive 10 messages within x seconds, only one message should be processed.
Is there a clean way to accomplish this?
The question has two parts that need to be addressed separately:
Can a message be delayed in ActiveMQ?
Yes - see Delay and Schedule Message Delivery. You need to set <broker ... schedulerSupport="true"> in your ActiveMQ config, as well as setting the AMQ_SCHEDULED_DELAY property of the JMS message to say how long you want the message to be delayed (60000 in your case).
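A minimal producer-side sketch, assuming an existing JMS `session` and `destination` created from an ActiveMQConnectionFactory (the property-name constant lives in org.apache.activemq.ScheduledMessage):

```java
import javax.jms.MessageProducer;
import javax.jms.TextMessage;
import org.apache.activemq.ScheduledMessage;

MessageProducer producer = session.createProducer(destination);
TextMessage message = session.createTextMessage("payload");
// The broker delivers the message 60 seconds after receiving it.
message.setLongProperty(ScheduledMessage.AMQ_SCHEDULED_DELAY, 60_000L);
producer.send(message);
```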
Is there any way to prevent the same message being consumed more than once?
Yes, but that's an application concern rather than an ActiveMQ one. It's often referred to as de-duplication or idempotent consumption. The simplest way, if you have only one consumer, is to keep track of messages received in a map and check that map whenever a message arrives; if it has already been seen, discard it.
For more complex use cases, where you have multiple consumers on different machines or you want the state to survive an application restart, you will need to keep a table of seen messages in a database and query it each time.
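For the single-consumer case, a minimal in-memory sketch; message identity here is just a string key, where a real system would use a business key or message ID:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class DedupConsumer {
    // Set.add returns false if the key was already present, so the
    // check-and-record step is a single atomic operation.
    private final Set<String> seen = ConcurrentHashMap.newKeySet();

    /** Returns true if the message was processed, false if it was a duplicate. */
    public boolean onMessage(String messageId) {
        if (!seen.add(messageId)) {
            return false; // duplicate: discard
        }
        // ... process the message here ...
        return true;
    }
}
```

Note that an unbounded set grows forever; in practice you would evict old entries (e.g. keep only keys newer than the scheduling delay).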
Also, according to this method from the ActiveMQ BrokerService class, you need to enable persistence (or configure a job scheduler store) for the scheduler functionality to be usable:
public boolean isSchedulerSupport() {
    return this.schedulerSupport && (isPersistent() || jobSchedulerStore != null);
}
You can configure the ActiveMQ broker to enable schedulerSupport with the following entry in your activemq.xml file, located in the conf directory of your ActiveMQ home directory:
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost" dataDirectory="${activemq.data}" schedulerSupport="true">
You can also override the BrokerService in your configuration:
@Configuration
@EnableJms
public class JMSConfiguration {

    @Bean
    public BrokerService brokerService() throws Exception {
        BrokerService brokerService = new BrokerService();
        brokerService.setSchedulerSupport(true);
        return brokerService;
    }
}
