I am using the latest version of spring-kafka with a batch @KafkaListener. In the method that receives the list of messages, I want to call acknowledge only if the whole batch has been processed successfully. But the framework does not redeliver the unacknowledged messages until I restart the application. As a workaround I call stop() and start() on the KafkaListenerEndpointRegistry when the records were not processed, but that doesn't feel like a good way to solve the problem. Is there a better way of handling this?
See the documentation for the SeekToCurrentBatchErrorHandler.
The SeekToCurrentBatchErrorHandler seeks each partition back to the first record in the batch for that partition, so the whole batch is replayed. This error handler does not support recovery because the framework cannot know which message in the batch is failing.
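A minimal sketch of wiring that error handler into a batch container factory (the bean name, generic types, and injected ConsumerFactory are assumptions for illustration, not taken from your configuration):

```java
// Sketch: batch container factory with a SeekToCurrentBatchErrorHandler.
// On an exception thrown from the listener, all partitions in the batch are
// sought back to their first unprocessed record, so the batch is redelivered.
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaBatchListenerFactory(
        ConsumerFactory<String, String> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    factory.setBatchListener(true);
    factory.setBatchErrorHandler(new SeekToCurrentBatchErrorHandler());
    return factory;
}
```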
Related
I have a batch @KafkaListener as follows:
@KafkaListener(
        topicPattern = "ProductTopic",
        containerFactory = "kafkaBatchListenerFactory")
public void onBatch(List<Message<String>> messages, Acknowledgment acknowledgment) {
    consume(messages); // goes to DB
    acknowledgment.acknowledge();
}
I also have 3 more topics created: ProductTopic.Retry-1, ProductTopic.Retry-2 and ProductTopic.Retry-DLT. The idea is to consume a batch of messages from ProductTopic and to do non-blocking exponential retries if the DB bulk insert fails. I would like to publish the messages to ProductTopic.Retry-# each time a retry fails, and finally send them to ProductTopic.Retry-DLT. Also, let's assume that because of some other limitations, I cannot let the framework create the retry and DLT topics for me.
What's the best approach for such a situation? Should I use RetryTopicConfigurer to configure such logic? How can I manually define the names of my retry and dead-letter topics? Should I create a @KafkaListener for each of the retry and DLT topics?
Or is the best approach to use RecoveringBatchErrorHandler?
Please share any examples and good practices on this. I came across lots of comments and support on such topics, but some of them are old now and relate to older versions of spring-kafka. I can see there are a few modern approaches to working with batch listeners, but I would also like to ask @Garry Russell and the team to point me in the right direction. Thanks!
The framework's non-blocking retry mechanism does not support batch listeners.
EDIT
The built-in infrastructure is strongly tied to the KafkaBackoffAwareMessageListenerAdapter; you would need to create a version of that class which implements BatchAcknowledgingConsumerAwareMessageListener.
It should then be possible to wrap your existing listener with that adapter, but you would also need a custom error handler to send the whole batch to the next retry topic.
It would not be trivial.
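A simpler hand-rolled alternative, since the built-in non-blocking retry doesn't apply here, is to do the retry-topic publication in the listener itself. This is only a sketch under assumptions from the question: kafkaTemplate is an assumed KafkaTemplate bean, consume() is your existing bulk-insert method, and the topic names are the manually created ones above; none of this is framework-provided retry API.

```java
// Sketch of manual batch retry: on failure, forward the whole batch to the
// next retry topic and acknowledge, instead of relying on framework redelivery.
@KafkaListener(topicPattern = "ProductTopic", containerFactory = "kafkaBatchListenerFactory")
public void onBatch(List<Message<String>> messages, Acknowledgment acknowledgment) {
    try {
        consume(messages); // bulk DB insert
    } catch (Exception e) {
        for (Message<String> message : messages) {
            // Failed records now live on the retry topic; a similar listener
            // on ProductTopic.Retry-1 would forward to Retry-2, then to the DLT.
            kafkaTemplate.send("ProductTopic.Retry-1", message.getPayload());
        }
    }
    acknowledgment.acknowledge(); // ack either way
}
```

The trade-off of this approach is that a batch-level failure sends every record in the batch to the retry topic, including ones that would have succeeded, since the framework cannot know which record failed.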
When my Spring application comes up and attempts to issue any command using the send method, a NoHandlerForCommandException is thrown. This exception occurs just after startup of the application; after a few moments the handler is found and everything works as expected.
How can I know whether the command bus and all other command handling components are set up before initiating any command?
I have read somewhere on Stack Overflow that in an upcoming version of Axon Framework an event would be emitted after setup, or after receiving the start signal from the command handling configuration. Has that been introduced?
I believe the issue you are talking about is this one, which is not done yet, but you can follow it up there.
To your problem: the only way to do that right now is to wait a few seconds before you start issuing commands (not the best approach).
There are ways to check, using the Axon Server API, whether the command handlers are already registered, but that is not an easy task and not pretty either, so I would stick with the wait approach for now until it gets properly fixed.
In the below scenario, what would be the behavior of Axon?
The command bus receives a command
It creates an event
However, the messaging infrastructure is down (say, Kafka)
Does Axon have re-queuing capability for events, or any other alternative to handle this scenario?
If you're using Axon, you know it differentiates between Command, Event and Query messages. I'd suggest being specific in your question about which message type you want to retry.
However, I am going to assume it's about events, as you're stating Kafka.
If this is the case, I'd highly recommend reading the reference guide on the matter, as it states how you can uncouple Kafka publication from actual event storage in Axon.
Simply put, use a TrackingEventProcessor as the means to publish events to Kafka, as this ensures a dedicated thread is used for publication instead of the same thread that stores the event. Additionally, a TrackingEventProcessor can be replayed, thus "re-processing" events.
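As a rough sketch of that separation (the processing-group name, event type, and publishToKafka method are all made up for illustration; the Axon Kafka extension ships its own publisher that you would normally use instead):

```java
// Sketch: isolate Kafka publication in its own processing group and run that
// group as a tracking processor, so publication happens on a dedicated thread,
// decoupled from event storage, and can be replayed if Kafka was down.
@ProcessingGroup("kafka-publisher")
public class KafkaPublishingEventHandler {

    @EventHandler
    public void on(ProductCreatedEvent event) {
        publishToKafka(event); // your Kafka producer call (assumed helper)
    }
}

// In the Axon configuration, run that group as a tracking processor:
// configurer.eventProcessing(ep ->
//         ep.registerTrackingEventProcessor("kafka-publisher"));
```

Because the tracking processor keeps its own token, events stored while Kafka is unavailable are simply published once the processor catches up again.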
I want to execute database operation in a handler and then send three commands to other handlers.
I want to make sure that the database operation and the sending of the commands all happen in one transaction, so that they all succeed or all fail together.
I am using .NET Core, and when I try to do this I get the exception "This platform does not support distributed Transactions".
I was using the RabbitMQ transport and then the SQL Server transport, but I still get the same problem.
I would like to know the best way to ensure that all the execution is ATOMIC under .NET Core and RabbitMQ or SQL Server transport.
Thanks
I am surprised that you get this particular exception, because Rebus does not participate in distributed transactions (at least not with any of the supported transports, and especially not with RabbitMQ).
Could you maybe update your question to include the full exception details (with stack trace and everything)? And maybe tell a little bit about how you're performing your database operations?
This is a question related to :
https://github.com/spring-projects/spring-kafka/issues/575
I'm using spring-kafka 1.3.7 and transactions in a read-process-write cycle.
For this purpose, I should use a KafkaTransactionManager (KTM) on the spring-kafka container to enable a transaction around the whole listener process, with automatic handling of the transactional id based on the partition for zombie fencing (a 1.3.7 change).
If I understand issue #575 correctly, I cannot use a RetryTemplate in a container when using a transaction manager.
How am I supposed to handle errors and retries in such a case?
Is the default behavior with transactions infinite retries? That seems really dangerous; an unexpected exception might simply block the whole process in production.
The upcoming 2.2 release adds recovery to the DefaultAfterRollbackProcessor - so you can stop retrying after some number of attempts.
Docs Here, PR here.
It also provides an optional mechanism to send the failed record to a dead-letter topic.
If you can't move to 2.2 (release candidate due at the end of this week, with GA in October), you can provide a custom AfterRollbackProcessor with similar functionality.
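For illustration, a minimal sketch against the 2.2 API described above (kafkaTemplate is an assumed, pre-configured KafkaTemplate bean and container is your listener container):

```java
// Sketch (spring-kafka 2.2 API): after 3 failed delivery attempts of the same
// record, stop retrying and publish the record to a dead-letter topic instead
// of retrying forever after each rollback.
DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(kafkaTemplate);
container.setAfterRollbackProcessor(new DefaultAfterRollbackProcessor<>(recoverer, 3));
```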
EDIT
Or, you could add code to your listener to keep track of how many times the same record has been delivered, and handle the error in the listener itself or in its listener-level error handler.
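A minimal sketch of such delivery-count tracking (the class, the key format, and the threshold are arbitrary choices for illustration, not framework API):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Tracks how many times a given record (topic-partition-offset) has been
// delivered, so the listener can stop retrying after a threshold and recover
// the record itself (e.g. send it to a dead-letter topic).
public class DeliveryAttemptTracker {

    private final Map<String, Integer> attempts = new ConcurrentHashMap<>();
    private final int maxAttempts;

    public DeliveryAttemptTracker(int maxAttempts) {
        this.maxAttempts = maxAttempts;
    }

    /** Records one delivery; returns true if the record should still be retried. */
    public boolean shouldRetry(String topic, int partition, long offset) {
        String key = topic + "-" + partition + "-" + offset;
        int count = attempts.merge(key, 1, Integer::sum);
        if (count >= maxAttempts) {
            attempts.remove(key); // give up; caller handles recovery
            return false;
        }
        return true;
    }
}
```

The listener would consult shouldRetry before rethrowing: rethrow (triggering a rollback and redelivery) while it returns true, and handle the record itself once it returns false.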