Looking for non-blocking spring kafka ErrorHandler - spring-kafka

After using the SeekToCurrentErrorHandler, I am looking for a non-blocking Kafka ErrorHandler. Because of some unstable subsystems we need to set high retry intervals of 5 minutes or more, which would block our processing.
My idea is to use the topic itself to re-queue failing messages, but with two additional header values: kafka_try-counter and kafka_try-timestamp.
Based on the SeekToCurrentErrorHandler and the DeadLetterPublishingRecoverer, I implemented a draft of a RePublishingErrorHandler and a RePublishingRecoverer.
The RePublishingRecoverer updates the Kafka headers and publishes the message to the same topic.
The RePublishingErrorHandler checks the header values and, if kafka_try-counter exceeds max-attempts, calls another ConsumerRecordRecoverer such as a DLT or logging recoverer.
The kafka_try-timestamp is used to determine the wait time of a message: if it comes back too fast, it should be re-queued without incrementing the try counter.
The expectation of this approach is to get a non-blocking listener.
Since I am new to spring-kafka and to Kafka itself, I'm not sure whether this approach is OK.
I am also somewhat stuck in the implementation of that concept.

My idea is to use the topic itself to re-queue failing messages.
That won't work; you would have to publish it to another topic and have a (delaying) consumer on that topic, perhaps polling at some interval rather than using a message-driven consumer. Then have that consumer publish it back to the original topic.
All of this assumes that strict ordering within a partition is not a requirement for you.
It's easy enough to subclass the DeadLetterPublishingRecoverer and override the createProducerRecord() method; call super() and then add your headers.
Set the BackOff in the SeekToCurrentErrorHandler to a zero back off and 0 retries so it publishes to the DLT immediately.
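A minimal sketch of the subclass described above. The createProducerRecord() signature shown matches spring-kafka 2.7.x and may differ in other versions; the header names come from the question, and the int/long header encoding is an assumption.

```java
import java.nio.ByteBuffer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.header.Header;
import org.apache.kafka.common.header.Headers;
import org.springframework.kafka.core.KafkaOperations;
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;

public class RePublishingRecoverer extends DeadLetterPublishingRecoverer {

    public RePublishingRecoverer(KafkaOperations<Object, Object> template) {
        // Resolve the destination back to the same topic/partition instead of a DLT.
        super(template, (record, exception) ->
                new TopicPartition(record.topic(), record.partition()));
    }

    @Override
    protected ProducerRecord<Object, Object> createProducerRecord(
            ConsumerRecord<?, ?> record, TopicPartition topicPartition,
            Headers headers, byte[] key, byte[] value) {

        ProducerRecord<Object, Object> out =
                super.createProducerRecord(record, topicPartition, headers, key, value);

        // Increment kafka_try-counter (starting at 1) and refresh kafka_try-timestamp.
        int tryCounter = 1;
        Header counter = out.headers().lastHeader("kafka_try-counter");
        if (counter != null) {
            tryCounter = ByteBuffer.wrap(counter.value()).getInt() + 1;
            out.headers().remove("kafka_try-counter");
            out.headers().remove("kafka_try-timestamp");
        }
        out.headers().add("kafka_try-counter",
                ByteBuffer.allocate(Integer.BYTES).putInt(tryCounter).array());
        out.headers().add("kafka_try-timestamp",
                ByteBuffer.allocate(Long.BYTES).putLong(System.currentTimeMillis()).array());
        return out;
    }
}
```

Wired into the error handler with an immediate recovery, e.g. `new SeekToCurrentErrorHandler(recoverer, new FixedBackOff(0L, 0L))`, so the failed record is republished without blocking the consumer thread.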

Related

Spring Kafka Non-Blocking retries

I have a batch @KafkaListener as follows:
@KafkaListener(
        topicPattern = "ProductTopic",
        containerFactory = "kafkaBatchListenerFactory")
public void onBatch(List<Message<String>> messages, Acknowledgment acknowledgment) {
    consume(messages); // goes to DB
    acknowledgment.acknowledge();
}
I also have 3 more topics created: ProductTopic.Retry-1, ProductTopic.Retry-2 and ProductTopic.Retry-DLT. The idea is to consume a batch of messages from ProductTopic and to do non-blocking exponential retries if the DB bulk insert fails. I would like to publish the messages to ProductTopic.Retry-# each time a retry fails, and finally send them to ProductTopic.Retry-DLT. Also, let's assume that because of some other limitations, I cannot let the framework create the retry and DLT topics for me.
What's the best approach for such a situation? Should I use RetryTopicConfigurer to configure such logic? How can I manually define the names of my retry and dead-letter topics? Should I create a @KafkaListener for each of the retry and DL topics?
Or is the best approach to use RecoveringBatchErrorHandler?
Please share any examples and good practices on this. I came across lots of comments and support on such topics, but some of the comments are old now and relate to older versions of spring-kafka. I can see there are a few modern approaches to working with batch listeners, but I would also like to ask @Garry Russell and the team to point me in the right direction. Thanks!
The framework non-blocking retry mechanism does not support batch listeners.
EDIT
The built-in infrastructure is strongly tied to the KafkaBackoffAwareMessageListenerAdapter; you would need to create a version of it that implements BatchAcknowledgingConsumerAwareMessageListener.
It should then be possible to wrap your existing listener with that but you would also need a custom error handler to send the whole batch to the next retry topic.
It would not be trivial.

Out-of-the-box capabilities for Spring-Kafka consumer to avoid duplicate message processing

I stumbled upon Handling duplicate messages using the Idempotent consumer pattern.
Similar, but slightly different, is the Transactional Inbox pattern, which acknowledges the Kafka message receipt after a transactional INSERT into a messages table (no business transaction) has concluded successfully, and which uses background polling to detect new messages in this table and subsequently trigger the real business logic (i.e. the message listener).
Now I wonder if there is some Spring magic to just provide a special DataSource config to track all received messages and discard duplicate message deliveries?
Otherwise, the application itself would need to take care of acking the Kafka message receipt, message state changes and data cleanup of the event table, retrying after failure, and probably a lot of other difficult things that I have not yet thought about.
The framework does not provide this out of the box (there is no general solution that will work for all), but you can implement it via a filter, to avoid putting this logic in your listener.
https://docs.spring.io/spring-kafka/docs/2.7.9/reference/html/#filtering-messages
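A sketch of that filter approach: a RecordFilterStrategy that discards records whose id header has already been seen. The header name and the in-memory Set are assumptions for illustration; a real idempotent consumer would consult a database table and bound or expire the cache.

```java
import java.nio.charset.StandardCharsets;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.header.Header;
import org.springframework.kafka.listener.adapter.RecordFilterStrategy;

public class DedupRecordFilterStrategy implements RecordFilterStrategy<String, String> {

    // Hypothetical in-memory store of processed ids; replace with a DB lookup.
    private final Set<String> seenIds = ConcurrentHashMap.newKeySet();

    @Override
    public boolean filter(ConsumerRecord<String, String> record) {
        Header idHeader = record.headers().lastHeader("messageId"); // assumed header name
        if (idHeader == null) {
            return false; // no id -> let the record through
        }
        String id = new String(idHeader.value(), StandardCharsets.UTF_8);
        // Set.add() returns false if the id was already present;
        // returning true tells the adapter to discard the record.
        return !seenIds.add(id);
    }
}
```

The strategy is plugged into the container factory with `factory.setRecordFilterStrategy(new DedupRecordFilterStrategy())`, keeping the dedup logic out of the listener itself.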

spring kafka message processing telemetry

I'm trying to create some kind of "kafka message processing graph" - which service is consuming which topics and what messages - with some additional metadata (processing duration, whether it was processed OK or ended with an exception, ...).
I could create an interceptor that would be invoked before each message processing, but in the interceptor I don't know whether there is a handler for this type of event, nor do I know whether the message was later processed OK or ended up in the error handler.
For checking whether there is a handler, I suppose there is some registry I could peek into (?), but is there also some way of wrapping message processing (like filters in spring-mvc) so I can measure the processing duration and the end result?
Micrometer timers have been supported since 2.3 (for successes and failures).
https://docs.spring.io/spring-kafka/docs/current/reference/html/#micrometer
You can also add an AOP around advice to your listener beans.

How to create command by consuming message from kafka topic rather than through Rest API

I'm using Axon version 3.3, which seamlessly supports Kafka, with this annotation on the Spring Boot main class:
@SpringBootApplication(exclude = KafkaAutoConfiguration.class)
In our use case, the command-side microservice needs to pick messages up from a Kafka topic rather than exposing a REST API. It will store the event in the event store and then move it to another Kafka topic for the query-side microservice to consume.
Since KafkaAutoConfiguration is disabled, I cannot use the spring-kafka configuration to write a consumer. How can I consume a normal message in Axon?
I tried writing a normal Spring Kafka consumer, but since KafkaAutoConfiguration is disabled, the initial trigger for the command is not picked up from the Kafka topic.
I think I can help you out with this.
The Axon Kafka Extension is solely meant for Events.
Thus, it is not intended to dispatch Commands or Queries from one node to another.
This is very intentional, as Event messages have different routing needs as opposed to Command and Query messages.
Axon views Kafka as a fine fit for an Event Bus, and as such this is supported through the framework.
It is however not ideal for Command messages (which should always be routed to a single handler) or Query messages (which can be routed to a single handler, to several handlers, or follow a subscription model).
Thus, if you'd want to "abuse" Kafka for other types of messages in conjunction with Axon, you will have to write your own component/service for it.
I would however stick to the messaging paradigm and separate these concerns.
To greatly simplify routing messages between Axon applications, I'd highly recommend trying out Axon Server.
Additionally, here you can hear/see Allard Buijze point out the different routing needs per message type (thus the reason why Axon's Kafka Extension only deals with Event messages).
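A sketch of such a hand-rolled component: a plain Spring Kafka listener that turns each record into an Axon command via the CommandGateway. MyCommand and the topic name are hypothetical, and with KafkaAutoConfiguration excluded you have to declare the consumer factory and listener container factory beans yourself.

```java
import org.axonframework.commandhandling.gateway.CommandGateway;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class CommandDispatchingConsumer {

    private final CommandGateway commandGateway;

    public CommandDispatchingConsumer(CommandGateway commandGateway) {
        this.commandGateway = commandGateway;
    }

    // "command-topic" and the factory bean name are assumptions; the factory
    // must be defined manually since auto-configuration is excluded.
    @KafkaListener(topics = "command-topic",
                   containerFactory = "kafkaListenerContainerFactory")
    public void onMessage(String payload) {
        // Translate the raw message into a command and dispatch it;
        // Axon routes it to the single matching command handler.
        commandGateway.send(new MyCommand(payload));
    }
}
```

This keeps Kafka as a transport on the edge while command routing stays inside Axon, which matches the separation of concerns the answer recommends.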

Get topic metadata from KafkaTemplate

I have seen from the KafkaTemplate implementation that there is no access to the actual Kafka Producer. While this Producer wrapping might be good, there are some methods from the Kafka Producer that are needed like metrics() and partitionsFor(java.lang.String topic).
In KafkaTemplate we could have these same methods wrapping the actual Kafka Producer methods.
Is this something likely to be implemented in newer versions?
Could I implement it and make a pull request?
In accordance with Kafka guidelines, the DefaultKafkaProducerFactory always returns the same producer, so it's safe to call createProducer to get a reference to the single producer.
Calling close() on the producer is ignored.
However, I have opened a GitHub Issue to provide access to the producer from the template.
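Until the template exposes these methods directly, two ways to reach the underlying producer (the topic name is a placeholder):

```java
import java.util.List;
import java.util.Map;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.common.Metric;
import org.apache.kafka.common.MetricName;
import org.apache.kafka.common.PartitionInfo;

// 1. Via the factory -- safe because DefaultKafkaProducerFactory hands out the
//    same shared producer and ignores close():
Producer<String, String> producer = producerFactory.createProducer();
List<PartitionInfo> partitions = producer.partitionsFor("some-topic");
Map<MetricName, ? extends Metric> metrics = producer.metrics();
producer.close(); // no-op for the shared producer

// 2. Via KafkaTemplate.execute(), which passes the real producer to a callback:
List<PartitionInfo> viaTemplate =
        kafkaTemplate.execute(p -> p.partitionsFor("some-topic"));
```

`producerFactory` and `kafkaTemplate` are assumed to be injected beans; the execute() route keeps the producer lifecycle entirely under the template's control.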

Resources