Spring Kafka: Skip error message using CommonErrorHandler - spring-kafka

I am using spring-kafka 2.8.9 and kafka-clients 2.8.1. I want to skip a message that fails to deserialize. Since setErrorHandler is deprecated, I tried using CommonErrorHandler, but I am not sure how to skip the current error message and move on to the next record. The only option I can see is pattern matching: extracting the relevant details, such as offset and partition, from the line below.
org.apache.kafka.common.errors.SerializationException: Error deserializing key/value for partition test-0 at offset 1. If needed, please seek past the record
Is there any other way, like RecordDeserializationException, to get the necessary information from the exception, or any other means without pattern matching? I cannot upgrade to kafka 3.X.X.
My config
@Bean
public ConsumerFactory<String, Farewell> farewellConsumerFactory() {
    groupId = LocalTime.now().toString();
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
    props.put(JsonDeserializer.TRUSTED_PACKAGES, "*");
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    return new DefaultKafkaConsumerFactory<>(props, new StringDeserializer(), new JsonDeserializer<>(Farewell.class));
}

@Bean
public ConcurrentKafkaListenerContainerFactory<String, Farewell> farewellKafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, Farewell> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setCommonErrorHandler(new CommonErrorHandler() {
        @Override
        public void handleOtherException(Exception thrownException, Consumer<?, ?> consumer, MessageListenerContainer container, boolean batchListener) {
            CommonErrorHandler.super.handleOtherException(thrownException, consumer, container, batchListener);
        }
    });
    factory.setConsumerFactory(farewellConsumerFactory());
    return factory;
}
My listener class
@KafkaListener(topics = "${topicId}",
        containerFactory = "farewellKafkaListenerContainerFactory")
public void farewellListener(Farewell message) {
    System.out.println("Received Message in group " + groupId + "| " + message);
}
Domain class
public class Farewell {

    private String message;
    private Integer remainingMinutes;

    public Farewell(String message, Integer remainingMinutes) {
        this.message = message;
        this.remainingMinutes = remainingMinutes;
    }

    // standard getters, setters and constructor
}
I have checked these links
How to skip a msg that have error in kafka when i use ConcurrentMessageListenerContainer?
Better way of error handling in Kafka Consumer

Use an ErrorHandlingDeserializer as a wrapper around your real deserializer.
Serialization exceptions will be sent directly to the DefaultErrorHandler, which treats such exceptions as fatal (by default) and sends them directly to the recoverer.
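A minimal sketch of that approach, adapted to the configuration in the question (spring-kafka 2.8 APIs; the recoverer lambda that logs the skipped record is an illustrative stand-in):

@Bean
public ConsumerFactory<String, Farewell> farewellConsumerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
    // Wrap the real deserializer; a failure now surfaces as a DeserializationException
    // for the error handler instead of poisoning the consumer poll loop.
    return new DefaultKafkaConsumerFactory<>(props,
            new StringDeserializer(),
            new ErrorHandlingDeserializer<>(new JsonDeserializer<>(Farewell.class)));
}

@Bean
public ConcurrentKafkaListenerContainerFactory<String, Farewell> farewellKafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, Farewell> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(farewellConsumerFactory());
    // DefaultErrorHandler treats DeserializationException as fatal (no retries)
    // and calls the recoverer; the container then moves past the bad record.
    factory.setCommonErrorHandler(new DefaultErrorHandler((record, ex) ->
            System.out.println("Skipping " + record.topic() + "-" + record.partition()
                    + "@" + record.offset() + ": " + ex.getMessage())));
    return factory;
}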

Related

Aggregate identifier must be non-null after applying an event. Make sure the aggregate identifier is initialized at the latest

I am getting the below error. Axon version 3.3.3
org.axonframework.eventsourcing.IncompatibleAggregateException:
Aggregate identifier must be non-null after applying an event. Make
sure the aggregate identifier is initialized at the latest when
handling the creation event.
I have created a UserAggregate. It contains 2 events:
UserCreated
UpdateUserEvent
I am able to generate the first (UserCreated) event, and it was saved in the event store with sequence 0, but while generating the second event I got the above-mentioned error.
Any suggestions on this?
UserAggregate.java
@Aggregate
public class UserAggregate {

    @AggregateIdentifier
    private String id;
    private String email;
    private String password;

    public UserAggregate(String id, String email, String password) {
        super();
        this.id = id;
        this.email = email;
        this.password = password;
    }

    @CommandHandler
    public UserAggregate(CreateUser cmd) {
        AggregateLifecycle.apply(new UserCreated(cmd.getId(), cmd.getEmail(), cmd.getPassword()));
    }

    @CommandHandler
    public void handle(UpdateUserCmd cmd) {
        AggregateLifecycle.apply(new UpdateUserEvent(cmd.getId(), cmd.getEmail(), ""));
    }

    @EventSourcingHandler
    public void userCreated(UserCreated event) {
        System.out.println("new User: email " + event.getEmail() + " Password: " + event.getPassword());
        setId(event.getId());
    }

    @EventSourcingHandler
    public void updateUserEvent(UpdateUserEvent event) {
        System.out.println("new User: email " + event.getEmail());
        setId(event.getId());
    }

    public String getId() {
        return id;
    }

    public void setId(String id) {
        this.id = id;
    }

    public String getEmail() {
        return email;
    }

    public void setEmail(String email) {
        this.email = email;
    }

    public String getPassword() {
        return password;
    }

    public void setPassword(String password) {
        this.password = password;
    }

    public UserAggregate() {
    }
}
I am still getting to know Axon, but here's how I managed to resolve the issue. Basically, what the error is saying is that when the UserAggregate is being instantiated, the aggregate identifier (aka primary key) must not be null and must have a value.
The sequence of the life-cycle is:
It calls a no-args constructor
It calls the constructor that handles your initial command; at this point, your aggregate identifier is still null and will be assigned a value in the next step
It then calls an EventSourcingHandler that handles the event you applied in the previous step
Based on the steps above, here's what you need to do:
Create a no-args constructor:
protected UserAggregate() {
}
Create a constructor which handles your first command:
@CommandHandler
public UserAggregate(CreateUser cmd) {
    AggregateLifecycle.apply(
            new UserCreated(cmd.getId(), cmd.getEmail(), cmd.getPassword()));
}
Finally, add an event sourcing handler to handle the UserCreated event:
@EventSourcingHandler
public void on(UserCreated userCreated) {
    // this is where we instantiate the aggregate identifier
    this.id = userCreated.getId();
    // assign values to other fields
    this.email = userCreated.getEmail();
    this.password = userCreated.getPassword();
}
And here's the complete example:
@Aggregate
public class UserAggregate {

    @AggregateIdentifier
    private String id;
    private String password;
    private String email;

    protected UserAggregate() {
    }

    @CommandHandler
    public UserAggregate(CreateUser cmd) {
        AggregateLifecycle.apply(
                new UserCreated(cmd.getId(), cmd.getEmail(), cmd.getPassword()));
    }

    @EventSourcingHandler
    public void on(UserCreated userCreated) {
        // this is where we instantiate the aggregate identifier
        this.id = userCreated.getId();
        // assign values to other fields
        this.email = userCreated.getEmail();
        this.password = userCreated.getPassword();
    }
}
When you are following the Event Sourcing paradigm for your Aggregates, I'd typically suggest two types of constructors to be present in the code:
A default no-arg constructor with no settings in it.
One (or more) constructor(s) which handle the 'Aggregate creation command'.
In your example I see a third constructor, which sets id, email and password.
My guess is that this constructor might currently obstruct the EventSourcedAggregate implementation's validation.
The exception you are receiving can only occur if the @AggregateIdentifier annotated field is not set after the constructor command handler (in your case UserAggregate(CreateUser)) has ended its Unit of Work.
Thus, seeing your code, my only hunch is this "wild, unused" constructor which might obstruct things.
Lastly, I recommend that you use a more recent version of Axon.
3.3.3 is already quite far from the current release, which is 4.2.
Additionally, no active development is taking place on the Axon 3.x versions.
It is thus wise to upgrade, which I assume shouldn't be a big deal as you are still defining your Command Model.
Update
I've just closed the Framework issue you've opened up. Axon provides entirely different means to tie in to the Message dispatching and handling, giving you cleaner intercept points than (Spring) AOP.
If you follow the suggested guidelines and use a MessageDispatchInterceptor/MessageHandlerInterceptor, or the more fine-grained option of a HandlerEnhancer, you can achieve the cross-cutting concerns you are looking for.
As far as logging goes, the framework even provides a LoggingInterceptor to do exactly what you need. No AOP needed.
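For completeness, a hedged sketch of registering that interceptor (assuming Axon 4's LoggingInterceptor from org.axonframework.messaging.interceptors and the standard CommandBus registration hooks; the configuration class itself is illustrative):

@Configuration
public class AxonLoggingConfig {

    // Logs each command as it is dispatched and as it is handled,
    // replacing the AOP advice below with framework-native hooks.
    @Autowired
    public void registerInterceptors(CommandBus commandBus) {
        commandBus.registerDispatchInterceptor(new LoggingInterceptor<>());
        commandBus.registerHandlerInterceptor(new LoggingInterceptor<>());
    }
}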
Hope this helps you out Narasimha.
Thank you @Steven for the response.
I am able to reproduce this issue with the Axon 4.2 (latest) version as well.
After removing the below AOP code from my project, the issue resolved itself.
It looks like Axon is not compatible with the AOP feature.
AOP Code:
@Around("execution(* com.ms.axonspringboot..*(..))")
public Object methodInvoke(ProceedingJoinPoint jointPoint) throws Throwable {
    LOGGER.debug(jointPoint.getSignature() + "::: Enters");
    Object obj = jointPoint.proceed();
    LOGGER.debug(jointPoint.getSignature() + "::: Exits");
    return obj;
}
Axon 4.2 version error logs
2019-10-07 12:52:41.689 WARN 31736 --- [ault-executor-0] o.a.c.gateway.DefaultCommandGateway : Command 'com.ms.axonspringboot.commands.UpdateUserCmd' resulted in org.axonframework.commandhandling.CommandExecutionException(Aggregate identifier must be non-null after applying an event. Make sure the aggregate identifier is initialized at the latest when handling the creation event.)
2019-10-07 12:52:41.710 ERROR 31736 --- [nio-7070-exec-3] o.a.c.c.C.[.[.[.[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] threw exception
org.axonframework.axonserver.connector.command.AxonServerRemoteCommandHandlingException: An exception was thrown by the remote message handling component: Aggregate identifier must be non-null after applying an event. Make sure the aggregate identifier is initialized at the latest when handling the creation event.
at org.axonframework.axonserver.connector.ErrorCode.lambda$static$8(ErrorCode.java:84) ~[axon-server-connector-4.2.jar:4.2]
at org.axonframework.axonserver.connector.ErrorCode.convert(ErrorCode.java:180) ~[axon-server-connector-4.2.jar:4.2]

DLT not being created when using SeekToCurrentErrorHandler and DeadLetterPublishingRecoverer for de-serialization failures

This is my first Spring Boot/Kafka project and my first Stack Overflow post.
I'm using Spring Boot 2.1.1 and spring-kafka 2.2.7.RELEASE. I am trying to configure Spring's SeekToCurrentErrorHandler with a DeadLetterPublishingRecoverer to send de-serialization failure messages to a different topic. The new DLT queue is not being created.
While I am able to see the error message due to de-serialization failure as an ERROR in the application logs/IDE Console (and process subsequent messages when feeding the topic manually), the "originalTopic.DLT" topic is not created and hence the incorrect message is not written to the .DLT topic. I read in Spring documentation that “By default, the dead-letter record is sent to a topic named originalTopic.DLT (the original topic name suffixed with .DLT) and to the same partition as the original record”
Instead, I see the failed message in the log file (.log) along with the valid messages of the topic listed in the @KafkaListener annotation.
I am trying to write the error message as-is to the .DLT topic for further error processing.
Here is the configuration I have so far. Any direction regarding where I'm going wrong would be really helpful.
I referred to the following links: https://docs.spring.io/spring-kafka/reference/html/#serdes, Configuring Spring Kafka to use DeadLetterPublishingRecoverer, and SeekToCurrentErrorHandler: DeadLetterPublishingRecoverer is not handling deserialize errors, to figure out a solution. But the issue I am facing is that the .DLT is not being created.
@EnableKafka
@Configuration
@ConditionalOnMissingBean(type = "org.springframework.kafka.core.KafkaTemplate")
public class SubscriberConfig {

    @Value("${spring.kafka.bootstrap-servers}")
    private String bootstrapServers;

    @Autowired
    private KafkaTemplate<Object, Object> kafkaTemplate;

    @Bean
    public Map<String, Object> consumerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ErrorHandlingDeserializer2.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ErrorHandlingDeserializer2.class);
        props.put(ErrorHandlingDeserializer2.KEY_DESERIALIZER_CLASS, StringDeserializer.class);
        props.put(ErrorHandlingDeserializer2.VALUE_DESERIALIZER_CLASS, JsonDeserializer.class.getName());
        props.put(JsonDeserializer.KEY_DEFAULT_TYPE, "java.lang.String");
        props.put(JsonDeserializer.VALUE_DEFAULT_TYPE, "com.sample.main.entity.Transaction");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "json");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        return props;
    }

    @Bean
    public ConsumerFactory<String, Transaction> consumerFactory() {
        return new DefaultKafkaConsumerFactory<>(consumerConfigs(), new StringDeserializer(),
                new JsonDeserializer<>(Transaction.class, false));
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, Transaction> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, Transaction> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.setErrorHandler(new SeekToCurrentErrorHandler(new DeadLetterPublishingRecoverer(kafkaTemplate), 3));
        return factory;
    }

    @KafkaListener(topics = "${spring.kafka.subscription.topic}", groupId = "json")
    public void consume(@Payload Transaction message, @Headers MessageHeaders headers) {
        // Business Logic......
        this.sendMsgToNewTopic(newTopicName, transformedTrans);
    }
}
Console output is:
2019-07-29 15:28:03 ERROR LoggingErrorHandler:37 - Error while processing: ConsumerRecord(topic = trisyntrans, partition = 0, offset = 10, CreateTime = 1564432082456, serialized key size = -1, serialized value size = 30, headers = RecordHeaders(headers = [], isReadOnly = false), key = null, value = this is failed deserialization)
org.springframework.kafka.support.converter.ConversionException: Failed to convert from JSON; nested exception is com.fasterxml.jackson.core.JsonParseException: Unrecognized token 'this': was expecting 'null', 'true', 'false' or NaN
at [Source: (String)"this is failed deserialization"; line: 1, column: 5]
at org.springframework.kafka.support.converter.StringJsonMessageConverter.extractAndConvertValue(StringJsonMessageConverter.java:128)
at org.springframework.kafka.support.converter.MessagingMessageConverter.toMessage(MessagingMessageConverter.java:132)
at org.springframework.kafka.listener.adapter.MessagingMessageListenerAdapter.toMessagingMessage(MessagingMessageListenerAdapter.java:264)
at org.springframework.kafka.listener.adapter.RecordMessagingMessageListenerAdapter.onMessage(RecordMessagingMessageListenerAdapter.java:74)
at org.springframework.kafka.listener.adapter.RecordMessagingMessageListenerAdapter.onMessage(RecordMessagingMessageListenerAdapter.java:50)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeOnMessage(KafkaMessageListenerContainer.java:1275)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeOnMessage(KafkaMessageListenerContainer.java:1258)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeRecordListener(KafkaMessageListenerContainer.java:1219)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeWithRecords(KafkaMessageListenerContainer.java:1200)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeRecordListener(KafkaMessageListenerContainer.java:1120)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeListener(KafkaMessageListenerContainer.java:935)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.pollAndInvoke(KafkaMessageListenerContainer.java:751)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:700)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.fasterxml.jackson.core.JsonParseException: Unrecognized token 'this': was expecting 'null', 'true', 'false' or NaN
at [Source: (String)"this is failed deserialization"; line: 1, column: 5]
at com.fasterxml.jackson.core.JsonParser._constructError(JsonParser.java:1804)
at com.fasterxml.jackson.core.base.ParserMinimalBase._reportError(ParserMinimalBase.java:679)
at com.fasterxml.jackson.core.json.ReaderBasedJsonParser._reportInvalidToken(ReaderBasedJsonParser.java:2839)
at com.fasterxml.jackson.core.json.ReaderBasedJsonParser._reportInvalidToken(ReaderBasedJsonParser.java:2817)
at com.fasterxml.jackson.core.json.ReaderBasedJsonParser._matchToken(ReaderBasedJsonParser.java:2606)
at com.fasterxml.jackson.core.json.ReaderBasedJsonParser._matchTrue(ReaderBasedJsonParser.java:2558)
at com.fasterxml.jackson.core.json.ReaderBasedJsonParser.nextToken(ReaderBasedJsonParser.java:717)
at com.fasterxml.jackson.databind.ObjectMapper._initForReading(ObjectMapper.java:4141)
at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:4000)
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3042)
at org.springframework.kafka.support.converter.StringJsonMessageConverter.extractAndConvertValue(StringJsonMessageConverter.java:125)
... 15 more
An example of a non-conforming message could be a simple string such as "This is a test message".
You have to create the DLT topic yourself. The framework will do it for you if you add a NewTopic bean to the application context:

@Bean
public NewTopic dlt(@Value("${spring.kafka.subscription.topic}") String mainTopic) {
    return new NewTopic(mainTopic + ".DLT", 10, (short) 3);
}

This works as long as there is a KafkaAdmin @Bean in the application context (if you are using Spring Boot, one will be auto-configured for you).
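If the application is not a Boot app, a minimal sketch of declaring that admin bean yourself (the broker address is an assumed placeholder):

@Bean
public KafkaAdmin admin() {
    Map<String, Object> configs = new HashMap<>();
    // assumed address; align with spring.kafka.bootstrap-servers
    configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    return new KafkaAdmin(configs);
}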

How to use spring-kafka for sending a message again

We are using spring-kafka 1.2.2.RELEASE.
What we want
1. As soon as a message is consumed and processed successfully, its offset is committed in spring-kafka. I am using manual commit/acknowledgement for this, and it is working fine.
2. In case of any exception, we want spring-kafka to resend the same message. We throw a RuntimeException on any system error, which is logged by spring-kafka and never committed.
This is fine, as we don't want it to commit, but that message stays in spring-kafka and never comes back unless we restart the service. On restart the message comes back, executes once again, and then stays in spring-kafka again.
What we tried
1. I have tried both ErrorHandler and RetryingMessageListenerAdapter, but in both cases we have to code in the service how to process the message again.
This is my consumer
public class MyConsumer {

    @KafkaListener
    public void receive(...) {
        // application logic to return success/failure
        if (success) {
            acknowledgement.acknowledge();
        } else {
            throw new RuntimeException();
        }
    }
}
Also, I have the following configuration for the container factory:
factory.getContainerProperties().setErrorHandler(new ErrorHandler() {
    @Override
    public void handle(...) {
        throw new RuntimeException("");
    }
});
While executing the flow, control comes first to receive and then to the handle method. After that, the service waits for a new message. However, I was expecting that, since we threw an exception and the message was not committed, the same message would land in the receive method again.
Is there any way we can tell spring-kafka "do not commit this message and send it again asap"?
1.2.x is no longer supported; 1.x users are recommended to upgrade to at least 1.3.x (currently 1.3.8) because of its much simpler threading model, thanks to KIP-62.
The current version is 2.2.2.
2.0.1 introduced the SeekToCurrentErrorHandler which re-seeks the failed record so that it is redelivered.
With earlier versions, you had to stop and restart the container to redeliver a failed message, or add retry to the listener adapter.
I suggest you upgrade to the newest possible release.
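For reference, a minimal sketch of wiring it up after such an upgrade (the factory mirrors the config below; SeekToCurrentErrorHandler is available from spring-kafka 2.0.1):

@Bean
public ConcurrentKafkaListenerContainerFactory<String, MyModel> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, MyModel> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(consumerConfigs()));
    factory.getContainerProperties().setAckMode(AckMode.MANUAL);
    // Re-seeks the failed record (and the rest of the poll) so the next poll
    // redelivers it instead of silently moving on.
    factory.getContainerProperties().setErrorHandler(new SeekToCurrentErrorHandler());
    return factory;
}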
Unfortunately, the version available for us to use is 1.3.7.RELEASE.
I have tried implementing the ConsumerSeekAware interface. Below is how I am doing it, and I can see the message being delivered repeatedly.
Consumer
public class MyConsumer implements ConsumerSeekAware {

    private ConsumerSeekCallback consumerSeekCallback;

    // listener method (its signature is elided in the original post)
    public void receive(...) {
        if (condition) {
            acknowledgement.acknowledge();
        } else {
            consumerSeekCallback.seek((String) headers.get("kafka_receivedTopic"),
                    (int) headers.get("kafka_receivedPartitionId"),
                    (long) headers.get("kafka_offset")); // the offset header is a Long, not an int
        }
    }

    @Override
    public void registerSeekCallback(ConsumerSeekCallback consumerSeekCallback) {
        this.consumerSeekCallback = consumerSeekCallback;
    }

    @Override
    public void onIdleContainer(Map<TopicPartition, Long> arg0, ConsumerSeekCallback arg1) {
        LOGGER.debug("onIdleContainer called");
    }

    @Override
    public void onPartitionsAssigned(Map<TopicPartition, Long> arg0, ConsumerSeekCallback arg1) {
        LOGGER.debug("onPartitionsAssigned called");
    }
}
Config
public class MyConsumerConfig {

    @Bean
    public Map<String, Object> consumerConfigs() {
        Map<String, Object> props = new HashMap<>();
        // Set server, deserializer, group id
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
        return props;
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, MyModel> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, MyModel> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(consumerConfigs()));
        factory.getContainerProperties().setAckMode(AckMode.MANUAL);
        return factory;
    }

    @Bean
    public MyConsumer receiver() {
        return new MyConsumer();
    }
}

Spring Kafka: Kafka Listener - consumer.seek issue

We are using a Spring KafkaListener which acknowledges each record after it is processed to the DB. If we have problems writing to the DB, we don't acknowledge the record, so that offsets are not committed for the consumer. This works fine. Now we want to get the failed messages in the next poll to retry them. We added an error handler to our listener, invoked ConsumerAwareListenerErrorHandler, and tried to do consumer.seek() for the failed message's offset. The expectation is that during the next poll we should receive the failed messages. This is not happening: the next poll fetches only the new messages, not the failed ones. A code snippet is given below.
@Service
public class KafkaConsumer {

    @KafkaListener(topics = ("${kafka.input.stream.topic}"), containerFactory = "kafkaManualAckListenerContainerFactory", errorHandler = "listen3ErrorHandler")
    public void onMessage(ConsumerRecord<Integer, String> record,
            Acknowledgment acknowledgment) throws Exception {
        try {
            msg = JaxbUtil.convertJsonStringToMsg(record.value());
            onHandList = DCMUtil.convertMsgToOnHandDTO(msg);
            TeradataDAO.updateData(onHandList);
            acknowledgment.acknowledge();
            recordSuccess = true;
            LOGGER.info("Message Saved in Teradata DB");
        } catch (Exception e) {
            LOGGER.error("Error Processing On Hand Data ", e);
            recordSuccess = false;
        }
    }

    @Bean
    public ConsumerAwareListenerErrorHandler listen3ErrorHandler() throws InterruptedException {
        return (message, exception, consumer) -> {
            this.listen3Exception = exception;
            MessageHeaders headers = message.getHeaders();
            consumer.seek(new org.apache.kafka.common.TopicPartition(
                    headers.get(KafkaHeaders.RECEIVED_TOPIC, String.class),
                    headers.get(KafkaHeaders.RECEIVED_PARTITION_ID, Integer.class)),
                    headers.get(KafkaHeaders.OFFSET, Long.class));
            return null;
        };
    }
}
Container Class
@Bean
public Map<String, Object> consumerConfigs() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-1");
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
    return props;
}
#SuppressWarnings({ "rawtypes", "unchecked" })
#Bean
public ConsumerFactory consumerFactory() {
return new DefaultKafkaConsumerFactory(consumerConfigs());
}
#SuppressWarnings("unchecked")
#Bean
KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<Integer, String>>
kafkaManualAckListenerContainerFactory() {
ConcurrentKafkaListenerContainerFactory<Integer, String> factory =
new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(consumerFactory());
factory.getContainerProperties().setAckMode(AckMode.MANUAL);
return factory;
}
It's supposed to work like this:
The error handler needs to throw an exception if you want to discard the additional records from the previous poll.
Since you are "handling" the error, the container knows nothing about it and will continue to call the listener with the remaining records from the poll.
That said, I see that the container is also ignoring an exception thrown by the error handler (it only discards if the error handler throws an Error, not an exception). I will open an issue for this.
Another workaround would be to add the Consumer to the listener method signature and do the seek there (and throw an exception); see the sketch below. If there is no error handler, the rest of the batch is discarded.
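A hedged sketch of that workaround applied to the listener above (the DB-update body is elided; injecting the Consumer into the listener signature is a spring-kafka 2.0+ feature):

@KafkaListener(topics = ("${kafka.input.stream.topic}"), containerFactory = "kafkaManualAckListenerContainerFactory")
public void onMessage(ConsumerRecord<Integer, String> record, Acknowledgment acknowledgment,
        Consumer<?, ?> consumer) throws Exception {
    try {
        // original Teradata update logic goes here
        acknowledgment.acknowledge();
    } catch (Exception e) {
        // Rewind to the failed record, then rethrow; with no error handler the
        // remainder of this poll's batch is discarded and the record is
        // redelivered on the next poll.
        consumer.seek(new TopicPartition(record.topic(), record.partition()), record.offset());
        throw e;
    }
}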
Correction
If the container has no ErrorHandler, any Throwable thrown by a ListenerErrorHandler will cause the remaining records to be discarded.
Please try using SeekToCurrentErrorHandler. The doc says: "This allows implementations to seek all unprocessed topic/partitions so the current record (and the others remaining) will be retrieved by the next poll. The SeekToCurrentErrorHandler does exactly this.
The container will commit any pending offset commits before calling the error handler."
https://docs.spring.io/autorepo/docs/spring-kafka-dist/2.1.0.BUILD-SNAPSHOT/reference/htmlsingle/#_seek_to_current_container_error_handlers

spring-integration-dsl: Make Feed-Flow Work

I'm trying to code an RSS-feed reader with a configured set of RSS feeds. I thought a good approach would be to code a prototype @Bean and call it for each RSS feed found in the configuration.
However, I guess I'm missing a point here, as the application launches but nothing happens. I mean, the beans are created as I'd expect, but there is no logging happening in that handle() method:
@Component
public class HomeServerRunner implements ApplicationRunner {

    private static final Logger logger = LoggerFactory.getLogger(HomeServerRunner.class);

    @Autowired
    private Configuration configuration;

    @Autowired
    private FeedConfigurator feedConfigurator;

    @Override
    public void run(ApplicationArguments args) throws Exception {
        List<IntegrationFlow> feedFlows = configuration.getRssFeeds()
                .entrySet()
                .stream()
                .peek(entry -> System.out.println(entry.getKey()))
                .map(entry -> feedConfigurator.feedFlow(entry.getKey(), entry.getValue()))
                .collect(Collectors.toList());
        // this one appears in the log-file and looks good
        logger.info("Flows: " + feedFlows);
    }
}
@Configuration
public class FeedConfigurator {

    private static final Logger logger = LoggerFactory.getLogger(FeedConfigurator.class);

    @Bean
    @Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
    public IntegrationFlow feedFlow(String name, FeedConfiguration configuration) {
        return IntegrationFlows
                .from(Feed
                        .inboundAdapter(configuration.getSource(), getElementName(name, "adapter"))
                        .feedFetcher(new HttpClientFeedFetcher()),
                        spec -> spec.poller(Pollers.fixedRate(configuration.getInterval())))
                .channel(MessageChannels.direct(getElementName(name, "in")))
                .enrichHeaders(spec -> spec.header("feedSource", configuration))
                .channel(getElementName(name, "handle"))
                //
                // it would be nice if the following would show something:
                //
                .handle(m -> logger.debug("Payload: " + m.getPayload()))
                .get();
    }

    private String getElementName(String name, String postfix) {
        name = "feedChannel" + StringUtils.capitalize(name);
        if (!StringUtils.isEmpty(postfix)) {
            name += "." + postfix;
        }
        return name;
    }
}
What's missing here? It seems as if I need to "start" the flows somehow.
Prototype beans need to be "used" somewhere; if you don't have a reference to one anywhere, no instance will be created.
Further, you can't put an IntegrationFlow @Bean in that scope; it generates a bunch of beans internally which won't be in that scope.
See the answer to this question and its follow-up for one technique you can use to create multiple adapters with different properties.
Alternatively, the upcoming 1.2 version of the DSL has a mechanism to register flows dynamically.
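For illustration, a minimal sketch of that dynamic-registration route (IntegrationFlowContext from the newer DSL; the method name and the reuse of the question's FeedConfiguration are assumptions):

@Autowired
private IntegrationFlowContext flowContext;

public void registerFeedFlow(String name, FeedConfiguration configuration) {
    IntegrationFlow flow = IntegrationFlows
            .from(Feed.inboundAdapter(configuration.getSource(), name + ".adapter"),
                    spec -> spec.poller(Pollers.fixedRate(configuration.getInterval())))
            .handle(m -> System.out.println("Payload: " + m.getPayload()))
            .get();
    // Registers the flow's beans at runtime and starts it; no prototype scope involved.
    flowContext.registration(flow).id(name).register();
}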
