Why are the event handlers for ListenerContainerIdleEvent not getting called? - spring-kafka

I have configured 2 listeners in my app config.
@Bean
public ConsumerFactory<String, String> consumerFactoryForReceiveEvent() {
    Map<String, Object> props = kafkaProps();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "<server 1>");
    return new DefaultKafkaConsumerFactory<>(props);
}

@Bean
public ConsumerFactory<String, String> consumerFactoryForPutawayEvent() {
    Map<String, Object> props = kafkaProps();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "<server 2>");
    return new DefaultKafkaConsumerFactory<>(props);
}
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactoryForReceiveEvent() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactoryForReceiveEvent());
    factory.setConcurrency(1);
    factory.getContainerProperties().setAckOnError(false);
    factory.getContainerProperties().setErrorHandler(new SeekToCurrentErrorHandler());
    factory.getContainerProperties().setIdleEventInterval(30000L);
    factory.getContainerProperties().setAckMode(AbstractMessageListenerContainer.AckMode.MANUAL);
    return factory;
}
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactoryForPutawayEvent() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactoryForPutawayEvent());
    factory.setConcurrency(1);
    factory.getContainerProperties().setAckOnError(false);
    factory.getContainerProperties().setErrorHandler(new SeekToCurrentErrorHandler());
    factory.getContainerProperties().setIdleEventInterval(30000L);
    factory.getContainerProperties().setAckMode(AbstractMessageListenerContainer.AckMode.MANUAL);
    return factory;
}
In order to get my app to start, I had to turn off Kafka auto-configuration. When messages are posted they are read, so I am sure the two listeners are working. However, when no messages are posted, the event handlers for ListenerContainerIdleEvent are not called. What could be preventing these event handlers from being called?
@EventListener
public void eventHandler(ListenerContainerIdleEvent event) {
    LOG.info("No messages received for " + event.getIdleTime() + " milliseconds");
}

I have just run into this issue, and I believe the reason none of the events were fired in the original code is the following: the factory creates a ConcurrentMessageListenerContainer (https://github.com/spring-projects/spring-kafka/blob/master/spring-kafka/src/main/java/org/springframework/kafka/listener/ConcurrentMessageListenerContainer.java), which publishes only a fraction of the application events and does not publish ListenerContainerIdleEvent. Its child KafkaMessageListenerContainer instances (https://github.com/spring-projects/spring-kafka/blob/master/spring-kafka/src/main/java/org/springframework/kafka/listener/KafkaMessageListenerContainer.java) do publish these events.
Using @KafkaListener defaults to KafkaMessageListenerContainer, which is why those events worked in Gary's example.
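One way to handle the events published by the child containers is a SpEL condition on the listener id. A minimal sketch, assuming a listener registered with id "foo" as in Gary's example below (the children get suffixed ids such as "foo-0", "foo-1"):
@EventListener(condition = "event.listenerId.startsWith('foo-')")
public void onIdle(ListenerContainerIdleEvent event) {
    // each child container publishes its own idle event
    LOG.info("No messages received for " + event.getIdleTime() + " ms on " + event.getListenerId());
}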

What version are you using? Where is your event listener defined? I just tested it with 2.1.5 and everything works fine...
@SpringBootApplication
public class So49889974Application {

    private static final Log logger = LogFactory.getLog(So49889974Application.class);

    public static void main(String[] args) {
        SpringApplication.run(So49889974Application.class, args);
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
            ConsumerFactory<String, String> consumerFactory) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        factory.setConcurrency(1);
        factory.getContainerProperties().setAckOnError(false);
        factory.getContainerProperties().setErrorHandler(new SeekToCurrentErrorHandler());
        factory.getContainerProperties().setIdleEventInterval(30000L);
        factory.getContainerProperties().setAckMode(AbstractMessageListenerContainer.AckMode.MANUAL);
        return factory;
    }

    @KafkaListener(id = "foo", topics = "so49889974")
    public void listen(String in) {
    }

    @EventListener
    public void eventHandler(ListenerContainerIdleEvent event) {
        logger.info(event);
    }

}
and
2018-04-18 10:39:04.238 INFO 92080 --- [ foo-0-C-1] o.s.k.l.KafkaMessageListenerContainer : partitions assigned: [so49889974-0]
2018-04-18 10:39:31.339 INFO 92080 --- [ foo-0-C-1] com.example.So49889974Application : ListenerContainerIdleEvent [idleTime=30.242s, listenerId=foo-0, container=KafkaMessageListenerContainer [id=foo-0, clientIndex=-0, topicPartitions=[so49889974-0]], paused=false, topicPartitions=[so49889974-0]]
2018-04-18 10:40:01.451 INFO 92080 --- [ foo-0-C-1] com.example.So49889974Application : ListenerContainerIdleEvent [idleTime=60.354s, listenerId=foo-0, container=KafkaMessageListenerContainer [id=foo-0, clientIndex=-0, topicPartitions=[so49889974-0]], paused=false, topicPartitions=[so49889974-0]]

Related

Kafka Synchronous Communication/Two way communication using ReplyingKafkaTemplate causing Lags in Response / Reply Topic

We have several microservices in our product. In some business use cases, one microservice (TryServiceOne) has to delegate a request to another microservice (TryServiceThree) while the end user waits for the API response, so we used ReplyingKafkaTemplate to respond to the caller immediately. Everything seems to be working, but we see lag on the REPLY topic, which causes our alerting system to bombard us with alerts. Behind the scenes the messages are read by the RequestReplyFuture and processed successfully, yet the lag reported by the Kafka broker keeps increasing. Please suggest how to avoid the lag.
IMPORTANT: We deploy the microservices as a cluster with more than one node per service, so we use custom partitioning to pin the response/reply topic to one partition at all times.
TryServiceOne
KafkaConfiguration.class
@Bean
public Map<String, Object> producerConfigs() {
    Map<String, Object> props = new HashMap<>();
    props.put(org.apache.kafka.clients.producer.ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaBootstrapServers);
    props.put(org.apache.kafka.clients.producer.ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(org.apache.kafka.clients.producer.ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
    return props;
}

@Bean
public Map<String, Object> consumerConfigs() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaBootstrapServers);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, consumerGroupId);
    return props;
}

@Bean
public ProducerFactory<String, RequestModel> requestProducerFactory() {
    return new DefaultKafkaProducerFactory<>(producerConfigs());
}

@Bean
public KafkaTemplate<String, RequestModel> kafkaTemplate() {
    return new KafkaTemplate<>(requestProducerFactory());
}

@Bean
public ReplyingKafkaTemplate<String, RequestModel, ResponseModel> replyKafkaTemplate(ProducerFactory<String, RequestModel> pf,
        KafkaMessageListenerContainer<String, ResponseModel> container) {
    return new ReplyingKafkaTemplate<>(pf, container);
}

@Bean
public KafkaMessageListenerContainer<String, ResponseModel> replyContainer(ConsumerFactory<String, ResponseModel> cf) {
    TopicPartitionOffset topicPartitionOffset = new TopicPartitionOffset("RESPONSE_TOPIC", 0);
    ContainerProperties containerProperties = new ContainerProperties(topicPartitionOffset);
    containerProperties.setAckMode(ContainerProperties.AckMode.MANUAL);
    return new KafkaMessageListenerContainer<>(cf, containerProperties);
}
My SendAndReceive service component looks like this:
RequestModel requestModel= new RequestModel();
distributorRequestEvent.setDistributorModel(producerRecord);
// create producer record
ProducerRecord<String, RequestModel> record = new ProducerRecord<String, RequestModel>("REQUEST_TOPIC", requestModel);
// set reply topic in header
record.headers().add(new RecordHeader(KafkaHeaders.REPLY_TOPIC, "RESPONSE_TOPIC".getBytes(StandardCharsets.UTF_8)));
kafkaTemplate.setDefaultReplyTimeout(Duration.ofSeconds(30));
LOGGER.info("Sending message ... {}",producerRecord);
RequestReplyFuture<String, RequestModel, ResponseModel> sendAndReceive = kafkaTemplate.sendAndReceive(record);
// confirm if producer produced successfully
SendResult<String, RequestModel> sendResult = sendAndReceive.getSendFuture().get();
// get consumer record
ConsumerRecord<String, ResponseModel> consumerRecord = sendAndReceive.get();
return consumerRecord.value();
TryServiceThree Microservice
Kafka Configuration
@Bean
public Map<String, Object> consumerConfigs() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaBootstrapServers);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, consumerGroupId);
    props.put(JsonDeserializer.TYPE_MAPPINGS, RequestModel.class);
    return props;
}

@Bean
public Map<String, Object> producerConfigs() {
    Map<String, Object> props = new HashMap<>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaBootstrapServers);
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
    props.put(ProducerConfig.PARTITIONER_CLASS_CONFIG, CustomPartitioner.class);
    props.put(ProducerConfig.ACKS_CONFIG, "all");
    return props;
}
@Bean
public ConsumerFactory<String, RequestModel> requestConsumerFactory() {
    JsonDeserializer<RequestModel> deserializer = new JsonDeserializer<>(RequestModel.class);
    deserializer.setRemoveTypeHeaders(false);
    deserializer.addTrustedPackages("*");
    deserializer.setUseTypeMapperForKey(true);
    return new DefaultKafkaConsumerFactory<>(consumerConfigs(), new StringDeserializer(), deserializer);
}

@Bean
public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, RequestModel>> requestListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, RequestModel> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(requestConsumerFactory());
    // factory.setMessageConverter(new JsonMessageConverter());
    factory.setReplyTemplate(replyTemplate());
    return factory;
}

@Bean
public ProducerFactory<String, ResponseModel> replyProducerFactory() {
    return new DefaultKafkaProducerFactory<>(producerConfigs());
}

@Bean
public KafkaTemplate<String, ResponseModel> replyTemplate() {
    return new KafkaTemplate<>(replyProducerFactory());
}
CustomPartitioner on TryServiceThree:
public class CustomPartitioner implements Partitioner {

    @Override
    public int partition(String s, Object o, byte[] bytes, Object o1, byte[] bytes1, Cluster cluster) {
        return 0; // pin every record to partition 0
    }

    @Override
    public void close() {
    }

    @Override
    public void configure(Map<String, ?> map) {
    }
}
Use
containerProperties.setAckMode(ContainerProperties.AckMode.BATCH);
in the reply container. With MANUAL ack mode nothing ever acknowledges the reply records, so their offsets are never committed and the broker keeps reporting a growing lag.
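A sketch of the reply container bean from the question with that change applied (same bean names and models as above):
@Bean
public KafkaMessageListenerContainer<String, ResponseModel> replyContainer(ConsumerFactory<String, ResponseModel> cf) {
    TopicPartitionOffset topicPartitionOffset = new TopicPartitionOffset("RESPONSE_TOPIC", 0);
    ContainerProperties containerProperties = new ContainerProperties(topicPartitionOffset);
    // BATCH lets the container commit the reply offsets itself; with MANUAL,
    // nothing ever acknowledges them, so the reported lag keeps growing
    containerProperties.setAckMode(ContainerProperties.AckMode.BATCH);
    return new KafkaMessageListenerContainer<>(cf, containerProperties);
}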

Kafka consumer can't connect to broker other than localhost:9092 using Spring Boot 2.2.0.M4

I'm using Spring Boot 2.2.0.M4 and Kafka 2.2.0, trying to build an application based on the sample at https://www.baeldung.com/spring-kafka. When I enable the listener for my topic, I get the following error on the consumer:
[AdminClient clientId=adminclient-2] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
The following is defined in my application properties.
kafka.bootstrapAddress=172.22.22.55:9092
Here's the @KafkaListener annotated method:
@KafkaListener(topics = "add_app", groupId = "foo")
public void listen(String message) {
    System.out.println("Received Message in group foo: " + message);
}
Below is the consumer configuration class that references the kafka.bootstrapAddress value; the value is logged correctly.
@Configuration
@Slf4j
public class KafkaConsumerConfig {

    @Value(value = "${kafka.bootstrapAddress}")
    private String bootstrapAddress;

    public ConsumerFactory<String, String> consumerFactory(String groupId) {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        log.info("Created {} using address {}.", this.getClass(), bootstrapAddress);
        return new DefaultKafkaConsumerFactory<>(props);
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> fooKafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory("foo"));
        return factory;
    }
}
The solution to this is fairly simple: I just needed to add the following to the application.properties file.
spring.kafka.bootstrap-servers=172.22.22.55:9092
After looking at KafkaProperties.java, I found this line:
private List<String> bootstrapServers = new ArrayList<>(Collections.singletonList("localhost:9092"));
and this method actually builds them:
private Map<String, Object> buildCommonProperties() {
    Map<String, Object> properties = new HashMap<>();
    if (this.bootstrapServers != null) {
        properties.put("bootstrap.servers", this.bootstrapServers);
    }
    if (this.clientId != null) {
        properties.put("client.id", this.clientId);
    }
    properties.putAll(this.ssl.buildProperties());
    if (!CollectionUtils.isEmpty(this.properties)) {
        properties.putAll(this.properties);
    }
    return properties;
}
Since localhost:9092 is already predefined on that class, the broker defined in KafkaConsumerConfig was not being used.
Update
Adding the containerFactory attribute to the listener annotation also fixes it and removes the need for the change to application.properties.
@KafkaListener(topics = "add_app", groupId = "foo", containerFactory = "fooKafkaListenerContainerFactory")
public void listen(String message) {
    System.out.println("Received Message in group foo: " + message);
}
In order to use your custom property kafka.bootstrapAddress, you need to declare a KafkaAdmin @Bean. The admin client has its own configuration class, AdminClientConfig, which by default connects to 127.0.0.1:9092. To override the configuration you have to use something like this:
@Value(value = "${kafka.bootstrapAddress}")
private String bootstrapAddress;

@Bean
public KafkaAdmin kafkaAdmin() {
    Map<String, Object> configs = new HashMap<>();
    configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
    return new KafkaAdmin(configs);
}

SeekToCurrentErrorHandler: DeadLetterPublishingRecoverer is not handling deserialize errors

I am trying to write a Kafka consumer using the spring-kafka 2.3.0.M2 library.
To handle runtime errors I am using SeekToCurrentErrorHandler with DeadLetterPublishingRecoverer as my recoverer. This works fine when my consumer code throws an exception, but fails when the message cannot be deserialized.
I tried implementing ErrorHandler myself and was successful, but with this approach I end up writing the dead-letter publishing code myself, which I do not want to do.
Below are my Kafka properties:
spring:
  kafka:
    consumer:
      bootstrap-servers: localhost:9092
      group-id: group_id
      auto-offset-reset: latest
      key-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer2
      value-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer2
      properties:
        spring.json.trusted.packages: com.mypackage
        spring.deserializer.key.delegate.class: org.apache.kafka.common.serialization.StringDeserializer
        spring.deserializer.value.delegate.class: org.apache.kafka.common.serialization.StringDeserializer
@Bean
public ConcurrentKafkaListenerContainerFactory<Object, Object> kafkaListenerContainerFactory(
        ConcurrentKafkaListenerContainerFactoryConfigurer configurer,
        ConsumerFactory<Object, Object> kafkaConsumerFactory,
        KafkaTemplate<Object, Object> template) {
    ConcurrentKafkaListenerContainerFactory<Object, Object> factory = new ConcurrentKafkaListenerContainerFactory<>();
    configurer.configure(factory, kafkaConsumerFactory);
    factory.setErrorHandler(new SeekToCurrentErrorHandler(new DeadLetterPublishingRecoverer(template), maxFailures));
    return factory;
}
It works fine for me (note that Boot will auto-configure the error handler)...
@SpringBootApplication
public class So56728833Application {

    public static void main(String[] args) {
        SpringApplication.run(So56728833Application.class, args);
    }

    @Bean
    public SeekToCurrentErrorHandler errorHandler(KafkaTemplate<String, String> template) {
        SeekToCurrentErrorHandler eh = new SeekToCurrentErrorHandler(new DeadLetterPublishingRecoverer(template), 3);
        eh.setClassifier( // retry for all except deserialization exceptions
                new BinaryExceptionClassifier(Collections.singletonList(DeserializationException.class), false));
        return eh;
    }

    @KafkaListener(id = "so56728833", topics = "so56728833")
    public void listen(Foo in) {
        System.out.println(in);
        if (in.getBar().equals("baz")) {
            throw new IllegalStateException("Test retries");
        }
    }

    @KafkaListener(id = "so56728833dlt", topics = "so56728833.DLT")
    public void listenDLT(Object in) {
        System.out.println("Received from DLT: " + (in instanceof byte[] ? new String((byte[]) in) : in));
    }

    @Bean
    public NewTopic topic() {
        return TopicBuilder.name("so56728833").partitions(1).replicas(1).build();
    }

    @Bean
    public NewTopic dlt() {
        return TopicBuilder.name("so56728833.DLT").partitions(1).replicas(1).build();
    }

    public static class Foo {

        private String bar;

        public Foo() {
            super();
        }

        public Foo(String bar) {
            this.bar = bar;
        }

        public String getBar() {
            return this.bar;
        }

        public void setBar(String bar) {
            this.bar = bar;
        }

        @Override
        public String toString() {
            return "Foo [bar=" + this.bar + "]";
        }

    }

}
spring:
  kafka:
    consumer:
      auto-offset-reset: earliest
      enable-auto-commit: false
      key-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer2
      value-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer2
      properties:
        spring.json.trusted.packages: com.example
        spring.deserializer.key.delegate.class: org.springframework.kafka.support.serializer.JsonDeserializer
        spring.deserializer.value.delegate.class: org.springframework.kafka.support.serializer.JsonDeserializer
        spring.json.value.default.type: com.example.So56728833Application$Foo
    producer:
      key-serializer: org.springframework.kafka.support.serializer.JsonSerializer
      value-serializer: org.springframework.kafka.support.serializer.JsonSerializer

logging:
  level:
    org.springframework.kafka: trace
I have 3 records in the topic:
"badJSON"
"{\"bar\":\"baz\"}"
"{\"bar\":\"qux\"}"
I see the first one going directly to the DLT, and the second one goes there after 3 attempts.

Kafka Spring: How to write unit tests for ConcurrentKafkaListenerContainerFactory and ConcurrentMessageListenerContainer?

I have 2 classes; one for the factories and the other for the listener containers:
public class ConsumerFactories {

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, Byte[]> adeKafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, Byte[]> factory = new ConcurrentKafkaListenerContainerFactory<String, Byte[]>();
        factory.setConsumerFactory(consumerFactory1());
        factory.setConsumerFactory(consumerFactory2()); // note: this replaces the factory set on the previous line
        factory.getContainerProperties().setPollTimeout(3000);
        return factory;
    }
}
And my listener class has multiple containers:
@Bean
public ConcurrentMessageListenerContainer<String, byte[]> adeListenerContainer1() throws BeansException, ClassNotFoundException {
    final ContainerProperties containerProperties = new ContainerProperties("topic1");
    containerProperties.setMessageListener(new MessageListener<String, byte[]>() {

        @Override
        public void onMessage(ConsumerRecord<String, byte[]> record) {
            System.out.println("Thread is: " + Thread.currentThread().getName());
        }

    });
    ConcurrentMessageListenerContainer<String, byte[]> container =
            new ConcurrentMessageListenerContainer<>(consumerFactory1, containerProperties);
    container.setBeanName("bean1");
    container.setConcurrency(60);
    container.start();
    return container;
}

@Bean
public ConcurrentMessageListenerContainer<String, byte[]> adeListenerContainer2() throws BeansException, ClassNotFoundException {
    final ContainerProperties containerProperties = new ContainerProperties("topic1");
    containerProperties.setMessageListener(new MessageListener<String, byte[]>() {

        @Override
        public void onMessage(ConsumerRecord<String, byte[]> record) {
            System.out.println("Thread is: " + Thread.currentThread().getName());
        }

    });
    ConcurrentMessageListenerContainer<String, byte[]> container =
            new ConcurrentMessageListenerContainer<>(consumerFactory2, containerProperties);
    container.setBeanName("bean2");
    container.setConcurrency(60);
    container.start();
    return container;
}
1) How can I write unit tests for these two classes and their methods?
2) Since all my listener containers do the same processing work, just for different sets of topics, can I pass the topics in when setting the consumer factory, or in some other way?
1.
container.start();
Never start() components in bean definitions - the application context is not ready yet; the context will automatically start the containers at the right time (as long as autoStartup is true, the default).
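A minimal sketch of the first container bean without the premature start() call, under the same assumptions as the question (consumerFactory1, topic1):
@Bean
public ConcurrentMessageListenerContainer<String, byte[]> adeListenerContainer1() {
    ContainerProperties containerProperties = new ContainerProperties("topic1");
    containerProperties.setMessageListener((MessageListener<String, byte[]>) record ->
            System.out.println("Thread is: " + Thread.currentThread().getName()));
    ConcurrentMessageListenerContainer<String, byte[]> container =
            new ConcurrentMessageListenerContainer<>(consumerFactory1, containerProperties);
    container.setBeanName("bean1");
    container.setConcurrency(60);
    // no container.start() here; the application context starts it (autoStartup defaults to true)
    return container;
}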
Why do you need a container factory if you are creating the containers yourself?
It's not clear what you want to test.
EDIT
Here's an example of programmatically registering containers, using Spring Boot's auto-configured container factory (2.2 and above)...
@SpringBootApplication
public class So53752783Application {

    public static void main(String[] args) {
        SpringApplication.run(So53752783Application.class, args);
    }

    @SuppressWarnings("unchecked")
    @Bean
    public SmartInitializingSingleton creator(ConfigurableListableBeanFactory beanFactory,
            ConcurrentKafkaListenerContainerFactory<String, String> factory) {
        return () -> Stream.of("foo", "bar", "baz").forEach(topic -> {
            ConcurrentMessageListenerContainer<String, String> container = factory.createContainer(topic);
            container.getContainerProperties().setMessageListener((MessageListener<String, String>) record -> {
                System.out.println("Received " + record);
            });
            container.getContainerProperties().setGroupId(topic + ".group");
            container = (ConcurrentMessageListenerContainer<String, String>)
                    beanFactory.initializeBean(container, topic + ".container");
            beanFactory.registerSingleton(topic + ".container", container);
            container.start();
        });
    }

}
To unit test your listener, get it with
container.getContainerProperties().getMessageListener()
cast it, invoke onMessage(), and verify it did what you expected.
EDIT 2: Unit testing the listener
@SpringBootApplication
public class So53752783Application {

    public static void main(String[] args) {
        SpringApplication.run(So53752783Application.class, args);
    }

    @SuppressWarnings("unchecked")
    @Bean
    public SmartInitializingSingleton creator(ConfigurableListableBeanFactory beanFactory,
            ConcurrentKafkaListenerContainerFactory<String, String> factory,
            MyListener listener) {
        return () -> Stream.of("foo", "bar", "baz").forEach(topic -> {
            ConcurrentMessageListenerContainer<String, String> container = factory.createContainer(topic);
            container.getContainerProperties().setMessageListener(listener);
            container.getContainerProperties().setGroupId(topic + ".group");
            container = (ConcurrentMessageListenerContainer<String, String>)
                    beanFactory.initializeBean(container, topic + ".container");
            beanFactory.registerSingleton(topic + ".container", container);
            container.start();
        });
    }

    @Bean
    public MyListener listener() {
        return new MyListener();
    }

    public static class MyListener implements MessageListener<String, String> {

        @Autowired
        private Service service;

        public void setService(Service service) {
            this.service = service;
        }

        @Override
        public void onMessage(ConsumerRecord<String, String> data) {
            this.service.callSomeService(data.value().toUpperCase());
        }

    }

    public interface Service {

        void callSomeService(String in);

    }

    @Component
    public static class DefaultService implements Service {

        @Override
        public void callSomeService(String in) {
            // ...
        }

    }

}
and
@RunWith(SpringRunner.class)
@SpringBootTest
public class So53752783ApplicationTests {

    @Autowired
    private ApplicationContext context;

    @Test
    public void test() {
        ConcurrentMessageListenerContainer<?, ?> container = context.getBean("foo.container",
                ConcurrentMessageListenerContainer.class);
        MyListener messageListener = (MyListener) container.getContainerProperties().getMessageListener();
        Service service = mock(Service.class);
        messageListener.setService(service);
        messageListener.onMessage(new ConsumerRecord<>("foo", 0, 0L, "key", "foo"));
        verify(service).callSomeService("FOO");
    }

}

Multiple consumers using spring kafka

I am looking to set up multiple listeners on a Kafka topic inside my application. Below is my setup. The topic is supposed to be consumed by both groups, but it is consumed by only one listener. What am I missing here?
@Bean
public Map<String, Object> consumerConfigs() {
    Map<String, Object> props = new HashMap<String, Object>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, groupName);
    return props;
}

@Bean
public ConsumerFactory<String, String> consumerFactory() {
    return new DefaultKafkaConsumerFactory<>(consumerConfigs());
}

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConcurrency(100);
    factory.setConsumerFactory(consumerFactory());
    return factory;
}

@Bean("notificationFactory")
public ConcurrentKafkaListenerContainerFactory<String, String> notificationFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConcurrency(100);
    factory.setConsumerFactory(consumerFactory());
    return factory;
}

@Bean("insertContainerFactory")
public ConcurrentKafkaListenerContainerFactory<String, String> insertContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConcurrency(100);
    factory.setConsumerFactory(consumerFactory());
    return factory;
}

@KafkaListener(id = "insert_listener", topics = "${kafka.topic.readlocation}", group = "insert_listener", containerFactory = "insertContainerFactory")
public void receiveForInsert(String message) {
    locationProcessor.insertLocationData(message);
}

@KafkaListener(id = "notification_listener", topics = "${kafka.topic.readlocation}", group = "notification_listener", containerFactory = "notificationFactory")
public void receiveForNotification(String message) {
    locationProcessor.processNotificationMessage(message);
}
Edit: Below is the code that worked:
@KafkaListener(id = "insert_listener", topics = "${kafka.topic.readlocation}", groupId = "insert_listener")
public void receiveForInsert(String message) {
    locationProcessor.insertLocationData(message);
}
You need a different group.id for each; the group property is not the group.id - see the javadocs. In the upcoming 1.3 release there is a new groupId property, and we can also use the id as the group if present.
For earlier versions you need a different consumer factory for each, as sketched below.
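For example, on versions before 1.3, something like this sketch (it reuses consumerConfigs() from the question and overrides only the group id; the group ids shown are illustrative):
@Bean
public ConsumerFactory<String, String> insertConsumerFactory() {
    // copy the shared config and give this listener its own consumer group
    Map<String, Object> props = new HashMap<>(consumerConfigs());
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "insert_listener");
    return new DefaultKafkaConsumerFactory<>(props);
}

@Bean
public ConsumerFactory<String, String> notificationConsumerFactory() {
    Map<String, Object> props = new HashMap<>(consumerConfigs());
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "notification_listener");
    return new DefaultKafkaConsumerFactory<>(props);
}
Each container factory then points at its own consumer factory, so each listener joins its own consumer group and both receive every message.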
