KafkaListenerContainerFactory not getting created properly - spring-kafka

I have two listener container factories, one for the main topic and another for the retry topic, as given below:
@Bean
public ConcurrentKafkaListenerContainerFactory<String, Object> primaryKafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, Object> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(primaryConsumerFactory());
    factory.setConcurrency(3);
    factory.setAutoStartup(false);
    factory.getContainerProperties().setAckOnError(false);
    factory.getContainerProperties().setAckMode(AckMode.RECORD);
    errorHandler.setAckAfterHandle(true);
    factory.setErrorHandler(errorHandler);
    return factory;
}

@Bean
public ConsumerFactory<String, Object> primaryConsumerFactory() {
    Map<String, Object> map = new HashMap<>();
    Properties consumerProperties = getConsumerProperties();
    consumerProperties.put(ConsumerConfig.GROUP_ID_CONFIG, "groupid");
    consumerProperties.forEach((key, value) -> map.put((String) key, value));
    ErrorHandlingDeserializer2<Object> errorHandlingDeserializer = new ErrorHandlingDeserializer2<>(
            getSoapMessageConverter());
    DefaultKafkaConsumerFactory<String, Object> consumerFactory = new DefaultKafkaConsumerFactory<>(map);
    consumerFactory.setValueDeserializer(errorHandlingDeserializer);
    return consumerFactory;
}

@Bean
public ConcurrentKafkaListenerContainerFactory<String, Object> kafkaRetryListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, Object> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(retryConsumerFactory());
    factory.setConcurrency(3);
    factory.setAutoStartup(false);
    factory.getContainerProperties().setAckOnError(false);
    factory.getContainerProperties().setAckMode(AckMode.MANUAL_IMMEDIATE);
    factory.setErrorHandler(new SeekToCurrentErrorHandler(
            new MyDeadLetterPublishingRecoverer("mytopic",
                    deadLetterKafkaTemplate()),
            new FixedBackOff(5000, 2)));
    return factory;
}

@Bean
public ConsumerFactory<String, Object> retryConsumerFactory() {
    Map<String, Object> map = new HashMap<>();
    Properties consumerProperties = getConsumerProperties();
    consumerProperties.put(ConsumerConfig.GROUP_ID_CONFIG, "retry.id");
    consumerProperties.put("max.poll.interval.ms", "60000");
    consumerProperties.forEach((key, value) -> map.put((String) key, value));
    DefaultKafkaConsumerFactory<String, Object> retryConsumerFactory = new DefaultKafkaConsumerFactory<>(map);
    retryConsumerFactory.setValueDeserializer(getCustomMessageConverter());
    return retryConsumerFactory;
}
I have two separate listener classes, each of which uses one of the aforementioned container factories.
There are two issues here:
Spring complains with: Error creating bean with name 'kafkaListenerContainerFactory' defined Caused by: org.springframework.beans.factory.NoSuchBeanDefinitionException: No qualifying bean of type 'org.springframework.kafka.core.ConsumerFactory' available: expected at least 1 bean which qualifies as autowire candidate.
To fix this I have to rename primaryKafkaListenerContainerFactory to kafkaListenerContainerFactory. Why is this so?
The second issue is that kafkaRetryListenerContainerFactory does not seem to pick up the properties I set in retryConsumerFactory (especially "max.poll.interval.ms"); instead it uses the properties set on primaryConsumerFactory in kafkaListenerContainerFactory.

To fix this I have to rename primaryKafkaListenerContainerFactory to kafkaListenerContainerFactory. Why is this so?
That is correct; kafkaListenerContainerFactory is the default bean name used when no containerFactory is specified on the listener, and Boot will try to auto-configure it.
You should give one of your custom factories that name to override Boot's auto-configuration, because you have an incompatible consumer factory.
Your second question makes no sense to me.
Perhaps your getConsumerProperties() is returning the same object each time - you need a copy.
When asking questions like this, it's best to show all the relevant code.
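To illustrate the "you need a copy" point: if getConsumerProperties() returns the same Properties instance on every call, the retry factory's puts overwrite the primary factory's, and vice versa. A minimal sketch of the fix, with a hypothetical class name and placeholder property values (getConsumerProperties() in the question is not shown, so this is an assumption about its shape):

```java
import java.util.Properties;

public class ConsumerProps {

    // Shared base properties; the actual values are illustrative.
    private static final Properties BASE = new Properties();
    static {
        BASE.put("bootstrap.servers", "localhost:9092");
    }

    // Return a fresh copy so each factory's overrides stay isolated.
    // Returning BASE itself would let one factory's puts leak into the other.
    static Properties getConsumerProperties() {
        Properties copy = new Properties();
        copy.putAll(BASE);
        return copy;
    }

    public static void main(String[] args) {
        Properties primary = getConsumerProperties();
        primary.put(ConsumerConfigKeys.GROUP_ID, "groupid");

        Properties retry = getConsumerProperties();
        retry.put(ConsumerConfigKeys.GROUP_ID, "retry.id");
        retry.put("max.poll.interval.ms", "60000");

        // Each factory now sees only its own overrides:
        // primary has no max.poll.interval.ms, retry has its own group id.
        System.out.println(primary.getProperty("max.poll.interval.ms"));
        System.out.println(retry.getProperty(ConsumerConfigKeys.GROUP_ID));
    }

    // Stand-in for ConsumerConfig constants, to keep the sketch self-contained.
    static class ConsumerConfigKeys {
        static final String GROUP_ID = "group.id";
    }
}
```

With the shared-instance version, the second factory's "retry.id" and "max.poll.interval.ms" would silently apply to the primary consumer as well, which matches the symptom described above.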

Related

Declare a Map<String, Object> as a scoped variable cause problem

When I declare a session-scoped Map<String, Object> bean in my Spring MVC project as below:
@Bean
@SessionScope
public Map<String, Object> allProjects() {
    return new TreeMap<>();
}
It is weird that it contains many unexpected things even though I didn't put anything into it - just as if it were the whole session scope. This does not happen if I declare it as Map<String, String>. Is there any formal statement in the documentation about this?

Spring Kafka bean return types

The documentation for Spring Kafka's Kafka Streams support shows something like:
@Bean
public KStream<Integer, String> kStream(StreamsBuilder kStreamBuilder) {
    KStream<Integer, String> stream = kStreamBuilder.stream("streamingTopic1");
    // ... stream config
    return stream;
}
However, I might want a topology dependent on multiple streams or tables. Can I do:
@Bean
public KStream<Integer, String> kStream(StreamsBuilder kStreamBuilder) {
    KStream<Integer, String> stream1 = kStreamBuilder.stream("streamingTopic1");
    KStream<Integer, String> stream2 = kStreamBuilder.stream("streamingTopic2");
    // ... stream config
    return stream1;
}
In other words, is the bean returned relevant, or is it only important that kStreamBuilder is being mutated?
It depends.
If you don't need a reference to the KStream elsewhere, there is no need to define it as a bean at all; you can autowire the StreamsBuilder, which is created by the factory bean.
If you need a reference, then each one must be its own bean.
For example, Spring Cloud Stream builds a partial stream which the application then modifies. See here.
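To make the first option concrete: the factory bean builds the topology from whatever was registered on the injected StreamsBuilder, so several streams can be wired up in one method without exposing any of them as beans. A sketch, assuming spring-kafka's StreamsBuilderFactoryBean is configured elsewhere; the topic names and class name are placeholders:

```java
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Configuration;

@Configuration
public class TopologyConfig {

    // No KStream beans needed: mutating the StreamsBuilder is what
    // actually defines the topology the factory bean will start.
    @Autowired
    public void buildTopology(StreamsBuilder kStreamBuilder) {
        KStream<Integer, String> stream1 = kStreamBuilder.stream("streamingTopic1");
        KStream<Integer, String> stream2 = kStreamBuilder.stream("streamingTopic2");
        // ... join, branch, or otherwise combine stream1 and stream2 here ...
        stream1.merge(stream2).to("outputTopic");
    }
}
```

If some other bean needed to inject, say, stream1 directly, then (per the answer above) it would have to be promoted to its own @Bean method.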

Spring kafka: What’s the relationship b/w concurrency we set and listeners?

I'm using ConcurrentKafkaListenerContainerFactory like this:
@Bean
KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    factory.setConcurrency(40);
    factory.getContainerProperties().setPollTimeout(3000);
    return factory;
}
I also have multiple listeners for specific topics:
@KafkaListener(id = "id1", topicPattern = "test1.*")
public void listenTopic1(ConsumerRecord<String, String> record) {
    System.out.println("Topic: " + record.topic());
}

@KafkaListener(id = "id2", topicPattern = "test2.*")
public void listenTopic2(ConsumerRecord<String, String> record) {
    System.out.println("Topic: " + record.topic());
}
Is the concurrency I'm setting specific to one listener, or does it apply to all listeners? Note: all topics have 40 partitions, and some topics have more load than the rest.
Each container will get 40 consumer threads; the factory creates a separate container for each listener, all with the same properties.
Each topic will need at least 40 partitions for this to be effective, since a partition can only be consumed by one consumer in a group.
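Since the factory applies the same concurrency to every container it creates, a listener for a lighter topic can scale down its own container via the concurrency attribute on @KafkaListener (available since spring-kafka 2.2). A sketch reusing the ids and topic patterns from the question:

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class TopicListeners {

    // Heavily loaded topics: inherit the factory default of 40 threads.
    @KafkaListener(id = "id1", topicPattern = "test1.*")
    public void listenTopic1(ConsumerRecord<String, String> record) {
        System.out.println("Topic: " + record.topic());
    }

    // Lighter topics: override the factory's concurrency for this container only.
    @KafkaListener(id = "id2", topicPattern = "test2.*", concurrency = "10")
    public void listenTopic2(ConsumerRecord<String, String> record) {
        System.out.println("Topic: " + record.topic());
    }
}
```

Threads beyond the partition count of the matched topics sit idle, so the per-listener value should not exceed the total partitions that listener consumes.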

spring-kafka-test polling records from topics

I am using KafkaTestUtils to fetch consumer records for validation, along with other utilities, which is handy. However, when I call KafkaTestUtils.getSingleRecord(..., ...), it seems to fetch all the records sent to the topic by other test methods (for example verifyEmptinessErrorMessageInTopic()), and the assertion inside getSingleRecord fails because the count is not 1. My actual business-logic listener uses manual acks and I call acknowledgment.acknowledge() to commit the offset, but the test code still seems to fetch all records from the topic instead of the last one. I also tried consumer.commitSync(), which normally commits the offset, and that is not working either.
Am I missing any configuration in the test utilities here? Thanks for the input.
private static KafkaTemplate<String, Object> kafkaTemplate;
private static Consumer<String, Object> consumerCardEventTopic;
private static Consumer<String, Object> consumerCardEventErrorTopic;

@BeforeClass
public static void setup() throws Exception {
    // Producer setup
    Map<String, Object> producerConfig = KafkaTestUtility.producerProps(kafkaEmbedded);
    ProducerFactory<String, Object> pf = new DefaultKafkaProducerFactory<>(producerConfig);
    kafkaTemplate = new KafkaTemplate<>(pf);
    // Consumer for cardEventTopic setup
    Map<String, Object> consumerConfig =
            KafkaTestUtils.consumerProps("cardEventGroup", "true", kafkaEmbedded);
    consumerConfig.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    consumerConfig.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    ConsumerFactory<String, Object> cf = new DefaultKafkaConsumerFactory<>(consumerConfig);
    consumerCardEventTopic = cf.createConsumer();
    kafkaEmbedded.consumeFromAnEmbeddedTopic(consumerCardEventTopic, cardEventTopic);
    // Consumer for cardEventErrorTopic setup
    Map<String, Object> consumerConfigError =
            KafkaTestUtils.consumerProps("cardEventErrorGroup", "true", kafkaEmbedded);
    consumerConfigError.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    consumerConfigError.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    ConsumerFactory<String, Object> cf1 = new DefaultKafkaConsumerFactory<>(consumerConfigError);
    consumerCardEventErrorTopic = cf1.createConsumer();
    kafkaEmbedded.consumeFromAnEmbeddedTopic(consumerCardEventErrorTopic, cardEventErrorTopic);
}

@Test
public void verifyProcessedSuccessfully() {
    kafkaTemplate.send(cardEventTopic, accountValid());
    ConsumerRecord<String, Object> received =
            KafkaTestUtils.getSingleRecord(consumerCardEventTopic, cardEventTopic);
    assertThat(received).isNotNull();
    assertThat(received.value()).isInstanceOf(String.class);
}

@Test
public void verifyEmptinessErrorMessageInTopic() {
    kafkaTemplate.send(cardEventTopic, accountInValid());
    ConsumerRecord<String, Object> received =
            KafkaTestUtils.getSingleRecord(consumerCardEventErrorTopic, cardEventErrorTopic);
    assertThat(received).isNotNull();
    consumerCardEventTopic.commitSync();
}

@Test
public void testMethod3() {
}

@Test
public void testMethod4() {
}

Adding PathVariable changes view path on RequestMapping

I have a view resolver:
#Bean
public InternalResourceViewResolver viewResolver() {
InternalResourceViewResolver resolver = new InternalResourceViewResolver();
resolver.setPrefix("WEB-INF/jsp/");
resolver.setSuffix(".jsp");
return resolver;
}
and a controller:
@Controller
public class WorkflowListController {

    @RequestMapping(path = "/workflowlist", method = RequestMethod.GET)
    public ModelAndView index() throws LoginFailureException, PacketException,
            NetworkException {
        String profile = "dev";
        List<WorkflowInformation> workflows = workflows(profile);
        Map<String, Object> map = new HashMap<String, Object>();
        map.put("profile", profile);
        map.put("workflows", workflows);
        return new ModelAndView("workflowlist", map);
    }
}
and when I call the page http://127.0.0.1:8090/workflowlist it serves the jsp from src/main/webapp/WEB-INF/jsp/workflowlist.jsp. That all seems to work well.
However, when I try to add a @PathVariable:
@RequestMapping(path = "/workflowlist/{profile}", method = RequestMethod.GET)
public ModelAndView workflowlist(@PathVariable String profile)
        throws LoginFailureException, PacketException, NetworkException {
    List<WorkflowInformation> workflows = workflows(profile);
    Map<String, Object> map = new HashMap<String, Object>();
    map.put("profile", profile);
    map.put("workflows", workflows);
    return new ModelAndView("workflowlist", map);
}
calling the page http://127.0.0.1:8090/workflowlist/dev gives the following message:
There was an unexpected error (type=Not Found, status=404).
/workflowlist/WEB-INF/jsp/workflowlist.jsp
Can someone explain why I'm returning the same view name in both cases but in the second example it is behaving differently?
How can I get it to work?
The problem was with my viewResolver:
resolver.setPrefix("WEB-INF/jsp/");
should have been:
resolver.setPrefix("/WEB-INF/jsp/");
With a / at the front, the path is resolved from the root (the webapps folder), but when the / is missing it is treated as a path relative to the current request path.
I never got an answer as to why the view resolver only took the directory part of the current path, but that's what appeared to happen.
It's probably so you can define subtrees of views with different roots.