spring-kafka-test polling records from topics

I am using KafkaTestUtils to fetch consumer records for validation, along with its other handy utilities. However, when I call KafkaTestUtils.getSingleRecord(..., ...), it seems to fetch all the records sent to the topic by other test methods (for example, verifyEmptinessErrorMessageInTopic()), and it fails at the assertion inside getSingleRecord because the record count is not 1. My listener with the actual business logic uses manual acks, and I call acknowledgment.acknowledge() to commit the offset; but the test code still fetches all the records from the topic instead of just the last one. I also tried consumer.commitSync(), which normally commits the offset, and that is not working either.
Am I missing any configuration in the test utilities here? Thanks for the input.
private static KafkaTemplate<String, Object> kafkaTemplate;
private static Consumer<String, Object> consumerCardEventTopic;
private static Consumer<String, Object> consumerCardEventErrorTopic;

@BeforeClass
public static void setup() throws Exception {
    // Producer setup
    Map<String, Object> producerConfig = KafkaTestUtils.producerProps(kafkaEmbedded);
    ProducerFactory<String, Object> pf = new DefaultKafkaProducerFactory<>(producerConfig);
    kafkaTemplate = new KafkaTemplate<>(pf);

    // Consumer for cardEventTopic setup
    Map<String, Object> consumerConfig =
            KafkaTestUtils.consumerProps("cardEventGroup", "true", kafkaEmbedded);
    consumerConfig.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    consumerConfig.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    ConsumerFactory<String, Object> cf = new DefaultKafkaConsumerFactory<>(consumerConfig);
    consumerCardEventTopic = cf.createConsumer();
    kafkaEmbedded.consumeFromAnEmbeddedTopic(consumerCardEventTopic, cardEventTopic);

    // Consumer for cardEventErrorTopic setup
    Map<String, Object> consumerConfigError =
            KafkaTestUtils.consumerProps("cardEventErrorGroup", "true", kafkaEmbedded);
    consumerConfigError.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    consumerConfigError.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    ConsumerFactory<String, Object> cf1 = new DefaultKafkaConsumerFactory<>(consumerConfigError);
    consumerCardEventErrorTopic = cf1.createConsumer();
    kafkaEmbedded.consumeFromAnEmbeddedTopic(consumerCardEventErrorTopic, cardEventErrorTopic);
}

@Test
public void verifyProcessedSuccessfully() {
    kafkaTemplate.send(cardEventTopic, accountValid());
    ConsumerRecord<String, Object> received =
            KafkaTestUtils.getSingleRecord(consumerCardEventTopic, cardEventTopic);
    assertThat(received).isNotNull();
    assertThat(received.value()).isInstanceOf(String.class);
}

@Test
public void verifyEmptinessErrorMessageInTopic() {
    kafkaTemplate.send(cardEventTopic, accountInValid());
    ConsumerRecord<String, Object> received =
            KafkaTestUtils.getSingleRecord(consumerCardEventErrorTopic, cardEventErrorTopic);
    assertThat(received).isNotNull();
    consumerCardEventTopic.commitSync();
}

@Test
public void testMethod3() {
}

@Test
public void testMethod4() {
}
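One likely cause, as a hedged guess: both consumers are created once in @BeforeClass and every test publishes to cardEventTopic, so records produced by one test are still waiting on the topic when a later test calls getSingleRecord(), which asserts that exactly one record was polled. A minimal sketch of a drain step between tests, assuming the getRecords(consumer, timeout) overload is available in this spring-kafka-test version (a plain consumer.poll(...) would do the same job):

@After
public void drainTopics() {
    // Poll briefly and discard whatever is left, so the next test's
    // getSingleRecord() only sees the record that test itself produced.
    // The 100 ms timeout is an arbitrary illustrative value.
    KafkaTestUtils.getRecords(consumerCardEventTopic, 100);
    KafkaTestUtils.getRecords(consumerCardEventErrorTopic, 100);
}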

Related

Spring Kafka: Skip error message using CommonErrorHandler

I am using spring-kafka 2.8.9 and kafka-clients 2.8.1. I want to skip a message that fails to deserialize. Since setErrorHandler is deprecated, I tried using CommonErrorHandler, but I am not sure how to skip the current error message and move on to the next record. The only option I can see is pattern matching, extracting the relevant details such as offset and partition from a line like:
org.apache.kafka.common.errors.SerializationException: Error deserializing key/value for partition test-0 at offset 1. If needed, please seek past the record
Is there any other way, like RecordDeserializationException, to get the necessary information from the exception, or any other means without pattern matching? I cannot upgrade to kafka 3.x.x.
My config
@Bean
public ConsumerFactory<String, Farewell> farewellConsumerFactory() {
    groupId = LocalTime.now().toString();
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
    props.put(JsonDeserializer.TRUSTED_PACKAGES, "*");
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    return new DefaultKafkaConsumerFactory<>(props, new StringDeserializer(),
            new JsonDeserializer<>(Farewell.class));
}

@Bean
public ConcurrentKafkaListenerContainerFactory<String, Farewell> farewellKafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, Farewell> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setCommonErrorHandler(new CommonErrorHandler() {
        @Override
        public void handleOtherException(Exception thrownException, Consumer<?, ?> consumer,
                MessageListenerContainer container, boolean batchListener) {
            CommonErrorHandler.super.handleOtherException(thrownException, consumer, container, batchListener);
        }
    });
    factory.setConsumerFactory(farewellConsumerFactory());
    return factory;
}
My listener class
@KafkaListener(topics = "${topicId}",
        containerFactory = "farewellKafkaListenerContainerFactory")
public void farewellListener(Farewell message) {
    System.out.println("Received Message in group " + groupId + "| " + message);
}
Domain class
public class Farewell {

    private String message;
    private Integer remainingMinutes;

    public Farewell(String message, Integer remainingMinutes) {
        this.message = message;
        this.remainingMinutes = remainingMinutes;
    }

    // standard getters, setters and constructor
}
I have checked these links
How to skip a msg that have error in kafka when i use ConcurrentMessageListenerContainer?
Better way of error handling in Kafka Consumer
Use an ErrorHandlingDeserializer as a wrapper around your real deserializer.
Serialization exceptions will be sent directly to the DefaultErrorHandler, which treats such exceptions as fatal (by default) and sends them directly to the recoverer.
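A minimal sketch of that suggestion, adapted to the Farewell config above (assumed wiring, not code from the original answer; ErrorHandlingDeserializer lives in org.springframework.kafka.support.serializer and FixedBackOff in org.springframework.util.backoff):

@Bean
public ConsumerFactory<String, Farewell> farewellConsumerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    return new DefaultKafkaConsumerFactory<>(props,
            new StringDeserializer(),
            // Wrap the real deserializer; a bad record now surfaces as a
            // DeserializationException instead of an endless poison-pill loop.
            new ErrorHandlingDeserializer<>(new JsonDeserializer<>(Farewell.class)));
}

@Bean
public ConcurrentKafkaListenerContainerFactory<String, Farewell> farewellKafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, Farewell> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(farewellConsumerFactory());
    // DefaultErrorHandler treats DeserializationException as fatal (no retries),
    // so the failed record goes straight to this recoverer, after which the
    // container seeks past it and continues with the next record.
    factory.setCommonErrorHandler(new DefaultErrorHandler(
            (record, ex) -> System.out.println("Skipping " + record.topic() + "-"
                    + record.partition() + " at offset " + record.offset() + ": " + ex.getMessage()),
            new FixedBackOff(0L, 0L)));
    return factory;
}

This also addresses the pattern-matching concern directly: the recoverer receives the ConsumerRecord itself, so partition and offset are available as fields rather than parsed out of the exception message.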

KafkaListenerContainerFactory not getting created properly

I have two listener container factories, one for the main topic and another for the retry topic, as given below:
@Bean
public ConcurrentKafkaListenerContainerFactory<String, Object> primaryKafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, Object> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(primaryConsumerFactory());
    factory.setConcurrency(3);
    factory.setAutoStartup(false);
    factory.getContainerProperties().setAckOnError(false);
    factory.getContainerProperties().setAckMode(AckMode.RECORD);
    errorHandler.setAckAfterHandle(true);
    factory.setErrorHandler(errorHandler);
    return factory;
}

@Bean
public ConsumerFactory<String, Object> primaryConsumerFactory() {
    Map<String, Object> map = new HashMap<>();
    Properties consumerProperties = getConsumerProperties();
    consumerProperties.put(ConsumerConfig.GROUP_ID_CONFIG, "groupid");
    consumerProperties.forEach((key, value) -> map.put((String) key, value));
    ErrorHandlingDeserializer2<Object> errorHandlingDeserializer =
            new ErrorHandlingDeserializer2<>(getSoapMessageConverter());
    DefaultKafkaConsumerFactory<String, Object> consumerFactory = new DefaultKafkaConsumerFactory<>(map);
    consumerFactory.setValueDeserializer(errorHandlingDeserializer);
    return consumerFactory;
}

@Bean
public ConcurrentKafkaListenerContainerFactory<String, Object> kafkaRetryListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, Object> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(retryConsumerFactory());
    factory.setConcurrency(3);
    factory.setAutoStartup(false);
    factory.getContainerProperties().setAckOnError(false);
    factory.getContainerProperties().setAckMode(AckMode.MANUAL_IMMEDIATE);
    factory.setErrorHandler(new SeekToCurrentErrorHandler(
            new MyDeadLetterPublishingRecoverer("mytopic", deadLetterKafkaTemplate()),
            new FixedBackOff(5000, 2)));
    return factory;
}

@Bean
public ConsumerFactory<String, Object> retryConsumerFactory() {
    Map<String, Object> map = new HashMap<>();
    Properties consumerProperties = getConsumerProperties();
    consumerProperties.put(ConsumerConfig.GROUP_ID_CONFIG, "retry.id");
    consumerProperties.put("max.poll.interval.ms", "60000");
    consumerProperties.forEach((key, value) -> map.put((String) key, value));
    DefaultKafkaConsumerFactory<String, Object> retryConsumerFactory = new DefaultKafkaConsumerFactory<>(map);
    retryConsumerFactory.setValueDeserializer(getCustomMessageConverter());
    return retryConsumerFactory;
}
I have two separate listener classes, each of which uses one of the aforementioned container factories.
There are two issues here:
Spring complains about - Error creating bean with name 'kafkaListenerContainerFactory' defined ... Caused by: org.springframework.beans.factory.NoSuchBeanDefinitionException: No qualifying bean of type 'org.springframework.kafka.core.ConsumerFactory' available: expected at least 1 bean which qualifies as autowire candidate.
To fix this I have to rename primaryKafkaListenerContainerFactory to kafkaListenerContainerFactory. Why is this so?
The second issue is that kafkaRetryListenerContainerFactory does not seem to take whatever properties I set in retryConsumerFactory (especially "max.poll.interval.ms"); instead it uses the properties set on primaryConsumerFactory in kafkaListenerContainerFactory.
To fix this I have to rename primaryKafkaListenerContainerFactory to kafkaListenerContainerFactory. Why is this so?

That is correct; kafkaListenerContainerFactory is the default name when there is no containerFactory on the listener, and Boot will try to auto-configure it.
You should give one of your custom factories that name to override Boot's auto-configuration, because you have an incompatible consumer factory.
Your second question makes no sense to me.
Perhaps your getConsumerProperties() is returning the same object each time - you need a copy.
When asking questions like this, it's best to show all the relevant code.
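To illustrate that last point (a hypothetical sketch; getConsumerProperties() and baseConsumerProperties are not shown in the question): if the method returns one shared Properties instance, the retry factory's "retry.id" group and "max.poll.interval.ms" setting overwrite the primary factory's values, because both bean methods mutate the same object. Returning a fresh copy on every call keeps the two consumer factories independent:

private Properties getConsumerProperties() {
    Properties props = new Properties();
    props.putAll(this.baseConsumerProperties); // hypothetical shared defaults, copied on each call
    return props;
}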

Spring Kafka: What's the relationship between the concurrency we set and the listeners?

I'm using ConcurrentKafkaListenerContainerFactory like this:
@Bean
KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    factory.setConcurrency(40);
    factory.getContainerProperties().setPollTimeout(3000);
    return factory;
}
I also have multiple listeners for specific topics:
#KafkaListener(id = "id1", topicPattern = "test1.*")
public void listenTopic1(ConsumerRecord<String, String> record) {
System.out.println("Topic: " + record.topic());
}
#KafkaListener(id = "id2", topicPattern = "test2.*")
public void listenTopic2(ConsumerRecord<String, String> record) {
System.out.println("Topic: " + record.topic());
}
Is the concurrency I'm setting specific to one listener, or does it apply to all listeners? Note: all topics have 40 partitions.
Some topics have more load than the rest.
Each container will get 40 consumer threads. The factory creates a container for each listener, with the same properties.
Your topics will need at least 40 partitions for this to be effective, since a partition can only be consumed by one consumer in a group.
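As a side note (not part of the original answer): when some topics carry more load than others, the concurrency can also be set per listener through the concurrency attribute of @KafkaListener (available since spring-kafka 2.2), instead of one factory-wide value. A sketch, with illustrative numbers:

@KafkaListener(id = "id1", topicPattern = "test1.*", concurrency = "40")
public void listenTopic1(ConsumerRecord<String, String> record) {
    System.out.println("Topic: " + record.topic());
}

// The lighter topic gets fewer consumer threads from the same factory.
@KafkaListener(id = "id2", topicPattern = "test2.*", concurrency = "5")
public void listenTopic2(ConsumerRecord<String, String> record) {
    System.out.println("Topic: " + record.topic());
}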

Retrofit RxJava Simple test

I'm learning Retrofit and RxJava, and I've created a test that connects to GitHub:
public class GitHubServiceTests {

    RestAdapter restAdapter;
    GitHubService service;

    @Before
    public void setUp() {
        Gson gson = new GsonBuilder()
                .setFieldNamingPolicy(FieldNamingPolicy.LOWER_CASE_WITH_UNDERSCORES)
                .create();
        restAdapter = new RestAdapter.Builder()
                .setEndpoint("https://api.github.com")
                .setConverter(new GsonConverter(gson))
                .build();
        service = restAdapter.create(GitHubService.class);
    }

    @Test
    public void GitHubUsersListObservableTest() {
        service.getObservableUserList().flatMap(Observable::from)
                .subscribe(user -> System.out.println(user.login));
    }
When I execute this test, I see nothing in my console. But when I execute another test,
@Test
public void GitHubUsersListTest() {
    List<User> users = service.getUsersList();
    for (User user : users) {
        System.out.println(user.login);
    }
}
it works, and I see the users' logins in my console.
Here is my interface for Retrofit:
public interface GitHubService {

    @GET("/users")
    List<User> getUsersList();

    @GET("/users")
    Observable<List<User>> getObservableUserList();
}
Where am I wrong?
Because of the asynchronous call, your test completes before a result is downloaded. That's a typical issue, and you have to 'tell' the test to wait for the result. In plain Java it would be:
@Test
public void GitHubUsersListObservableTest() throws InterruptedException {
    CountDownLatch latch = new CountDownLatch(N); // N = the number of expected items
    service.getObservableUserList()
            .flatMap(Observable::from)
            .subscribe(user -> {
                System.out.println(user.login);
                latch.countDown();
            });
    latch.await();
}
Or you can use BlockingObservable from RxJava:
// This does not block.
BlockingObservable<User> observable = service.getObservableUserList()
        .flatMap(Observable::from)
        .toBlocking();
// This blocks and is called for every emitted item.
observable.forEach(user -> System.out.println(user.login));
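Another option for RxJava 1.x tests (a sketch, assuming the same GitHubService as above) is TestSubscriber from rx.observers, which blocks until the stream terminates and adds assertions on top:

@Test
public void GitHubUsersListObservableTest() {
    TestSubscriber<User> subscriber = new TestSubscriber<>();
    service.getObservableUserList()
            .flatMap(Observable::from)
            .subscribe(subscriber);
    subscriber.awaitTerminalEvent();   // blocks until onCompleted or onError
    subscriber.assertNoErrors();       // fails the test if the request errored
    subscriber.getOnNextEvents().forEach(user -> System.out.println(user.login));
}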

Invoke Windows Workflow Recursively

I have a workflow that, at a certain point, needs to be triggered recursively.
I can't seem to figure out how to do this.
I tried the following code, but context ends up being null:
private void codeTriggerChildren_ExecuteCode(object sender, EventArgs e)
{
    ActivityExecutionContext context = sender as ActivityExecutionContext;
    // context is null here?!
    IStartWorkflow aWorkflow = context.GetService(typeof(ApprovalFlow)) as IStartWorkflow;
    Dictionary<string, object> parameters = new Dictionary<string, object>();
    parameters.Add("Parm1", "foo");
    parameters.Add("Parm2", "bar");
    Guid guid = aWorkflow.StartWorkflow(typeof(ApprovalFlow), parameters);
}
Primarily, the problem here is that the sender in this case is a CodeActivity, not an ActivityExecutionContext, so this code fails at the first hurdle.
Here is an example of a custom activity that can do what you are after:
public class RecurseApproval : Activity
{
    protected override ActivityExecutionStatus Execute(ActivityExecutionContext executionContext)
    {
        IStartWorkflow aWorkflow = executionContext.GetService(typeof(IStartWorkflow)) as IStartWorkflow;
        Dictionary<string, object> parameters = new Dictionary<string, object>();
        parameters.Add("Param1", "Foo");
        parameters.Add("Param2", "bar");
        Guid guid = aWorkflow.StartWorkflow(typeof(ApprovalWorkflow), parameters);
        return ActivityExecutionStatus.Closed;
    }
}
Note that GetService is passed typeof(IStartWorkflow), not the workflow type.
Your sender is of type CodeActivity, not ActivityExecutionContext. You need to create a custom activity and override the Execute method, which will pass you an ActivityExecutionContext.
