spring-kafka delivery attempt is not increased when a message is retried

I am using spring-kafka 2.8.6 with a RetryTopicConfiguration.
@KafkaListener(
        topics = "...",
        groupId = "...",
        containerFactory = "kafkaListenerContainerFactory")
public void listenWithHeaders(final @Valid @Payload Event event,
        @Header(KafkaHeaders.DELIVERY_ATTEMPT) final int deliveryAttempt) {
}
I have set up a common error handler and also enabled the delivery attempt header.
@Bean
public ConcurrentKafkaListenerContainerFactory<String, Event> kafkaListenerContainerFactory(
        @Qualifier("ConsumerFactory") final ConsumerFactory<String, Event> consumerFactory) {
    final ConcurrentKafkaListenerContainerFactory<String, Event> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    factory.setCommonErrorHandler(new DefaultErrorHandler(new ExponentialBackOff(
            kafkaProperties.getExponentialBackoffInitialInterval(),
            kafkaProperties.getExponentialBackoffMultiplier())));
    LOGGER.info("setup ConcurrentKafkaListenerContainerFactory");
    factory.getContainerProperties().setDeliveryAttemptHeader(true);
    return factory;
}
But when a retry is triggered, the delivery attempt in the message header is always 1; it never increases.
Am I missing any other part? Thanks!
--- I am using a retry topic configuration.
@Bean
public RetryTopicConfiguration retryableTopicKafkaTemplate(
        @Qualifier("kafkaTemplate") KafkaTemplate<String, Event> kafkaTemplate) {
    return RetryTopicConfigurationBuilder
            .newInstance()
            .exponentialBackoff(
                    properties.getExponentialBackoffInitialInterval(),
                    properties.getExponentialBackoffMultiplier(),
                    properties.getExponentialBackoffMaxInterval())
            .autoCreateTopics(properties.isRetryTopicAutoCreateTopics(),
                    properties.getRetryTopicAutoCreateNumPartitions(),
                    properties.getRetryTopicAutoCreateReplicationFactor())
            .maxAttempts(properties.getMaxAttempts())
            .notRetryOn(...)
            .retryTopicSuffix(properties.getRetryTopicSuffix())
            .dltSuffix(properties.getDltSuffix())
            .create(kafkaTemplate);
}
---- Following Gary's suggestion, I have it fully working now with my listener.
@KafkaListener(
        topics = "...",
        groupId = "...",
        containerFactory = "kafkaListenerContainerFactory")
public void listenWithHeaders(final @Valid @Payload Event event,
        @Header(value = RetryTopicHeaders.DEFAULT_HEADER_ATTEMPTS, required = false) final Integer deliveryAttempt) {
    ...
}

It works fine for me with this:
@SpringBootApplication
public class So72871495Application {

    public static void main(String[] args) {
        SpringApplication.run(So72871495Application.class, args);
    }

    @KafkaListener(id = "so72871495", topics = "so72871495")
    void listen(String in, @Header(KafkaHeaders.DELIVERY_ATTEMPT) int delivery) {
        System.out.println(in + " " + delivery);
        throw new RuntimeException("test");
    }

    @Bean
    public NewTopic topic() {
        return TopicBuilder.name("so72871495").partitions(1).replicas(1).build();
    }

    @Bean
    ApplicationRunner runner(KafkaTemplate<String, String> template,
            AbstractKafkaListenerContainerFactory<?, ?, ?> factory) {
        factory.getContainerProperties().setDeliveryAttemptHeader(true);
        factory.setCommonErrorHandler(new DefaultErrorHandler(new FixedBackOff(5000L, 3)));
        return args -> {
            template.send("so72871495", "foo");
        };
    }

}
foo 1
foo 2
foo 3
foo 4
If you can't figure out what's different for you, please provide an MCRE so I can see what's wrong.
EDIT
With @RetryableTopic, that header is always 1 because each delivery is the first attempt from a different topic.
Use this instead
void listen(String in, @Header(name = RetryTopicHeaders.DEFAULT_HEADER_ATTEMPTS, required = false) Integer attempts) {
Note Integer, not int; it will be null on the first attempt and 2, 3, etc. on the retries.
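Put together, a minimal sketch of such a listener (the id and topic are placeholders, assuming a RetryTopicConfiguration setup like the one above):
// a minimal sketch; "retryDemo" and "someTopic" are placeholder names
@KafkaListener(id = "retryDemo", topics = "someTopic")
void listen(String in,
        @Header(name = RetryTopicHeaders.DEFAULT_HEADER_ATTEMPTS, required = false) Integer attempts) {
    // the header is absent on the first delivery (main topic), so treat null as attempt 1
    int attempt = attempts == null ? 1 : attempts;
    System.out.println(in + " attempt " + attempt);
}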

Related

Force Spring Kafka not to create topics automatically, but to use already created ones

There is quite a simple case I would like to implement:
I have a base and DLT topics:
MessageBus:
Topic: my_topic
DltTopic: my_dlt_topic
Broker: event-serv:9092
So, those topics are already predefined; I don't need to create them automatically.
The only thing I need is to handle broken messages automatically, without retries, because retries don't make any sense here, so I have something like this:
@KafkaListener(topics = ["#{config.messageBus.topic}"], groupId = "group_id")
@RetryableTopic(
    dltStrategy = DltStrategy.FAIL_ON_ERROR,
    autoCreateTopics = "false",
    attempts = "1"
)
@Throws(IOException::class)
fun consume(rawMessage: String?) {
    ...
}

@DltHandler
fun processMessage(rawMessage: String?) {
    kafkaTemplate.send(config.messageBus.dltTopic, rawMessage)
}
That of course doesn't work properly.
I also tried to specify a kafkaTemplate
@Bean
fun kafkaTemplate(
    config: Config,
    producerFactory: ProducerFactory<String, String>
): KafkaTemplate<String, String> {
    val template = KafkaTemplate(producerFactory)
    template.defaultTopic = config.messageBus.dltTopic
    return template
}
however, that does not change the situation.
In the end, I believe there is an obvious solution, so please give me a hint about it.
See the documentation.
@SpringBootApplication
public class So69317126Application {

    public static void main(String[] args) {
        SpringApplication.run(So69317126Application.class, args);
    }

    @RetryableTopic(attempts = "1", autoCreateTopics = "false", dltStrategy = DltStrategy.FAIL_ON_ERROR)
    @KafkaListener(id = "so69317126", topics = "so69317126")
    void listen(String in) {
        System.out.println(in);
        throw new RuntimeException();
    }

    @DltHandler
    void handler(String in) {
        System.out.println("DLT: " + in);
    }

    @Bean
    RetryTopicNamesProviderFactory namer() {
        return new RetryTopicNamesProviderFactory() {

            @Override
            public RetryTopicNamesProvider createRetryTopicNamesProvider(Properties properties) {
                if (properties.isMainEndpoint()) {
                    return new SuffixingRetryTopicNamesProviderFactory.SuffixingRetryTopicNamesProvider(properties) {

                        @Override
                        public String getTopicName(String topic) {
                            return "so69317126";
                        }

                    };
                }
                else if (properties.isDltTopic()) {
                    return new SuffixingRetryTopicNamesProviderFactory.SuffixingRetryTopicNamesProvider(properties) {

                        @Override
                        public String getTopicName(String topic) {
                            return "so69317126.DLT";
                        }

                    };
                }
                else {
                    throw new IllegalStateException("Shouldn't get here - attempts is only 1");
                }
            }

        };
    }

}
so69317126: partitions assigned: [so69317126-0]
so69317126-dlt: partitions assigned: [so69317126.DLT-0]
foo
DLT: foo
This is a Kafka server configuration, so you must set it on the server. The relevant property is:
auto.create.topics.enable (true by default)
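If you want to confirm the broker-side value from code, something like this should work (a sketch using the plain kafka-clients AdminClient; the broker id "0" and the bootstrap address are assumptions, and .get() throws checked exceptions, so call it from a method that declares throws Exception):
// sketch: read auto.create.topics.enable from broker "0";
// AdminClient, AdminClientConfig, Config are in org.apache.kafka.clients.admin,
// ConfigResource is in org.apache.kafka.common.config
try (AdminClient admin = AdminClient.create(
        Map.of(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "event-serv:9092"))) {
    ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, "0");
    Config config = admin.describeConfigs(List.of(broker)).all().get().get(broker);
    System.out.println(config.get("auto.create.topics.enable").value());
}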

RetryingBatchErrorHandler - Offset commit handling

I'm using spring-kafka 2.3.8 and I'm trying to log the recovered records and commit the offsets using RetryingBatchErrorHandler. How would you commit the offset in the recoverer?
public class Customizer implements ContainerCustomizer {

    private static ConsumerRecordRecoverer createConsumerRecordRecoverer() {
        return (consumerRecord, e) -> {
            log.info("Number of attempts exhausted. partition: " + consumerRecord.partition()
                    + ", offset: " + consumerRecord.offset());
            // need to commit the offset
        };
    }

    @Override
    public void configure(AbstractMessageListenerContainer container) {
        container.setBatchErrorHandler(new RetryingBatchErrorHandler(
                new FixedBackOff(5000L, 3L), createConsumerRecordRecoverer()));
    }
}
The container will automatically commit the offsets if the error handler "handles" the exception, unless you set the ackAfterHandle property to false (it is true by default).
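Conversely, if you want the recoverer to run but leave the offsets uncommitted, a minimal sketch inside the customizer (assuming your version exposes a setter for the ackAfterHandle property mentioned above):
// sketch: disable the automatic commit after the recoverer handles the batch;
// ackAfterHandle is true by default
RetryingBatchErrorHandler handler = new RetryingBatchErrorHandler(
        new FixedBackOff(5000L, 3L), createConsumerRecordRecoverer());
handler.setAckAfterHandle(false);
container.setBatchErrorHandler(handler);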
EDIT
This works as expected for me:
@SpringBootApplication
public class So69534923Application {

    private static final Logger log = LoggerFactory.getLogger(So69534923Application.class);

    public static void main(String[] args) {
        SpringApplication.run(So69534923Application.class, args);
    }

    @KafkaListener(id = "so69534923", topics = "so69534923")
    void listen(List<String> in) {
        System.out.println(in);
        throw new RuntimeException("test");
    }

    @Bean
    RetryingBatchErrorHandler eh() {
        return new RetryingBatchErrorHandler(new FixedBackOff(1000L, 2), (rec, ex) -> {
            log.info("Retries exhausted for " + ListenerUtils.recordToString(rec, true));
        });
    }

    @Bean
    ApplicationRunner runner(ConcurrentKafkaListenerContainerFactory<?, ?> factory,
            KafkaTemplate<String, String> template) {
        factory.getContainerProperties().setCommitLogLevel(Level.INFO);
        return args -> {
            template.send("so69534923", "foo");
            template.send("so69534923", "bar");
        };
    }

}
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.listener.type=batch
so69534923: partitions assigned: [so69534923-0]
[foo, bar]
[foo, bar]
[foo, bar]
Retries exhausted for so69534923-0@2
Retries exhausted for so69534923-0@3
Committing: {so69534923-0=OffsetAndMetadata{offset=4, leaderEpoch=null, metadata=''}}
The log was from the second run.
EDIT2
It does not work with 2.3.x; you should upgrade to a supported version.
https://spring.io/projects/spring-kafka#learn

metrics method of MessageListenerContainer is not capturing the right values

I am using spring-kafka 2.2.8 to create a batch consumer and am trying to capture my container metrics to understand the performance details of the batch consumer.
@Bean
public ConsumerFactory consumerFactory() {
    return new DefaultKafkaConsumerFactory(consumerConfigs(), stringKeyDeserializer(), avroValueDeserializer());
}

@Bean
public FixedBackOffPolicy getBackOffPolicy() {
    FixedBackOffPolicy backOffPolicy = new FixedBackOffPolicy();
    backOffPolicy.setBackOffPeriod(100);
    return backOffPolicy;
}

@Bean
public ConcurrentKafkaListenerContainerFactory kafkaBatchListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory factory = new ConcurrentKafkaListenerContainerFactory();
    factory.setConsumerFactory(consumerFactory());
    factory.setBatchListener(true);
    factory.setStatefulRetry(true);
    return factory;
}
public Map<String, Object> consumerConfigs() {
    Map<String, Object> configs = new HashMap<>();
    batchConsumerConfigProperties.setKeyDeserializerClassConfig();
    batchConsumerConfigProperties.setValueDeserializerClassConfig();
    batchConsumerConfigProperties.setKeyDeserializerClass(StringDeserializer.class);
    batchConsumerConfigProperties.setValueDeserializerClass(KafkaAvroDeserializer.class);
    batchConsumerConfigProperties.setSpecificAvroReader("true");
    batchConsumerConfigProperties.setAutoOffsetResetConfig(environment.getProperty("sapphire.kes.consumer.auto.offset.reset", "earliest"));
    batchConsumerConfigProperties.setEnableAutoCommitConfig(environment.getProperty("sapphire.kes.consumer.enable.auto.commit", "false"));
    batchConsumerConfigProperties.setMaxPollIntervalMs(environment.getProperty(MAX_POLL_INTERVAL_MS_CONFIG, "300000"));
    batchConsumerConfigProperties.setMaxPollRecords(environment.getProperty(MAX_POLL_RECORDS_CONFIG, "50000"));
    batchConsumerConfigProperties.setSessionTimeoutms(environment.getProperty(SESSION_TIMEOUT_MS_CONFIG, "10000"));
    batchConsumerConfigProperties.setRequestTimeOut(environment.getProperty(REQUEST_TIMEOUT_MS_CONFIG, "30000"));
    batchConsumerConfigProperties.setHeartBeatIntervalMs(environment.getProperty(HEARTBEAT_INTERVAL_MS_CONFIG, "3000"));
    batchConsumerConfigProperties.setFetchMinBytes(environment.getProperty(FETCH_MIN_BYTES_CONFIG, "1"));
    batchConsumerConfigProperties.setFetchMaxBytes(environment.getProperty(FETCH_MAX_BYTES_CONFIG, "52428800"));
    batchConsumerConfigProperties.setFetchMaxWaitMS(environment.getProperty(FETCH_MAX_WAIT_MS_CONFIG, "500"));
    batchConsumerConfigProperties.setMaxPartitionFetchBytes(environment.getProperty(MAX_PARTITION_FETCH_BYTES_CONFIG, "1048576"));
    batchConsumerConfigProperties.setConnectionsMaxIdleMs(environment.getProperty(CONNECTIONS_MAX_IDLE_MS_CONFIG, "540000"));
    batchConsumerConfigProperties.setAutoCommitIntervalMS(environment.getProperty(AUTO_COMMIT_INTERVAL_MS_CONFIG, "5000"));
    batchConsumerConfigProperties.setReceiveBufferBytes(environment.getProperty(RECEIVE_BUFFER_CONFIG, "65536"));
    batchConsumerConfigProperties.setSendBufferBytes(environment.getProperty(SEND_BUFFER_CONFIG, "131072"));
    return configs;
}
Here is my consumer code where I'm trying to capture the container metrics
@Component
public class MyBatchConsumer {

    private final KafkaListenerEndpointRegistry registry;

    @Autowired
    public MyBatchConsumer(KafkaListenerEndpointRegistry registry) {
        this.registry = registry;
    }

    @KafkaListener(topics = "myTopic", containerFactory = "kafkaBatchListenerContainerFactory", id = "myBatchConsumer")
    public void consumeRecords(List<ConsumerRecord> messages) {
        System.out.println("messages size - " + messages.size());
        if (mybatchconsumerMessageCount == 0) {
            ConsumerPerfTestingConstants.batchConsumerStartTime = System.currentTimeMillis();
            ConsumerPerfTestingConstants.batchConsumerStartDateTime = LocalDateTime.now().format(DateTimeFormatter.ofPattern("MM/dd/yyyy HH:mm:ss"));
        }
        mybatchconsumerMessageCount = mybatchconsumerMessageCount + messages.size();
        System.out.println("\n\n\n batchConsumerConsumedMessages " + mybatchconsumerMessageCount);
        if (mybatchconsumerMessageCount == targetMessageCount) {
            System.out.println("ATTENTION! ATTENTION! ATTENTION! Consumer Finished processing " + messageCount + " messages");
            registry.getListenerContainerIds().forEach(
                    listenerId -> System.out.println(" kes batch consumer listenerId is " + listenerId)
            );
            String listenerID = registry.getListenerContainerIds().stream().filter(listenerId -> listenerId.startsWith("myBatchConsumer")).findFirst().get();
            System.out.println(" kes batch consumer listenerID is " + listenerID);
            Map<String, Map<MetricName, ? extends Metric>> metrics = registry.getListenerContainer(listenerID).metrics();
            registry.getListenerContainer(listenerID).stop();
            System.out.println("metrics - " + metrics);
        }
    }
}
Now, I'm trying to consume 10 records to see what the metrics look like, and I see the values below and am not sure why. Can someone help me understand what I am missing here?
records-consumed-total = 0
records-consumed-rate = 0
This works fine for me; I am using 2.6.2, but the container simply delegates to the consumer when calling metrics.
@SpringBootApplication
public class So64878927Application {

    public static void main(String[] args) {
        SpringApplication.run(So64878927Application.class, args);
    }

    @Autowired
    KafkaListenerEndpointRegistry registry;

    @KafkaListener(id = "so64878927", topics = "so64878927")
    void listen(List<String> in) {
        System.out.println(in);
        Map<String, Map<MetricName, ? extends Metric>> metrics = registry.getListenerContainer("so64878927").metrics();
        System.out.println("L: " + metrics.get("consumer-so64878927-1").entrySet().stream()
                .filter(entry -> entry.getKey().name().startsWith("records-consumed"))
                .map(entry -> entry.getValue().metricName().name() + " = " + entry.getValue().metricValue())
                .collect(Collectors.toList()));
        registry.getListenerContainer("so64878927").stop(() -> System.out.println("Stopped"));
    }

    @Bean
    NewTopic topic() {
        return TopicBuilder.name("so64878927").build();
    }

    @EventListener
    void idleEvent(ListenerContainerIdleEvent event) {
        Map<String, Map<MetricName, ? extends Metric>> metrics = registry.getListenerContainer("so64878927").metrics();
        System.out.println("I: " + metrics.get("consumer-so64878927-1").entrySet().stream()
                .filter(entry -> entry.getKey().name().startsWith("records-consumed"))
                .map(entry -> entry.getValue().metricName().name() + " = " + entry.getValue().metricValue())
                .collect(Collectors.toList()));
    }

}
spring.kafka.listener.type=batch
spring.kafka.listener.idle-event-interval=6000
[foo, bar, baz, foo, bar, baz]
L: [records-consumed-total = 6.0, records-consumed-rate = 0.1996472897880411, records-consumed-total = 6.0, records-consumed-rate = 0.1996539331824837]
I am not sure why the metrics are duplicated but, as I said, all we do is call the consumer's metrics method.
By the way, if you want to stop the container from the listener, you should use the async stop - see my example.
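For context: a synchronous stop() from the listener would block, because it waits for the consumer thread (the very thread invoking the listener) to exit; the callback variant returns immediately, e.g.:
// stop the container from inside the listener without blocking the consumer thread;
// the callback runs when the stop actually completes
registry.getListenerContainer("so64878927").stop(() -> System.out.println("Stopped"));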

Requeue the failed record in the kafka topic

I have a use case where the records are to be persisted in a table which has a foreign key to itself.
Example:
zObject
{
uid,
name,
parentuid
}
The parent uid is also present in the same table, and any object with a non-existent parentuid will fail to persist.
At times the records are placed in the topic in such a way that the dependency is not at the head of the list; instead, it arrives after the records that depend on it.
This causes failures in processing the records. I have used the SeekToCurrentErrorHandler, which retries the same failed record with the given backoff, and it keeps failing since the dependency is not met.
Is there any way I can requeue the record at the end of the topic so that the dependency is met? If it fails, say, 5 times even after requeuing, the record can be pushed to a DLT.
Thanks,
Rajasekhar
There is nothing built in; you can, however, use a custom destination resolver in the DeadLetterPublishingRecoverer to determine which topic to publish to, based on a header in the failed record.
See https://docs.spring.io/spring-kafka/docs/2.6.2/reference/html/#dead-letters
EDIT
@SpringBootApplication
public class So64646996Application {

    public static void main(String[] args) {
        SpringApplication.run(So64646996Application.class, args);
    }

    @Bean
    public NewTopic topic() {
        return TopicBuilder.name("so64646996").partitions(1).replicas(1).build();
    }

    @Bean
    public NewTopic dlt() {
        return TopicBuilder.name("so64646996.DLT").partitions(1).replicas(1).build();
    }

    @Bean
    public ErrorHandler eh(KafkaOperations<String, String> template) {
        return new SeekToCurrentErrorHandler(new DeadLetterPublishingRecoverer(template,
                (rec, ex) -> {
                    org.apache.kafka.common.header.Header retries = rec.headers().lastHeader("retries");
                    if (retries == null) {
                        retries = new RecordHeader("retries", new byte[] { 1 });
                        rec.headers().add(retries);
                    }
                    else {
                        retries.value()[0]++;
                    }
                    return retries.value()[0] > 5
                            ? new TopicPartition("so64646996.DLT", rec.partition())
                            : new TopicPartition("so64646996", rec.partition());
                }), new FixedBackOff(0L, 0L));
    }

    @KafkaListener(id = "so64646996", topics = "so64646996")
    public void listen(String in,
            @Header(KafkaHeaders.OFFSET) long offset,
            @Header(name = "retries", required = false) byte[] retry) {

        // the header is absent on the first delivery, so guard against null
        System.out.println(in + "@" + offset + ":" + (retry == null ? 0 : retry[0]));
        throw new IllegalStateException();
    }

    @KafkaListener(id = "so64646996.DLT", topics = "so64646996.DLT")
    public void listenDLT(String in,
            @Header(KafkaHeaders.OFFSET) long offset,
            @Header(name = "retries", required = false) byte[] retry) {

        System.out.println("DLT: " + in + "@" + offset + ":" + (retry == null ? 0 : retry[0]));
    }

    @Bean
    public ApplicationRunner runner(KafkaTemplate<String, String> template) {
        return args -> System.out.println(template.send("so64646996", "foo").get(10, TimeUnit.SECONDS)
                .getRecordMetadata());
    }

}

Spring Kafka unit test triggers the listener, but the test cannot get the message using consumer.poll

We are using spring-kafka-test 2.2.8.RELEASE.
When I use the template to send the message, it triggers the listener correctly, but I can't get the message content in consumer.poll. If I instantiate the KafkaTemplate without wiring it into a class attribute and instead instantiate it from a producer factory, it sends the message but does not trigger the @KafkaListener; that only works if I set up a message listener inside the @Test method. I need to trigger the Kafka listener, determine which topic will be called next (the "success" topic when it executes without errors, and the "errorTopic" when the listener throws an exception), and check the message content.
@RunWith(SpringRunner.class)
@SpringBootTest
@EmbeddedKafka(partitions = 1, topics = { "tp-in-gco-mao-notasfiscais" })
public class InvoicingServiceTest {

    @Autowired
    private NFKafkaListener nfKafkaListener;

    @ClassRule
    public static EmbeddedKafkaRule broker = new EmbeddedKafkaRule(1, false, "tp-in-gco-mao-notasfiscais");

    @Value("${" + EmbeddedKafkaBroker.SPRING_EMBEDDED_KAFKA_BROKERS + "}")
    private String brokerAddresses;

    @Autowired
    private KafkaTemplate<Object, Object> template;

    @BeforeClass
    public static void setup() {
        System.setProperty(EmbeddedKafkaBroker.BROKER_LIST_PROPERTY, "spring.kafka.bootstrap-servers");
    }

    @Test
    public void testTemplate() throws Exception {
        NFServiceTest nfServiceTest = spy(new NFServiceTest());
        nfKafkaListener.setNfServiceClient(nfServiceTest);
        Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("teste9", "false", broker.getEmbeddedKafka());
        consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, InvoiceDeserializer.class);
        consumerProps.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        DefaultKafkaConsumerFactory<Integer, Object> cf = new DefaultKafkaConsumerFactory<Integer, Object>(consumerProps);
        Consumer<Integer, Object> consumer = cf.createConsumer();
        broker.getEmbeddedKafka().consumeFromAnEmbeddedTopic(consumer, "tp-in-gco-mao-notasfiscais");
        ZfifNfMao zf = new ZfifNfMao();
        zf.setItItensnf(new Zfietb011());
        Zfietb011 zfietb011 = new Zfietb011();
        Zfie011 zfie011 = new Zfie011();
        zfie011.setMatkl("TESTE");
        zfietb011.getItem().add(zfie011);
        zf.setItItensnf(zfietb011);
        template.send("tp-in-gco-mao-notasfiscais", zf);
        List<ConsumerRecord<Integer, Object>> received = new ArrayList<>();
        int n = 0;
        while (received.size() < 1 && n++ < 10) {
            ConsumerRecords<Integer, Object> records1 = consumer.poll(Duration.ofSeconds(10));
            // records1 is always empty
            if (!records1.isEmpty()) {
                records1.forEach(rec -> received.add(rec));
            }
        }
        assertThat(received).extracting(rec -> {
            ZfifNfMao zfifNfMaoRdesponse = (ZfifNfMao) rec.value();
            return zfifNfMaoRdesponse.getItItensnf().getItem().get(0).getMatkl();
        }).contains("TESTE");
        broker.getEmbeddedKafka().getKafkaServers().forEach(b -> b.shutdown());
        broker.getEmbeddedKafka().getKafkaServers().forEach(b -> b.awaitShutdown());
        consumer.close();
    }

    public static class NFServiceTest implements INFServiceClient {

        CountDownLatch latch = new CountDownLatch(1);

        @Override
        public ZfifNfMaoResponse enviarSap(ZfifNfMao zfifNfMao) {
            ZfifNfMaoResponse zfifNfMaoResponse = new ZfifNfMaoResponse();
            zfifNfMaoResponse.setItItensnf(new Zfietb011());
            Zfietb011 zfietb011 = new Zfietb011();
            Zfie011 zfie011 = new Zfie011();
            zfie011.setMatkl("TESTE");
            zfietb011.getItem().add(zfie011);
            zfifNfMaoResponse.setItItensnf(zfietb011);
            return zfifNfMaoResponse;
        }
    }
}
You have two brokers: one created by @EmbeddedKafka and one created by the @ClassRule.
Use one or the other; preferably the @EmbeddedKafka, and simply @Autowire the broker instance.
I am guessing the consumers are listening to different brokers; you can confirm that by looking at the INFO logs put out by the consumer config.
I've followed your advice; it still triggers the listener, but consumer.poll does not capture the topic content.
@RunWith(SpringRunner.class)
@SpringBootTest
@EmbeddedKafka(partitions = 1, topics = { "tp-in-gco-mao-notasfiscais" })
public class InvoicingServiceTest {

    @Autowired
    private NFKafkaListener nfKafkaListener;

    @Autowired
    public EmbeddedKafkaBroker broker;

    @Autowired
    private KafkaTemplate<Object, Object> template;

    @BeforeClass
    public static void setup() {
        System.setProperty(EmbeddedKafkaBroker.BROKER_LIST_PROPERTY, "spring.kafka.bootstrap-servers");
    }

    @Test
    public void testTemplate() throws Exception {
        NFServiceTest nfServiceTest = spy(new NFServiceTest());
        nfKafkaListener.setNfServiceClient(nfServiceTest);
        Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("teste9", "false", broker);
        consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, InvoiceDeserializer.class);
        DefaultKafkaConsumerFactory<Integer, Object> cf = new DefaultKafkaConsumerFactory<Integer, Object>(consumerProps);
        Consumer<Integer, Object> consumer = cf.createConsumer();
        broker.consumeFromAnEmbeddedTopic(consumer, "tp-in-gco-mao-notasfiscais");
        ZfifNfMao zf = new ZfifNfMao();
        zf.setItItensnf(new Zfietb011());
        Zfietb011 zfietb011 = new Zfietb011();
        Zfie011 zfie011 = new Zfie011();
        zfie011.setMatkl("TESTE");
        zfietb011.getItem().add(zfie011);
        zf.setItItensnf(zfietb011);
        template.send("tp-in-gco-mao-notasfiscais", zf);
        List<ConsumerRecord<Integer, Object>> received = new ArrayList<>();
        int n = 0;
        while (received.size() < 1 && n++ < 10) {
            ConsumerRecords<Integer, Object> records1 = consumer.poll(Duration.ofSeconds(10));
            // records1 is always empty
            if (!records1.isEmpty()) {
                records1.forEach(rec -> received.add(rec));
            }
        }
        assertThat(received).extracting(rec -> {
            ZfifNfMao zfifNfMaoRdesponse = (ZfifNfMao) rec.value();
            return zfifNfMaoRdesponse.getItItensnf().getItem().get(0).getMatkl();
        }).contains("TESTE");
        broker.getKafkaServers().forEach(b -> b.shutdown());
        broker.getKafkaServers().forEach(b -> b.awaitShutdown());
        consumer.close();
    }

    public static class NFServiceTest implements INFServiceClient {

        CountDownLatch latch = new CountDownLatch(1);

        @Override
        public ZfifNfMaoResponse enviarSap(ZfifNfMao zfifNfMao) {
            ZfifNfMaoResponse zfifNfMaoResponse = new ZfifNfMaoResponse();
            zfifNfMaoResponse.setItItensnf(new Zfietb011());
            Zfietb011 zfietb011 = new Zfietb011();
            Zfie011 zfie011 = new Zfie011();
            zfie011.setMatkl("TESTE");
            zfietb011.getItem().add(zfie011);
            zfifNfMaoResponse.setItItensnf(zfietb011);
            return zfifNfMaoResponse;
        }
    }
}
