Records are getting re-delivered with manual ack (AckMode.MANUAL_IMMEDIATE) - spring-kafka

Is it possible for a record to be redelivered even after it has been properly acknowledged? Sometimes we receive the same record multiple times.
Attached image: https://i.stack.imgur.com/9yC1M.jpg. As the image shows, multiple services receive the same record at different times.
I am not able to reproduce this on my local machine.
@KafkaListener(topics = "${kafka.sample.topic}", containerFactory = "kafkaListenerContainerFactory")
public void listen(ConsumerRecord<String, String> record, Acknowledgment ack) {
    // Ack record
    sendAck(ack);
}

private void sendAck(Acknowledgment ack) {
    try {
        ack.acknowledge();
    } catch (Exception e) {
        logger.error("Exception occurred while sending ack");
        logger.error(ExceptionUtils.getStackTrace(e));
    }
}
@Bean
KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    factory.setConcurrency(10);
    factory.getContainerProperties().setAckMode(AckMode.MANUAL_IMMEDIATE);
    return factory;
}
public Map<String, Object> consumerConfigs() {
    Map<String, Object> consumerPropsMap = new HashMap<>();
    consumerPropsMap.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    consumerPropsMap.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, autoCommitConfig);
    consumerPropsMap.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, autoCommitConfigIntervalMs);
    consumerPropsMap.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, sessionTimeout);
    consumerPropsMap.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    consumerPropsMap.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    consumerPropsMap.put(ConsumerConfig.GROUP_ID_CONFIG, groupIdConfig);
    consumerPropsMap.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    consumerPropsMap.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
    consumerPropsMap.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 1);
    consumerPropsMap.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG, "org.apache.kafka.clients.consumer.CooperativeStickyAssignor");
    if (StringUtils.isNotBlank(heartbeatIntervalMs)) {
        consumerPropsMap.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, heartbeatIntervalMs);
    }
    return consumerPropsMap;
}
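Duplicates with MANUAL_IMMEDIATE are usually at-least-once semantics at work: acknowledge() commits the offset immediately, but if the group rebalances (for example, because a poll loop exceeds max.poll.interval.ms), partitions are reassigned and everything after the last committed offset is delivered again, possibly to another instance. A minimal diagnostic sketch, not from the original post, that logs rebalances so they can be correlated with the duplicate deliveries (ConsumerRebalanceListener is the plain Kafka client callback; imports from org.apache.kafka.clients.consumer and org.apache.kafka.common are assumed):

// Hypothetical diagnostic: log rebalances on the container factory.
factory.getContainerProperties().setConsumerRebalanceListener(new ConsumerRebalanceListener() {

    @Override
    public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
        // Records after the last committed offset may be redelivered after this point
        logger.warn("Partitions revoked: {}", partitions);
    }

    @Override
    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        logger.info("Partitions assigned: {}", partitions);
    }
});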

Related

How does spring-data-redis get an ObjectRecord with a generic type

I want to implement an MQ using Redis Streams, but when the entity type is generic, my code does not correctly deserialize the consumed message.
Producer (critical code):
@Bean
RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory rf) {
    RedisTemplate<String, Object> rt = new RedisTemplate<>();
    rt.setConnectionFactory(rf);
    rt.setKeySerializer(RedisSerializer.string());
    rt.setValueSerializer(RedisSerializer.json());
    rt.setHashKeySerializer(RedisSerializer.string());
    rt.setHashValueSerializer(RedisSerializer.json());
    return rt;
}
@Bean
ApplicationRunner runner(RedisTemplate<String, Object> rt) {
    return arg -> {
        MyMessage<User> userMyMessage = new MyMessage<User>().setReceiver(1)
                .setClientType("app")
                .setPayload(new User().setName("testName")
                        .setAge(10)
                        .setId(1));
        Jackson2HashMapper hashMapper = new Jackson2HashMapper(false);
        rt.opsForStream()
                .add(StreamRecords.newRecord()
                        .in("my-stream")
                        .ofMap(hashMapper.toHash(userMyMessage)));
    };
}
The consumer:
@Bean
public RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory rcf) {
    RedisTemplate<String, Object> rt = new RedisTemplate<>();
    rt.setConnectionFactory(rcf);
    rt.setKeySerializer(RedisSerializer.string());
    rt.setValueSerializer(RedisSerializer.json());
    rt.setHashKeySerializer(RedisSerializer.string());
    rt.setHashValueSerializer(RedisSerializer.json());
    return rt;
}

@Bean
StreamMessageListenerContainer.StreamMessageListenerContainerOptions<String, ObjectRecord<String, Object>> hashContainerOptions() {
    return StreamMessageListenerContainer.StreamMessageListenerContainerOptions.builder()
            .pollTimeout(Duration.ofSeconds(1))
            .objectMapper(new Jackson2HashMapper(false))
            .build();
}

@Bean
StreamMessageListenerContainer<String, ObjectRecord<String, Object>> hashContainer(
        StreamMessageListenerContainer.StreamMessageListenerContainerOptions<String, ObjectRecord<String, Object>> options,
        RedisConnectionFactory redisConnectionFactory,
        RedisTemplate<String, Object> redisTemplate) {
    var container = StreamMessageListenerContainer.create(redisConnectionFactory, options);
    container.start();
    String group = "default-group";
    String key = "my-stream";
    try {
        redisTemplate.opsForStream().createGroup(key, group);
    } catch (Exception ignore) {
    }
    container.receiveAutoAck(
            Consumer.from(group, "default-consumer"),
            StreamOffset.create(key, ReadOffset.lastConsumed()),
            message -> {
                log.info("receive message stream:{}, id:{} value:{}", message.getStream(), message.getId(), message.getValue());
            }
    );
    return container;
}
When the producer starts, the object is cached correctly in Redis, but the consumer does not read the object. The following is the log:
receive message stream:my-stream, id:1655890663894-0 value:"com.commons.MyMessage"
The entity's attributes are gone. How do I make it work? And if I want to listen to two different streams (whose message object types differ), do I have to configure two StreamMessageListenerContainers, or can both be configured in one container?
Thanks!
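On the two-streams part of the question: each receiveAutoAck call registers an independent subscription, so one container can listen to several streams. A minimal sketch under that assumption (the stream keys and consumer names here are hypothetical), using the same API as the container bean above:

// Hypothetical: two subscriptions on the same container, one per stream.
container.receiveAutoAck(
        Consumer.from("default-group", "consumer-a"),
        StreamOffset.create("stream-a", ReadOffset.lastConsumed()),
        message -> log.info("stream-a: {}", message.getValue()));
container.receiveAutoAck(
        Consumer.from("default-group", "consumer-b"),
        StreamOffset.create("stream-b", ReadOffset.lastConsumed()),
        message -> log.info("stream-b: {}", message.getValue()));

If the two streams carry different payload types, the container's record type has to be wide enough (for example ObjectRecord<String, Object>) to cover both.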

Kafka exactly-once messaging test with a "consume-transform-produce" integration test

I am writing a test case to test my application's consume-transform-produce loop with Kafka: effectively, I consume from a source topic, process, and send a message to a destination topic. I am writing these test cases to prove exactly-once messaging with Kafka, and I will add other failure cases later.
Here is my configuration:
private Map<String, Object> consConfigProps(boolean txnEnabled) {
    Map<String, Object> props = new HashMap<>(
            KafkaTestUtils.consumerProps(AB_CONSUMER_GROUP_ID, "false", kafkaBroker));
    props.put(ConsumerConfig.GROUP_ID_CONFIG, AB_CONSUMER_GROUP_ID);
    props.put(JsonDeserializer.TRUSTED_PACKAGES, "*");
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
    props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
    return props;
}
private Map<String, Object> prodConfigProps(boolean txnEnabled) {
    Map<String, Object> props = new HashMap<>(KafkaTestUtils.producerProps(kafkaBroker));
    props.put(JsonDeserializer.TRUSTED_PACKAGES, "*");
    props.put(ProducerConfig.CLIENT_ID_CONFIG, "client-" + UUID.randomUUID().toString());
    props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
    props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, "3");
    props.put(ProducerConfig.RETRIES_CONFIG, "3");
    props.put(ProducerConfig.ACKS_CONFIG, "all");
    props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "prod-txn-" + UUID.randomUUID().toString());
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
    return props;
}
public KafkaMessageListenerContainer<String, NormalUser> fetchContainer() {
    ContainerProperties containerProperties = new ContainerProperties(ABTOPIC, XYTOPIC, PATOPIC);
    containerProperties.setGroupId("groupId-10001");
    containerProperties.setAckMode(AckMode.MANUAL);
    containerProperties.setSyncCommits(true);
    containerProperties.setSyncCommitTimeout(Duration.ofMillis(5000));
    containerProperties.setTransactionManager(kafkaTransactionManager());
    KafkaMessageListenerContainer<String, NormalUser> kafkaMessageListContainer = new KafkaMessageListenerContainer<>(
            consumerFactory(), containerProperties);
    kafkaMessageListContainer.setupMessageListener(new AcknowledgingMessageListener<String, NormalUser>() {
        @Override
        public void onMessage(ConsumerRecord<String, NormalUser> record, Acknowledgment acknowledgment) {
            log.debug("test-listener received message='{}'", record.toString());
            records.add(record);
            acknowledgment.acknowledge();
        }
    });
    return kafkaMessageListContainer;
}
@Test
public void testProducerABSuccess() throws InterruptedException, IOException {
    NormalUser userObj = new NormalUser(ABTypeGood,
            Double.valueOf(Math.random() * 10000).longValue(),
            "Blah" + String.valueOf(Math.random() * 10));
    sendMessage(XYTOPIC, "AB-id", userObj);
    try {
        ConsumerRecords<String, NormalUser> records;
        parserConsumer.subscribe(Collections.singletonList(XYTOPIC));
        Map<TopicPartition, OffsetAndMetadata> currentOffsets = new LinkedHashMap<>();
        // Check for messages
        parserProducer.beginTransaction();
        records = parserConsumer.poll(Duration.ofSeconds(3));
        assertThat(1).isEqualTo(records.count()); // --> this assert passes about 50% of the time
        for (ConsumerRecord<String, NormalUser> record : records) {
            assertEquals(record.key(), "AB-id");
            assertEquals(record.value(), userObj);
            currentOffsets.put(new TopicPartition(record.topic(), record.partition()),
                    new OffsetAndMetadata(record.offset()));
        }
        parserProducer.send(new ProducerRecord<String, NormalUser>(ABTOPIC, "AB-id", userObj));
        parserProducer.sendOffsetsToTransaction(currentOffsets, AB_CONSUMER_GROUP_ID);
        parserProducer.commitTransaction();
    } catch (ProducerFencedException | OutOfOrderSequenceException | AuthorizationException e) {
        parserProducer.close();
    } catch (final KafkaException e) {
        parserProducer.abortTransaction();
    }
    ConsumerRecords<String, NormalUser> records;
    loadConsumer.subscribe(Collections.singletonList(ABTOPIC));
    records = loadConsumer.poll(Duration.ofSeconds(3));
    assertThat(1).isEqualTo(records.count()); // --> this assert fails all the time
    for (ConsumerRecord<String, NormalUser> record : records) {
        assertEquals(record.key(), "AB-id");
        assertEquals(record.value(), userObj);
    }
}
My issue is that the test case "testProducerABSuccess" is not consistent: the asserts sometimes pass and sometimes fail, and I have not been able to figure out why. What is wrong with the above?
Edit (16 Dec): Tested with ConsumerConfig.AUTO_OFFSET_RESET_CONFIG set to earliest; no change. The first assert passes about 70% of the time; the second assert fails every time (0% pass rate).
Which assertion fails? If it's assertThat(1).isEqualTo(records.count());, it's probably because you are setting auto.offset.reset to latest. It needs to be earliest to avoid a race condition whereby the record is sent before the consumer is assigned the partition(s).
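Applied to the consumer configuration above, that is a one-line change:

// earliest avoids the race where the record is sent before the consumer is assigned its partitions
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");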

metrics method of MessageListenerContainer is not capturing the right values

I am using spring-kafka 2.2.8 to create a batch consumer, and I am trying to capture the container metrics to understand the performance details of the batch consumer.
@Bean
public ConsumerFactory consumerFactory() {
    return new DefaultKafkaConsumerFactory(consumerConfigs(), stringKeyDeserializer(), avroValueDeserializer());
}

@Bean
public FixedBackOffPolicy getBackOffPolicy() {
    FixedBackOffPolicy backOffPolicy = new FixedBackOffPolicy();
    backOffPolicy.setBackOffPeriod(100);
    return backOffPolicy;
}

@Bean
public ConcurrentKafkaListenerContainerFactory kafkaBatchListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory factory = new ConcurrentKafkaListenerContainerFactory();
    factory.setConsumerFactory(consumerFactory());
    factory.setBatchListener(true);
    factory.setStatefulRetry(true);
    return factory;
}
public Map<String, Object> consumerConfigs() {
    Map<String, Object> configs = new HashMap<>();
    batchConsumerConfigProperties.setKeyDeserializerClassConfig();
    batchConsumerConfigProperties.setValueDeserializerClassConfig();
    batchConsumerConfigProperties.setKeyDeserializerClass(StringDeserializer.class);
    batchConsumerConfigProperties.setValueDeserializerClass(KafkaAvroDeserializer.class);
    batchConsumerConfigProperties.setSpecificAvroReader("true");
    batchConsumerConfigProperties.setAutoOffsetResetConfig(environment.getProperty("sapphire.kes.consumer.auto.offset.reset", "earliest"));
    batchConsumerConfigProperties.setEnableAutoCommitConfig(environment.getProperty("sapphire.kes.consumer.enable.auto.commit", "false"));
    batchConsumerConfigProperties.setMaxPollIntervalMs(environment.getProperty(MAX_POLL_INTERVAL_MS_CONFIG, "300000"));
    batchConsumerConfigProperties.setMaxPollRecords(environment.getProperty(MAX_POLL_RECORDS_CONFIG, "50000"));
    batchConsumerConfigProperties.setSessionTimeoutms(environment.getProperty(SESSION_TIMEOUT_MS_CONFIG, "10000"));
    batchConsumerConfigProperties.setRequestTimeOut(environment.getProperty(REQUEST_TIMEOUT_MS_CONFIG, "30000"));
    batchConsumerConfigProperties.setHeartBeatIntervalMs(environment.getProperty(HEARTBEAT_INTERVAL_MS_CONFIG, "3000"));
    batchConsumerConfigProperties.setFetchMinBytes(environment.getProperty(FETCH_MIN_BYTES_CONFIG, "1"));
    batchConsumerConfigProperties.setFetchMaxBytes(environment.getProperty(FETCH_MAX_BYTES_CONFIG, "52428800"));
    batchConsumerConfigProperties.setFetchMaxWaitMS(environment.getProperty(FETCH_MAX_WAIT_MS_CONFIG, "500"));
    batchConsumerConfigProperties.setMaxPartitionFetchBytes(environment.getProperty(MAX_PARTITION_FETCH_BYTES_CONFIG, "1048576"));
    batchConsumerConfigProperties.setConnectionsMaxIdleMs(environment.getProperty(CONNECTIONS_MAX_IDLE_MS_CONFIG, "540000"));
    batchConsumerConfigProperties.setAutoCommitIntervalMS(environment.getProperty(AUTO_COMMIT_INTERVAL_MS_CONFIG, "5000"));
    batchConsumerConfigProperties.setReceiveBufferBytes(environment.getProperty(RECEIVE_BUFFER_CONFIG, "65536"));
    batchConsumerConfigProperties.setSendBufferBytes(environment.getProperty(SEND_BUFFER_CONFIG, "131072"));
}
Here is my consumer code, where I'm trying to capture the container metrics:
@Component
public class MyBatchConsumer {

    private final KafkaListenerEndpointRegistry registry;

    @Autowired
    public MyBatchConsumer(KafkaListenerEndpointRegistry registry) {
        this.registry = registry;
    }

    @KafkaListener(topics = "myTopic", containerFactory = "kafkaBatchListenerContainerFactory", id = "myBatchConsumer")
    public void consumeRecords(List<ConsumerRecord> messages) {
        System.out.println("messages size - " + messages.size());
        if (mybatchconsumerMessageCount == 0) {
            ConsumerPerfTestingConstants.batchConsumerStartTime = System.currentTimeMillis();
            ConsumerPerfTestingConstants.batchConsumerStartDateTime = LocalDateTime.now().format(DateTimeFormatter.ofPattern("MM/dd/yyyy HH:mm:ss"));
        }
        mybatchconsumerMessageCount = mybatchconsumerMessageCount + messages.size();
        System.out.println("\n\n\n batchConsumerConsumedMessages " + mybatchconsumerMessageCount);
        if (mybatchconsumerMessageCount == targetMessageCount) {
            System.out.println("ATTENTION! ATTENTION! ATTENTION! Consumer Finished processing " + messageCount + " messages");
            registry.getListenerContainerIds().forEach(
                    listenerId -> System.out.println(" kes batch consumer listenerId is " + listenerId)
            );
            String listenerID = registry.getListenerContainerIds().stream().filter(listenerId -> listenerId.startsWith("myBatchConsumer")).findFirst().get();
            System.out.println(" kes batch consumer listenerID is " + listenerID);
            Map<String, Map<MetricName, ? extends Metric>> metrics = registry.getListenerContainer(listenerID).metrics();
            registry.getListenerContainer(listenerID).stop();
            System.out.println("metrics - " + metrics);
        }
    }
}
Now I'm trying to consume 10 records to see what the metrics look like, and I see the values below and am not sure why. Can someone help me understand what I am missing here?
records-consumed-total = 0
records-consumed-rate = 0
This works fine for me; I am using 2.6.2, but the container simply delegates to the consumer when calling metrics.
@SpringBootApplication
public class So64878927Application {

    public static void main(String[] args) {
        SpringApplication.run(So64878927Application.class, args);
    }

    @Autowired
    KafkaListenerEndpointRegistry registry;

    @KafkaListener(id = "so64878927", topics = "so64878927")
    void listen(List<String> in) {
        System.out.println(in);
        Map<String, Map<MetricName, ? extends Metric>> metrics = registry.getListenerContainer("so64878927").metrics();
        System.out.println("L: " + metrics.get("consumer-so64878927-1").entrySet().stream()
                .filter(entry -> entry.getKey().name().startsWith("records-consumed"))
                .map(entry -> entry.getValue().metricName().name() + " = " + entry.getValue().metricValue())
                .collect(Collectors.toList()));
        registry.getListenerContainer("so64878927").stop(() -> System.out.println("Stopped"));
    }

    @Bean
    NewTopic topic() {
        return TopicBuilder.name("so64878927").build();
    }

    @EventListener
    void idleEvent(ListenerContainerIdleEvent event) {
        Map<String, Map<MetricName, ? extends Metric>> metrics = registry.getListenerContainer("so64878927").metrics();
        System.out.println("I: " + metrics.get("consumer-so64878927-1").entrySet().stream()
                .filter(entry -> entry.getKey().name().startsWith("records-consumed"))
                .map(entry -> entry.getValue().metricName().name() + " = " + entry.getValue().metricValue())
                .collect(Collectors.toList()));
    }
}
spring.kafka.listener.type=batch
spring.kafka.listener.idle-event-interval=6000
[foo, bar, baz, foo, bar, baz]
L: [records-consumed-total = 6.0, records-consumed-rate = 0.1996472897880411, records-consumed-total = 6.0, records-consumed-rate = 0.1996539331824837]
I am not sure why the metrics are duplicated but, as I said, all we do is call the consumer's metrics method.
By the way, if you want to stop the container from the listener, you should use the async stop - see my example.
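Adapted to the stop call in the question (a hypothetical merge of the two snippets), the async variant would be:

// Stop asynchronously so the listener thread is not blocked waiting for its own container
registry.getListenerContainer(listenerID).stop(() -> System.out.println("Container stopped"));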

Spring Kafka unit test triggers the listener, but the method cannot get the message using consumer.poll

We are using spring-kafka-test 2.2.8.RELEASE.
When I use the template to send the message, it triggers the listener correctly, but I can't get the message content in consumer.poll. If I instantiate the KafkaTemplate without wiring it into a class attribute and instead build it from a producer factory, it sends the message but does not trigger the @KafkaListener; that only works if I set up a message listener inside the @Test method. I need to trigger the Kafka listener, determine which topic will be called next (the "success" topic when execution finishes without errors, and the "errorTopic" when the listener throws an exception), and verify the message content.
@RunWith(SpringRunner.class)
@SpringBootTest
@EmbeddedKafka(partitions = 1, topics = { "tp-in-gco-mao-notasfiscais" })
public class InvoicingServiceTest {

    @Autowired
    private NFKafkaListener nfKafkaListener;

    @ClassRule
    public static EmbeddedKafkaRule broker = new EmbeddedKafkaRule(1, false, "tp-in-gco-mao-notasfiscais");

    @Value("${" + EmbeddedKafkaBroker.SPRING_EMBEDDED_KAFKA_BROKERS + "}")
    private String brokerAddresses;

    @Autowired
    private KafkaTemplate<Object, Object> template;

    @BeforeClass
    public static void setup() {
        System.setProperty(EmbeddedKafkaBroker.BROKER_LIST_PROPERTY, "spring.kafka.bootstrap-servers");
    }

    @Test
    public void testTemplate() throws Exception {
        NFServiceTest nfServiceTest = spy(new NFServiceTest());
        nfKafkaListener.setNfServiceClient(nfServiceTest);
        Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("teste9", "false", broker.getEmbeddedKafka());
        consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, InvoiceDeserializer.class);
        consumerProps.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        DefaultKafkaConsumerFactory<Integer, Object> cf = new DefaultKafkaConsumerFactory<Integer, Object>(consumerProps);
        Consumer<Integer, Object> consumer = cf.createConsumer();
        broker.getEmbeddedKafka().consumeFromAnEmbeddedTopic(consumer, "tp-in-gco-mao-notasfiscais");
        ZfifNfMao zf = new ZfifNfMao();
        zf.setItItensnf(new Zfietb011());
        Zfietb011 zfietb011 = new Zfietb011();
        Zfie011 zfie011 = new Zfie011();
        zfie011.setMatkl("TESTE");
        zfietb011.getItem().add(zfie011);
        zf.setItItensnf(zfietb011);
        template.send("tp-in-gco-mao-notasfiscais", zf);
        List<ConsumerRecord<Integer, Object>> received = new ArrayList<>();
        int n = 0;
        while (received.size() < 1 && n++ < 10) {
            ConsumerRecords<Integer, Object> records1 = consumer.poll(Duration.ofSeconds(10));
            // records1 is always empty
            if (!records1.isEmpty()) {
                records1.forEach(rec -> received.add(rec));
            }
        }
        assertThat(received).extracting(rec -> {
            ZfifNfMao zfifNfMaoRdesponse = (ZfifNfMao) rec.value();
            return zfifNfMaoRdesponse.getItItensnf().getItem().get(0).getMatkl();
        }).contains("TESTE");
        broker.getEmbeddedKafka().getKafkaServers().forEach(b -> b.shutdown());
        broker.getEmbeddedKafka().getKafkaServers().forEach(b -> b.awaitShutdown());
        consumer.close();
    }

    public static class NFServiceTest implements INFServiceClient {

        CountDownLatch latch = new CountDownLatch(1);

        @Override
        public ZfifNfMaoResponse enviarSap(ZfifNfMao zfifNfMao) {
            ZfifNfMaoResponse zfifNfMaoResponse = new ZfifNfMaoResponse();
            zfifNfMaoResponse.setItItensnf(new Zfietb011());
            Zfietb011 zfietb011 = new Zfietb011();
            Zfie011 zfie011 = new Zfie011();
            zfie011.setMatkl("TESTE");
            zfietb011.getItem().add(zfie011);
            zfifNfMaoResponse.setItItensnf(zfietb011);
            return zfifNfMaoResponse;
        }
    }
}
You have two brokers: one created by @EmbeddedKafka and one created by the @ClassRule.
Use one or the other; preferably the @EmbeddedKafka, and simply @Autowired the broker instance.
I am guessing the consumers are listening to different brokers; you can confirm that by looking at the INFO logs put out by the consumer config.
I've followed your advice; it still triggers the listener, but consumer.poll does not capture the topic content.
@RunWith(SpringRunner.class)
@SpringBootTest
@EmbeddedKafka(partitions = 1, topics = { "tp-in-gco-mao-notasfiscais" })
public class InvoicingServiceTest {

    @Autowired
    private NFKafkaListener nfKafkaListener;

    @Autowired
    public EmbeddedKafkaBroker broker;

    @Autowired
    private KafkaTemplate<Object, Object> template;

    @BeforeClass
    public static void setup() {
        System.setProperty(EmbeddedKafkaBroker.BROKER_LIST_PROPERTY, "spring.kafka.bootstrap-servers");
    }

    @Test
    public void testTemplate() throws Exception {
        NFServiceTest nfServiceTest = spy(new NFServiceTest());
        nfKafkaListener.setNfServiceClient(nfServiceTest);
        Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("teste9", "false", broker);
        consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, InvoiceDeserializer.class);
        DefaultKafkaConsumerFactory<Integer, Object> cf = new DefaultKafkaConsumerFactory<Integer, Object>(consumerProps);
        Consumer<Integer, Object> consumer = cf.createConsumer();
        broker.consumeFromAnEmbeddedTopic(consumer, "tp-in-gco-mao-notasfiscais");
        ZfifNfMao zf = new ZfifNfMao();
        zf.setItItensnf(new Zfietb011());
        Zfietb011 zfietb011 = new Zfietb011();
        Zfie011 zfie011 = new Zfie011();
        zfie011.setMatkl("TESTE");
        zfietb011.getItem().add(zfie011);
        zf.setItItensnf(zfietb011);
        template.send("tp-in-gco-mao-notasfiscais", zf);
        List<ConsumerRecord<Integer, Object>> received = new ArrayList<>();
        int n = 0;
        while (received.size() < 1 && n++ < 10) {
            ConsumerRecords<Integer, Object> records1 = consumer.poll(Duration.ofSeconds(10));
            // records1 is always empty
            if (!records1.isEmpty()) {
                records1.forEach(rec -> received.add(rec));
            }
        }
        assertThat(received).extracting(rec -> {
            ZfifNfMao zfifNfMaoRdesponse = (ZfifNfMao) rec.value();
            return zfifNfMaoRdesponse.getItItensnf().getItem().get(0).getMatkl();
        }).contains("TESTE");
        broker.getKafkaServers().forEach(b -> b.shutdown());
        broker.getKafkaServers().forEach(b -> b.awaitShutdown());
        consumer.close();
    }

    public static class NFServiceTest implements INFServiceClient {

        CountDownLatch latch = new CountDownLatch(1);

        @Override
        public ZfifNfMaoResponse enviarSap(ZfifNfMao zfifNfMao) {
            ZfifNfMaoResponse zfifNfMaoResponse = new ZfifNfMaoResponse();
            zfifNfMaoResponse.setItItensnf(new Zfietb011());
            Zfietb011 zfietb011 = new Zfietb011();
            Zfie011 zfie011 = new Zfie011();
            zfie011.setMatkl("TESTE");
            zfietb011.getItem().add(zfie011);
            zfifNfMaoResponse.setItItensnf(zfietb011);
            return zfifNfMaoResponse;
        }
    }
}

Open a new transaction for every item in a list with Spring MVC

I am not able to open a new transaction for each item when iterating over a list using Spring Boot. I want to roll back only the item that failed and continue with the rest of the items in the list. There are multiple commits in one transaction, and all of them must be rolled back if any of them fails.
My service code is below:
@Service("intentExportImportService")
public class IntentExportImportServiceImpl implements IntentExportImportService {

    @Resource(name = "intentExportImportService")
    private IntentExportImportService intentExportImportService;

    public Map<String, Integer> importIntents(ExportImportData exportImportData, boolean overwrite) throws DataAccessLayerException {
        Map<String, Integer> statisticsMap = createOrUpdateIntent(exportImportData, overwrite);
        return statisticsMap;
    }

    private Map<String, Integer> createOrUpdateIntent(ExportImportData exportImportData, boolean overwrite) throws DataAccessLayerException {
        List<Intent> intentsList = exportImportData.getIntents();
        Map<String, Entity> entityMap = getEntityMap(exportImportData.getEntityList());
        Map<String, Api> apiMap = getApiMap(exportImportData.getApiList());
        Map<String, Integer> statisticsMap = new HashMap<>();
        Long domainId = ExcelUtil.getDomainId(exportImportData.getDomainName());
        for (Intent intent : intentsList) {
            Intent existingIntent = intentExists(intent.getIntentNm());
            if (existingIntent != null) {
                startUpdateIntent(intent, existingIntent, entityMap, apiMap, overwrite, statisticsMap, domainId);
            } else {
                startCreateIntent(intent, entityMap, apiMap, overwrite, statisticsMap, domainId);
            }
        }
        return statisticsMap;
    }

    @Transactional
    public void startUpdateIntent(Intent intent, Intent existingIntent, Map<String, Entity> entityMap, Map<String, Api> apiMap, boolean overwrite, Map<String, Integer> statisticsMap, Long domainId) {
        try {
            intentExportImportService.updateIntent(intent, existingIntent, entityMap, apiMap, overwrite, statisticsMap, domainId);
        } catch (Exception e) {
            updateStatisticsMap(FAILED, statisticsMap);
            LOGGER.error("Error Importing Intents to update and hence rolling back intent: " + intent.getIntentNm());
        }
    }

    @Transactional(value = "dataTxManager", propagation = Propagation.REQUIRES_NEW, isolation = Isolation.READ_COMMITTED, rollbackFor = {
            DuplicateException.class, DataAccessException.class, DataAccessLayerException.class, SQLTimeoutException.class, SQLException.class, Exception.class })
    public void updateIntent(Intent intent, Intent existingIntent, Map<String, Entity> entityMap, Map<String, Api> apiMap, boolean overwrite, Map<String, Integer> statisticsMap, Long domainId) throws DataAccessLayerException {
        if (!overwrite) {
            LOGGER.info("Not overwriting the Intent: " + intent.getIntentNm() + " as it already exists and overwrite is false");
            throw new DataAccessLayerException(CommonConstants.IMPORT_FAILURE_ERROR_CODE, "rolling back intent importing: " + intent.getIntentNm());
        }
        manageEntitiesAndApis(intent, entityMap, apiMap, overwrite, domainId);
        Long intentId = updateImportedIntent(intent, existingIntent);
        if (intentId != null) {
            updateStatisticsMap(UPDATED, statisticsMap);
        }
    }
}
updateIntent is already in another class, IntentImportExportService, which is injected into the caller as a @Resource, so the problem is not there...
Are you sure the selected transaction manager supports nested transactions?
I solved the issue by adding a try/catch inside the for loop, as below, so a failure in one item's REQUIRES_NEW transaction is caught and the loop continues with the remaining items:
@Transactional
public Map<String, Integer> createOrUpdateIntent(ExportImportData exportImportData, boolean overwrite) {
    List<Intent> intentsList = exportImportData.getIntents();
    Map<String, Entity> entityMap = getEntityMap(exportImportData.getEntityList());
    Map<String, Api> apiMap = getApiMap(exportImportData.getApiList());
    Map<String, Integer> statisticsMap = new HashMap<>();
    Long domainId = ExcelUtil.getDomainId(exportImportData.getDomainName());
    for (Intent intent : intentsList) {
        try {
            Intent existingIntent = intentExists(intent.getIntentNm());
            if (existingIntent != null) {
                intentExportImportService.updateIntent(intent, existingIntent, entityMap, apiMap, overwrite, statisticsMap, domainId);
            } else {
                intentExportImportService.createIntent(intent, entityMap, apiMap, overwrite, statisticsMap, domainId);
            }
        } catch (DataAccessLayerException e) {
            updateStatisticsMap(FAILED, statisticsMap);
            LOGGER.error("Error Importing Intents to update and hence rolling back intent: " + intent.getIntentNm());
        }
    }
    return statisticsMap;
}
