Kafka exactly-once messaging test with "consume-transform-produce" integration test - spring-kafka

I am writing a test case to test my application's consume-transform-produce loop with Kafka. Effectively I consume from a source topic, process the record, and send a message to a destination topic. I am writing these test cases to prove exactly-once messaging with Kafka; I will add other failure cases later.
Here is my configuration:
private Map<String, Object> consConfigProps(boolean txnEnabled) {
    Map<String, Object> props = new HashMap<>(
            KafkaTestUtils.consumerProps(AB_CONSUMER_GROUP_ID, "false", kafkaBroker));
    props.put(ConsumerConfig.GROUP_ID_CONFIG, AB_CONSUMER_GROUP_ID);
    props.put(JsonDeserializer.TRUSTED_PACKAGES, "*");
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
    props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
    return props;
}
private Map<String, Object> prodConfigProps(boolean txnEnabled) {
    Map<String, Object> props = new HashMap<>(KafkaTestUtils.producerProps(kafkaBroker));
    props.put(JsonDeserializer.TRUSTED_PACKAGES, "*");
    props.put(ProducerConfig.CLIENT_ID_CONFIG, "client-" + UUID.randomUUID().toString());
    props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
    props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, "3");
    props.put(ProducerConfig.RETRIES_CONFIG, "3");
    props.put(ProducerConfig.ACKS_CONFIG, "all");
    props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG,
            "prod-txn-" + UUID.randomUUID().toString());
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
    return props;
}
public KafkaMessageListenerContainer<String, NormalUser> fetchContainer() {
    ContainerProperties containerProperties = new ContainerProperties(ABTOPIC, XYTOPIC, PATOPIC);
    containerProperties.setGroupId("groupId-10001");
    containerProperties.setAckMode(AckMode.MANUAL);
    containerProperties.setSyncCommits(true);
    containerProperties.setSyncCommitTimeout(Duration.ofMillis(5000));
    containerProperties.setTransactionManager(kafkaTransactionManager());
    KafkaMessageListenerContainer<String, NormalUser> kafkaMessageListContainer = new KafkaMessageListenerContainer<>(
            consumerFactory(), containerProperties);
    kafkaMessageListContainer.setupMessageListener(new AcknowledgingMessageListener<String, NormalUser>() {
        @Override
        public void onMessage(ConsumerRecord<String, NormalUser> record, Acknowledgment acknowledgment) {
            log.debug("test-listener received message='{}'", record.toString());
            records.add(record);
            acknowledgment.acknowledge();
        }
    });
    return kafkaMessageListContainer;
}
@Test
public void testProducerABSuccess() throws InterruptedException, IOException {
    NormalUser userObj = new NormalUser(ABTypeGood,
            Double.valueOf(Math.random() * 10000).longValue(),
            "Blah" + String.valueOf(Math.random() * 10));
    sendMessage(XYTOPIC, "AB-id", userObj);
    try {
        ConsumerRecords<String, NormalUser> records;
        parserConsumer.subscribe(Collections.singletonList(XYTOPIC));
        Map<TopicPartition, OffsetAndMetadata> currentOffsets = new LinkedHashMap<>();
        // Check for messages
        parserProducer.beginTransaction();
        records = parserConsumer.poll(Duration.ofSeconds(3));
        assertThat(1).isEqualTo(records.count()); // --> this assert passes about 50% of the time.
        for (ConsumerRecord<String, NormalUser> record : records) {
            assertEquals(record.key(), "AB-id");
            assertEquals(record.value(), userObj);
            currentOffsets.put(new TopicPartition(record.topic(), record.partition()),
                    new OffsetAndMetadata(record.offset()));
        }
        parserProducer.send(new ProducerRecord<String, NormalUser>(ABTOPIC, "AB-id", userObj));
        parserProducer.sendOffsetsToTransaction(currentOffsets, AB_CONSUMER_GROUP_ID);
        parserProducer.commitTransaction();
    } catch (ProducerFencedException | OutOfOrderSequenceException | AuthorizationException e) {
        parserProducer.close();
    } catch (final KafkaException e) {
        parserProducer.abortTransaction();
    }
    ConsumerRecords<String, NormalUser> records;
    loadConsumer.subscribe(Collections.singletonList(ABTOPIC));
    records = loadConsumer.poll(Duration.ofSeconds(3));
    assertThat(1).isEqualTo(records.count()); // --> this assert fails all the time.
    for (ConsumerRecord<String, NormalUser> record : records) {
        assertEquals(record.key(), "AB-id");
        assertEquals(record.value(), userObj);
    }
}
My issue is that the above test case "testProducerABSuccess" is not consistent: the asserts sometimes fail and sometimes pass. I have not been able to figure out why they are so inconsistent. What is wrong with the above?
Edit (16-12):
Tested with ConsumerConfig.AUTO_OFFSET_RESET_CONFIG set to earliest - no change. The first assert passes about 70% of the time. The second assert fails all the time (0% pass rate).

Which assertion fails? If it's assertThat(1).isEqualTo(records.count());, it's probably because you are setting auto.offset.reset to latest. It needs to be earliest to avoid a race condition whereby the record is sent before the consumer is assigned the partition(s).
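For illustration, a minimal sketch of that fix applied to the test above (parserConsumer, XYTOPIC and sendMessage are the names from the question; the assignment-wait loop is just one way to close the race, not the only one):
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

// Subscribe and drive the group join before the test record is produced,
// so the consumer cannot miss a record written prior to its assignment.
parserConsumer.subscribe(Collections.singletonList(XYTOPIC));
while (parserConsumer.assignment().isEmpty()) {
    parserConsumer.poll(Duration.ofMillis(100)); // poll() is what completes the rebalance
}
sendMessage(XYTOPIC, "AB-id", userObj);
When the test goes through a listener container rather than a raw consumer, spring-kafka-test's ContainerTestUtils.waitForAssignment(container, partitions) serves the same purpose.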

Related

How does spring-data-redis get an objectRecord with a Generic type

I want to implement an MQ using a Redis stream, and when the entity type passed is generic, my code does not deserialize the consumed message correctly.
The critical producer code:
@Bean
RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory rf) {
    RedisTemplate<String, Object> rt = new RedisTemplate<>();
    rt.setConnectionFactory(rf);
    rt.setKeySerializer(RedisSerializer.string());
    rt.setValueSerializer(RedisSerializer.json());
    rt.setHashKeySerializer(RedisSerializer.string());
    rt.setHashValueSerializer(RedisSerializer.json());
    return rt;
}

@Bean
ApplicationRunner runner(RedisTemplate<String, Object> rt) {
    return arg -> {
        MyMessage<User> userMyMessage = new MyMessage<User>().setReceiver(1)
                .setClientType("app")
                .setPayload(new User().setName("testName")
                        .setAge(10)
                        .setId(1));
        Jackson2HashMapper hashMapper = new Jackson2HashMapper(false);
        rt.opsForStream()
                .add(StreamRecords.newRecord()
                        .in("my-stream")
                        .ofMap(hashMapper.toHash(userMyMessage)));
    };
}
The consumer:
@Bean
public RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory rcf) {
    RedisTemplate<String, Object> rt = new RedisTemplate<>();
    rt.setConnectionFactory(rcf);
    rt.setKeySerializer(RedisSerializer.string());
    rt.setValueSerializer(RedisSerializer.json());
    rt.setHashKeySerializer(RedisSerializer.string());
    rt.setHashValueSerializer(RedisSerializer.json());
    return rt;
}

@Bean
StreamMessageListenerContainer.StreamMessageListenerContainerOptions<String, ObjectRecord<String, Object>> hashContainerOptions() {
    return StreamMessageListenerContainer.StreamMessageListenerContainerOptions.builder()
            .pollTimeout(Duration.ofSeconds(1))
            .objectMapper(new Jackson2HashMapper(false))
            .build();
}

@Bean
StreamMessageListenerContainer<String, ObjectRecord<String, Object>> hashContainer(
        StreamMessageListenerContainer.StreamMessageListenerContainerOptions<String, ObjectRecord<String, Object>> options,
        RedisConnectionFactory redisConnectionFactory,
        RedisTemplate<String, Object> redisTemplate) {
    var container = StreamMessageListenerContainer.create(redisConnectionFactory, options);
    container.start();
    String group = "default-group";
    String key = "my-stream";
    try {
        redisTemplate.opsForStream().createGroup(key, group);
    } catch (Exception ignore) {
    }
    container.receiveAutoAck(
            Consumer.from(group, "default-consumer"),
            StreamOffset.create(key, ReadOffset.lastConsumed()),
            message -> {
                log.info("receive message stream:{}, id:{} value:{}", message.getStream(), message.getId(), message.getValue());
            }
    );
    return container;
}
When the producer starts, the object is stored correctly in Redis, but the consumer does not read the object. The following is the log:
receive message stream:my-stream, id:1655890663894-0 value:"com.commons.MyMessage"
The entity's attributes are gone. How do I make it work? And if I want to listen to two different streams (whose message object types are different), what should I do: configure two StreamMessageListenerContainer beans, or can both be configured in one container?
thanks!
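One direction that might be worth trying (a sketch only, assuming spring-data-redis's targetType() option on the options builder; untested for this case and not a confirmed fix): have the container map the hash back into the domain class instead of handing the listener a bare value. The generic parameter of MyMessage is erased at runtime, so only the raw class can be given:
@Bean
StreamMessageListenerContainer.StreamMessageListenerContainerOptions<String, ObjectRecord<String, MyMessage>> typedOptions() {
    return StreamMessageListenerContainer.StreamMessageListenerContainerOptions.builder()
            .pollTimeout(Duration.ofSeconds(1))
            // targetType tells the container which class to rebuild from the stream hash;
            // the hash mapper may also need to match the Jackson2HashMapper used on the producer side
            .targetType(MyMessage.class)
            .build();
}
As for the second part of the question: the target type is fixed per options object, so under this assumption two streams carrying two different payload types would most naturally use two containers (or a single container consuming MapRecord and mapping each stream's hash by hand).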

Records are getting re-delivered with manual ack (AckMode.MANUAL_IMMEDIATE)

Is it possible that a record will be redelivered again once it has been properly acked? Sometimes we are getting the same record multiple times.
Attached image: https://i.stack.imgur.com/9yC1M.jpg. As you can see in the image, multiple services receive the same record at different times.
I am not able to reproduce this on my local machine.
@KafkaListener(topics = "${kafka.sample.topic}", containerFactory = "kafkaListenerContainerFactory")
public void listen(ConsumerRecord<String, String> record, Acknowledgment ack) {
    // Ack record
    sendAck(ack);
}

private void sendAck(Acknowledgment ack) {
    try {
        ack.acknowledge();
    } catch (Exception e) {
        logger.error("Exception occurred while sending ack");
        logger.error(ExceptionUtils.getStackTrace(e));
    }
}
@Bean
KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    factory.setConcurrency(10);
    factory.getContainerProperties().setAckMode(AckMode.MANUAL_IMMEDIATE);
    return factory;
}
public Map<String, Object> consumerConfigs() {
    Map<String, Object> consumerPropsMap = new HashMap<>();
    consumerPropsMap.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    consumerPropsMap.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, autoCommitConfig);
    consumerPropsMap.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, autoCommitConfigIntervalMs);
    consumerPropsMap.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, sessionTimeout);
    consumerPropsMap.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    consumerPropsMap.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    consumerPropsMap.put(ConsumerConfig.GROUP_ID_CONFIG, groupIdConfig);
    consumerPropsMap.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    consumerPropsMap.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
    consumerPropsMap.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 1);
    consumerPropsMap.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG, "org.apache.kafka.clients.consumer.CooperativeStickyAssignor");
    if (StringUtils.isNotBlank(heartbeatIntervalMs)) {
        consumerPropsMap.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, heartbeatIntervalMs);
    }
    return consumerPropsMap;
}

Confluent Kafka consumer consumes messages only after changing groupId

I have a .NET Core console application that uses Confluent.Kafka.
I built a consumer for consuming messages from a specific topic.
The app is intended to run a few times every day, consume the messages on the specified topic, and process them.
It took me a while to understand the consumer's behavior, but it will consume messages only if its groupId is one that has never been used before.
Every time I change the consumer's groupId, the consumer fetches the messages on the subscribed topic. But on the next runs, with the same groupId, consumer.Consume returns null.
This behavior seems related to rebalancing between consumers in the same group. But I don't understand why, since the consumer should exist only throughout the application's lifetime. Before leaving the app, I call consumer.Close() and consumer.Dispose(). These should destroy the consumer, so that on the next run, when I create the consumer again, it will be the first and only consumer on the specified groupId. But as I said, this is not what happens in fact.
I know there are messages on the topic - I checked via the command line. And I also made sure the topic has only 1 partition.
The weirdest thing is that I have another .NET Core console app which does the same process, with no issue at all.
I attach the code of the two apps.
Working app - always consuming:
class Program
{
    ...
    static void Main(string[] args)
    {
        if (args.Length != 2)
        {
            Console.WriteLine("Please provide topic name to read and SMTP topic name");
        }
        else
        {
            var services = new ServiceCollection();
            services.AddSingleton<ConsumerConfig, ConsumerConfig>();
            services.AddSingleton<ProducerConfig, ProducerConfig>();
            var serviceProvider = services.BuildServiceProvider();
            var cConfig = serviceProvider.GetService<ConsumerConfig>();
            var pConfig = serviceProvider.GetService<ProducerConfig>();
            cConfig.BootstrapServers = Environment.GetEnvironmentVariable("consumer_bootstrap_servers");
            cConfig.GroupId = "confluence-consumer";
            cConfig.EnableAutoCommit = true;
            cConfig.StatisticsIntervalMs = 5000;
            cConfig.SessionTimeoutMs = 6000;
            cConfig.AutoOffsetReset = AutoOffsetReset.Earliest;
            cConfig.EnablePartitionEof = true;
            pConfig.BootstrapServers = Environment.GetEnvironmentVariable("producer_bootstrap_servers");
            var consumer = new ConsumerHelper(cConfig, args[0]);
            messages = new Dictionary<string, Dictionary<string, UserMsg>>();
            var result = consumer.ReadMessage();
            while (result != null && !result.IsPartitionEOF)
            {
                Console.WriteLine($"Current consumed msg-json: {result.Message.Value}");
                ...
                result = consumer.ReadMessage();
            }
            consumer.Close();
            Console.WriteLine($"Done consuming messages from topic {args[0]}");
        }
    }
ConsumerHelper.cs:
namespace AggregateMailing
{
    using System;
    using Confluent.Kafka;
    public class ConsumerHelper
    {
        private string _topicName;
        private ConsumerConfig _consumerConfig;
        private IConsumer<string, string> _consumer;
        public ConsumerHelper(ConsumerConfig consumerConfig, string topicName)
        {
            try
            {
                _topicName = topicName;
                _consumerConfig = consumerConfig;
                var builder = new ConsumerBuilder<string, string>(_consumerConfig);
                _consumer = builder.Build();
                _consumer.Subscribe(_topicName);
            }
            catch (System.Exception exc)
            {
                Console.WriteLine($"Error on ConsumerHelper: {exc.ToString()}");
            }
        }
        public ConsumeResult<string, string> ReadMessage()
        {
            Console.WriteLine("ReadMessage: start");
            try
            {
                return _consumer.Consume();
            }
            catch (System.Exception exc)
            {
                Console.WriteLine($"Error on ReadMessage: {exc.ToString()}");
                return null;
            }
        }
        public void Close()
        {
            Console.WriteLine("Close: start");
            try
            {
                _consumer.Close();
                _consumer.Dispose();
            }
            catch (System.Exception exc)
            {
                Console.WriteLine($"Error on Close: {exc.ToString()}");
            }
        }
    }
}
Not working app - consuming only on first run after changing consumer groupId to one never in use:
Program.cs:
class Program
{
    private static SmtpClient smtpClient;
    private static Random random = new Random();
    static void Main(string[] args)
    {
        try
        {
            var services = new ServiceCollection();
            services.AddSingleton<ConsumerConfig, ConsumerConfig>();
            services.AddSingleton<SmtpClient>(new SmtpClient("smtp.gmail.com"));
            var serviceProvider = services.BuildServiceProvider();
            var cConfig = serviceProvider.GetService<ConsumerConfig>();
            cConfig.BootstrapServers = Environment.GetEnvironmentVariable("consumer_bootstrap_servers");
            cConfig.GroupId = "smtp-consumer";
            cConfig.EnableAutoCommit = true;
            cConfig.StatisticsIntervalMs = 5000;
            cConfig.SessionTimeoutMs = 6000;
            cConfig.AutoOffsetReset = AutoOffsetReset.Earliest;
            cConfig.EnablePartitionEof = true;
            var consumer = new ConsumerHelper(cConfig, args[0]);
            ...
            var result = consumer.ReadMessage();
            while (result != null && !result.IsPartitionEOF)
            {
                Console.WriteLine($"current consumed message: {result.Message.Value}");
                var msg = JsonConvert.DeserializeObject<EmailMsg>(result.Message.Value);
                result = consumer.ReadMessage();
            }
            Console.WriteLine("Done sending emails consumed from SMTP topic");
            consumer.Close();
        }
        catch (System.Exception exc)
        {
            Console.WriteLine($"Error on Main: {exc.ToString()}");
        }
    }
ConsumerHelper.cs:
using Confluent.Kafka;
using System;
using System.Collections.Generic;
namespace Mailer
{
    public class ConsumerHelper
    {
        private string _topicName;
        private ConsumerConfig _consumerConfig;
        private IConsumer<string, string> _consumer;
        public ConsumerHelper(ConsumerConfig consumerConfig, string topicName)
        {
            try
            {
                _topicName = topicName;
                _consumerConfig = consumerConfig;
                var builder = new ConsumerBuilder<string, string>(_consumerConfig);
                _consumer = builder.Build();
                _consumer.Subscribe(_topicName);
                //_consumer.Assign(new TopicPartition(_topicName, 0));
            }
            catch (System.Exception exc)
            {
                Console.WriteLine($"Error on ConsumerHelper: {exc.ToString()}");
            }
        }
        public ConsumeResult<string, string> ReadMessage()
        {
            Console.WriteLine("ConsumeResult: start");
            try
            {
                return _consumer.Consume();
            }
            catch (System.Exception exc)
            {
                Console.WriteLine($"Error on ConsumeResult: {exc.ToString()}");
                return null;
            }
        }
        public void Close()
        {
            Console.WriteLine("Close: start");
            try
            {
                _consumer.Close();
                _consumer.Dispose();
            }
            catch (System.Exception exc)
            {
                Console.WriteLine($"Error on Close: {exc.ToString()}");
            }
            Console.WriteLine("Close: end");
        }
    }
}

Requeue the failed record in the kafka topic

I have a use case where records are to be persisted in a table which has a foreign key to itself.
Example:
zObject
{
    uid,
    name,
    parentuid
}
The parent uid is also present in the same table, and any object which has a non-existent parentuid will fail to persist.
At times the records are placed in the topic in such a way that the dependency is not at the head of the list; instead it arrives after the dependent records.
This causes a failure when processing the record. I have used the SeekToCurrentErrorHandler, which retries the same failed record for the given backoff, and it keeps failing since the dependency is not met.
Is there any way I can requeue the record at the end of the topic so that the dependency is met? If it fails, say, 5 times even after re-enqueueing, the record can be pushed to a DLT.
Thanks,
Rajasekhar
There is nothing built in; you can, however, use a custom destination resolver in the DeadLetterPublishingRecoverer to determine which topic to publish to, based on a header in the failed record.
See https://docs.spring.io/spring-kafka/docs/2.6.2/reference/html/#dead-letters
EDIT
@SpringBootApplication
public class So64646996Application {

    public static void main(String[] args) {
        SpringApplication.run(So64646996Application.class, args);
    }

    @Bean
    public NewTopic topic() {
        return TopicBuilder.name("so64646996").partitions(1).replicas(1).build();
    }

    @Bean
    public NewTopic dlt() {
        return TopicBuilder.name("so64646996.DLT").partitions(1).replicas(1).build();
    }

    @Bean
    public ErrorHandler eh(KafkaOperations<String, String> template) {
        return new SeekToCurrentErrorHandler(new DeadLetterPublishingRecoverer(template,
                (rec, ex) -> {
                    org.apache.kafka.common.header.Header retries = rec.headers().lastHeader("retries");
                    if (retries == null) {
                        retries = new RecordHeader("retries", new byte[] { 1 });
                        rec.headers().add(retries);
                    }
                    else {
                        retries.value()[0]++;
                    }
                    return retries.value()[0] > 5
                            ? new TopicPartition("so64646996.DLT", rec.partition())
                            : new TopicPartition("so64646996", rec.partition());
                }), new FixedBackOff(0L, 0L));
    }

    @KafkaListener(id = "so64646996", topics = "so64646996")
    public void listen(String in,
            @Header(KafkaHeaders.OFFSET) long offset,
            @Header(name = "retries", required = false) byte[] retry) {
        System.out.println(in + "@" + offset + ":" + retry[0]);
        throw new IllegalStateException();
    }

    @KafkaListener(id = "so64646996.DLT", topics = "so64646996.DLT")
    public void listenDLT(String in,
            @Header(KafkaHeaders.OFFSET) long offset,
            @Header(name = "retries", required = false) byte[] retry) {
        System.out.println("DLT: " + in + "@" + offset + ":" + retry[0]);
    }

    @Bean
    public ApplicationRunner runner(KafkaTemplate<String, String> template) {
        return args -> System.out.println(template.send("so64646996", "foo").get(10, TimeUnit.SECONDS)
                .getRecordMetadata());
    }
}

Open a new transaction for every item in a list with Spring MVC

I am not able to open a new transaction for each item when iterating over a list using Spring Boot. I want to roll back only the item that failed and continue with the rest of the items in the list. There are multiple commits in one transaction, and all of them must be rolled back on any failure.
My service code is below.
@Service("intentExportImportService")
public class IntentExportImportServiceImpl implements IntentExportImportService {

    @Resource(name = "intentExportImportService")
    private IntentExportImportService intentExportImportService;

    public Map<String, Integer> importIntents(ExportImportData exportImportData, boolean overwrite) throws DataAccessLayerException {
        Map<String, Integer> statiscisMap = createOrUpdateIntent(exportImportData, overwrite);
        return statiscisMap;
    }

    private Map<String, Integer> createOrUpdateIntent(ExportImportData exportImportData, boolean overwrite) throws DataAccessLayerException {
        List<Intent> intentsList = exportImportData.getIntents();
        Map<String, Entity> entityMap = getEntityMap(exportImportData.getEntityList());
        Map<String, Api> apiMap = getApiMap(exportImportData.getApiList());
        Map<String, Integer> statisticsMap = new HashMap<>();
        Long domainId = ExcelUtil.getDomainId(exportImportData.getDomainName());
        for (Intent intent : intentsList) {
            Intent existingIntent = intentExists(intent.getIntentNm());
            if (existingIntent != null) {
                startUpdateIntent(intent, existingIntent, entityMap, apiMap, overwrite, statisticsMap, domainId);
            } else {
                startCreateIntent(intent, entityMap, apiMap, overwrite, statisticsMap, domainId);
            }
        }
        return statisticsMap;
    }

    @Transactional
    public void startUpdateIntent(Intent intent, Intent existingIntent, Map<String, Entity> entityMap, Map<String, Api> apiMap, boolean overwrite, Map<String, Integer> statisticsMap, Long domainId) {
        try {
            intentExportImportService.updateIntent(intent, existingIntent, entityMap, apiMap, overwrite, statisticsMap, domainId);
        } catch (Exception e) {
            updateStatisticsMap(FAILED, statisticsMap);
            LOGGER.error("Error Importing Intents to update and hence rolling back intent: " + intent.getIntentNm());
        }
    }

    @Transactional(value = "dataTxManager", propagation = Propagation.REQUIRES_NEW, isolation = Isolation.READ_COMMITTED, rollbackFor = {
            DuplicateException.class, DataAccessException.class, DataAccessLayerException.class, SQLTimeoutException.class, SQLException.class, Exception.class })
    public void updateIntent(Intent intent, Intent existingIntent, Map<String, Entity> entityMap, Map<String, Api> apiMap, boolean overwrite, Map<String, Integer> statisticsMap, Long domainId) throws DataAccessLayerException {
        if (!overwrite) {
            LOGGER.info("Not overwriting the Intent: " + intent.getIntentNm() + " as it already exists and overwrite is false");
            throw new DataAccessLayerException(CommonConstants.IMPORT_FAILURE_ERROR_CODE, "rolling back intent importing: " + intent.getIntentNm());
        }
        manageEntitiesAndApis(intent, entityMap, apiMap, overwrite, domainId);
        Long intentId = updateImportedIntent(intent, existingIntent);
        if (intentId != null) {
            updateStatisticsMap(UPDATED, statisticsMap);
        }
    }
}
UpdateIntent is already in another class: IntentImportExportService which is injected in the caller as a Resource, so the problem is not there...
Are you sure the selected transaction manager supports nested transactions?
I solved the issue by adding a try/catch inside the for loop, like below:
@Transactional
public Map<String, Integer> createOrUpdateIntent(ExportImportData exportImportData, boolean overwrite) {
    List<Intent> intentsList = exportImportData.getIntents();
    Map<String, Entity> entityMap = getEntityMap(exportImportData.getEntityList());
    Map<String, Api> apiMap = getApiMap(exportImportData.getApiList());
    Map<String, Integer> statisticsMap = new HashMap<>();
    Long domainId = ExcelUtil.getDomainId(exportImportData.getDomainName());
    for (Intent intent : intentsList) {
        try {
            Intent existingIntent = intentExists(intent.getIntentNm());
            if (existingIntent != null) {
                intentExportImportService.updateIntent(intent, existingIntent, entityMap, apiMap, overwrite, statisticsMap, domainId);
            } else {
                intentExportImportService.createIntent(intent, entityMap, apiMap, overwrite, statisticsMap, domainId);
            }
        } catch (DataAccessLayerException e) {
            updateStatisticsMap(FAILED, statisticsMap);
            LOGGER.error("Error Importing Intents to update and hence rolling back intent: " + intent.getIntentNm());
        }
    }
    return statisticsMap;
}
