Kafka producer and JPA Transaction - spring-kafka

I'm trying to make a composite transaction with JPA and Kafka using Spring-Kafka.
I need to avoid committing the message if the JPA transaction is rolled back (e.g. when a ConstraintViolationException is raised), but the following code does not work: if a ConstraintViolationException is raised, the message is still committed to the topic.
My Kafka Configuration:
@Bean(name = "jpaKafkaTx")
public ChainedKafkaTransactionManager<Object, Object> chainedTm(KafkaTransactionManager<String, String> kafkaTx, JpaTransactionManager jpaTx) {
    return new ChainedKafkaTransactionManager<>(jpaTx, kafkaTx);
}
...
@Bean
public ConcurrentKafkaListenerContainerFactory<?, ?> kafkaListenerContainerFactory(KafkaOperations<?, ?> template,
        ConcurrentKafkaListenerContainerFactoryConfigurer configurer,
        ConsumerFactory<Object, Object> kafkaConsumerFactory, ChainedKafkaTransactionManager<Object, Object> chainedTx) {
    ...
    factory.getContainerProperties().setTransactionManager(chainedTx);
    return factory;
}
My Service
@Transactional(transactionManager = "jpaKafkaTx", rollbackFor = Exception.class)
public void update() {
    updateDb();
    produce();
}
...
public void updateDb() {
    ...
    jpaRepository.save();
}
My KafkaProducer Service
public void produce(...) {
    ...
    kafkaTemplate.send(message).addCallback(callback);
    if (callback.isError()) {
        log.error("KO");
    } else {
        log.info("OK");
    }
    ...
}
The Kafka isolation level is read_committed.
Could you help me? Where can I find a complete example?

return new ChainedKafkaTransactionManager<>(jpaTx, kafkaTx);
With that configuration, the Kafka transaction is committed first.
Reverse the TM order, or just inject the KTM into the container and use @Transactional to manage just the JPA transaction.
Note that the ChainedKafkaTransactionManager is deprecated with no replacement - see the comment in its javadocs.
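For example, a minimal sketch of the first suggestion (reversing the order in the chain), keeping the bean names from the question:
@Bean(name = "jpaKafkaTx")
public ChainedKafkaTransactionManager<Object, Object> chainedTm(KafkaTransactionManager<String, String> kafkaTx, JpaTransactionManager jpaTx) {
    // transactions are committed in reverse declaration order, so here the JPA transaction
    // commits first; if that commit fails (e.g. a ConstraintViolationException on flush),
    // the still-open Kafka transaction is rolled back and the message is never committed
    return new ChainedKafkaTransactionManager<>(kafkaTx, jpaTx);
}
The second suggestion avoids the deprecated chain entirely: give the container/template a plain KafkaTransactionManager and keep @Transactional pointed only at the JPA transaction manager.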

Related

Spring Kafka : Skip error message using CommonErrorHandler

I am using spring-kafka 2.8.9 and kafka-clients 2.8.1. I want to skip a message that fails to deserialize. Since setErrorHandler is deprecated, I tried using CommonErrorHandler, but I am not sure how to skip the current error message and move on to the next record. The only option I can see is pattern matching, extracting the relevant details such as offset and partition from a line like the one below.
org.apache.kafka.common.errors.SerializationException: Error deserializing key/value for partition test-0 at offset 1. If needed, please seek past the record
Is there any other way, like a RecordDeserializationException, to get the necessary information from the exception, or any other means without pattern matching? I cannot upgrade to kafka 3.X.X.
My config
@Bean
public ConsumerFactory<String, Farewell> farewellConsumerFactory() {
    groupId = LocalTime.now().toString();
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
    props.put(JsonDeserializer.TRUSTED_PACKAGES, "*");
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    return new DefaultKafkaConsumerFactory<>(props, new StringDeserializer(), new JsonDeserializer<>(Farewell.class));
}
@Bean
public ConcurrentKafkaListenerContainerFactory<String, Farewell> farewellKafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, Farewell> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setCommonErrorHandler(new CommonErrorHandler() {
        @Override
        public void handleOtherException(Exception thrownException, Consumer<?, ?> consumer, MessageListenerContainer container, boolean batchListener) {
            CommonErrorHandler.super.handleOtherException(thrownException, consumer, container, batchListener);
        }
    });
    factory.setConsumerFactory(farewellConsumerFactory());
    return factory;
}
My listener class
@KafkaListener(topics = "${topicId}",
        containerFactory = "farewellKafkaListenerContainerFactory")
public void farewellListener(Farewell message) {
    System.out.println("Received Message in group " + groupId + "| " + message);
}
Domain class
public class Farewell {

    private String message;
    private Integer remainingMinutes;

    public Farewell(String message, Integer remainingMinutes) {
        this.message = message;
        this.remainingMinutes = remainingMinutes;
    }
    // standard getters, setters and constructor
}
I have checked these links
How to skip a msg that have error in kafka when i use ConcurrentMessageListenerContainer?
Better way of error handling in Kafka Consumer
Use an ErrorHandlingDeserializer as a wrapper around your real deserializer.
Serialization exceptions will be sent directly to the DefaultErrorHandler, which treats such exceptions as fatal (by default) and sends them directly to the recoverer.
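A minimal sketch of that approach applied to the config above (the recoverer lambda that logs and skips is illustrative, not from the question; ErrorHandlingDeserializer lives in org.springframework.kafka.support.serializer and DefaultErrorHandler in org.springframework.kafka.listener):
@Bean
public ConsumerFactory<String, Farewell> farewellConsumerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    // wrap the real deserializer so a bad record surfaces as a DeserializationException
    // instead of making the consumer fail on the same offset forever
    return new DefaultKafkaConsumerFactory<>(props, new StringDeserializer(),
            new ErrorHandlingDeserializer<>(new JsonDeserializer<>(Farewell.class).trustedPackages("*")));
}

@Bean
public ConcurrentKafkaListenerContainerFactory<String, Farewell> farewellKafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, Farewell> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(farewellConsumerFactory());
    // deserialization exceptions are fatal by default, so the failed record goes straight
    // to this recoverer (here it is just logged) and the container moves on to the next record
    factory.setCommonErrorHandler(new DefaultErrorHandler((record, ex) ->
            System.out.println("Skipping " + record.topic() + "-" + record.partition()
                    + "@" + record.offset() + ": " + ex.getMessage())));
    return factory;
}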

Spring Integration with Kafka not sending messages

I am working on Spring Integration with Kafka using the Java DSL and I see that the messages are not produced/sent to the Kafka topic.
The code I have been using is:
@Bean
public IntegrationFlow sendToKafkaFlow() {
    return IntegrationFlows.from(kafkaPublishChannel)
            .handle(kafkaMessageHandler())
            .get();
}

private KafkaProducerMessageHandlerSpec<String, Object, ?> kafkaMessageHandler() {
    return Kafka
            .outboundChannelAdapter(_kafkaProducerFactory.getKafkaTemplate().getProducerFactory())
            .messageKey(m -> m.getHeaders().getId())
            //.headerMapper(mapper())
            .topic(_topicConfiguration.getCheProgressUpdateTopic())
            .configureKafkaTemplate(t -> t.getTemplate());
}

@Bean
public DefaultKafkaHeaderMapper mapper() {
    return new DefaultKafkaHeaderMapper();
}
The producer configurations I am using are:
private ProducerFactory<String, Object> producerFactory() {
    final Map<String, Object> producerProps = new HashMap<>();
    producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaProducerConfiguration.getKafkaServerProducerHost());
    producerProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    producerProps.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, kafkaProducerConfiguration.isKafkaProducerIdempotentEnabled());
    producerProps.put(ProducerConfig.ACKS_CONFIG, kafkaProducerConfiguration.getKafkaProducerAcks());
    producerProps.put(ProducerConfig.RETRIES_CONFIG, kafkaProducerConfiguration.getKafkaProducerRetries());
    producerProps.put(ProducerConfig.BATCH_SIZE_CONFIG, kafkaProducerConfiguration.getKafkaProducerBatchSize());
    producerProps.put(ProducerConfig.LINGER_MS_CONFIG, kafkaProducerConfiguration.getKafkaProducerLingerMs());
    return new DefaultKafkaProducerFactory<>(producerProps);
}
Not sure why I am not seeing the messages in the kafka topic. Can you please help me out here?
Try to use sync(true) on that Kafka.outboundChannelAdapter(). I believe there should be some errors if you don't see any progress during sending. You may also consider using a DEBUG logging level for the org.springframework.integration category to see how your messages travel through your integration flow.
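For example, a minimal sketch of that change applied to the handler above (everything else as in the question):
private KafkaProducerMessageHandlerSpec<String, Object, ?> kafkaMessageHandler() {
    return Kafka
            .outboundChannelAdapter(_kafkaProducerFactory.getKafkaTemplate().getProducerFactory())
            .messageKey(m -> m.getHeaders().getId())
            .topic(_topicConfiguration.getCheProgressUpdateTopic())
            // block until the send completes so any failure is thrown into the flow
            // instead of being lost in the producer's asynchronous callback
            .sync(true)
            .configureKafkaTemplate(t -> t.getTemplate());
}
And in application.properties, the suggested logging category:
logging.level.org.springframework.integration=DEBUG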

Spring cloud stream - Autowiring underlying Consumer for a given PollableMessageSource

Is it possible to get hold of the underlying KafkaConsumer bean for a defined PollableMessageSource?
I have a binding defined as:
public interface TestBindings {

    String TEST_SOURCE = "test";

    @Input(TEST_SOURCE)
    PollableMessageSource testTopic();
}
and config class:
@EnableBinding(TestBindings.class)
public class TestBindingsPoller {

    @Bean
    public ApplicationRunner testPoller(PollableMessageSource testTopic) {
        // Get kafka consumer for PollableMessageSource
        KafkaConsumer kafkaConsumer = getConsumer(testTopic);
        return args -> {
            while (true) {
                if (!testTopic.poll(...)) {
                    Thread.sleep(500);
                }
            }
        };
    }
}
The question is, how can I get KafkaConsumer that corresponds to testTopic? Is there any way to get it from beans that are wired in spring cloud stream?
The KafkaMessageSource populates a KafkaConsumer into headers, so it is available in the place you receive messages: https://github.com/spring-projects/spring-kafka/blob/master/spring-kafka/src/main/java/org/springframework/kafka/support/converter/MessageConverter.java#L57.
If you are going to do stuff like polling yourself, I would suggest injecting a ConsumerFactory and using a consumer created from it instead.
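A minimal sketch of the header approach inside the poller above, assuming (per the link) that the Kafka binder's message source exposes the consumer under KafkaHeaders.CONSUMER:
return args -> {
    while (true) {
        boolean received = testTopic.poll(message -> {
            // the underlying consumer travels with each received message as a header
            Consumer<?, ?> consumer = message.getHeaders().get(KafkaHeaders.CONSUMER, Consumer.class);
            // ... use the consumer (metrics, position, ...) while handling the message
        });
        if (!received) {
            Thread.sleep(500);
        }
    }
};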

How to use spring-kafka for sending a message again

We are using spring-kafka 1.2.2.RELEASE.
What we want
1. As soon as a message is consumed and processed successfully, its offset is committed by spring-kafka. I am using manual commit/acknowledgement for this and it is working fine.
2. In case of any exception we want spring-kafka to resend the same message. We throw a RuntimeException on any system error, which is logged by spring-kafka and never committed.
This is fine, as we don't want it to commit, but that message stays in spring-kafka and never comes back unless we restart the service. On restart the message comes back, executes once again and then stays in spring-kafka again.
What we tried
1. I have tried both ErrorHandler and RetryingMessageListenerAdapter, but in both cases we have to code in the service how to process the message again.
This is my consumer
public class MyConsumer {

    @KafkaListener
    public void receive(...) {
        // application logic to return success/failure
        if (success) {
            acknowledgement.acknowledge();
        } else {
            throw new RuntimeException();
        }
    }
}
Also I have the following configuration for the container factory
factory.getContainerProperties().setErrorHandler(new ErrorHandler() {
    @Override
    public void handle(...) {
        throw new RuntimeException("");
    }
});
While executing the flow, control first comes into receive and then into the handle method. After that the service waits for a new message. However, I was expecting that, since we threw an exception and the message was not committed, the same message would land in the receive method again.
Is there any way we can tell spring-kafka "do not commit this message and send it again asap"?
1.2.x is no longer supported; 1.x users are recommended to upgrade to at least 1.3.x (currently 1.3.8) because of its much simpler threading model, thanks to KIP-62.
The current version is 2.2.2.
2.0.1 introduced the SeekToCurrentErrorHandler which re-seeks the failed record so that it is redelivered.
With earlier versions, you had to stop and restart the container to redeliver a failed message, or add retry to the listener adapter.
I suggest you upgrade to the newest possible release.
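A minimal sketch of that approach after an upgrade to 2.x, applied to a container factory like the one further below (not available in 1.3.x):
factory.getContainerProperties().setAckMode(AckMode.MANUAL);
// re-seek the failed record so it is redelivered on the next poll
// instead of the container simply moving on to the next message
factory.getContainerProperties().setErrorHandler(new SeekToCurrentErrorHandler());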
Unfortunately the version available for us to use is 1.3.7.RELEASE.
I have tried implementing the ConsumerSeekAware interface. Below is how I am doing it, and I can see the message being delivered repeatedly.
Consumer
public class MyConsumer implements ConsumerSeekAware {

    private ConsumerSeekCallback consumerSeekCallback;

    // ... inside the listener method:
    if (condition) {
        acknowledgement.acknowledge();
    } else {
        consumerSeekCallback.seek((String) headers.get("kafka_receivedTopic"),
                (int) headers.get("kafka_receivedPartitionId"),
                (int) headers.get("kafka_offset"));
    }
    // ...

    @Override
    public void registerSeekCallback(ConsumerSeekCallback consumerSeekCallback) {
        this.consumerSeekCallback = consumerSeekCallback;
    }

    @Override
    public void onIdleContainer(Map<TopicPartition, Long> arg0, ConsumerSeekCallback arg1) {
        LOGGER.debug("onIdleContainer called");
    }

    @Override
    public void onPartitionsAssigned(Map<TopicPartition, Long> arg0, ConsumerSeekCallback arg1) {
        LOGGER.debug("onPartitionsAssigned called");
    }
}
Config
public class MyConsumerConfig {

    @Bean
    public Map<String, Object> consumerConfigs() {
        Map<String, Object> props = new HashMap<>();
        // Set server, deserializer, group id
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
        return props;
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, MyModel> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, MyModel> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(consumerConfigs()));
        factory.getContainerProperties().setAckMode(AckMode.MANUAL);
        return factory;
    }

    @Bean
    public MyConsumer receiver() {
        return new MyConsumer();
    }
}

MyBatis Operation Gets Blocked in Spring Boot Async Method

In my project, based on Spring Boot 1.3.3, I integrated MyBatis via mybatis-spring-boot-starter 1.1.1 as the persistence layer. All CRUD operations seem to work fine separately, but the integration tests fail and I found that the DB operation gets blocked in the asynchronous task.
The test code looks like this:
@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = SapiApplication.class)
@Transactional
public class OrderIntegrationTest {

    @Test
    public void shouldUpdateOrder() throws InterruptedException {
        Order order1 = getOrder1();
        orderService.createOrder(order1);
        Order order1updated = getOrder1Updated();
        orderService.updateOrderAsync(order1updated);
        Thread.sleep(1000L);
        log.info("find the order!");
        Order order1Db = orderService.findOrderById(order1.getOrderId());
        log.info("found the order!");
        assertEquals("closed", order1Db.getStatus());
    }
}
The expected execution order is createOrder() -> updateOrderAsync() -> findOrderById(), but actually the execution order is createOrder() -> updateOrderAsync() started and blocked -> findOrderById() -> updateOrderAsync() continued and ended.
Log:
16:23:04.261 [executor1-1] INFO c.s.api.web.service.OrderServiceImpl - updating order: 2884384
16:23:05.255 [main] INFO c.s.a.w.service.OrderIntegrationTest - find the order!
16:23:05.280 [main] INFO c.s.a.w.service.OrderIntegrationTest - found the order!
16:23:05.299 [executor1-1] INFO c.s.api.web.service.OrderServiceImpl - updated order: 2884384
Other related code:
@Service
public class OrderServiceImpl implements OrderService {

    @Autowired
    private OrderDao orderDao;

    @Async("executor1")
    @Override
    public void updateOrderAsync(Order order) {
        log.info("updating order: {}", order.getOrderId());
        orderDao.updateOrder(order);
        log.info("updated order: {}", order.getOrderId());
    }
}
The DAO:
public interface OrderDao {
    public int updateOrder(Order order);
    public int createOrder(Order order);
    public Order findOrderById(String orderId);
}
The Gradle dependencies:
dependencies {
    compile 'org.springframework.boot:spring-boot-starter-jdbc'
    compile 'org.springframework.boot:spring-boot-starter-security'
    compile 'org.springframework.boot:spring-boot-starter-web'
    compile 'org.springframework.boot:spring-boot-starter-actuator'
    compile 'org.mybatis.spring.boot:mybatis-spring-boot-starter:1.1.1'
    compile 'ch.qos.logback:logback-classic:1.1.2'
    compile 'org.springframework.boot:spring-boot-configuration-processor'
    runtime 'mysql:mysql-connector-java'
    providedRuntime 'org.springframework.boot:spring-boot-starter-tomcat'
    testCompile 'org.springframework.boot:spring-boot-starter-test'
    testCompile "org.springframework.security:spring-security-test"
}
The Spring configuration:
@SpringBootApplication
@EnableAsync
@EnableCaching
@EnableScheduling
@MapperScan("com.sapi.web.dao")
public class SapiApplication {

    @Bean(name = "executor1")
    protected Executor taskExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(5);
        executor.setMaxPoolSize(100);
        return executor;
    }

    @Bean
    @Primary
    @ConfigurationProperties(prefix = "datasource.primary")
    public DataSource numberMasterDataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean(name = "secondary")
    @ConfigurationProperties(prefix = "datasource.secondary")
    public DataSource provisioningDataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean(name = "jdbcTpl")
    public JdbcTemplate jdbcTemplate(@Qualifier("secondary") DataSource dsItems) {
        return new JdbcTemplate(dsItems);
    }

    public static void main(String[] args) {
        SpringApplication.run(SapiApplication.class, args);
    }
}
The properties:
mybatis.mapper-locations=classpath*:com/sapi/web/dao/*Mapper.xml
mybatis.type-aliases-package=com.sapi.web.vo
datasource.primary.driver-class-name=com.mysql.jdbc.Driver
datasource.primary.url=jdbc:mysql://10.0.6.202:3306/sapi
datasource.primary.username=xxx
datasource.primary.password=xxx
datasource.primary.maximum-pool-size=80
datasource.primary.max-idle=10
datasource.primary.max-active=150
datasource.primary.max-wait=10000
datasource.primary.min-idle=5
datasource.primary.initial-size=5
datasource.primary.validation-query=SELECT 1
datasource.primary.test-on-borrow=false
datasource.primary.test-while-idle=true
datasource.primary.time-between-eviction-runs-millis=18800
datasource.primary.jdbc-interceptors=ConnectionState;SlowQueryReport(threshold=100)
datasource.secondary.url = jdbc:mysql://10.0.6.202:3306/xdb
datasource.secondary.username = xxx
datasource.secondary.password = xxx
datasource.secondary.driver-class-name = com.mysql.jdbc.Driver
logging.level.org.springframework.web=DEBUG
The problem you see is caused by the fact that the whole test method shouldUpdateOrder is executed in one transaction. This means that any update operation executed in the thread that runs shouldUpdateOrder locks the record for the whole duration of that transaction (that is, until the test method exits), and that record cannot be updated by another concurrent transaction (the one executed in the async method).
To solve the issue you need to change the transaction boundaries. In your case the correct way to emulate real-life usage is to (see the sketch below):
1. create the order in one transaction and finish that transaction
2. update the order in another transaction
3. check that the update was executed as expected in yet another transaction
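A minimal sketch of that split, assuming the OrderService methods are themselves @Transactional so each call commits on return:
@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = SapiApplication.class)
// no @Transactional on the test: each service call runs and commits its own transaction
public class OrderIntegrationTest {

    @Autowired
    private OrderService orderService;

    @Test
    public void shouldUpdateOrder() throws InterruptedException {
        Order order1 = getOrder1();
        orderService.createOrder(order1);                   // transaction 1, committed on return
        orderService.updateOrderAsync(getOrder1Updated());  // transaction 2, on the executor1 thread
        Thread.sleep(1000L);                                // crude wait for the async update to finish
        Order order1Db = orderService.findOrderById(order1.getOrderId()); // transaction 3
        assertEquals("closed", order1Db.getStatus());
    }
}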
