Spring Integration with Kafka not sending messages - spring-kafka

I am working with Spring Integration for Kafka using the Java DSL, and I see that messages are not produced/sent to the Kafka topic.
The code I have been using is:
@Bean
public IntegrationFlow sendToKafkaFlow() {
    return IntegrationFlows.from(kafkaPublishChannel)
            .handle(kafkaMessageHandler())
            .get();
}
private KafkaProducerMessageHandlerSpec<String, Object, ?> kafkaMessageHandler() {
    return Kafka
            .outboundChannelAdapter(_kafkaProducerFactory.getKafkaTemplate().getProducerFactory())
            .messageKey(m -> m.getHeaders().getId())
            //.headerMapper(mapper())
            .topic(_topicConfiguration.getCheProgressUpdateTopic())
            .configureKafkaTemplate(t -> t.getTemplate());
}
@Bean
public DefaultKafkaHeaderMapper mapper() {
    return new DefaultKafkaHeaderMapper();
}
The producer configurations I am using are:
private ProducerFactory<String, Object> producerFactory() {
    final Map<String, Object> producerProps = new HashMap<>();
    producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaProducerConfiguration.getKafkaServerProducerHost());
    producerProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    producerProps.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, kafkaProducerConfiguration.isKafkaProducerIdempotentEnabled());
    producerProps.put(ProducerConfig.ACKS_CONFIG, kafkaProducerConfiguration.getKafkaProducerAcks());
    producerProps.put(ProducerConfig.RETRIES_CONFIG, kafkaProducerConfiguration.getKafkaProducerRetries());
    producerProps.put(ProducerConfig.BATCH_SIZE_CONFIG, kafkaProducerConfiguration.getKafkaProducerBatchSize());
    producerProps.put(ProducerConfig.LINGER_MS_CONFIG, kafkaProducerConfiguration.getKafkaProducerLingerMs());
    return new DefaultKafkaProducerFactory<>(producerProps);
}
Not sure why I am not seeing the messages in the Kafka topic. Can you please help me out here?

Try using sync(true) on that Kafka.outboundChannelAdapter(). I believe some errors should surface if you don't see any progress during sending. You may also consider using DEBUG logging for the org.springframework.integration category to see how your messages travel through your integration flow.
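For example, here is a minimal sketch of the question's handler with sync(true) applied, so a send failure surfaces as an exception on the calling thread (everything else is copied from the question):
private KafkaProducerMessageHandlerSpec<String, Object, ?> kafkaMessageHandler() {
    return Kafka
            .outboundChannelAdapter(_kafkaProducerFactory.getKafkaTemplate().getProducerFactory())
            .sync(true) // block until the broker acknowledges (or the send fails with an exception)
            .messageKey(m -> m.getHeaders().getId())
            .topic(_topicConfiguration.getCheProgressUpdateTopic())
            .configureKafkaTemplate(t -> t.getTemplate());
}
For the logging, setting logging.level.org.springframework.integration=DEBUG in application.properties is usually enough to trace a message through the flow.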

Related

Kafka producer and JPA Transaction

I'm trying to make a composite transaction with JPA and Kafka using Spring-Kafka.
I need to avoid committing the message if the JPA transaction is rolled back (e.g. when a ConstraintViolationException is raised), but the following code does not work (if a ConstraintViolationException is raised, the message is still committed to the topic).
My Kafka Configuration:
@Bean(name = "jpaKafkaTx")
public ChainedKafkaTransactionManager<Object, Object> chainedTm(KafkaTransactionManager<String, String> kafkaTx,
        JpaTransactionManager jpaTx) {
    return new ChainedKafkaTransactionManager<>(jpaTx, kafkaTx);
}
...
@Bean
public ConcurrentKafkaListenerContainerFactory<?, ?> kafkaListenerContainerFactory(KafkaOperations<?, ?> template,
        ConcurrentKafkaListenerContainerFactoryConfigurer configurer,
        ConsumerFactory<Object, Object> kafkaConsumerFactory, ChainedKafkaTransactionManager<Object, Object> chainedTx) {
    ...
    factory.getContainerProperties().setTransactionManager(chainedTx);
    return factory;
}
My service:
@Transactional(transactionManager = "jpaKafkaTx", rollbackFor = Exception.class)
public void update() {
    updateDb();
    produce();
}
..
public void updateDb() {
    .....
    jpaRepository.save();
}
My KafkaProducer service:
public void produce(...) {
    ...
    kafkaTemplate.send(message).addCallback(callback);
    if (callback.isError()) {
        log.error("KO");
    } else {
        log.info("OK");
    }
    ....
}
The Kafka isolation level is: read_committed.
Could you help me? Where can I find a complete example?
return new ChainedKafkaTransactionManager<>(jpaTx, kafkaTx);
With that configuration, the kafka transaction is committed first.
Reverse the TM order, or just inject the KTM into the container and use @Transactional to manage just the JPA transaction.
Note that the ChainedKafkaTransactionManager is deprecated with no replacement - see the comment in its javadocs.
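For reference, a minimal sketch of the reversed declaration (only the argument order changes; and again, prefer the injected-KTM approach since the chained manager is deprecated):
@Bean(name = "jpaKafkaTx")
public ChainedKafkaTransactionManager<Object, Object> chainedTm(KafkaTransactionManager<String, String> kafkaTx,
        JpaTransactionManager jpaTx) {
    // transactions start left-to-right and commit right-to-left, so with kafkaTx listed
    // first the JPA transaction commits first; if it rolls back (e.g. on a
    // ConstraintViolationException), the Kafka transaction is rolled back too
    return new ChainedKafkaTransactionManager<>(kafkaTx, jpaTx);
}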

@Header and Spring Cloud Stream functional programming model

Is there a way to use @Header inside the following Kafka consumer code? I am using Spring Cloud Stream (Kafka Streams binder implementation), and my implementation uses the functional model, for example:
@Bean
public Consumer<KStream<String, Pojo>> process() {
    return messages -> messages.foreach((k, v) -> process(v));
}
If using Spring for Apache Kafka, then this can be as simple as:
@KafkaListener(topics = "${mytopicname}", clientIdPrefix = "${myprefix}", errorHandler = "customEventErrorHandler")
public void processEvent(@Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) String key,
        @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int partition,
        @Header(KafkaHeaders.RECEIVED_TOPIC) String topic,
        @Header(KafkaHeaders.RECEIVED_TIMESTAMP) long ts,
        @Valid Pojo pojo) {
    ...
    // use headers here
    ...
}
No; the Kafka Streams binder is not based on Spring Messaging.
You can access headers, topic, and such in a Transformer (via the ProcessorContext) added to your stream.
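For example, a minimal Transformer sketch along those lines (the class name is made up for illustration; Pojo is the value type from the question):
import org.apache.kafka.common.header.Headers;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.Transformer;
import org.apache.kafka.streams.processor.ProcessorContext;

public class HeaderAwareTransformer implements Transformer<String, Pojo, KeyValue<String, Pojo>> {

    private ProcessorContext context;

    @Override
    public void init(ProcessorContext context) {
        this.context = context;
    }

    @Override
    public KeyValue<String, Pojo> transform(String key, Pojo value) {
        Headers headers = this.context.headers(); // headers of the record being processed
        String topic = this.context.topic();
        // inspect headers/topic here, then pass the record through unchanged
        return KeyValue.pair(key, value);
    }

    @Override
    public void close() {
    }
}
Wired into the functional style, that becomes messages -> messages.transform(HeaderAwareTransformer::new).foreach((k, v) -> process(v));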
You can use the Kafka Message Channel binder with
@Bean
public Consumer<Message<Pojo>> process() {
    return message -> ...
}
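Fleshed out slightly with the KafkaHeaders constants from spring-kafka (a sketch, not part of the original answer):
@Bean
public Consumer<Message<Pojo>> process() {
    return message -> {
        // the full set of Spring Messaging headers is available here
        String topic = message.getHeaders().get(KafkaHeaders.RECEIVED_TOPIC, String.class);
        Pojo payload = message.getPayload();
        // handle the payload with access to any header
    };
}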

Kafka consumer health check

Is there a simple way to tell whether a consumer (created with Spring Boot and @KafkaListener) is operating normally?
This includes: can access and poll a broker, has at least one partition assigned, etc.
I see there are ways to subscribe to different lifecycle events, but this seems to be a very fragile solution.
Thanks in advance!
You can use the AdminClient to get the current group status...
@SpringBootApplication
public class So56134056Application {

    public static void main(String[] args) {
        SpringApplication.run(So56134056Application.class, args);
    }

    @Bean
    public NewTopic topic() {
        return new NewTopic("so56134056", 1, (short) 1);
    }

    @KafkaListener(id = "so56134056", topics = "so56134056")
    public void listen(String in) {
        System.out.println(in);
    }

    @Bean
    public ApplicationRunner runner(KafkaAdmin admin) {
        return args -> {
            try (AdminClient client = AdminClient.create(admin.getConfig())) {
                while (true) {
                    Map<String, ConsumerGroupDescription> map =
                            client.describeConsumerGroups(Collections.singletonList("so56134056")).all().get(10, TimeUnit.SECONDS);
                    System.out.println(map);
                    System.in.read();
                }
            }
        };
    }

}
{so56134056=(groupId=so56134056, isSimpleConsumerGroup=false, members=(memberId=consumer-2-32a80e0a-2b8d-4519-b71d-671117e7eaf8, clientId=consumer-2, host=/127.0.0.1, assignment=(topicPartitions=so56134056-0)), partitionAssignor=range, state=Stable, coordinator=localhost:9092 (id: 0 rack: null))}
We have been thinking about exposing getLastPollTime() to the listener container API.
getAssignedPartitions() has been available since 2.1.3.
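If you want to fold the AdminClient approach into a Spring Boot Actuator health check, a minimal sketch might look like this (assuming Actuator is on the classpath; the group id and timeout mirror the example above):
@Bean
public HealthIndicator consumerGroupHealthIndicator(KafkaAdmin admin) {
    return () -> {
        try (AdminClient client = AdminClient.create(admin.getConfig())) {
            ConsumerGroupDescription group = client
                    .describeConsumerGroups(Collections.singletonList("so56134056"))
                    .all().get(10, TimeUnit.SECONDS)
                    .get("so56134056");
            // report UP only while the group has live members (i.e. assigned partitions)
            return group != null && !group.members().isEmpty()
                    ? Health.up().withDetail("state", group.state().toString()).build()
                    : Health.down().withDetail("reason", "no group members").build();
        }
        catch (Exception e) {
            return Health.down(e).build();
        }
    };
}
Note that this creates a new AdminClient on every probe; caching one client, or rate-limiting the check, would be kinder to the cluster.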
I know that you haven't mentioned it in your post, but beware of adding items like this to a health check if you then deploy in AWS and use such a health check for your ELB scaling environment.
For example, one scenario that can happen is that your app loses connectivity to Kafka: your health check turns RED, and then Elastic Beanstalk begins killing and restarting your instances (which will happen continually until your Kafka instances are available again). This could be costly!
There is also a more general philosophical question on whether health checks should 'cascade failures' or not, e.g. Kafka is down, so the app connected to Kafka claims it is down, the next app in the chain does the same, and so on. This is more normally implemented via circuit breakers, which are designed to minimise slow calls destined for failure.
You could check using the AdminClient for the topic description.
final AdminClient client = AdminClient.create(kafkaConsumerFactory.getConfigurationProperties());
final String topic = "someTopicName";
final DescribeTopicsResult describeTopicsResult = client.describeTopics(Collections.singleton(topic));
final KafkaFuture<TopicDescription> future = describeTopicsResult.values().get(topic);
try {
    // for health-check purposes we're fetching the topic description
    future.get(10, TimeUnit.SECONDS);
} catch (final InterruptedException | ExecutionException | TimeoutException e) {
    throw new RuntimeException("Failed to retrieve topic description for topic: " + topic, e);
}

How to use spring-kafka for sending a message again

We are using spring-kafka 1.2.2.RELEASE.
What we want:
1. As soon as a message is consumed and processed successfully, its offset is committed in spring-kafka. I am using manual commit/acknowledgement for it, and it is working fine.
2. In case of any exception, we want spring-kafka to resend the same message. We throw a RuntimeException on any system error, which spring-kafka logs, and the offset is never committed.
This is fine, as we don't want the offset committed, but the message is never redelivered unless we restart the service. On restart the message comes back and executes once again, and then the cycle repeats.
What we tried:
1. I have tried both ErrorHandler and RetryingMessageListenerAdapter, but in both cases we have to code in the service how to process the message again.
This is my consumer
public class MyConsumer {

    @KafkaListener
    public void receive(...) {
        // application logic to return success/failure
        if (success) {
            acknowledgement.acknowledge();
        } else {
            throw new RuntimeException();
        }
    }

}
Also, I have the following configuration for the container factory:
factory.getContainerProperties().setErrorHandler(new ErrorHandler() {

    @Override
    public void handle(...) {
        throw new RuntimeException("");
    }

});
While executing the flow, control comes first into the receive method and then into the handle method. After that, the service waits for a new message. However, I was expecting that since we threw an exception and the message was not committed, the same message should land in the receive method again.
Is there any way we can tell spring-kafka "do not commit this message and send it again asap"?
1.2.x is no longer supported; 1.x users are recommended to upgrade to at least 1.3.x (currently 1.3.8) because of its much simpler threading model, thanks to KIP-62.
The current version is 2.2.2.
2.0.1 introduced the SeekToCurrentErrorHandler which re-seeks the failed record so that it is redelivered.
With earlier versions, you had to stop and restart the container to redeliver a failed message, or add retry to the listener adapter.
I suggest you upgrade to the newest possible release.
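On a 2.x version, that suggestion translates to something like the following (a sketch based on the question's own factory; SeekToCurrentErrorHandler replaces the custom ErrorHandler):
@Bean
public ConcurrentKafkaListenerContainerFactory<String, MyModel> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, MyModel> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(consumerConfigs()));
    factory.getContainerProperties().setAckMode(AckMode.MANUAL);
    // re-seeks the failed record so the broker redelivers it on the next poll
    factory.getContainerProperties().setErrorHandler(new SeekToCurrentErrorHandler());
    return factory;
}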
Unfortunately, the version available for us to use is 1.3.7.RELEASE.
I have tried implementing the ConsumerSeekAware interface. Below is how I am doing it, and I can see the message being redelivered repeatedly.
Consumer
public class MyConsumer implements ConsumerSeekAware {

    private ConsumerSeekCallback consumerSeekCallback;

    @KafkaListener
    public void receive(...) {
        if (condition) {
            acknowledgement.acknowledge();
        } else {
            consumerSeekCallback.seek((String) headers.get("kafka_receivedTopic"),
                    (int) headers.get("kafka_receivedPartitionId"),
                    (long) headers.get("kafka_offset")); // the offset header is a Long, not an int
        }
    }

    @Override
    public void registerSeekCallback(ConsumerSeekCallback consumerSeekCallback) {
        this.consumerSeekCallback = consumerSeekCallback;
    }

    @Override
    public void onIdleContainer(Map<TopicPartition, Long> arg0, ConsumerSeekCallback arg1) {
        LOGGER.debug("onIdleContainer called");
    }

    @Override
    public void onPartitionsAssigned(Map<TopicPartition, Long> arg0, ConsumerSeekCallback arg1) {
        LOGGER.debug("onPartitionsAssigned called");
    }

}
Config
public class MyConsumerConfig {

    @Bean
    public Map<String, Object> consumerConfigs() {
        Map<String, Object> props = new HashMap<>();
        // Set server, deserializer, group id
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
        return props;
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, MyModel> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, MyModel> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(consumerConfigs()));
        factory.getContainerProperties().setAckMode(AckMode.MANUAL);
        return factory;
    }

    @Bean
    public MyConsumer receiver() {
        return new MyConsumer();
    }

}

Spring Cloud Feign with OAuth2RestTemplate

I'm trying to implement Feign clients to get my user info from the users service. Currently I'm requesting it with an OAuth2RestTemplate, and it works. But now I wish to change to Feign, and I'm getting a 401 error, probably because the request doesn't carry the user's token. Is there a way, if Spring's Feign support uses a RestTemplate, to customize it so I can use my own bean?
Today I'm implementing it this way.
The service calling the client:
@Retryable({RestClientException.class, TimeoutException.class, InterruptedException.class})
@HystrixCommand(fallbackMethod = "getFallback")
public Promise<ResponseEntity<UserProtos.User>> get() {
    logger.debug("Requiring discovery of user");
    Promise<ResponseEntity<UserProtos.User>> promise = Broadcaster.<ResponseEntity<UserProtos.User>>create(reactorEnv, DISPATCHER)
            .observe(Promises::success)
            .observeError(Exception.class, (o, e) -> Promises.error(reactorEnv, ERROR_DISPATCHER, e))
            .filter(entity -> entity.getStatusCode().is2xxSuccessful())
            .next();
    promise.onNext(this.client.getUserInfo());
    return promise;
}
And the client
@FeignClient("account")
public interface UserInfoClient {

    @RequestMapping(value = "/uaa/user", consumes = MediaTypes.PROTOBUF, method = RequestMethod.GET)
    ResponseEntity<UserProtos.User> getUserInfo();

}
Feign doesn't use a RestTemplate, so you'd have to find a different way. If you create a @Bean of type feign.RequestInterceptor, it will be applied to all requests, so maybe one of those with an OAuth2RestTemplate in it (just to manage the token acquisition) would be the best option.
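A minimal sketch of that suggestion, assuming an OAuth2RestTemplate bean is already configured elsewhere (it handles acquiring and refreshing the token; the interceptor only relays it):
@Bean
public RequestInterceptor oauth2FeignRequestInterceptor(OAuth2RestTemplate oAuth2RestTemplate) {
    return requestTemplate -> requestTemplate.header("Authorization",
            "Bearer " + oAuth2RestTemplate.getAccessToken().getValue());
}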
This is my solution, just to complement the other answer with the source code, implementing the feign.RequestInterceptor interface:
@Bean
public RequestInterceptor requestTokenBearerInterceptor() {
    return new RequestInterceptor() {

        @Override
        public void apply(RequestTemplate requestTemplate) {
            OAuth2AuthenticationDetails details = (OAuth2AuthenticationDetails)
                    SecurityContextHolder.getContext().getAuthentication().getDetails();
            requestTemplate.header("Authorization", "bearer " + details.getTokenValue());
        }

    };
}
