Does DeadLetterPublishingRecoverer expect the .DLT topic to be present beforehand? - spring-kafka

I'm using Spring Boot 2.1.7.RELEASE and spring-kafka 2.2.8.RELEASE, and I'm using the @KafkaListener annotation to create a consumer with all default settings except the one below:
auto.create.topics.enable = false
Now I'm trying to use DeadLetterPublishingRecoverer in conjunction with SeekToCurrentErrorHandler to handle deserialization errors.
As per the spring-kafka documentation of DeadLetterPublishingRecoverer:
By default, the dead-letter record is sent to a topic named <originalTopic>.DLT (the original topic name suffixed with .DLT) and to the same partition as the original record.
Now my question is: does the DeadLetterPublishingRecoverer expect the .DLT topic to be present beforehand, or can it create the topic using an AdminClient if it is not present?

Yes; it must exist; just add a NewTopic @Bean and the auto-configured Boot KafkaAdmin will create it.
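For example, a minimal sketch, assuming the listener reads from a topic named myTopic with 3 partitions (the name, partition count, and replication factor here are assumptions; the .DLT topic should have at least as many partitions as the original, since the dead-letter record goes to the same partition by default):
@Bean
public NewTopic myTopicDlt() {
    // org.apache.kafka.clients.admin.NewTopic; picked up by Boot's auto-configured KafkaAdmin at startup
    return new NewTopic("myTopic.DLT", 3, (short) 1);
}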

Related

Configuring Spring Kafka for many Kafka clusters - what is the best solution?

I see people duplicating code to configure Kafka for each cluster :(
Is it really necessary to configure a ConsumerFactory and ProducerFactory with different settings every time, and not use the Spring Boot Kafka starter at all?
If you use similar properties for each cluster, you can override the bootstrap.servers property for each @KafkaListener and/or create multiple KafkaTemplates with the same override.
Then, the same factories can be used.
See https://docs.spring.io/spring-kafka/docs/current/reference/html/#annotation-properties
@KafkaListener(topics = "myTopic", groupId = "group", properties = {
        "max.poll.interval.ms:60000",
        ConsumerConfig.MAX_POLL_RECORDS_CONFIG + "=100"
})
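So a listener pointed at a second cluster only needs the broker override; a sketch (the listener id, topic, and broker address are made-up):
@KafkaListener(id = "clusterTwo", topics = "otherTopic",
        properties = ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG + ":kafka-cluster-2:9092")
public void listenFromSecondCluster(String in) {
    // Same auto-configured ConsumerFactory; only bootstrap.servers differs for this listener.
    System.out.println("from cluster two: " + in);
}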
/**
 * Create an instance using the supplied producer factory and properties, with
 * autoFlush false. If the configOverrides is not null or empty, a new
 * {@link DefaultKafkaProducerFactory} will be created with merged producer properties
 * with the overrides being applied after the supplied factory's properties.
 * @param producerFactory the producer factory.
 * @param configOverrides producer configuration properties to override.
 * @since 2.5
 */
public KafkaTemplate(ProducerFactory<K, V> producerFactory, @Nullable Map<String, Object> configOverrides) {
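Putting the two together, a second KafkaTemplate for another cluster can reuse the auto-configured producer factory and override only the brokers (a sketch; the bean name and broker address are assumptions, and this two-argument constructor needs spring-kafka 2.5+):
@Bean
public KafkaTemplate<String, String> clusterTwoTemplate(ProducerFactory<String, String> pf) {
    // The override is merged after the factory's own properties, per the Javadoc above.
    Map<String, Object> overrides = new HashMap<>();
    overrides.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-cluster-2:9092");
    return new KafkaTemplate<>(pf, overrides);
}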
Spring Boot auto-configuration is, by convention, aimed at the common microservice use case: one thing, kept simple and clear.
What you are asking is out of Spring Boot's scope: the properties configuration is applied only to one ConsumerFactory and one ProducerFactory. If you need to connect to different clusters, you are on your own, and providing a custom ConsumerFactory bean will cause the Spring Boot auto-configuration to back off.
You could look into a child ApplicationContext configuration with its own set of properties. But would that be any easier than just having a custom configuration for each cluster, as you would without Spring Boot, using a plain Apache Kafka client?
I have no idea whether there is some federation solution for Apache Kafka, but that would still be out of scope for Spring for Apache Kafka and Spring Boot.

When should I use a batch consumer vs a single-record consumer?

As far as I know, there is no special concept such as a batch consumer in the Apache Kafka documentation, but spring-kafka has an option to create a batch consumer using the code snippet below.
@Bean
public KafkaListenerContainerFactory<?> batchFactory() {
    ConcurrentKafkaListenerContainerFactory<Integer, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    factory.setBatchListener(true); // <<<<<<<<<<<<<<<<<<<<<<<<<
    return factory;
}
Now my question is: when should I use a batch consumer vs a single-record consumer? Can someone share a few use cases to explain when each is appropriate?
As per the thread below, the main difference between a single-record consumer and a batch consumer is how many records from a poll are handed to the listener at a time.
What's the basic difference between single record kafka consumer and kafka batch consumer?
An example of using a batch listener might be if you want to send data from multiple records to a database in one SQL statement (which might be more efficient than sending them one at a time).
Another case is if you are using transactions; again, it might be more efficient to use one transaction for multiple SQL updates.
Generally, it's a performance consideration; if that's not a concern then the simpler one-at-a-time processing might be more suitable.
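For illustration, a batch listener wired to the factory above could write the whole poll's worth of records with one JDBC batch (a sketch; the topic, table name, and the injected Spring jdbcTemplate are assumptions):
@KafkaListener(topics = "events", containerFactory = "batchFactory")
public void listen(List<String> payloads) {
    // One batched INSERT for the whole list instead of one statement per record.
    jdbcTemplate.batchUpdate("INSERT INTO events (payload) VALUES (?)",
            payloads, payloads.size(),
            (ps, payload) -> ps.setString(1, payload));
}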

How to create/simulate a deserialization exception when using a schema registry (in addition to brokers and ZooKeeper)

We are using spring-kafka 2.2.7.RELEASE to produce and consume Avro messages, and a schema registry for schema validation with 'FORWARD_TRANSITIVE' as the compatibility type. I'm trying to use ErrorHandlingDeserializer2 from spring-kafka to handle the exception/error when a deserializer fails to deserialize a message, and now I'm trying to write a component test for this configuration. My component test is expected to have the steps below:
Spin up a local kafka cluster using docker containers
Send an avro message (using KafkaTemplate) with invalid schema to re-create/simulate the deserialization exception onto a test topic
Now what's happening is, since we have the schema registry in place, if I send a message with a new (invalid) schema, it validates the schema as per the compatibility type setting and does not let me produce the message onto Kafka, throwing an exception at the producer level itself.
Now my question is: in this scenario, how can I create/simulate a deserialization exception to test my configuration? Please suggest.
Note: I don't want to disable/stop the schema registry because that wouldn't reflect our prod setup.
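For reference, the configuration under test typically wraps the Avro deserializer in ErrorHandlingDeserializer2, roughly like this (a sketch; the registry URL and the KafkaAvroDeserializer delegate are assumptions based on the question):
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ErrorHandlingDeserializer2.class);
// The delegate does the actual Avro work; its failures surface as a DeserializationException
// on the consumer side instead of an unreadable record.
props.put(ErrorHandlingDeserializer2.VALUE_DESERIALIZER_CLASS, KafkaAvroDeserializer.class);
props.put("schema.registry.url", "http://localhost:8081");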

Access consumer object from ConsumerRebalanceListener using Spring Kafka

Can you please tell me how to implement ConsumerRebalanceListener, and how I can get the consumer object?
Currently I am getting a rebalance issue, and some records are missing. I found that we may need to use a ConsumerRebalanceListener to fix this issue.
I have done this much configuration:
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 100);
props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 50000);
props.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, 1000);
props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 40000);
Also this much configuration
ConcurrentKafkaListenerContainerFactory<String, String> factory =
        new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(consumerFactory());
factory.getContainerProperties().setCommitLogLevel(LogIfLevelEnabled.Level.INFO);
factory.getContainerProperties().setAckOnError(false);
factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL);
Is this the correct way to implement ConsumerRebalanceListener, or is there any other solution to fix the above?
I am using spring-kafka 2.2.2.RELEASE and @KafkaListener.
I am not sure what you mean by
I am getting a rebalance issue, and some records are missing.
but spring-kafka added a ConsumerAwareRebalanceListener, which extends ConsumerRebalanceListener, precisely to allow access to the Consumer.
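A sketch of wiring one into the container factory shown in the question (the logging is illustrative; the point is that the callbacks receive the Consumer directly):
factory.getContainerProperties().setConsumerRebalanceListener(new ConsumerAwareRebalanceListener() {

    @Override
    public void onPartitionsRevokedBeforeCommit(Consumer<?, ?> consumer, Collection<TopicPartition> partitions) {
        // The Consumer is available, so current positions can be inspected before offsets are committed.
        partitions.forEach(tp -> System.out.println("Revoking " + tp + " at position " + consumer.position(tp)));
    }

    @Override
    public void onPartitionsAssigned(Consumer<?, ?> consumer, Collection<TopicPartition> partitions) {
        // Runs after the rebalance; a good place to seek if records appear to be skipped.
        System.out.println("Assigned: " + partitions);
    }
});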

Spring Cloud Stream Kafka transactions on the producer side

We have a Spring Cloud Stream app using Kafka. The requirement is that, on the producer side, a list of messages needs to be put into a topic in a single transaction. There is no consumer for the messages in the same app. When I initiated the transaction using the spring.cloud.stream.kafka.binder.transaction.transaction-id-prefix property, I got an error that there is no subscriber for the dispatcher and that the total number of partitions obtained from the topic is less than the transaction configured. The app is not able to obtain the partitions for the topic in transaction mode. Could you please tell me if I am missing anything? I will post detailed logs tomorrow.
Thanks
You need to show your code and configuration as well as the versions you are using.
Producer-only transactions are discussed in the documentation.
Enable transactions by setting spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix to a non-empty value, e.g. tx-. When used in a processor application, the consumer starts the transaction; any records sent on the consumer thread participate in the same transaction. When the listener exits normally, the listener container will send the offset to the transaction and commit it. A common producer factory is used for all producer bindings configured using spring.cloud.stream.kafka.binder.transaction.producer.* properties; individual binding Kafka producer properties are ignored.
If you wish to use transactions in a source application, or from some arbitrary thread for producer-only transactions (e.g. a @Scheduled method), you must get a reference to the transactional producer factory and define a KafkaTransactionManager bean using it.
@Bean
public PlatformTransactionManager transactionManager(BinderFactory binders) {
    ProducerFactory<byte[], byte[]> pf = ((KafkaMessageChannelBinder) binders.getBinder(null,
            MessageChannel.class)).getTransactionalProducerFactory();
    return new KafkaTransactionManager<>(pf);
}
Notice that we get a reference to the binder using the BinderFactory; use null in the first argument when there is only one binder configured. If more than one binder is configured, use the binder name to get the reference. Once we have a reference to the binder, we can obtain a reference to the ProducerFactory and create a transaction manager.
Then you would just use normal Spring transaction support, e.g. TransactionTemplate or @Transactional, for example:
public static class Sender {

    @Transactional
    public void doInTransaction(MessageChannel output, List<String> stuffToSend) {
        stuffToSend.forEach(stuff -> output.send(new GenericMessage<>(stuff)));
    }
}
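The TransactionTemplate alternative mentioned above would look roughly like this (a sketch; it assumes the KafkaTransactionManager bean defined earlier is the injected transaction manager):
public static class Sender {

    private final TransactionTemplate transactionTemplate;

    public Sender(PlatformTransactionManager transactionManager) {
        this.transactionTemplate = new TransactionTemplate(transactionManager);
    }

    public void doInTransaction(MessageChannel output, List<String> stuffToSend) {
        // All sends inside execute(..) share one Kafka transaction.
        this.transactionTemplate.execute(status -> {
            stuffToSend.forEach(stuff -> output.send(new GenericMessage<>(stuff)));
            return null;
        });
    }
}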
If you wish to synchronize producer-only transactions with those from some other transaction manager, use a ChainedTransactionManager.
