Spring Kafka: KafkaListener consumer.seek() issue

We are using a Spring KafkaListener that acknowledges each record after it is written to the DB. If writing to the DB fails, we do not acknowledge the record, so the offset is not committed for the consumer. This works fine. Now we want to receive the failed messages in the next poll so we can retry them. We added an error handler to the listener (a ConsumerAwareListenerErrorHandler) and called consumer.seek() with the offset of the failed message. The expectation is that the next poll should return the failed messages. This is not happening: the next poll fetches only new messages, not the failed ones. A code snippet is given below.
@Service
public class KafkaConsumer {
@KafkaListener(topics = "${kafka.input.stream.topic}", containerFactory = "kafkaManualAckListenerContainerFactory", errorHandler = "listen3ErrorHandler")
public void onMessage(ConsumerRecord<Integer, String> record,
Acknowledgment acknowledgment ) throws Exception {
try {
msg = JaxbUtil.convertJsonStringToMsg(record.value());
onHandList = DCMUtil.convertMsgToOnHandDTO(msg);
TeradataDAO.updateData(onHandList);
acknowledgment.acknowledge();
recordSuccess = true;
LOGGER.info("Message Saved in Teradata DB");
} catch (Exception e) {
LOGGER.error("Error Processing On Hand Data ", e);
recordSuccess = false;
}
}
@Bean
public ConsumerAwareListenerErrorHandler listen3ErrorHandler() throws InterruptedException {
return (message, exception, consumer) -> {
this.listen3Exception = exception;
MessageHeaders headers = message.getHeaders();
consumer.seek(new org.apache.kafka.common.TopicPartition(
headers.get(KafkaHeaders.RECEIVED_TOPIC, String.class),
headers.get(KafkaHeaders.RECEIVED_PARTITION_ID, Integer.class)),
headers.get(KafkaHeaders.OFFSET, Long.class));
return null;
};
}
}
Container Class
@Bean
public Map<String, Object> consumerConfigs() {
Map<String, Object> props = new HashMap<String, Object>();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG,
"localhost:9092");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
StringDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
StringDeserializer.class);
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-1");
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
return props;
}
#SuppressWarnings({ "rawtypes", "unchecked" })
#Bean
public ConsumerFactory consumerFactory() {
return new DefaultKafkaConsumerFactory(consumerConfigs());
}
#SuppressWarnings("unchecked")
#Bean
KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<Integer, String>>
kafkaManualAckListenerContainerFactory() {
ConcurrentKafkaListenerContainerFactory<Integer, String> factory =
new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(consumerFactory());
factory.getContainerProperties().setAckMode(AckMode.MANUAL);
return factory;
}

It's supposed to work like this:
The error handler needs to throw an exception if you want to discard additional records from the previous poll.
Since you are "handling" the error, the container knows nothing and will continue to call the listener with the remaining records from the poll.
That said, I see that the container is also ignoring an exception thrown by the error handler (it will only discard the remaining records if the error handler throws an Error, not an exception). I will open an issue for this.
Another workaround would be to add the Consumer to the listener method signature and do the seek there (and then throw an exception). If there is no error handler, the rest of the batch is discarded.
Correction
If the container has no ErrorHandler, any Throwable thrown by a ListenerErrorHandler will cause the remaining records to be discarded.
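For illustration, a minimal sketch of that workaround, reusing the listener and field names from the question (which are assumptions here); note there is no errorHandler attribute, so the rethrown exception causes the remaining records from the poll to be discarded and re-fetched:
    @KafkaListener(topics = "${kafka.input.stream.topic}",
            containerFactory = "kafkaManualAckListenerContainerFactory")
    public void onMessage(ConsumerRecord<Integer, String> record,
            Acknowledgment acknowledgment,
            Consumer<Integer, String> consumer) throws Exception {
        try {
            msg = JaxbUtil.convertJsonStringToMsg(record.value());
            onHandList = DCMUtil.convertMsgToOnHandDTO(msg);
            TeradataDAO.updateData(onHandList);
            acknowledgment.acknowledge();
        } catch (Exception e) {
            // re-position the consumer at the failed record ...
            consumer.seek(new org.apache.kafka.common.TopicPartition(
                    record.topic(), record.partition()), record.offset());
            // ... and rethrow so the rest of this poll is discarded and redelivered
            throw e;
        }
    }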

Please try using SeekToCurrentErrorHandler. The doc says: "This allows implementations to seek all unprocessed topic/partitions so the current record (and the others remaining) will be retrieved by the next poll. The SeekToCurrentErrorHandler does exactly this. The container will commit any pending offset commits before calling the error handler."
https://docs.spring.io/autorepo/docs/spring-kafka-dist/2.1.0.BUILD-SNAPSHOT/reference/htmlsingle/#_seek_to_current_container_error_handlers
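For example, a minimal sketch of wiring it into the container factory from the question (assuming spring-kafka 2.0.1 or later is on the classpath):
    @Bean
    KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<Integer, String>>
            kafkaManualAckListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<Integer, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.getContainerProperties().setAckMode(AckMode.MANUAL);
        // re-seek unprocessed records on error so they are redelivered by the next poll
        factory.getContainerProperties().setErrorHandler(new SeekToCurrentErrorHandler());
        return factory;
    }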

Related

How to commit the offset of a record that was already sent to the DLT through a CommonErrorHandler when using manual commit

I am building a simple example with Spring Kafka.
If an exception occurs in the service layer, I want the original offset to be committed after the retries are exhausted and the record has been published to the dead letter topic.
However, although the record is published to the dead letter topic correctly, the commit is not processed, so the original message appears to remain in Kafka.
My code is as follows.
KafkaConfig.java
...
@Bean
public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaListenerContainerFactory() {
ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(consumerFactory());
factory.setCommonErrorHandler(kafkaListenerErrorHandler());
factory.getContainerProperties().setAckMode(AckMode.MANUAL_IMMEDIATE);
return factory;
}
private CommonErrorHandler kafkaListenerErrorHandler() {
DefaultErrorHandler defaultErrorHandler = new DefaultErrorHandler(
new DeadLetterPublishingRecoverer(template, DEAD_TOPIC_DESTINATION_RESOLVER),
new FixedBackOff(1000, 3));
defaultErrorHandler.setCommitRecovered(true);
defaultErrorHandler.setAckAfterHandle(true);
defaultErrorHandler.setResetStateOnRecoveryFailure(false);
return defaultErrorHandler;
}
...
KafkaListener.java
...
@KafkaListener(topics = TOPIC_NAME, containerFactory = "kafkaListenerContainerFactory", groupId = "stock-adjustment-0")
public void subscribe(final String message, Acknowledgment ack) throws IOException {
log.info(String.format("Message Received : [%s]", message));
StockAdjustment stockAdjustment = StockAdjustment.deserializeJSON(message);
if(stockService.isAlreadyProcessedOrderId(stockAdjustment.getOrderId())) {
log.info(String.format("AlreadyProcessedOrderId : [%s]", stockAdjustment.getOrderId()));
} else {
if(stockAdjustment.getAdjustmentType().equals("REDUCE")) {
stockService.decreaseStock(stockAdjustment);
}
}
ack.acknowledge(); // <<< does not work!
}
...
Stockservice.java
...
if(stockAdjustment.getQty() > stock.getAvailableStockQty()) {
throw new RuntimeException(String.format("Stock decreased Request [decreasedQty: %s][availableQty : %s]", stockAdjustment.getQty(), stock.getAvailableStockQty()));
}
...
When a RuntimeException occurs in the service layer as above, the record is published to the DLT through the CommonErrorHandler according to the Kafka settings.
However, after the DLT record is published, the original message remains in Kafka, so I need a solution.
I looked into it and found that my configuration is handled by SeekUtils.seekOrRecover(); if the record still fails after the maximum number of attempts, an exception is thrown and the consumer seeks back to the original offset instead of committing it.
According to the documentation, the AfterRollbackProcessor handles the rollback by default when recovery fails, but I don't know how to write the code so that the offset is committed even when processing fails.
EDITED
The above code and settings actually work normally.
I expected consumer lag to occur, but when I stepped through the actual logic (SeekUtils.seekOrRecover()) and checked the committed offsets and the lag, I confirmed that it works correctly.
The problem was caused by my own mistake.
Records are never removed (until they expire); only the consumer's committed offset is updated.
Use kafka-consumer-groups.sh to describe the group to see the committed offset for the failed record that was sent to the DLT.
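For example (the bootstrap server address is an assumption; the group id is the one from the question):
    kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group stock-adjustment-0
The CURRENT-OFFSET column shows the committed offset, so you can verify that the offset of the record sent to the DLT was in fact committed.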

KafkaTemplate does not throw an exception when the topic is not available / the Kafka broker is down

I am sending a message to Kafka with KafkaTemplate and wanted to test the failure path, so I provided a wrong topic name. When I run the code, it only logs "Error while fetching metadata with correlation id 2 : {ocf-oots-gr-outbound_123=LEADER_NOT_AVAILABLE}", and the topic itself ends up being created in Kafka (I can also see it through a Kafka tool). When the broker is stopped, no exception is thrown either.
Code:
KafkaTemplate<String, Object> kafkaTemplate = (KafkaTemplate<String, Object>) CommonAppContextProvider.getApplicationContext().getBean("kafkaTemplate");
//kafkaTemplate.send(CommonAppContextProvider.getApplicationContext().getEnvironment().getProperty("kafka.transalators.outbound.topic"), kafkaMessageFormat);
ListenableFuture listenableFuture = kafkaTemplate.send(CommonAppContextProvider.getApplicationContext().
getEnvironment().getProperty("kafka.transalators.outbound.topic"), kafkaMessageFormat);
listenableFuture.addCallback(new ListenableFutureCallback<SendResult<?, ?>>() {
@Override
public void onSuccess(SendResult<?, ?> result) {
System.out.println("Sent");
}
@Override
public void onFailure(Throwable ex) {
throw new KafkaException();
}
});
}
I expected it to throw an exception such as KafkaException, TimeoutException, InterruptedException, etc.
You have to call get(time, timeUnit) on the future to get the result (success or otherwise).
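A minimal sketch of that, using the future from the code above (the 10-second timeout and the wrapping in Spring's KafkaException are arbitrary choices for illustration):
    try {
        // block until the send completes; failures surface here as exceptions
        SendResult<String, Object> result =
                (SendResult<String, Object>) listenableFuture.get(10, TimeUnit.SECONDS);
        System.out.println("Sent to " + result.getRecordMetadata().topic());
    } catch (ExecutionException e) {
        // the send itself failed, e.g. a TimeoutException while waiting for metadata
        throw new KafkaException("Send failed", e.getCause());
    } catch (java.util.concurrent.TimeoutException | InterruptedException e) {
        throw new KafkaException("Send did not complete in time", e);
    }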

How to use spring-kafka for sending a message again

We are using spring-kafka 1.2.2.RELEASE.
What we want
1. As soon as a message is consumed and processed successfully, the offset is committed in spring-kafka. I am using manual commit/acknowledgement for this and it is working fine.
2. In case of any exception we want spring-kafka to resend the same message. We throw a RuntimeException on any system error, which is logged by spring-kafka and never committed.
This is fine because we don't want the offset committed, but the message is never redelivered unless we restart the service. On restart the message comes back, is executed once again, and then is never redelivered again.
What we tried
1. I have tried both an ErrorHandler and a RetryingMessageListenerAdapter, but in both cases we have to code in the service how to process the message again.
This is my consumer
public class MyConsumer{
@KafkaListener
public void receive(...){
// application logic to return success/failure
if(success){
acknowledgement.acknowledge();
}else{
throw new RuntimeException();
}
}
}
Also, I have the following configuration for the container factory:
factory.getContainerProperties().setErrorHandler(new ErrorHandler(){
@Override
public void handle(...){
throw new RuntimeException("");
}
});
While executing the flow, control first reaches the receive method and then the handle method. After that, the service waits for a new message. However, I was expecting that, since we threw an exception and the message was not committed, the same message would land in the receive method again.
Is there any way to tell Spring Kafka "do not commit this message and send it again asap"?
1.2.x is no longer supported; 1.x users are recommended to upgrade to at least 1.3.x (currently 1.3.8) because of its much simpler threading model, thanks to KIP-62.
The current version is 2.2.2.
2.0.1 introduced the SeekToCurrentErrorHandler which re-seeks the failed record so that it is redelivered.
With earlier versions, you had to stop and restart the container to redeliver a failed message, or add retry to the listener adapter.
I suggest you upgrade to the newest possible release.
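For reference, on 1.3.x one way to "add retry to the listener adapter" is a RetryTemplate on the container factory. A hedged sketch (spring-retry classes; the 3 attempts and 1-second back-off are arbitrary values):
    RetryTemplate retryTemplate = new RetryTemplate();
    retryTemplate.setRetryPolicy(new SimpleRetryPolicy(3));
    FixedBackOffPolicy backOff = new FixedBackOffPolicy();
    backOff.setBackOffPeriod(1000L);
    retryTemplate.setBackOffPolicy(backOff);
    factory.setRetryTemplate(retryTemplate);
    factory.setRecoveryCallback(context -> {
        // invoked after retries are exhausted; the failed record is available
        // under the context attribute "record"
        return null;
    });
Note that these retries happen in memory on the consumer thread; the record is not redelivered from Kafka, the listener invocation is simply retried before giving up.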
Unfortunately, the only version available for us to use is 1.3.7.RELEASE.
I have tried implementing the ConsumerSeekAware interface. Below is how I am doing it, and I can see the message being delivered repeatedly.
Consumer
public class MyConsumer implements ConsumerSeekAware {
private ConsumerSeekCallback consumerSeekCallback;
// listener method declaration assumed; the original snippet omitted it, and the topic name is a placeholder
@KafkaListener(topics = "${my.topic}")
public void receive(MyModel message, Acknowledgment acknowledgement, @Headers MessageHeaders headers) {
if(condition) {
acknowledgement.acknowledge();
}else {
consumerSeekCallback.seek((String) headers.get("kafka_receivedTopic"),
(int) headers.get("kafka_receivedPartitionId"),
(long) headers.get("kafka_offset"));
}
}
@Override
public void registerSeekCallback(ConsumerSeekCallback consumerSeekCallback) {
this.consumerSeekCallback = consumerSeekCallback;
}
@Override
public void onIdleContainer(Map<TopicPartition, Long> arg0, ConsumerSeekCallback arg1) {
LOGGER.debug("onIdleContainer called");
}
@Override
public void onPartitionsAssigned(Map<TopicPartition, Long> arg0, ConsumerSeekCallback arg1) {
LOGGER.debug("onPartitionsAssigned called");
}
}
Config
public class MyConsumerConfig {
@Bean
public Map<String, Object> consumerConfigs() {
Map<String, Object> props = new HashMap<>();
// Set server, deserializer, group id
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
return props;
}
@Bean
public ConcurrentKafkaListenerContainerFactory<String, MyModel> kafkaListenerContainerFactory() {
ConcurrentKafkaListenerContainerFactory<String, MyModel> factory = new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(consumerConfigs()));
factory.getContainerProperties().setAckMode(AckMode.MANUAL);
return factory;
}
@Bean
public MyConsumer receiver() {
return new MyConsumer();
}
}

Kafka consumer: turn on/off at runtime, process messages in series

My Kafka listener should process messages in sequential order; the onMessage method must process messages synchronously, and I don't want my listener to process multiple messages at the same time. Currently the onMessage method first stops the
org.springframework.kafka.listener.MessageListenerContainer
then delegates the payload to a synchronized method, and after processing completes it starts the listener again. Other options, of course, are a blocking queue, an executor service, etc. I need advice on a better strategy to achieve this: does the Kafka consumer have any built-in feature to process messages in series?
Here is my code.
I changed the implementation to this:
public static class KafkaReadMsgTask implements Runnable{
@Override
public void run() {
KakfaMsgConumerImpl kakfaMsgConumerImpl = null;
try{
kakfaMsgConumerImpl=SpContext.getBean(KakfaMsgConumerImpl.class);
kakfaMsgConumerImpl.pollFormDef();
kakfaMsgConumerImpl.pollFormData();
} catch (Exception e){
logger.error(" kafka listener errors "+e);
kakfaMsgConumerImpl.pauseTask();
}
}
}
@Component
public static class KakfaMsgConumerImpl {
@Autowired
ObjectMapper mapper;
@Autowired
FormSink formSink;
@Autowired
Environment env;
@Resource(name="formDefConsumer")
Consumer formDefConsumer;
@Resource(name="formDataConsumer")
Consumer formDataConsumer;
ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();
public void startPolling() throws Exception{
executor.scheduleAtFixedRate(new KafkaReadMsgTask(),10, 3,TimeUnit.SECONDS);
}
public void pauseTask(){
try{
Thread.sleep (120000l);
}catch(Exception e){
throw new RuntimeException(e);
}
}
public void pollFormDef() throws Exception{
ConsumerRecords<Long, String> records =formDefConsumer.poll(0);
if(!records.isEmpty()){
int recordsCount=records.count();
if(logger.isDebugEnabled()){
logger.debug(" form-def consumer poll records size "+recordsCount);
}
if(records.count()>1){
logger.warn(" form-def consumer poll returned records more than 1 , expected 1 , received "+recordsCount);
}
ConsumerRecord<Long,String> record= records.iterator().next();
processFormDef(record.key(), record.value());
}
}
void pollFormData() throws Exception{
ConsumerRecords<Long, String> records =formDataConsumer.poll(0);
if(!records.isEmpty()){
int recordsCount=records.count();
if(logger.isDebugEnabled()){
logger.debug(" form-data consumer poll records size "+recordsCount);
}
if(records.count()>1){
logger.warn(" form-data consumer poll returned records more than 1 , expected 1 , received "+recordsCount);
}
ConsumerRecord<Long,String> record= records.iterator().next();
processFormData(record.key(), record.value());
}
}
void processFormDef(Long key, String msg) throws Exception{
if(logger.isDebugEnabled()){
logger.debug(" key "+key+" payload : "+msg);
}
FormDefinition formDefinition= mapper.readValue(msg, FormDefinition.class);
formSink.createFromDef(formDefinition);
logger.debug(" processed message, key: "+key+ " msg : "+msg);
Thread.sleep(60000l);
}
void processFormData(Long key, String msg) throws Exception{
if(logger.isDebugEnabled()){
logger.debug(" key "+key+" payload : "+msg);
}
FormData formData= mapper.readValue(msg, FormData.class);
formSink.persists(formData);
logger.debug(" processed message, key: "+key+ " msg : "+msg);
Thread.sleep(60000l);
}
}
Using a message-driven listener container is not the right technology for this application; it looks like you want to consume messages alternately from two different topics.
Furthermore, stopping the container on the consumer thread won't take effect anyway, until the thread exits the method, at which time the consumer will be closed.
I would suggest you use the consumer factory to create two consumers; subscribe to the topics, set the max.poll.records on each to 1 and call the poll() method on each alternately.
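A hedged sketch of that suggestion (topic names, serializers, group id, and the running flag are assumptions for illustration; processFormDef/processFormData are the methods from the code above):
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "form-group");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, LongDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 1);          // at most one record per poll
    DefaultKafkaConsumerFactory<Long, String> cf = new DefaultKafkaConsumerFactory<>(props);

    Consumer<Long, String> formDefConsumer = cf.createConsumer();
    Consumer<Long, String> formDataConsumer = cf.createConsumer();
    formDefConsumer.subscribe(Collections.singletonList("form-def"));
    formDataConsumer.subscribe(Collections.singletonList("form-data"));

    while (running) {
        // alternate between the two topics, processing at most one record from each per pass
        for (ConsumerRecord<Long, String> record : formDefConsumer.poll(1000)) {
            processFormDef(record.key(), record.value());
        }
        for (ConsumerRecord<Long, String> record : formDataConsumer.poll(1000)) {
            processFormData(record.key(), record.value());
        }
    }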

Undertow: use a Hystrix Observable in an HTTP handler

I managed to set up a Hystrix command to be called from an Undertow HTTP handler:
public void handleRequest(HttpServerExchange exchange) throws Exception {
if (exchange.isInIoThread()) {
exchange.dispatch(this);
return;
}
RpcClient rpcClient = new RpcClient(/* ... */);
try {
byte[] response = new RpcCommand(rpcClient).execute();
// send the response
} catch (Exception e) {
// send an error
}
}
This works nicely. But now I would like to use the observable feature of Hystrix, calling observe() instead of execute(), to make the code non-blocking.
public void handleRequest(HttpServerExchange exchange) throws Exception {
RpcClient rpcClient = new RpcClient(/* ... */);
new RpcCommand(rpcClient).observe().subscribe(new Observer<byte[]>(){
@Override
public void onCompleted() {
}
@Override
public void onError(Throwable throwable) {
exchange.setStatusCode(StatusCodes.INTERNAL_SERVER_ERROR);
exchange.endExchange();
}
@Override
public void onNext(byte[] body) {
exchange.getResponseHeaders().add(Headers.CONTENT_TYPE, "text/plain");
exchange.getResponseSender().send(ByteBuffer.wrap(body));
}
});
}
As expected (reading the doc), the handler returns immediately and as a consequence, the exchange is ended; when the onNext callback is executed, it fails with an exception:
Caused by: java.lang.IllegalStateException: UT000127: Response has already been sent
at io.undertow.io.AsyncSenderImpl.send(AsyncSenderImpl.java:122)
at io.undertow.io.AsyncSenderImpl.send(AsyncSenderImpl.java:272)
at com.xxx.poc.undertow.DiyServerBootstrap$1$1.onNext(DiyServerBootstrap.java:141)
at com.xxx.poc.undertow.DiyServerBootstrap$1$1.onNext(DiyServerBootstrap.java:115)
at rx.internal.util.ObserverSubscriber.onNext(ObserverSubscriber.java:34)
Is there a way to tell Undertow that the handler is doing IO asynchronously? I expect to use a lot of non-blocking code to access database and other services.
Thanks in advance!
You should dispatch() a Runnable to have the exchange not end when the handleRequest method returns. Since the creation of the client and subscription are pretty simple tasks, you can do it on the same thread with SameThreadExecutor.INSTANCE like this:
public void handleRequest(HttpServerExchange exchange) throws Exception {
exchange.dispatch(SameThreadExecutor.INSTANCE, () -> {
RpcClient rpcClient = new RpcClient(/* ... */);
new RpcCommand(rpcClient).observe().subscribe(new Observer<byte[]>(){
//...
});
});
}
(If you do not pass an executor to dispatch(), it will dispatch it to the XNIO worker thread pool. If you wish to do the client creation and subscription on your own executor, then you should pass that instead.)
