Spring Cloud Stream - Kafka - Null Acknowledgment Header - spring-kafka

I want to manually commit the offset using Spring Cloud Stream, but only when message processing is successful.
Here is my code - application.yml and handler class:
public void process(Message<?> message) {
    System.out.println(message.getPayload());
    Acknowledgment acknowledgment = message.getHeaders().get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);
    if (acknowledgment != null) {
        System.out.println("Acknowledgment provided");
        acknowledgment.acknowledge();
    }
}
---------------------------------------------------------------------------------
spring:
  application:
    name: springCloud
  cloud:
    stream:
      default-binder: kafka
      kafka:
        bindings:
          myChannel:
            consumer:
              autoCommitOffset: false
But my Acknowledgment object is null; the 'kafka_acknowledgment' header itself is not present.
How do I get the Acknowledgment object?
My requirement is to commit the offset only if processing is successful; if processing fails I do not want to remove the message from the topic, so that it can be read again later.
Will the above code be sufficient to achieve this?

What version are you using?
In 3.1, autoCommitOffset was deprecated in favor of setting the ackMode (to manual in this case); however, it looks like autoCommitOffset is now completely ignored rather than deprecated.
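For reference, here is a minimal sketch of manual acknowledgment with the functional model, assuming the binding's ackMode is set to MANUAL via spring.cloud.stream.kafka.bindings.<bindingName>.consumer.ackMode; the class and binding names below are illustrative, not taken from the question:
import java.util.function.Consumer;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.kafka.support.KafkaHeaders;
import org.springframework.messaging.Message;

@Configuration
public class ManualAckConfig {

    // Requires e.g. spring.cloud.stream.kafka.bindings.process-in-0.consumer.ackMode: MANUAL
    // (adjust the binding name to your own).
    @Bean
    public Consumer<Message<?>> process() {
        return message -> {
            System.out.println(message.getPayload());
            Acknowledgment acknowledgment =
                    message.getHeaders().get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);
            if (acknowledgment != null) {
                // Commit the offset only after successful processing.
                acknowledgment.acknowledge();
            }
        };
    }
}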

When using a YAML file, use the property 'auto-commit-offset'.

Related

BackoffExceptions are logged at error level when using RetryTopicConfiguration

I am a happy user of the recently added RetryTopicConfiguration; there is, however, a small issue that is bothering me.
The setup I use looks like this:
@Bean
public RetryTopicConfiguration retryTopicConfiguration(
        KafkaTemplate<String, String> template,
        @Value("${kafka.topic.in}") String topicToInclude,
        @Value("${spring.application.name}") String appName) {
    return RetryTopicConfigurationBuilder
            .newInstance()
            .fixedBackOff(5000L)
            .maxAttempts(3)
            .retryTopicSuffix("-" + appName + ".retry")
            .suffixTopicsWithIndexValues()
            .dltSuffix("-" + appName + ".dlq")
            .includeTopic(topicToInclude)
            .dltHandlerMethod(KAFKA_EVENT_LISTENER, "handleDltEvent")
            .create(template);
}
When a listener throws an exception that triggers a retry, the DefaultErrorHandler logs a KafkaBackoffException at error level.
For a similar problem it was suggested to use a ListenerContainerFactoryConfigurer, yet this does not remove all error logs, since I still see the following in my logs:
2022-04-02 17:34:33.340 ERROR 8054 --- [e.retry-0-0-C-1] o.s.kafka.listener.DefaultErrorHandler : Recovery of record (topic-spring-kafka-logging-issue.retry-0-0#0) failed
org.springframework.kafka.listener.ListenerExecutionFailedException: Listener failed; nested exception is org.springframework.kafka.listener.KafkaBackoffException: Partition 0 from topic topic-spring-kafka-logging-issue.retry-0 is not ready for consumption, backing off for approx. 4468 millis.
Can the log-level be changed, without adding a custom ErrorHandler?
Spring-Boot version: 2.6.6
Spring-Kafka version: 2.8.4
JDK version: 11
Sample project: here
Thanks for such a complete question. This is a known issue in Spring for Apache Kafka 2.8.4, due to the new feature that combines blocking and non-blocking exceptions, and it has been fixed for 2.8.5.
The workaround is to clear the blocking exceptions mechanism, for example:
@Bean(name = RetryTopicInternalBeanNames.LISTENER_CONTAINER_FACTORY_CONFIGURER_NAME)
public ListenerContainerFactoryConfigurer lcfc(KafkaConsumerBackoffManager kafkaConsumerBackoffManager,
        DeadLetterPublishingRecovererFactory deadLetterPublishingRecovererFactory,
        @Qualifier(RetryTopicInternalBeanNames.INTERNAL_BACKOFF_CLOCK_BEAN_NAME) Clock clock) {
    ListenerContainerFactoryConfigurer lcfc = new ListenerContainerFactoryConfigurer(
            kafkaConsumerBackoffManager, deadLetterPublishingRecovererFactory, clock);
    lcfc.setBlockingRetriesBackOff(new FixedBackOff(0, 0));
    lcfc.setErrorHandlerCustomizer(eh -> ((DefaultErrorHandler) eh).setClassifications(Collections.emptyMap(), true));
    return lcfc;
}
Please let me know if that works for you.
Thanks.
EDIT:
This workaround disables only blocking retries, which since 2.8.4 can be used along with non-blocking retries as per the link in the original answer. The exception classification for the non-blocking retries is in the DefaultDestinationTopicResolver class, and you can set FATAL exceptions as documented here.
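Roughly, per the 2.8.x documentation, that classification can be customized by overriding the resolver bean; a sketch (MyFatalException is a placeholder for your own exception type):
@Bean(name = RetryTopicInternalBeanNames.DESTINATION_TOPIC_CONTAINER_NAME)
public DefaultDestinationTopicResolver topicResolver(ApplicationContext applicationContext,
        @Qualifier(RetryTopicInternalBeanNames.INTERNAL_BACKOFF_CLOCK_BEAN_NAME) Clock clock) {
    DefaultDestinationTopicResolver ddtr = new DefaultDestinationTopicResolver(clock, applicationContext);
    // Treat this exception as FATAL: it goes straight to the DLT, with no retries.
    ddtr.addNotRetryableExceptions(MyFatalException.class);
    return ddtr;
}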
EDIT: Alternatively, you can use the Spring Kafka 2.8.5-SNAPSHOT version by adding the Spring Snapshot repository such as:
repositories {
    maven {
        url 'https://repo.spring.io/snapshot'
    }
}
dependencies {
    implementation 'org.springframework.kafka:spring-kafka:2.8.5-SNAPSHOT'
}
You can also downgrade to Spring Kafka 2.8.3.
As Gary Russell pointed out, if your application is already in production you should not use the SNAPSHOT version; 2.8.5 is due out in a couple of weeks.
EDIT 2: Glad to hear you’re happy about the feature!

Spring Cloud Stream with Kafka Binder: /bindings Actuator API does not stop producer

I have a Spring Cloud Stream project with Actuator and the Kafka binder. I am exploring the /bindings actuator endpoint and am trying to stop a producer as an exercise. I make the following POST request via curl:
curl -v 'localhost:8081/actuator/bindings/producer-out-0' -H 'content-type: application/json' -d '{"state": "STOPPED"}'
Actual Results:
The query returns 204. The state of the producer (seen from GET /actuator/bindings/producer-out-0) is now stopped. The producer is still producing messages, however, which can be seen from both logging and consumer activity on the topic.
Expected Results:
I expected the producer to stop producing messages. (I have also tried using the PAUSED state, which also returns 204, but error logs indicate that this producer cannot be paused.)
Do I misunderstand how this actuator works? When a producer is stopped, is it expected that S.C.S. will continue to poll that producer? The only documentation I am aware of is here, but it doesn't answer my questions as far as I can tell.
Background:
I am using spring-boot-starter-parent 2.5.3 and have starter-web and starter-actuator listed as dependencies. I don't think I'm missing any.
This is the producer/consumer pair. As you can see I am using a pollable supplier.
@Configuration
@Profile("numbers")
public class NumberHandlers {

    private static final Logger LOGGER = LoggerFactory.getLogger(NumberHandlers.class);

    @Bean
    public Supplier<Integer> producer() {
        // Needed an effectively-final mutable integer. Side-bar comments welcome. :P
        var counter = new AtomicInteger();
        return () -> {
            var n = counter.getAndIncrement();
            LOGGER.info("Producing number: " + n);
            return n;
        };
    }

    @Bean
    public Consumer<Integer> consumer() {
        return it -> LOGGER.info("Consuming number: " + it);
    }
}
These are active when I pass in the numbers profile. My configurations are below.
application.yml:
server:
  port: 8081
spring:
  cloud:
    stream:
      kafka:
        binder:
          brokers: ${env.kafka.bootstrapservers:localhost}
management:
  endpoints:
    web:
      exposure:
        include: 'bindings'
... and application-numbers.yml:
spring:
  cloud:
    stream:
      poller:
        fixedDelay: 5000
      bindings:
        producer-out-0:
          destination: numbers-raw
          producer:
            partitionCount: 3
        consumer-in-0:
          destination: numbers-raw
      kafka:
        bindings:
          producer-out-0:
            producer:
              topic.properties:
                # These look weird because they're done as an exercise.
                retention.bytes: 10000
                retention.ms: 172800000
      function:
        definition: producer;consumer
I am testing in a localhost environment using a docker-compose Kafka and ZooKeeper on the host network.
Thanks!
Lifecycle control of producer bindings is not currently supported, only consumer bindings.
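One possible workaround, not an official API, is to gate the supplier yourself so the poller keeps running but produces nothing while disabled; a sketch, assuming that returning null from the polled supplier emits no message (the names below are made up):
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class GatedProducerConfig {

    // Flip this flag (e.g. from your own endpoint or JMX) to "stop" production.
    private final AtomicBoolean enabled = new AtomicBoolean(true);

    @Bean
    public Supplier<Integer> producer() {
        AtomicInteger counter = new AtomicInteger();
        // A null return from the polled supplier results in no message being sent.
        return () -> enabled.get() ? counter.getAndIncrement() : null;
    }

    public void setEnabled(boolean value) {
        enabled.set(value);
    }
}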

How to respond with a success JSON message after completing a transaction in Corda

Hi everyone, I am working on a project in which I need to send a response in JSON format to the CLI saying that the transaction has completed. Let me give you an example. Consider that I have started a flow with start ExampleFlow pojo: {iouValue: 7}, otherParty: "O=PartyB,L=London,C=GB" and the result will be:
Starting
Generating transaction based on new IOU.
Verifying contract constraints.
Signing transaction with our private key.
Gathering the counter party's signature.
Collecting signatures from counterparties.
Verifying collected signatures.
Obtaining notary signature and recording transaction.
Broadcasting transaction to participants
Done
Flow completed with result: SignedTransaction(id=F95406D901209BA77396C1A4D375585C6E051414EE22BE441FC02E5AE147A050)
But what I want is a JSON-format result - not all of the above, but something like this:
{response: success}
I just want some success response in JSON format.
I am using the IOU project.
Thanks
You can achieve that by establishing an RPC connection with your node; call the flow, then return the JSON object.
There are a couple of approaches that you can follow, and I recommend that you go through the samples repository https://github.com/corda/samples to explore them:
Create a webserver (a Spring Boot application) that serves REST APIs that call your flows and return a JSON object: https://github.com/corda/samples/tree/release-V4/spring-webserver
Create a simple Java app that establishes an RPC connection with your node and serves as a client to call a certain method/flow: https://github.com/corda/samples/blob/release-V4/cordapp-example/clients/src/main/java/com/example/server/JavaClientRpc.java
If you follow the webserver sample, you can add a method to your controller that does something like:
@GetMapping(value = "/my-api", produces = MediaType.APPLICATION_JSON_VALUE)
private ResponseEntity<YourObject> getSomething() {
    // Some code that calls your flow and returns an instance of YourObject.
    YourObject result = callYourFlow(); // placeholder for the flow call
    return ResponseEntity.ok().body(result);
}
So I got the answer. What you need to do is add this dependency in the client's build.gradle:
cordaCompile "net.corda:corda-jackson:$corda_release_version"
After that you just need to implement this code snippet:
String json = "";
try {
ObjectMapper mapper = JacksonSupport.createNonRpcMapper();
json = mapper.writeValueAsString(results);
} catch (JsonProcessingException e) {
e.printStackTrace();
}
return json;
results can be any data type you want to convert to JSON.
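If you only want a small success payload rather than the whole transaction, you can build a map and serialize that instead; a sketch, assuming the flow returned a SignedTransaction (the successJson name and the txId field are illustrative):
// Uses net.corda.client.jackson.JacksonSupport, com.fasterxml.jackson.databind.ObjectMapper,
// net.corda.core.transactions.SignedTransaction, java.util.HashMap and java.util.Map.
private String successJson(SignedTransaction result) throws JsonProcessingException {
    ObjectMapper mapper = JacksonSupport.createNonRpcMapper();
    Map<String, String> body = new HashMap<>();
    body.put("response", "success");
    body.put("txId", result.getId().toString());
    return mapper.writeValueAsString(body);
}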

Exactly-once delivery: is it possible through spring-cloud-stream-binder-kafka or spring-kafka, and which one should I use?

I am trying to achieve exactly-once delivery using spring-cloud-stream-binder-kafka in a Spring Boot application.
The versions I am using are:
spring-cloud-stream-binder-kafka-core-1.2.1.RELEASE
spring-cloud-stream-binder-kafka-1.2.1.RELEASE
spring-cloud-stream-codec-1.2.2.RELEASE
spring-kafka-1.1.6.RELEASE
spring-integration-kafka-2.1.0.RELEASE
spring-integration-core-4.3.10.RELEASE
zookeeper-3.4.8
Kafka version : 0.10.1.1
This is my configuration (cloud-config):
spring:
  autoconfigure:
    exclude: org.springframework.cloud.netflix.metrics.servo.ServoMetricsAutoConfiguration
  kafka:
    consumer:
      enable-auto-commit: false
  cloud:
    stream:
      kafka:
        binder:
          brokers: "${BROKER_HOST:xyz-aws.local:9092}"
          headers:
            - X-B3-TraceId
            - X-B3-SpanId
            - X-B3-Sampled
            - X-B3-ParentSpanId
            - X-Span-Name
            - X-Process-Id
          zkNodes: "${ZOOKEEPER_HOST:120.211.316.261:2181,120.211.317.252:2181}"
        bindings:
          feed_platform_events_input:
            consumer:
              autoCommitOffset: false
      binders:
        xyzkafka:
          type: kafka
      bindings:
        feed_platform_events_input:
          binder: xyzkafka
          destination: platform-events
          group: br-platform-events
I have two main classes:
FeedSink Interface:
package au.com.xyz.proxy.interfaces;

import org.springframework.cloud.stream.annotation.Input;
import org.springframework.messaging.MessageChannel;

public interface FeedSink {

    String FEED_PLATFORM_EVENTS_INPUT = "feed_platform_events_input";

    @Input(FeedSink.FEED_PLATFORM_EVENTS_INPUT)
    MessageChannel feedlatformEventsInput();
}
EventConsumer
package au.com.xyz.proxy.consumer;

@Slf4j
@EnableBinding(FeedSink.class)
public class EventConsumer {

    public static final String SUCCESS_MESSAGE =
            "SEND-SUCCESS : Successfully sent message to platform.";
    public static final String FAULT_MESSAGE = "SOAP-FAULT Code: {}, Description: {}";
    public static final String CONNECT_ERROR_MESSAGE = "CONNECT-ERROR Error Details: {}";
    public static final String EMPTY_NOTIFICATION_ERROR_MESSAGE =
            "EMPTY-NOTIFICATION-ERROR Empty Event Received from platform";

    @Autowired
    private CapPointService service;

    /**
     * Method associated with stream to process message.
     */
    @StreamListener(FeedSink.FEED_PLATFORM_EVENTS_INPUT)
    public void message(final @Payload EventNotification eventNotification,
            final @Header(KafkaHeaders.ACKNOWLEDGMENT) Acknowledgment acknowledgment) {
        String caseMilestone = "UNKNOWN";
        if (!ObjectUtils.isEmpty(eventNotification)) {
            SysMessage sysMessage = processPayload(eventNotification);
            caseMilestone = sysMessage.getCaseMilestone();
            try {
                ClientResponse response = service.sendPayload(sysMessage);
                if (response.hasFault()) {
                    Fault faultDetails = response.getFaultDetails();
                    log.error(FAULT_MESSAGE, faultDetails.getCode(), faultDetails.getDescription());
                } else {
                    log.info(SUCCESS_MESSAGE);
                }
                acknowledgment.acknowledge();
            } catch (Exception e) {
                log.error(CONNECT_ERROR_MESSAGE, e.getMessage());
            }
        } else {
            log.error(EMPTY_NOTIFICATION_ERROR_MESSAGE);
            acknowledgment.acknowledge();
        }
    }

    private SysMessage processPayload(final EventNotification eventNotification) {
        Gson gson = new Gson();
        String jsonString = gson.toJson(eventNotification.getData());
        log.info("Consumed message for platform events with payload : {} ", jsonString);
        return gson.fromJson(jsonString, SysMessage.class);
    }
}
I have set the auto-commit property for Kafka and the Spring container to false.
As you can see in the EventConsumer class, I acknowledge only when service.sendPayload succeeds and there are no exceptions, because I want the container to move the offset and poll for the next records.
What I have observed is:
Scenario 1 - An exception is thrown and no new messages are published to Kafka. There is no retry of the processing and there seems to be no activity, even once the underlying issue is resolved (the issue I am referring to is downstream server unavailability). Is there a way to retry the processing n times and then give up? Note this means retrying the processing, or re-polling from the last committed offset; it is not about the Kafka instance being unavailable.
If I restart the service (EC2 instance), processing resumes from the offset of the last successful acknowledgment.
Scenario 2 - An exception happens and a subsequent message is then pushed to Kafka. I see the new message processed and the offset moved, which means I lost the message that was never acknowledged. So the question is: given that I handle the acknowledgment, how do I control reading from the last commit rather than just the latest message? I assume a poll happens internally and it does not take into account, or does not know about, the last message not being acknowledged. I don't think there are multiple threads reading from Kafka, and I don't know how the @Input and @StreamListener annotations are controlled. I assume the thread is controlled by the consumer.concurrency property, which defaults to 1.
So I have done research and found a lot of links but unfortunately none of them answers my specific questions.
I looked at (https://github.com/spring-cloud/spring-cloud-stream/issues/575)
which has a comment from Marius (https://stackoverflow.com/users/809122/marius-bogoevici):
Do note that Kafka does not provide individual message acking, which
means that acknowledgment translates into updating the latest consumed
offset to the offset of the acked message (per topic/partition). That
means that if you're acking messages from the same topic partition out
of order, a message can 'ack' all the messages before it.
I am not sure whether ordering is an issue when there is only one thread.
Apologies for the long post, but I wanted to provide enough information. The main thing is that I am trying to avoid losing messages when consuming from Kafka, and I am trying to see whether spring-cloud-stream-binder-kafka can do the job or whether I have to look at alternatives.
Update 6th July 2018
I saw this post https://github.com/spring-projects/spring-kafka/issues/431
Is this a better approach to my problem? I can try the latest version of spring-kafka.
#KafkaListener(id = "qux", topics = "annotated4", containerFactory = "kafkaManualAckListenerContainerFactory",
containerGroup = "quxGroup")
public void listen4(#Payload String foo, Acknowledgment ack, Consumer<?, ?> consumer) {
Will this help in controlling the offset so it is set to the last successfully processed record? How can I do that from the listen method - consumer.seekToEnd(); - and then how will the listen method reset to get that record?
Does putting the Consumer in the signature provide support for getting a handle on the consumer, or do I need to do anything more?
Should I use Acknowledgment or consumer.commitSync()?
What is the significance of containerFactory? Do I have to define it as a bean?
Do I need @EnableKafka and @Configuration for the above approach to work, bearing in mind the application is a Spring Boot application?
By adding Consumer to the listen method, do I avoid having to implement the ConsumerAware interface?
Last but not least, is it possible to provide an example of the above approach, if it is feasible?
Update 12 July 2018
Thanks Gary (https://stackoverflow.com/users/1240763/gary-russell) for providing the tip of using maxAttempts. I have used that approach, and I am able to achieve exactly-once delivery and preserve the order of the messages.
My updated cloud-config:
spring:
  autoconfigure:
    exclude: org.springframework.cloud.netflix.metrics.servo.ServoMetricsAutoConfiguration
  kafka:
    consumer:
      enable-auto-commit: false
  cloud:
    stream:
      kafka:
        binder:
          brokers: "${BROKER_HOST:xyz-aws.local:9092}"
          headers:
            - X-B3-TraceId
            - X-B3-SpanId
            - X-B3-Sampled
            - X-B3-ParentSpanId
            - X-Span-Name
            - X-Process-Id
          zkNodes: "${ZOOKEEPER_HOST:120.211.316.261:2181,120.211.317.252:2181}"
        bindings:
          feed_platform_events_input:
            consumer:
              autoCommitOffset: false
      binders:
        xyzkafka:
          type: kafka
      bindings:
        feed_platform_events_input:
          binder: xyzkafka
          destination: platform-events
          group: br-platform-events
          consumer:
            maxAttempts: 2147483647
            backOffInitialInterval: 1000
            backOffMaxInterval: 300000
            backOffMultiplier: 2.0
The EventConsumer remains the same as my initial implementation, except for rethrowing the error so that the container knows the processing has failed. If you just catch it, there is no way for the container to know that message processing failed. By doing acknowledgement.acknowledge you are only controlling the offset commit; in order for a retry to happen you must throw the exception. Don't forget to set the Kafka client auto-commit property and the Spring (container level) autoCommitOffset property to false. That's it.
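For clarity, the rethrow described above looks roughly like this inside the listener's try/catch (a sketch based on the code above; the wrapping exception type is just an example):
try {
    ClientResponse response = service.sendPayload(sysMessage);
    // ... handle fault vs success as before ...
    acknowledgment.acknowledge();
} catch (Exception e) {
    log.error(CONNECT_ERROR_MESSAGE, e.getMessage());
    // Rethrow so the binder retry (maxAttempts / backOff*) kicks in and the
    // offset is not committed for this record.
    throw new RuntimeException("Processing failed, will be retried", e);
}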
As explained by Marius, Kafka only maintains an offset in the log. If you process the next message, and update the offset; the failed message is lost.
You can send the failed message to a dead-letter topic (set enableDlq to true).
Recent versions of Spring Kafka (2.1.x) have special error handlers: ContainerStoppingErrorHandler, which stops the container when an exception occurs, and SeekToCurrentErrorHandler, which causes the failed message to be redelivered.
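For a raw spring-kafka consumer (the @KafkaListener approach from the question), wiring one of those handlers into the container factory looks roughly like this; a sketch for later 2.x versions where the factory exposes setErrorHandler, with the bean name and type parameters assumed rather than taken from the question:
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaManualAckListenerContainerFactory(
        ConsumerFactory<String, String> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    // Re-seek and redeliver the failed record instead of moving past it.
    factory.setErrorHandler(new SeekToCurrentErrorHandler());
    // The container's ack mode also needs to be MANUAL for an Acknowledgment
    // parameter to be injected into the @KafkaListener method.
    return factory;
}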

Symfony2, RabbitMQ: I'm lost

I've installed the RabbitMQ Bundle already. Now here is what I want to do:
Controller: creates a Redis list, pushes a message to the client, and afterwards sends a message into a queue so that a heavier background task can be executed asynchronously.
But I'm lost.
$msg = array('userid' => $someid);
$this->get('old_sound_rabbit_mq.task_example_producer')->publish(serialize($msg));
This will send some data to a producer? And the corresponding consumer will execute the heavy background task (DB queries etc., based on the "userid" from the producer)? Do I need a callback? What is the queue? Does the queue forward the messages from the producer to the consumer one by one? And can I have multiple consumers to handle more messages at the same time?
Kinda old post but in case someone else comes looking for help:
It seems that you are using the old_sound RabbitMQ bundle. It has somewhat helpful tutorial-style documentation here: https://github.com/videlalvaro/RabbitMqBundle
It helped me get going with rabbitmq in symfony.
In a nutshell:
1: You need to have some configration in the config.yml-file. For example:
# RabbitMQ Configuration
old_sound_rabbit_mq:
    connections:
        default:
            host: 'localhost'
            port: 5672
            user: 'guest'
            password: 'guest'
            vhost: '/'
            lazy: true
            connection_timeout: 3
            read_write_timeout: 3
            # requires php-amqplib v2.4.1+ and PHP5.4+
            keepalive: false
            # requires php-amqplib v2.4.1+
            heartbeat: 0
    producers:
        task_example:
            connection: default
            exchange_options: {name: 'task_example', type: direct}
    consumers:
        task_example:
            connection: default
            exchange_options: {name: 'task_example', type: direct}
            queue_options: {name: 'task_example'}
            callback: test_class
Here the connection is defined, and one producer and one consumer. Both use the same "default" connection.
You will also need to define the callback as a service:
# My services
services:
    test_class:
        class: AppBundle\Testclasses\rabbittest\testclass
        arguments: ['@logger']
2: Now you need to have the consumer, which is the callback option here: the "test_class". A simple consumer could look like this:
namespace AppBundle\Testclasses\rabbittest;

use OldSound\RabbitMqBundle\RabbitMq\ConsumerInterface;
use PhpAmqpLib\Message\AMQPMessage;

class testclass implements ConsumerInterface
{
    private $logger; // Monolog-logger.

    // Init:
    public function __construct( $logger )
    {
        $this->logger = $logger;
        echo "testclass is listening...";
    }

    public function execute(AMQPMessage $msg)
    {
        $message = unserialize($msg->body);
        $userid = $message['userid'];
        // Do something with the data. Save to db, write a log, whatever.
    }
}
3: And now the producer that you already had:
$msg = array('userid' => $someid);
$this->get('old_sound_rabbit_mq.task_example_producer')->publish(serialize($msg));
4: And the final piece of the puzzle is running the consumer. The consumer is started from the console; I was developing on a Windows machine and used Windows PowerShell. You can start up the consumer like this:
php app/console rabbitmq:consumer task_example
And it should give you the text:
testclass is listening...
if you copied that from this example. That text is not necessary; without it the console will output nothing but will still work just fine, unless some error occurs.
But remember that you have to be in the directory where your symfony-application is. For example:
C:\wamp\www\symfony\my_project
A queue is a list of messages you want processed.
An exchange is a router of messages to queues. (you can have multiple queues listing to the same exchange, for example).
A producer pushes messages to an exchange.
A consumer reads messages from the queue.
Normally you have one producer and many consumers to process the messages in parallel.
The code you posted demonstrates a producer publishing to an exchange.
RabbitMQBundle expects you to have in-depth knowledge of the broker internals. That's not always what you want.
There is a solution that hides all those nitty-gritty details, leaving a simple yet powerful interface for you to use. The doc is short; if you follow it you get a working solution with zero knowledge of how RabbitMQ actually works.
P.S. Here's the blog post on how to migrate from RabbitMQBundle to EnqueueBundle.
