Spring Cloud Stream Kafka: How to access the RecordMetadata Kafka header after producing a message to a Kafka topic - spring-cloud-stream-binder-kafka

I want to get the offset and partition information after I produce a message to a Kafka topic.
I read through the Spring Cloud Stream Kafka binder documentation and found that this can be achieved by fetching the RECORD_METADATA Kafka header.
From Spring documentation: (https://cloud.spring.io/spring-cloud-static/spring-cloud-stream-binder-kafka/3.0.0.RELEASE/reference/html/spring-cloud-stream-binder-kafka.html#kafka-producer-properties)
recordMetadataChannel
The bean name of a MessageChannel to which successful send results should be sent; the bean must exist in the application context. The message sent to the channel is the sent message (after conversion, if any) with an additional header KafkaHeaders.RECORD_METADATA. The header contains a RecordMetadata object provided by the Kafka client; it includes the partition and offset where the record was written in the topic.
RecordMetadata meta = sendResultMsg.getHeaders().get(KafkaHeaders.RECORD_METADATA, RecordMetadata.class);
I have configured my output binding's channel name as the record metadata channel in the property file:
spring.cloud.stream.kafka.bindings.acknowledgement-out.producer.record-metadata-channel = acknowledgement-out
My customized interface and producer class are below:
public interface OutputAcknowledgement {

    @Output("acknowledgement-out")
    MessageChannel output();
}
Producer class:
@EnableBinding(OutputAcknowledgement.class)
public class AcknowledgementProducer {

    @Autowired
    OutputAcknowledgement outputAcknowledgement;

    public Boolean produce(Acknowledgement acknowledgement) {
        Message<Acknowledgement> msg = MessageBuilder.withPayload(acknowledgement).build();
        boolean val = outputAcknowledgement.output().send(msg);
        RecordMetadata recordMetadata = msg.getHeaders().get(KafkaHeaders.RECORD_METADATA, RecordMetadata.class);
        return val;
    }
}
I am getting null for recordMetadata.
Please suggest whether my approach is correct.

You're getting null because the header doesn't exist in that message object at the time you're accessing it. According to the docs, the metadata is only provided after a successful publish. See this answer on how to get record metadata by providing a handler/consumer for the record metadata channel; a sketch of such a handler follows.
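A minimal sketch of such a handler, assuming the binding from the question and a dedicated channel bean named metaChannel (an assumed name; point record-metadata-channel at this bean rather than at the output binding itself):

import org.apache.kafka.clients.producer.RecordMetadata;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.integration.channel.DirectChannel;
import org.springframework.kafka.support.KafkaHeaders;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;

@Configuration
public class RecordMetadataConfig {

    // Channel the binder sends successful send results to; wire it up with
    // spring.cloud.stream.kafka.bindings.acknowledgement-out.producer.record-metadata-channel=metaChannel
    @Bean
    public MessageChannel metaChannel() {
        return new DirectChannel();
    }

    // Receives a copy of the sent message, with RECORD_METADATA added,
    // only after the publish has succeeded.
    @ServiceActivator(inputChannel = "metaChannel")
    public void handleRecordMetadata(Message<?> sent) {
        RecordMetadata meta = sent.getHeaders()
                .get(KafkaHeaders.RECORD_METADATA, RecordMetadata.class);
        System.out.println("partition=" + meta.partition() + ", offset=" + meta.offset());
    }
}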

Related

Corda - Passing Anonymous Implementations Over RPC

Is it possible to pass anonymous object implementations over Corda's RPC interface? - For example:
Workflow
@CordaSerializable
interface ExampleInterface {
    val number: Int
}

@StartableByRPC
class SquareFlow(private val example: ExampleInterface) : FlowLogic<Int>() {
    @Suspendable
    override fun call(): Int = example.number * example.number
}
RPC Client
val value = object : ExampleInterface {
    override val number: Int = 5
}
return rpc.startFlowDynamic(SquareFlow::class.java, value).returnValue.getOrThrow()
Exception
net.corda.client.rpc.RPCException: java.util.List<*> -> Unable to create an object serializer for type class com.example.client.ExampleService$square$value$1: No unique deserialization constructor can be identified
Either annotate a constructor for this type with @ConstructorForDeserialization, or provide a custom serializer for it
Whilst this is an example, what I actually want to do is pass object : TypeReference<SomeType>(){} over RPC.
As per Corda's docs, anonymous objects are not supported by the AMQP serialization used by the RPC client, which you can read about here. This is likely because a public constructor is needed for serialization and deserialization, I imagine.
Additionally, I assume you are referencing Jackson's TypeReference class, which uses generics that are only available at compile time. Due to how Corda's serialization works, the generic type won't be available at runtime, so when the TypeReference is passed from the RPC application to the Corda node, the node will not know what the generic is. I haven't tried this myself, but I believe some exception will be thrown whenever the flow attempts to suspend, or possibly earlier in the RPC client.
You can probably pass a Jackson JavaType to a flow without a problem, as I believe it is whitelisted as serializable in Corda by default; if not, look at how to whitelist the class here using the SerializationWhitelist. You can obtain the JavaType by instantiating a TypeFactory and calling constructType(new TypeReference<SomeType>() {}), then just pass the JavaType to your flow's constructor, as in the sketch below.
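A quick sketch of that construction, assuming Jackson is on the classpath and that SomeType stands in for your target type:

import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.JavaType;
import com.fasterxml.jackson.databind.type.TypeFactory;

// Resolve the generic into a concrete, serializable JavaType on the client side.
JavaType javaType = TypeFactory.defaultInstance()
        .constructType(new TypeReference<SomeType>() {});

// Then pass javaType to your flow's constructor over RPC instead of the
// anonymous TypeReference, e.g. (MyFlow is a hypothetical flow taking a JavaType):
// rpc.startFlowDynamic(MyFlow.class, javaType).getReturnValue().get();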

Spring cloud Stream - kafka - Null Acknowledgement Header

I want to manually Commit the offset using spring cloud stream - only when the message processing is successful.
Here is my code - application.yml & Handler Class
public void process(Message<?> message) {
    System.out.println(message.getPayload());
    Acknowledgment acknowledgment = message.getHeaders().get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);
    if (acknowledgment != null) {
        System.out.println("Acknowledgment provided");
        acknowledgment.acknowledge();
    }
}
---------------------------------------------------------------------------------
spring:
  application:
    name: springCloud
  cloud:
    stream:
      default-binder: kafka
      kafka:
        bindings:
          myChannel:
            consumer:
              autoCommitOffset: false
But my Acknowledgment object is null; the header 'kafka_acknowledgement' itself is NOT present.
How to get the acknowledgment object?
My requirement is to commit the offset ONLY if the processing is successful, if the processing fails I do NOT want to pop the message from the channel so that it can be read later.
Will the above code be sufficient to achieve this?
What version are you using?
In 3.1, autoCommitOffset was deprecated in favor of setting the ackMode (to manual in this case); however, it looks like autoCommitOffset is now completely ignored rather than deprecated.
When using a yaml file, please use the property 'auto-commit-offset'.
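For 3.1, a property sketch of the ackMode replacement, assuming the binding from the question is named myChannel:

spring:
  cloud:
    stream:
      kafka:
        bindings:
          myChannel:
            consumer:
              ackMode: MANUAL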

How to respond with a success JSON format after completing a transaction in Corda

Hi everyone, I am working on a project in which I need to send a response in JSON format to the CLI saying that the transaction has completed. Let me give you an example. Consider that I have started a flow, start ExampleFlow pojo: {iouValue: 7}, otherParty: "O=PartyB,L=London,C=GB", and the result will be:
Starting
Generating transaction based on new IOU.
Verifying contract constraints.
Signing transaction with our private key.
Gathering the counter party's signature.
Collecting signatures from counterparties.
Verifying collected signatures.
Obtaining notary signature and recording transaction.
Broadcasting transaction to participants
Done
Flow completed with result: SignedTransaction(id=F95406D901209BA77396C1A4D375585C6E051414EE22BE441FC02E5AE147A050)
but what I want is a JSON-formatted result, not all of it, but something like this:
{response: success}
I just want some success response in JSON format.
I am using the IOU project.
Thanks.
You can achieve that by establishing an RPC connection with your node; call the flow, then return the JSON object.
There are a couple of approaches that you can follow, and I recommend that you go through the samples repository https://github.com/corda/samples to explore them:
Create a webserver (Spring Boot application) that serves REST APIs that call your flows and return a JSON object: https://github.com/corda/samples/tree/release-V4/spring-webserver
Create a simple Java app that establishes an RPC connection with your node and serves as a client to call a certain method/flow: https://github.com/corda/samples/blob/release-V4/cordapp-example/clients/src/main/java/com/example/server/JavaClientRpc.java
If you follow the webserver sample, you can add a method to your controller that does something like:
@GetMapping(value = "/my-api", produces = MediaType.APPLICATION_JSON_VALUE)
private ResponseEntity<YourObject> getSomething() {
    // Some code that calls your flow and returns an instance of YourObject.
    return ResponseEntity.ok().body(yourObject);
}
So I got the answer. What you need to do is add this dependency in the client build.gradle:
cordaCompile "net.corda:corda-jackson:$corda_release_version"
After that, you just need to implement this code snippet:
String json = "";
try {
    ObjectMapper mapper = JacksonSupport.createNonRpcMapper();
    json = mapper.writeValueAsString(results);
} catch (JsonProcessingException e) {
    e.printStackTrace();
}
return json;
results can be any datatype you want to convert to JSON.

when to use RecoveryCallback vs KafkaListenerErrorHandler

I'm trying to understand when I should use org.springframework.retry.RecoveryCallback and when org.springframework.kafka.listener.KafkaListenerErrorHandler.
As of today, I'm using a class (implementing org.springframework.retry.RecoveryCallback) to log the error message and send the message to the DLT, and it's working. For sending a message to the DLT, I'm using Spring's KafkaTemplate, and then I came across KafkaListenerErrorHandler and DeadLetterPublishingRecoverer. Now, can you please suggest how I should use KafkaListenerErrorHandler and DeadLetterPublishingRecoverer? Can these replace the RecoveryCallback?
Here is my current kafkaListenerContainerFactory code
@Bean
public ConcurrentKafkaListenerContainerFactory<String, Object> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, Object> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(primaryConsumerFactory());
    factory.setRetryTemplate(retryTemplate());
    factory.setRecoveryCallback(recoveryCallback);
    factory.getContainerProperties().setAckMode(AckMode.RECORD);
    factory.setConcurrency(1);
    factory.getContainerProperties().setMissingTopicsFatal(false);
    return factory;
}
If it's working as you want now, why change it?
There are several layers and you can choose which one to do the error handling, depending on your needs.
KafkaListenerErrorHandler would be invoked for each delivery attempt within the retry, so you typically won't use it with retry.
Retry RecoveryCallback is invoked after retries are exhausted (or immediately if you have classified an exception as not retryable).
ErrorHandler - is in the container and is invoked if any listener throws an exception, not just @KafkaListeners.
With recent versions of the framework you can completely replace listener level retry with a SeekToCurrentErrorHandler configured with a DeadLetterPublishingRecoverer and a BackOff.
The DeadLetterPublishingRecoverer is intended for use in a container error handler since it needs the raw ConsumerRecord<?, ?>.
The KafkaListenerErrorHandler only has access to the spring-messaging Message<?> that is converted from the ConsumerRecord<?, ?>.
To add on to the excellent context from @GaryRussell, this is what I am currently using:
I am handling any errors (a.k.a. exceptions) like this:
factory.setErrorHandler(new SeekToCurrentErrorHandler(
new DeadLetterPublishingRecoverer(kafkaTemplate), new FixedBackOff(0L, 0L)));
And to print this error, I have a listener on the .DLT topic, and I print the exception stack trace that is stored in the header, like so:
@KafkaListener(id = "MY_ID", topics = MY_TOPIC + ".DLT")
public void listenDlt(ConsumerRecord<String, SomeClassName> consumerRecord,
        @Header(KafkaHeaders.DLT_EXCEPTION_STACKTRACE) String exceptionStackTrace) {
    logger.error(exceptionStackTrace);
}
Note: I am using logger.error, because I am redirecting all error messages to an error log file that is being monitored.
BONUS:
If you set the following:
logging.level.org.springframework.kafka=DEBUG
You will see this in your console/log:
xxx [org.springframework.kafka.KafkaListenerEndpointContainer#7-2-C-1] DEBUG o.s.k.listener.SeekToCurrentErrorHandler - Skipping seek of: ConsumerRecord xxx
xxx [kafka-producer-network-thread | producer-3] DEBUG o.s.k.l.DeadLetterPublishingRecoverer - Successful dead-letter publication: SendResult xxx
If you have a better way to log, I would appreciate your comment.
Thanks!
Cheers

Exactly-once delivery: is it possible through spring-cloud-stream-binder-kafka or spring-kafka, and which one to use?

I am trying to achieve exactly once delivery using spring-cloud-stream-binder-kafka in a spring boot application.
The versions I am using are:
spring-cloud-stream-binder-kafka-core-1.2.1.RELEASE
spring-cloud-stream-binder-kafka-1.2.1.RELEASE
spring-cloud-stream-codec-1.2.2.RELEASE
spring-kafka-1.1.6.RELEASE
spring-integration-kafka-2.1.0.RELEASE
spring-integration-core-4.3.10.RELEASE
zookeeper-3.4.8
Kafka version : 0.10.1.1
This is my configuration (cloud-config):
spring:
  autoconfigure:
    exclude: org.springframework.cloud.netflix.metrics.servo.ServoMetricsAutoConfiguration
  kafka:
    consumer:
      enable-auto-commit: false
  cloud:
    stream:
      kafka:
        binder:
          brokers: "${BROKER_HOST:xyz-aws.local:9092}"
          headers:
            - X-B3-TraceId
            - X-B3-SpanId
            - X-B3-Sampled
            - X-B3-ParentSpanId
            - X-Span-Name
            - X-Process-Id
          zkNodes: "${ZOOKEEPER_HOST:120.211.316.261:2181,120.211.317.252:2181}"
        bindings:
          feed_platform_events_input:
            consumer:
              autoCommitOffset: false
      binders:
        xyzkafka:
          type: kafka
      bindings:
        feed_platform_events_input:
          binder: xyzkafka
          destination: platform-events
          group: br-platform-events
I have two main classes:
FeedSink Interface:
package au.com.xyz.proxy.interfaces;

import org.springframework.cloud.stream.annotation.Input;
import org.springframework.messaging.MessageChannel;

public interface FeedSink {

    String FEED_PLATFORM_EVENTS_INPUT = "feed_platform_events_input";

    @Input(FeedSink.FEED_PLATFORM_EVENTS_INPUT)
    MessageChannel feedPlatformEventsInput();
}
EventConsumer
package au.com.xyz.proxy.consumer;

@Slf4j
@EnableBinding(FeedSink.class)
public class EventConsumer {

    public static final String SUCCESS_MESSAGE =
            "SEND-SUCCESS : Successfully sent message to platform.";
    public static final String FAULT_MESSAGE = "SOAP-FAULT Code: {}, Description: {}";
    public static final String CONNECT_ERROR_MESSAGE = "CONNECT-ERROR Error Details: {}";
    public static final String EMPTY_NOTIFICATION_ERROR_MESSAGE =
            "EMPTY-NOTIFICATION-ERROR Empty Event Received from platform";

    @Autowired
    private CapPointService service;

    /**
     * Method associated with the stream to process messages.
     */
    @StreamListener(FeedSink.FEED_PLATFORM_EVENTS_INPUT)
    public void message(@Payload final EventNotification eventNotification,
            @Header(KafkaHeaders.ACKNOWLEDGMENT) final Acknowledgment acknowledgment) {
        String caseMilestone = "UNKNOWN";
        if (!ObjectUtils.isEmpty(eventNotification)) {
            SysMessage sysMessage = processPayload(eventNotification);
            caseMilestone = sysMessage.getCaseMilestone();
            try {
                ClientResponse response = service.sendPayload(sysMessage);
                if (response.hasFault()) {
                    Fault faultDetails = response.getFaultDetails();
                    log.error(FAULT_MESSAGE, faultDetails.getCode(), faultDetails.getDescription());
                } else {
                    log.info(SUCCESS_MESSAGE);
                }
                acknowledgment.acknowledge();
            } catch (Exception e) {
                log.error(CONNECT_ERROR_MESSAGE, e.getMessage());
            }
        } else {
            log.error(EMPTY_NOTIFICATION_ERROR_MESSAGE);
            acknowledgment.acknowledge();
        }
    }

    private SysMessage processPayload(final EventNotification eventNotification) {
        Gson gson = new Gson();
        String jsonString = gson.toJson(eventNotification.getData());
        log.info("Consumed message for platform events with payload : {} ", jsonString);
        SysMessage sysMessage = gson.fromJson(jsonString, SysMessage.class);
        return sysMessage;
    }
}
I have set the autocommit property for the Kafka client and the Spring container to false.
As you can see in the EventConsumer class, I call acknowledge only in cases where service.sendPayload is successful and there are no exceptions, and I want the container to move the offset and poll for the next records.
What I have observed is:
Scenario 1 - An exception is thrown and no new messages are published to Kafka. There is no retry to process the message, and there seems to be no activity even after the underlying issue is resolved (the issue I am referring to is downstream server unavailability). Is there a way to retry the processing n times and then give up? Note this is a retry of processing, or a re-poll from the last committed offset; this is not about the Kafka instance being unavailable.
If I restart the service (EC2 instance), then processing resumes from the offset of the last successful acknowledge.
Scenario 2 - An exception happens and then a subsequent message is pushed to Kafka. I see the new message processed and the offset moved, which means I lost the message that was not acknowledged. So the question is: given that I have handled the Acknowledgment, how do I control the consumer to read from the last commit, not just the latest message, and process it? I am assuming a poll happens internally that did not take into account, or did not know about, the last message not being acknowledged. I don't think there are multiple threads reading from Kafka, and I don't know how the @Input and @StreamListener annotations are controlled. I assume the thread is controlled by the property consumer.concurrency, which defaults to 1.
So I have done research and found a lot of links, but unfortunately none of them answers my specific questions.
I looked at (https://github.com/spring-cloud/spring-cloud-stream/issues/575),
which has a comment from Marius (https://stackoverflow.com/users/809122/marius-bogoevici):
Do note that Kafka does not provide individual message acking, which
means that acknowledgment translates into updating the latest consumed
offset to the offset of the acked message (per topic/partition). That
means that if you're acking messages from the same topic partition out
of order, a message can 'ack' all the messages before it.
I am not sure if ordering is an issue when there is only one thread.
Apologies for the long post, but I wanted to provide enough information. The main thing is that I am trying to avoid losing messages when consuming from Kafka, and I am trying to see if spring-cloud-stream-binder-kafka can do the job or if I have to look at alternatives.
Update 6th July 2018
I saw this post https://github.com/spring-projects/spring-kafka/issues/431
Is this a better approach to my problem? I can try the latest version of spring-kafka.
@KafkaListener(id = "qux", topics = "annotated4", containerFactory = "kafkaManualAckListenerContainerFactory",
        containerGroup = "quxGroup")
public void listen4(@Payload String foo, Acknowledgment ack, Consumer<?, ?> consumer) {
Will this help in controlling the offset to be set to where the last successfully processed record was? How can I do that from the listen method? consumer.seekToEnd(); and then how will the listen method reset to get that record?
Does putting the Consumer in the signature provide support to get a handle to the consumer, or do I need to do anything more?
Should I use Acknowledgment or consumer.commitSync()?
What is the significance of containerFactory? Do I have to define it as a bean?
Do I need @EnableKafka and @Configuration for the above approach to work, bearing in mind the application is a Spring Boot application?
By adding Consumer to the listen method, do I avoid needing to implement the ConsumerAware interface?
Last but not least, is it possible to provide some example of the above approach, if it is feasible?
Update 12 July 2018
Thanks Gary (https://stackoverflow.com/users/1240763/gary-russell) for providing the tip of using maxAttempts. I have used that approach, and I am able to achieve exactly-once delivery and preserve the order of the messages.
My updated cloud-config:
spring:
  autoconfigure:
    exclude: org.springframework.cloud.netflix.metrics.servo.ServoMetricsAutoConfiguration
  kafka:
    consumer:
      enable-auto-commit: false
  cloud:
    stream:
      kafka:
        binder:
          brokers: "${BROKER_HOST:xyz-aws.local:9092}"
          headers:
            - X-B3-TraceId
            - X-B3-SpanId
            - X-B3-Sampled
            - X-B3-ParentSpanId
            - X-Span-Name
            - X-Process-Id
          zkNodes: "${ZOOKEEPER_HOST:120.211.316.261:2181,120.211.317.252:2181}"
        bindings:
          feed_platform_events_input:
            consumer:
              autoCommitOffset: false
      binders:
        xyzkafka:
          type: kafka
      bindings:
        feed_platform_events_input:
          binder: xyzkafka
          destination: platform-events
          group: br-platform-events
          consumer:
            maxAttempts: 2147483647
            backOffInitialInterval: 1000
            backOffMaxInterval: 300000
            backOffMultiplier: 2.0
The EventConsumer remains the same as my initial implementation, except for rethrowing the error so the container knows the processing has failed; if you just catch it, there is no way the container knows that message processing failed. By calling acknowledgment.acknowledge() you are only controlling the offset commit; in order for a retry to happen, you must throw the exception. Don't forget to set the Kafka client autocommit property and the Spring (container level) autoCommitOffset property to false. That's it. A sketch of the change follows.
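For reference, a minimal sketch of that change against the try/catch from the EventConsumer above; the only difference from the original is the rethrow, so the binder's retry (maxAttempts/backOff) kicks in and the offset is never committed for the failed record:

try {
    ClientResponse response = service.sendPayload(sysMessage);
    if (response.hasFault()) {
        Fault faultDetails = response.getFaultDetails();
        log.error(FAULT_MESSAGE, faultDetails.getCode(), faultDetails.getDescription());
    } else {
        log.info(SUCCESS_MESSAGE);
    }
    // Commit the offset only after successful processing.
    acknowledgment.acknowledge();
} catch (Exception e) {
    log.error(CONNECT_ERROR_MESSAGE, e.getMessage());
    // Rethrow so the container/binder knows processing failed and retries.
    throw new RuntimeException(e);
}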
As explained by Marius, Kafka only maintains an offset in the log. If you process the next message and update the offset, the failed message is lost.
You can send the failed message to a dead-letter topic (set enableDlq to true); see the property sketch below.
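A property sketch, assuming the binding from the question; by default the dead-letter topic is named error.<destination>.<group>:

spring:
  cloud:
    stream:
      kafka:
        bindings:
          feed_platform_events_input:
            consumer:
              enableDlq: true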
Recent versions of Spring Kafka (2.1.x) have special error handlers: ContainerStoppingErrorHandler, which stops the container when an exception occurs, and SeekToCurrentErrorHandler, which will cause the failed message to be redelivered; a configuration sketch follows.
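A minimal container-factory sketch for spring-kafka 2.1+ (the String key/value types and the injected ConsumerFactory bean are assumptions):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.listener.SeekToCurrentErrorHandler;

@Configuration
public class RetryConfig {

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> retryingFactory(
            ConsumerFactory<String, String> consumerFactory) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        // Re-seek the failed record so the next poll redelivers it instead of skipping it.
        factory.setErrorHandler(new SeekToCurrentErrorHandler());
        return factory;
    }
}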
