BackoffExceptions are logged at error level when using RetryTopicConfiguration - spring-kafka

I am a happy user of the recently added RetryTopicConfiguration; there is, however, a small issue that is bothering me.
The setup I use looks like:
@Bean
public RetryTopicConfiguration retryTopicConfiguration(
        KafkaTemplate<String, String> template,
        @Value("${kafka.topic.in}") String topicToInclude,
        @Value("${spring.application.name}") String appName) {
    return RetryTopicConfigurationBuilder
            .newInstance()
            .fixedBackOff(5000L)
            .maxAttempts(3)
            .retryTopicSuffix("-" + appName + ".retry")
            .suffixTopicsWithIndexValues()
            .dltSuffix("-" + appName + ".dlq")
            .includeTopic(topicToInclude)
            .dltHandlerMethod(KAFKA_EVENT_LISTENER, "handleDltEvent")
            .create(template);
}
When a listener throws an exception that triggers a retry, the DefaultErrorHandler logs a KafkaBackoffException at error level.
For a similar problem it was suggested to use a ListenerContainerFactoryConfigurer, yet this does not remove all error logs, since I still see the following in my logs:
2022-04-02 17:34:33.340 ERROR 8054 --- [e.retry-0-0-C-1] o.s.kafka.listener.DefaultErrorHandler : Recovery of record (topic-spring-kafka-logging-issue.retry-0-0#0) failed
org.springframework.kafka.listener.ListenerExecutionFailedException: Listener failed; nested exception is org.springframework.kafka.listener.KafkaBackoffException: Partition 0 from topic topic-spring-kafka-logging-issue.retry-0 is not ready for consumption, backing off for approx. 4468 millis.
Can the log-level be changed, without adding a custom ErrorHandler?
Spring-Boot version: 2.6.6
Spring-Kafka version: 2.8.4
JDK version: 11
Sample project: here

Thanks for such a complete question. This is a known issue in Spring for Apache Kafka 2.8.4, caused by the new feature that combines blocking and non-blocking retries, and it has been fixed for 2.8.5.
The workaround is to clear the blocking-retries exception classification, such as:
@Bean(name = RetryTopicInternalBeanNames.LISTENER_CONTAINER_FACTORY_CONFIGURER_NAME)
public ListenerContainerFactoryConfigurer lcfc(KafkaConsumerBackoffManager kafkaConsumerBackoffManager,
        DeadLetterPublishingRecovererFactory deadLetterPublishingRecovererFactory,
        @Qualifier(RetryTopicInternalBeanNames.INTERNAL_BACKOFF_CLOCK_BEAN_NAME) Clock clock) {
    ListenerContainerFactoryConfigurer lcfc =
            new ListenerContainerFactoryConfigurer(kafkaConsumerBackoffManager, deadLetterPublishingRecovererFactory, clock);
    lcfc.setBlockingRetriesBackOff(new FixedBackOff(0, 0));
    lcfc.setErrorHandlerCustomizer(eh -> ((DefaultErrorHandler) eh).setClassifications(Collections.emptyMap(), true));
    return lcfc;
}
Please let me know if that works for you.
Thanks.
EDIT:
This workaround disables only blocking retries, which since 2.8.4 can be used alongside non-blocking retries, as per the link in the original answer. The exception classification for the non-blocking retries lives in the DefaultDestinationTopicResolver class, and you can set FATAL exceptions as documented here.
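For reference, a sketch of what that could look like, going by the 2.8.x documentation (MyFatalException is a placeholder for whatever exception you want to treat as fatal):
@Bean(name = RetryTopicInternalBeanNames.DESTINATION_TOPIC_CONTAINER_NAME)
public DefaultDestinationTopicResolver topicResolver(ApplicationContext applicationContext,
        @Qualifier(RetryTopicInternalBeanNames.INTERNAL_BACKOFF_CLOCK_BEAN_NAME) Clock clock) {
    DefaultDestinationTopicResolver ddtr = new DefaultDestinationTopicResolver(clock, applicationContext);
    // Exceptions registered here are classified as fatal: no non-blocking retries, the record goes straight to the DLT.
    ddtr.addNotRetryableExceptions(MyFatalException.class);
    return ddtr;
}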
EDIT: Alternatively, you can use the Spring Kafka 2.8.5-SNAPSHOT version by adding the Spring Snapshot repository such as:
repositories {
    maven {
        url 'https://repo.spring.io/snapshot'
    }
}
dependencies {
    implementation 'org.springframework.kafka:spring-kafka:2.8.5-SNAPSHOT'
}
You can also downgrade to Spring Kafka 2.8.3.
As Gary Russell pointed out, if your application is already in production you should not use the SNAPSHOT version; 2.8.5 is due out in a couple of weeks.
EDIT 2: Glad to hear you’re happy about the feature!

Related

JpaSagaStore in conjunction with Jackson unable to properly store state

In a Spring Boot application, I have the following configuration:
axon:
  axonserver:
    servers: "${AXON_SERVER:localhost}"
  serializer:
    general: jackson
    messages: jackson
    events: jackson
logging.level:
  org.axonframework.modelling.saga: debug
Downsizing the scenario to bare minimum, the relevant portion of Saga class:
@Slf4j
@Saga
@ProcessingGroup("AuctionEventManager")
public class AuctionEventManagerSaga {

    @Autowired
    private transient EventScheduler eventScheduler;

    private ScheduleToken scheduleToken;
    private Instant auctionTimerStart;

    @StartSaga
    @SagaEventHandler(associationProperty = "auctionEventId")
    protected void on(final AuctionEventScheduled event) {
        this.auctionTimerStart = event.getTimerStart();
        // Cancel any pre-existing previous job, since the scheduling thread might be lost upon a crash/restart of the JVM.
        if (this.scheduleToken != null) {
            this.eventScheduler.cancelSchedule(this.scheduleToken);
        }
        this.scheduleToken = this.eventScheduler.schedule(
                this.auctionTimerStart,
                AuctionEventStarted.builder()
                        .auctionEventId(event.getAuctionEventId())
                        .build()
        );
    }

    @EndSaga
    @SagaEventHandler(associationProperty = "auctionEventId")
    protected void on(final AuctionEventStarted event) {
        log.info(
                "[AuctionEventManagerSaga] Current state: {scheduleToken={}, auctionTimerStart={}}",
                this.scheduleToken,
                this.auctionTimerStart
        );
    }
}
In the final compiled class, we end up having 4 properties: log (from @Slf4j), eventScheduler (transient, @Autowired), scheduleToken and auctionTimerStart.
For reference information, here is a sample of the general approach I've been using for both Command and Event classes:
@Value
@Builder
@JsonDeserialize(builder = AuctionEventStarted.AuctionEventStartedBuilder.class)
public class AuctionEventStarted {
    AuctionEventId auctionEventId;

    @JsonPOJOBuilder(withPrefix = "")
    public static final class AuctionEventStartedBuilder {}
}
When executing the code, you get the following output:
2020-05-12 15:40:01.180 DEBUG 1 --- [mandProcessor-4] o.a.m.saga.repository.jpa.JpaSagaStore : Updating saga id c8aff7f7-d47f-4616-8a96-a40044cb7e3b as {}
As soon as the general serializer is changed to xstream, the content is serialized properly, but I face another issue during deserialization, since I have private static final Builder classes generated by Lombok.
So is there a way for Axon to handle these scenarios:
1- Can Axon safely manage Jackson to ignore @Autowired, transient and static properties on @Saga classes? I've attempted to manually define @JsonIgnore on the non-state properties and it still didn't work.
2- Can Axon safely configure XStream to ignore inner classes (mostly Builder classes implemented as private static final)?
Thanks in advance,
EDIT: I'm pursuing a resolution using my preferred serializer: JSON. I attempted to modify the saga class to extend JsonSerializer<AuctionEventManagerSaga>. For that I implemented the methods:
@Override
public Class<AuctionEventManagerSaga> handledType() {
    return AuctionEventManagerSaga.class;
}

@Override
public void serialize(
        final AuctionEventManagerSaga value,
        final JsonGenerator gen,
        final SerializerProvider serializers
) throws IOException {
    gen.writeStartObject();
    gen.writeObjectField("scheduleToken", value.eventScheduler);
    gen.writeObjectField("auctionTimerStart", value.auctionTimerStart);
    gen.writeEndObject();
}
Right now, I have something being serialized, but it has nothing to do with the properties I've defined:
2020-05-12 16:20:01.322 DEBUG 1 --- [mandProcessor-0] o.a.m.saga.repository.jpa.JpaSagaStore : Storing saga id c4b5d94c-7251-40a5-accf-332768b1cacd as {"delegatee":null,"unwrappingSerializer":false}
EDIT 2: Decided to add more insight into the issue I experience when I switch general to use XStream (even though it's somewhat unrelated to the main issue described in the title).
Here is the error it gives me:
2020-05-12 17:08:06.495 DEBUG 1 --- [ault-executor-0] o.a.a.c.command.AxonServerCommandBus : Received command response [message_identifier: "79631ffb-9a87-4224-bed3-a957730dced7"
error_code: "AXONIQ-4002"
error_message {
message: "No converter available\n---- Debugging information ----\nmessage : No converter available\ntype : jdk.internal.misc.InnocuousThread\nconverter : com.thoughtworks.xstream.converters.reflection.ReflectionConverter\nmessage[1] : Unable to make field private static final jdk.internal.misc.Unsafe jdk.internal.misc.InnocuousThread.UNSAFE accessible: module java.base does not \"opens jdk.internal.misc\" to unnamed module #7728643a\n-------------------------------"
location: "1#600b5b87a922"
details: "No converter available\n---- Debugging information ----\nmessage : No converter available\ntype : jdk.internal.misc.InnocuousThread\nconverter : com.thoughtworks.xstream.converters.reflection.ReflectionConverter\nmessage[1] : Unable to make field private static final jdk.internal.misc.Unsafe jdk.internal.misc.InnocuousThread.UNSAFE accessible: module java.base does not \"opens jdk.internal.misc\" to unnamed module #7728643a\n-------------------------------"
}
request_identifier: "2f7020b1-f655-4649-bbe0-d6f458b3c2f3"
]
2020-05-12 17:08:06.505 WARN 1 --- [ault-executor-0] o.a.c.gateway.DefaultCommandGateway : Command 'ACommandClassDispatchedFromSaga' resulted in org.axonframework.commandhandling.CommandExecutionException(No converter available
---- Debugging information ----
message : No converter available
type : jdk.internal.misc.InnocuousThread
converter : com.thoughtworks.xstream.converters.reflection.ReflectionConverter
message[1] : Unable to make field private static final jdk.internal.misc.Unsafe jdk.internal.misc.InnocuousThread.UNSAFE accessible: module java.base does not "opens jdk.internal.misc" to unnamed module @7728643a
-------------------------------)
Still no luck on resolving this...
I've worked on Axon systems where the only Serializer implementation used was the JacksonSerializer too. Mind you, though, this is not what the Axon team recommends. For messages (i.e. commands, events and queries) it makes perfect sense to use JSON as the serialized format, but switching the general Serializer to jackson means you have to litter your domain logic (e.g. your Saga) with Jackson specifics "to make it work".
Regardless, backtracking to my successful use case of Jackson-serialized sagas: in that case we used the correct match of JSON annotations on the fields we desired to take into account (the actual state) and ignored the ones we didn't want deserialized (with either transient or @JsonIgnore). Why both do not seem to work in your scenario is not entirely clear at this stage.
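Applied to your saga, this is roughly the shape I mean (a sketch; only the two state fields are left for Jackson to touch):
@Saga
@ProcessingGroup("AuctionEventManager")
public class AuctionEventManagerSaga {

    // Infrastructure dependency: transient and explicitly ignored, so Jackson never (de)serializes it.
    @JsonIgnore
    @Autowired
    private transient EventScheduler eventScheduler;

    // Actual saga state: plain fields Jackson can serialize and deserialize.
    private ScheduleToken scheduleToken;
    private Instant auctionTimerStart;

    // ... event handlers unchanged ...
}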
What I do recall is that the referenced project's team very clearly decided against Lombok due to "overall weirdness" when it comes to de-/serialization. As a trial it thus might be worth removing all Lombok annotations/logic from the Saga class and seeing if you can de-/serialize it correctly in that state.
If it does work at that point, I think you have found your culprit and can dive in further from there.
I know this isn't an exact answer, but I hope it helps you regardless!
It might be worthwhile to share the repository where this problem occurs; that might make the problem clearer for others too.
I was able to resolve issue #2 when using XStream as the general serializer.
One of the Sagas had an @Autowired dependency property that was not transient.
XStream was throwing some cryptic message, but we managed to track the problem and address it.
As for JSON support, we had no luck. We ended up switching everything to XStream for now, as the company only uses Java and it would be OK to decode the events using XStream.
Not the greatest solution, as we really wanted (and hoped for) JSON to be supported properly out of the box. Mind you, this is in conjunction with using Lombok, which caused the nuisance in this case.

when to use RecoveryCallback vs KafkaListenerErrorHandler

I'm trying to understand when I should use org.springframework.retry.RecoveryCallback and when org.springframework.kafka.listener.KafkaListenerErrorHandler.
As of today, I'm using a class (implementing org.springframework.retry.RecoveryCallback) to log the error message and send the message to a DLT, and it's working. For sending the message to the DLT, I'm using a Spring KafkaTemplate, and then I came across KafkaListenerErrorHandler and DeadLetterPublishingRecoverer. Now, can you please suggest how I should use KafkaListenerErrorHandler and DeadLetterPublishingRecoverer? Can they replace the RecoveryCallback?
Here is my current kafkaListenerContainerFactory code:
@Bean
public ConcurrentKafkaListenerContainerFactory kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, Object> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(primaryConsumerFactory());
    factory.setRetryTemplate(retryTemplate());
    factory.setRecoveryCallback(recoveryCallback);
    factory.getContainerProperties().setAckMode(AckMode.RECORD);
    factory.setConcurrency(1);
    factory.getContainerProperties().setMissingTopicsFatal(false);
    return factory;
}
If it's working as you want now, why change it?
There are several layers and you can choose which one to do the error handling, depending on your needs.
KafkaListenerErrorHandler would be invoked for each delivery attempt within the retry, so you typically won't use it with retry.
The retry RecoveryCallback is invoked after retries are exhausted (or immediately if you have classified an exception as not retryable).
ErrorHandler is in the container and is invoked if any listener throws an exception, not just @KafkaListener methods.
With recent versions of the framework you can completely replace listener level retry with a SeekToCurrentErrorHandler configured with a DeadLetterPublishingRecoverer and a BackOff.
The DeadLetterPublishingRecoverer is intended for use in a container error handler since it needs the raw ConsumerRecord<?, ?>.
The KafkaListenerErrorHandler only has access to the spring-messaging Message<?> that is converted from the ConsumerRecord<?, ?>.
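For illustration, a KafkaListenerErrorHandler is just a bean you reference by name from the @KafkaListener; a minimal sketch (the bean name and LOGGER are my own placeholders):
@Bean
public KafkaListenerErrorHandler loggingErrorHandler() {
    // Only the converted spring-messaging Message<?> is available here, not the raw ConsumerRecord,
    // which is why dead-letter publishing belongs in the container-level error handler instead.
    return (message, exception) -> {
        LOGGER.error("Listener failed for payload {}", message.getPayload(), exception);
        return null; // the return value becomes the listener's result, if anything is waiting for it
    };
}
It would then be wired in with @KafkaListener(topics = "...", errorHandler = "loggingErrorHandler").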
To add on to the excellent context from @GaryRussell, this is what I am currently using:
I am handling any errors (a.k.a. exceptions) like this:
factory.setErrorHandler(new SeekToCurrentErrorHandler(
new DeadLetterPublishingRecoverer(kafkaTemplate), new FixedBackOff(0L, 0L)));
And to print this error, I have a listener on the .DLT and I print the exception stack trace that is stored in the header, like so:
@KafkaListener(id = "MY_ID", topics = MY_TOPIC + ".DLT")
public void listenDlt(ConsumerRecord<String, SomeClassName> consumerRecord,
        @Header(KafkaHeaders.DLT_EXCEPTION_STACKTRACE) String exceptionStackTrace) {
    logger.error(exceptionStackTrace);
}
Note: I am using logger.error because I am redirecting all error messages to an error log file that is being monitored.
BONUS:
If you set the following:
logging.level.org.springframework.kafka=DEBUG
You will see this in your console/log:
xxx [org.springframework.kafka.KafkaListenerEndpointContainer#7-2-C-1] DEBUG o.s.k.listener.SeekToCurrentErrorHandler - Skipping seek of: ConsumerRecord xxx
xxx [kafka-producer-network-thread | producer-3] DEBUG o.s.k.l.DeadLetterPublishingRecoverer - Successful dead-letter publication: SendResult xxx
If you have a better way to log, I would appreciate your comment.
Thanks!
Cheers

Realm doesn’t work with xUnit and .NET Core

I’m having issues running Realm with xUnit and .NET Core. Here is a very simple test that I want to run:
public class UnitTest1
{
    [Scenario]
    public void Test1()
    {
        var realm = Realm.GetInstance(new InMemoryConfiguration("Test123"));
        realm.Write(() =>
        {
            realm.Add(new Product());
        });
        var test = realm.All<Product>().First();
        realm.Write(() => realm.RemoveAll());
    }
}
I get different exceptions on different machines (Windows & Mac) on the line where I try to create a Realm instance with InMemoryConfiguration.
On Mac I get the following exception
libc++abi.dylib: terminating with uncaught exception of type realm::IncorrectThreadException: Realm accessed from incorrect thread.
On Windows I get the following exception when running
ERROR Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host. at
System.Net.Sockets.NetworkStream.Read(Span1 destination) at
System.Net.Sockets.NetworkStream.ReadByte() at
System.IO.BinaryReader.ReadByte() at
System.IO.BinaryReader.Read7BitEncodedInt() at
System.IO.BinaryReader.ReadString() at
Microsoft.VisualStudio.TestPlatform.CommunicationUtilities.LengthPrefixCommunicationChannel.NotifyDataAvailable() at
Microsoft.VisualStudio.TestPlatform.CommunicationUtilities.TcpClientExtensions.MessageLoopAsync(TcpClient client, ICommunicationChannel channel, Action1 errorHandler, CancellationToken cancellationToken) Source: System.Net.Sockets HResult: -2146232800 Inner Exception: An existing connection was forcibly closed by the remote host HResult: -2147467259
I’m using Realm 3.3.0 and xUnit 2.4.1
I’ve tried downgrading to Realm 2.2.0, and it didn’t work either.
The solution to this problem was found in this GitHub post.
The piece of code from there that helped me solve the issue:
Realm GetInstanceWithoutCapturingContext(RealmConfiguration config)
{
    var context = SynchronizationContext.Current;
    SynchronizationContext.SetSynchronizationContext(null);

    Realm realm = null;
    try
    {
        realm = Realm.GetInstance(config);
    }
    finally
    {
        SynchronizationContext.SetSynchronizationContext(context);
    }

    return realm;
}
Though it took a while for me to apply this to my solution.
First and foremost, instead of just setting the context to null, I am using Nito.AsyncEx.AsyncContext, because otherwise automatic changes will not be propagated across threads; Realm needs a non-null SynchronizationContext for that feature to work. So, in my case the method looks something like this:
public class MockRealmFactory : IRealmFactory
{
    private readonly SynchronizationContext _synchronizationContext;
    private readonly string _defaultDatabaseId;

    public MockRealmFactory()
    {
        _synchronizationContext = new AsyncContext().SynchronizationContext;
        _defaultDatabaseId = Guid.NewGuid().ToString();
    }

    public Realm GetRealmWithPath(string realmDbPath)
    {
        var context = SynchronizationContext.Current;
        SynchronizationContext.SetSynchronizationContext(_synchronizationContext);

        Realm realm;
        try
        {
            realm = Realm.GetInstance(new InMemoryConfiguration(realmDbPath));
        }
        finally
        {
            SynchronizationContext.SetSynchronizationContext(context);
        }

        return realm;
    }
}
Further, this fixed a lot of failing unit tests. But I was still receiving that same exception - Realm accessed from incorrect thread - and I had no clue why, because everything was set correctly. Then I found that the failing tests were related to methods where I was using the async Realm API, in particular realm.WriteAsync. After some more digging I found the following lines in the Realm documentation:
It is not a problem if you have set SynchronisationContext.Current but
it will cause WriteAsync to dispatch again on the thread pool, which
may create another worker thread. So, if you are using Current in your
threads, consider calling just Write instead of WriteAsync.
In my code there was no direct need for the async API. I removed it and replaced it with the synchronous Write, and all the tests became green again! I guess if I find myself in a situation where I do need the async API because of some kind of bulk insertion, I'd either mock that specific API or replace it with my own background thread using Task.Run instead of using Realm's version.

Exactly-once delivery: is it possible through spring-cloud-stream-binder-kafka or spring-kafka, and which one to use?

I am trying to achieve exactly-once delivery using spring-cloud-stream-binder-kafka in a Spring Boot application.
The versions I am using are:
spring-cloud-stream-binder-kafka-core-1.2.1.RELEASE
spring-cloud-stream-binder-kafka-1.2.1.RELEASE
spring-cloud-stream-codec-1.2.2.RELEASE
spring-kafka-1.1.6.RELEASE
spring-integration-kafka-2.1.0.RELEASE
spring-integration-core-4.3.10.RELEASE
zookeeper-3.4.8
Kafka version : 0.10.1.1
This is my configuration (cloud-config):
spring:
  autoconfigure:
    exclude: org.springframework.cloud.netflix.metrics.servo.ServoMetricsAutoConfiguration
  kafka:
    consumer:
      enable-auto-commit: false
  cloud:
    stream:
      kafka:
        binder:
          brokers: "${BROKER_HOST:xyz-aws.local:9092}"
          headers:
            - X-B3-TraceId
            - X-B3-SpanId
            - X-B3-Sampled
            - X-B3-ParentSpanId
            - X-Span-Name
            - X-Process-Id
          zkNodes: "${ZOOKEEPER_HOST:120.211.316.261:2181,120.211.317.252:2181}"
        bindings:
          feed_platform_events_input:
            consumer:
              autoCommitOffset: false
      binders:
        xyzkafka:
          type: kafka
      bindings:
        feed_platform_events_input:
          binder: xyzkafka
          destination: platform-events
          group: br-platform-events
I have two main classes:
FeedSink Interface:
package au.com.xyz.proxy.interfaces;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.messaging.MessageChannel;
public interface FeedSink {
String FEED_PLATFORM_EVENTS_INPUT = "feed_platform_events_input";
#Input(FeedSink.FEED_PLATFORM_EVENTS_INPUT)
MessageChannel feedlatformEventsInput();
}
EventConsumer
package au.com.xyz.proxy.consumer;

@Slf4j
@EnableBinding(FeedSink.class)
public class EventConsumer {

    public static final String SUCCESS_MESSAGE =
            "SEND-SUCCESS : Successfully sent message to platform.";
    public static final String FAULT_MESSAGE = "SOAP-FAULT Code: {}, Description: {}";
    public static final String CONNECT_ERROR_MESSAGE = "CONNECT-ERROR Error Details: {}";
    public static final String EMPTY_NOTIFICATION_ERROR_MESSAGE =
            "EMPTY-NOTIFICATION-ERROR Empty Event Received from platform";

    @Autowired
    private CapPointService service;

    /**
     * Method associated with the stream to process messages.
     */
    @StreamListener(FeedSink.FEED_PLATFORM_EVENTS_INPUT)
    public void message(final @Payload EventNotification eventNotification,
            final @Header(KafkaHeaders.ACKNOWLEDGMENT) Acknowledgment acknowledgment) {
        String caseMilestone = "UNKNOWN";
        if (!ObjectUtils.isEmpty(eventNotification)) {
            SysMessage sysMessage = processPayload(eventNotification);
            caseMilestone = sysMessage.getCaseMilestone();
            try {
                ClientResponse response = service.sendPayload(sysMessage);
                if (response.hasFault()) {
                    Fault faultDetails = response.getFaultDetails();
                    log.error(FAULT_MESSAGE, faultDetails.getCode(), faultDetails.getDescription());
                } else {
                    log.info(SUCCESS_MESSAGE);
                }
                acknowledgment.acknowledge();
            } catch (Exception e) {
                log.error(CONNECT_ERROR_MESSAGE, e.getMessage());
            }
        } else {
            log.error(EMPTY_NOTIFICATION_ERROR_MESSAGE);
            acknowledgment.acknowledge();
        }
    }

    private SysMessage processPayload(final EventNotification eventNotification) {
        Gson gson = new Gson();
        String jsonString = gson.toJson(eventNotification.getData());
        log.info("Consumed message for platform events with payload : {} ", jsonString);
        SysMessage sysMessage = gson.fromJson(jsonString, SysMessage.class);
        return sysMessage;
    }
}
I have set the auto-commit property for Kafka and for the Spring container to false.
As you can see in the EventConsumer class, I use Acknowledgment in cases where service.sendPayload is successful and there are no exceptions, and I want the container to move the offset and poll for the next records.
What I have observed is:
Scenario 1 - In the case where an exception is thrown and no new messages are published to Kafka, there is no retry to process the message and it seems there is no activity, even after the underlying issue is resolved. The issue I am referring to is downstream server unavailability. Is there a way to retry the processing n times and then give up? Note this means retrying the processing, or re-polling from the last committed offset; it is not about the Kafka instance being unavailable.
If I restart the service (EC2 instance) then the processing happens from the offset where the last successful Acknowledge was done.
Scenario 2 - In the case where an exception happens and a subsequent message is then pushed to Kafka, I see the new message being processed and the offset moved, which means I lost the message that was not acknowledged. So the question is: given that I have handled the Acknowledgment, how do I control reading from the last commit, not just the latest message, and process it? I am assuming there is a poll happening internally that did not take into account, or did not know about, the last message not being acknowledged. I don't think there are multiple threads reading from Kafka. I don't know how the @Input and @StreamListener annotations are controlled; I assume the thread is controlled by the consumer.concurrency property, which defaults to 1.
So I have done research and found a lot of links but unfortunately none of them answers my specific questions.
I looked at (https://github.com/spring-cloud/spring-cloud-stream/issues/575)
which has a comment from Marius (https://stackoverflow.com/users/809122/marius-bogoevici):
Do note that Kafka does not provide individual message acking, which
means that acknowledgment translates into updating the latest consumed
offset to the offset of the acked message (per topic/partition). That
means that if you're acking messages from the same topic partition out
of order, a message can 'ack' all the messages before it.
I am not sure whether ordering is the issue when there is only one thread.
Apologies for the long post, but I wanted to provide enough information. The main thing is that I am trying to avoid losing messages when consuming from Kafka, and I am trying to see if spring-cloud-stream-binder-kafka can do the job or if I have to look at alternatives.
Update 6th July 2018
I saw this post https://github.com/spring-projects/spring-kafka/issues/431
Is this a better approach to my problem? I can try the latest version of spring-kafka.
@KafkaListener(id = "qux", topics = "annotated4", containerFactory = "kafkaManualAckListenerContainerFactory",
        containerGroup = "quxGroup")
public void listen4(@Payload String foo, Acknowledgment ack, Consumer<?, ?> consumer) {
Will this help in controlling the offset so that it is set to the last successfully processed record? How can I do that from the listen method? consumer.seekToEnd(); and then how will the listen method reset to get that record?
Does putting the Consumer in the signature provide support to get a handle to the consumer, or do I need to do anything more?
Should I use Acknowledgment or consumer.commitSync()?
What is the significance of containerFactory? Do I have to define it as a bean?
Do I need @EnableKafka and @Configuration for the above approach to work, bearing in mind the application is a Spring Boot application?
By adding Consumer to the listen method, do I avoid having to implement the ConsumerAware interface?
Last but not least, is it possible to provide some example of the above approach, if it is feasible?
Update 12 July 2018
Thanks Gary (https://stackoverflow.com/users/1240763/gary-russell) for providing the tip of using maxAttempts. I have used that approach, and I am able to achieve exactly-once delivery and preserve the order of the messages.
My updated cloud-config:
spring:
  autoconfigure:
    exclude: org.springframework.cloud.netflix.metrics.servo.ServoMetricsAutoConfiguration
  kafka:
    consumer:
      enable-auto-commit: false
  cloud:
    stream:
      kafka:
        binder:
          brokers: "${BROKER_HOST:xyz-aws.local:9092}"
          headers:
            - X-B3-TraceId
            - X-B3-SpanId
            - X-B3-Sampled
            - X-B3-ParentSpanId
            - X-Span-Name
            - X-Process-Id
          zkNodes: "${ZOOKEEPER_HOST:120.211.316.261:2181,120.211.317.252:2181}"
        bindings:
          feed_platform_events_input:
            consumer:
              autoCommitOffset: false
      binders:
        xyzkafka:
          type: kafka
      bindings:
        feed_platform_events_input:
          binder: xyzkafka
          destination: platform-events
          group: br-platform-events
          consumer:
            maxAttempts: 2147483647
            backOffInitialInterval: 1000
            backOffMaxInterval: 300000
            backOffMultiplier: 2.0
The Event Consumer remains the same as in my initial implementation, except for rethrowing the error so the container knows the processing has failed. If you just catch it, there is no way the container knows that message processing has failed. By calling acknowledgement.acknowledge() you are just controlling the offset commit; in order for the retry to happen you must throw the exception. Don't forget to set the Kafka client auto-commit property and the Spring (container level) autoCommitOffset property to false. That's it.
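To make the rethrow concrete, a sketch of the listener's shape (reusing the names from the EventConsumer above; only the catch block changes):
@StreamListener(FeedSink.FEED_PLATFORM_EVENTS_INPUT)
public void message(final @Payload EventNotification eventNotification,
        final @Header(KafkaHeaders.ACKNOWLEDGMENT) Acknowledgment acknowledgment) {
    SysMessage sysMessage = processPayload(eventNotification);
    try {
        service.sendPayload(sysMessage);
        // Commit the offset only after the downstream call has succeeded.
        acknowledgment.acknowledge();
    } catch (Exception e) {
        log.error(CONNECT_ERROR_MESSAGE, e.getMessage());
        // Rethrow so the binder's retry (maxAttempts / backOff*) kicks in;
        // swallowing the exception makes the container treat the record as processed.
        throw new IllegalStateException("Processing failed, triggering binder retry", e);
    }
}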
As explained by Marius, Kafka only maintains an offset in the log. If you process the next message and update the offset, the failed message is lost.
You can send the failed message to a dead-letter topic (set enableDlq to true).
Recent versions of Spring Kafka (2.1.x) have special error handlers: ContainerStoppingErrorHandler, which stops the container when an exception occurs, and SeekToCurrentErrorHandler, which will cause the failed message to be redelivered.
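For illustration, a sketch of wiring such an error handler into a plain spring-kafka container factory (assuming a version where the factory exposes setErrorHandler; swap in new ContainerStoppingErrorHandler() if you prefer to stop the container on error):
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
        ConsumerFactory<String, String> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    // Re-seeks the unprocessed records after a failure so the failed one is redelivered on the next poll.
    factory.setErrorHandler(new SeekToCurrentErrorHandler());
    return factory;
}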

How to use spring-cloud-sleuth to trace spring-security-oauth activities?

I'm trying to use spring-cloud-sleuth to trace https requests initiated by spring-security-oauth.
But I'm stuck on the fact that the spring-security-oauth filter OAuth2AuthenticationProcessingFilter is executed before the spring-cloud-sleuth filter TraceFilter.
Can this be changed so that the spring-cloud-sleuth filter is processed before the spring-security-oauth filter?
Version info:
spring-boot: 1.3.5
spring-cloud: Brixton.SR3
spring-cloud-sleuth: 1.0.3
spring-security-oauth2: 2.0.9
Update:
Based on the suggestion below, I could solve the problem by defining my own FilterRegistrationBean as follows:
@Inject
TraceFilter traceFilter;

@Bean
public FilterRegistrationBean myTraceFilter() {
    LOG.info("Register a TraceFilter with HIGHEST_PRECEDENCE");
    FilterRegistrationBean filterRegistrationBean = new FilterRegistrationBean(traceFilter, new ServletRegistrationBean[0]);
    filterRegistrationBean.setDispatcherTypes(ASYNC, new DispatcherType[]{ERROR, FORWARD, INCLUDE, REQUEST});
    filterRegistrationBean.setOrder(Ordered.HIGHEST_PRECEDENCE);
    return filterRegistrationBean;
}
You can register the TraceFilter yourself and manually provide the order. Just try to put it before the Spring Security filter. If that works fine, you can file a PR / issue describing the whole flow so that we can continue the discussion there.
