Using spring-cloud-stream from spring-cloud Hoxton.SR12 release with Kafka Binder.
Boot version: 2.5.2
Problem statement:
- Handle deserialisation errors by pushing them to a poison-pill topic, with no retries.
- Handle any other exceptions by retrying and then pushing to a parkingLot topic.
- Do not retry ValidationException.
This is my error handling code so far:
@Configuration
@Slf4j
public class ErrorHandlingConfig {

    @Value("${errorHandling.parkingLotDestination}")
    private String parkingLotDestination;

    @Value("${errorHandling.retryAttempts}")
    private long retryAttempts;

    @Value("${errorHandling.retryIntervalMillis}")
    private long retryIntervalMillis;

    @Bean
    public ListenerContainerCustomizer<AbstractMessageListenerContainer<byte[], byte[]>> customizer(SeekToCurrentErrorHandler errorHandler) {
        return (container, dest, group) -> {
            container.setErrorHandler(errorHandler);
        };
    }

    @Bean
    public SeekToCurrentErrorHandler errorHandler(DeadLetterPublishingRecoverer parkingLotPublisher) {
        SeekToCurrentErrorHandler seekToCurrentErrorHandler =
                new SeekToCurrentErrorHandler(parkingLotPublisher, new FixedBackOff(retryIntervalMillis, retryAttempts));
        seekToCurrentErrorHandler.addNotRetryableExceptions(ValidationException.class);
        return seekToCurrentErrorHandler;
    }

    @Bean
    public DeadLetterPublishingRecoverer parkingLotPublisher(KafkaOperations bytesTemplate) {
        DeadLetterPublishingRecoverer deadLetterPublishingRecoverer =
                new DeadLetterPublishingRecoverer(bytesTemplate,
                        (cr, e) -> new TopicPartition(parkingLotDestination, cr.partition()));
        deadLetterPublishingRecoverer.setHeadersFunction((cr, e) -> cr.headers());
        return deadLetterPublishingRecoverer;
    }
}
I think what I have so far should cover the retryable exceptions being retried and then pushed to the parking lot. How do I now add the code to push failed deserialisation events to the poison topic?
I want to do this outside of the binder/binding configuration and at the container level due to the outstanding issue of not being able to send to a custom dlqName.
I could use an ErrorHandlingDeserializer and call setFailedDeserializationFunction() on it with a function that sends the message on to the poison topic. Should I do this using a Source binding or raw KafkaOperations? I also need to work out how to hook this ErrorHandlingDeserializer into the ConsumerFactory.
Why are you using Hoxton with Boot 2.5? The proper cloud version for Boot 2.5.2 is 2020.0.3.
The SeekToCurrentErrorHandler already considers DeserializationExceptions to be fatal. See
/**
 * Add exception types to the default list. By default, the following exceptions will
 * not be retried:
 * <ul>
 * <li>{@link DeserializationException}</li>
 * <li>{@link MessageConversionException}</li>
 * <li>{@link ConversionException}</li>
 * <li>{@link MethodArgumentResolutionException}</li>
 * <li>{@link NoSuchMethodException}</li>
 * <li>{@link ClassCastException}</li>
 * </ul>
 * All others will be retried.
 * @param exceptionTypes the exception types.
 * @since 2.6
 * @see #removeNotRetryableException(Class)
 * @see #setClassifications(Map, boolean)
 */
@SafeVarargs
@SuppressWarnings("varargs")
public final void addNotRetryableExceptions(Class<? extends Exception>... exceptionTypes) {
The ErrorHandlingDeserializer (without a function) adds the exception to a header; the DeadLetterPublishingRecoverer automatically extracts the original payload from the header and sets it as the value() of the outgoing record (byte[]).
Since you are using native encoding, you will need two KafkaTemplates: one for the failed records that need to be re-serialized and one for the DeserializationExceptions (that one uses a ByteArraySerializer).
See
/**
 * Create an instance with the provided templates and destination resolving function,
 * that receives the failed consumer record and the exception and returns a
 * {@link TopicPartition}. If the partition in the {@link TopicPartition} is less than
 * 0, no partition is set when publishing to the topic. The templates map keys are
 * classes and the value the corresponding template to use for objects (producer
 * record values) of that type. A {@link java.util.LinkedHashMap} is recommended when
 * there is more than one template, to ensure the map is traversed in order. To send
 * records with a null value, add a template with the {@link Void} class as a key;
 * otherwise the first template from the map values iterator will be used.
 * @param templates the {@link KafkaOperations}s to use for publishing.
 * @param destinationResolver the resolving function.
 */
@SuppressWarnings("unchecked")
public DeadLetterPublishingRecoverer(Map<Class<?>, KafkaOperations<? extends Object, ? extends Object>> templates,
        BiFunction<ConsumerRecord<?, ?>, Exception, TopicPartition> destinationResolver) {
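A minimal sketch of how that multi-template constructor could replace the single-template parkingLotPublisher bean above; the bean names bytesTemplate (configured with a ByteArraySerializer) and eventTemplate (configured with your domain serializer) are assumptions, and the destination resolver is the same parking-lot resolver already in your config:

@Bean
public DeadLetterPublishingRecoverer parkingLotPublisher(
        KafkaOperations<byte[], byte[]> bytesTemplate,    // assumed bean: template with a ByteArraySerializer
        KafkaOperations<Object, Object> eventTemplate) {  // assumed bean: template that re-serializes domain objects
    Map<Class<?>, KafkaOperations<? extends Object, ? extends Object>> templates = new LinkedHashMap<>();
    templates.put(byte[].class, bytesTemplate);  // DeserializationException payloads are published as byte[]
    templates.put(Object.class, eventTemplate);  // everything else is re-serialized by the event template
    return new DeadLetterPublishingRecoverer(templates,
            (cr, e) -> new TopicPartition(parkingLotDestination, cr.partition()));
}

The byte[].class entry is registered first on purpose: per the Javadoc above, the map is traversed in order, so the catch-all Object.class entry must come last.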
I also need to work out how to hook this ErrorHandlingDeserializer into the ConsumerFactory.
Just set the appropriate properties - see the documentation.
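For example, a sketch of binder-level consumer configuration (the delegate class name is a placeholder for your real deserializer):

spring.cloud.stream.kafka.binder.configuration.value.deserializer=org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
spring.cloud.stream.kafka.binder.configuration.spring.deserializer.value.delegate.class=com.example.MyEventDeserializer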
I am using Spring Kafka to consume messages from Confluent Kafka, and I am using a RetryTopicConfiguration bean to configure the topics and backoff strategy. My application works fine, but I see a lot of WARNING logs like the one below, and I am wondering if my configuration is incorrect.
DeadLetterPublishingRecovererFactory$1 : Destination resolver returned non-existent partition flow-events-retry-0-4, KafkaProducer will determine partition to use for this topic
Config Code
@Bean
public KafkaTemplate kafkaTemplate() {
    return new KafkaTemplate<>(producerFactory());
}

@Bean
public RetryTopicConfiguration myRetryableTopic(KafkaTemplate<String, Object> template) {
    return RetryTopicConfigurationBuilder
            .newInstance()
            .exponentialBackoff(BACKOFF_INITIAL_DELAY_10MINS, BACKOFF_EXPONENTIAL_MULTIPLIER_3, BACKOFF_MAX_DELAY_4HRS)
            .maxAttempts(5)
            .doNotAutoCreateRetryTopics()
            .setTopicSuffixingStrategy(TopicSuffixingStrategy.SUFFIX_WITH_INDEX_VALUE)
            .create(template);
}
The retry topics are created separately with 1 partition and replication factor of 3.
By default, the same partition as the original record is used; you can override that behavior by overriding the DeadLetterPublishingRecovererFactory @Bean:
@Bean(RetryTopicInternalBeanNames.DEAD_LETTER_PUBLISHING_RECOVERER_FACTORY_BEAN_NAME)
DeadLetterPublishingRecovererFactory factory(DestinationTopicResolver resolver) {
    DeadLetterPublishingRecovererFactory factory = new DeadLetterPublishingRecovererFactory(resolver) {

        @Override
        protected TopicPartition resolveTopicPartition(ConsumerRecord<?, ?> cr, DestinationTopic nextDestination) {
            return new TopicPartition(nextDestination.getDestinationName(), -1); // Kafka chooses
            // return new TopicPartition(nextDestination.getDestinationName(), 0); // explicit
        }

    };
    factory.setDeadLetterPublishingRecovererCustomizer(dlpr -> {
        // ...
    });
    return factory;
}
As this example shows, you can also customize DLPR properties here.
/**
 * Creates and returns the {@link TopicPartition}, where the original record should be forwarded.
 * By default, it will use the partition same as original record's partition, in the next destination topic.
 *
 * <p>{@link DeadLetterPublishingRecoverer#checkPartition} has logic to check whether that partition exists,
 * and if it doesn't it sets -1, to allow the Producer itself to assign a partition to the record.</p>
 *
 * <p>Subclasses can inherit from this method to override the implementation, if necessary.</p>
 *
 * @param cr The original {@link ConsumerRecord}, which is to be forwarded to DLT
 * @param nextDestination The next {@link DestinationTopic}, where the consumerRecord is to be forwarded
 * @return An instance of {@link TopicPartition}, specifying the topic and partition, where the cr is to be sent
 */
protected TopicPartition resolveTopicPartition(final ConsumerRecord<?, ?> cr, final DestinationTopic nextDestination) {
    return new TopicPartition(nextDestination.getDestinationName(), cr.partition());
}
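As one illustration of the customizer slot shown above (setDeadLetterPublishingRecovererCustomizer), the customizer could carry the original record's headers over to the published record, mirroring the setHeadersFunction call used earlier; this is only an example of the hook, not something required for the partition fix:

factory.setDeadLetterPublishingRecovererCustomizer(dlpr ->
        // copy the failed record's headers onto the record published to the retry/DLT topic
        dlpr.setHeadersFunction((cr, ex) -> cr.headers()));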
I'm using Spring Boot 2.1.7.RELEASE and spring-kafka 2.2.8.RELEASE. We are in the process of upgrading the Spring Boot version, but for now we are using this spring-kafka version.
I'm using the @KafkaListener annotation to create a consumer with all default consumer settings, and the configuration below as specified in the Spring Kafka documentation.
// other props
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ErrorHandlingDeserializer2.class);
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ErrorHandlingDeserializer2.class);
props.put(ErrorHandlingDeserializer2.KEY_DESERIALIZER_CLASS, StringDeserializer.class);
props.put(ErrorHandlingDeserializer2.VALUE_DESERIALIZER_CLASS, AvroDeserializer.class.getName());
return new DefaultKafkaConsumerFactory<>(props);
Now, I've implemented my custom SeekToCurrentErrorHandler by extending SeekToCurrentErrorHandler, to capture the records causing a deserialization exception and send them to a DLT.
The problem is: when I test this logic with 30 messages, where every other message causes a deserialization exception, the list passed to the handle method contains all 30 messages instead of only the 15 that caused the exception. How can I get only the records with exceptions? Please suggest.
Here is my custom SeekToCurrentErrorHandler code:
@Component
public class MySeekToCurrentErrorHandler extends SeekToCurrentErrorHandler {

    private final MyDeadLetterRecoverer deadLetterRecoverer;

    @Autowired
    public MySeekToCurrentErrorHandler(MyDeadLetterRecoverer deadLetterRecoverer) {
        super(-1);
        this.deadLetterRecoverer = deadLetterRecoverer;
    }

    @Override
    public void handle(Exception thrownException, List<ConsumerRecord<?, ?>> data, Consumer<?, ?> consumer, MessageListenerContainer container) {
        if (thrownException instanceof DeserializationException) {
            // Improve to support multiple records
            DeserializationException deserializationException = (DeserializationException) thrownException;
            deadLetterRecoverer.accept(data.get(0), deserializationException);
            ConsumerRecord<?, ?> consumerRecord = data.get(0);
            System.out.println(consumerRecord.key());
            System.out.println(consumerRecord.value());
        } else {
            // Call the super method to let the SeekToCurrentErrorHandler do what it is actually designed for
            super.handle(thrownException, data, consumer, container);
        }
    }
}
We have to pass all the remaining records, so that the STCEH can re-seek all partitions for the records that weren't processed.
After you recover the failed record, use SeekUtils to seek the remaining records (remove the one that you have recovered from the list).
Set recoverable to false so that doSeeks() doesn't try to recover the new first record.
/**
 * Seek records to earliest position, optionally skipping the first.
 * @param records the records.
 * @param consumer the consumer.
 * @param exception the exception
 * @param recoverable true if skipping the first record is allowed.
 * @param skipper function to determine whether or not to skip seeking the first.
 * @param logger a {@link Log} for seek errors.
 * @return true if the failed record was skipped.
 */
public static boolean doSeeks(List<ConsumerRecord<?, ?>> records, Consumer<?, ?> consumer, Exception exception,
        boolean recoverable, BiPredicate<ConsumerRecord<?, ?>, Exception> skipper, Log logger) {
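A rough sketch of how the handle override could look with that approach; it reuses the names from your code (deadLetterRecoverer, data) and needs org.springframework.kafka.listener.SeekUtils, org.apache.commons.logging.LogFactory and java.util.ArrayList; treat it as an outline, not tested code:

@Override
public void handle(Exception thrownException, List<ConsumerRecord<?, ?>> data,
        Consumer<?, ?> consumer, MessageListenerContainer container) {

    if (thrownException instanceof DeserializationException) {
        // Recover (publish) only the record that actually failed to deserialize
        this.deadLetterRecoverer.accept(data.get(0), (DeserializationException) thrownException);

        // Seek the rest of the batch back so it is redelivered on the next poll;
        // recoverable=false so doSeeks() does not try to recover the new first record
        List<ConsumerRecord<?, ?>> remaining = new ArrayList<>(data.subList(1, data.size()));
        if (!remaining.isEmpty()) {
            SeekUtils.doSeeks(remaining, consumer, thrownException, false,
                    (rec, ex) -> false, LogFactory.getLog(getClass()));
        }
    } else {
        super.handle(thrownException, data, consumer, container);
    }
}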
You won't need all this code when you move to a more recent version (Boot 2.1 and Spring for Apache Kafka 2.2 are no longer supported).
My question is: how does one include protected properties when creating a stub instance?
In my Jest test I have:
const sandbox = createSandbox();

let manager: SinonStubbedInstance<EntityManager>;
let repo: Repo;

beforeEach(() => {
    manager = sandbox.createStubInstance(EntityManager);
    repo = new Repo(manager);
});

afterEach(() => sandbox.restore());
Which is attempting to make a stub of:
export declare class EntityManager {
/**
* Connection used by this entity manager.
*/
readonly connection: Connection;
/**
* Custom query runner to be used for operations in this entity manager.
* Used only in non-global entity manager.
*/
readonly queryRunner?: QueryRunner;
/**
* Once created and then reused by en repositories.
*/
protected repositories: Repository<any>[];
/**
* Plain to object transformer used in create and merge operations.
*/
.......
}
So I don't seem to be able to have readonly properties and protected properties included in the stub.
At the "repo = new Repo(manager);" line, the above code yields the following error:
Argument of type 'SinonStubbedInstance<EntityManager>' is not assignable to parameter of type 'EntityManager'.
Property 'repositories' is missing in type 'SinonStubbedInstance<EntityManager>'.ts(2345)
Is there any way to tell Sinon to include the properties?
Any help would be most appreciated.
I solved this problem with
repo = new Repo(manager as any);
I don't know what your Repo does with the EntityManager, and it is not completely clear to me what you want to test here, so my answer is a bit generic, but maybe it points you in the right direction.
My idea: Maybe you should decouple them. I would approach it the following way:
- Create a getter in the EntityManager that gets all repos; name it, for example, getRepos().
- Create a mocked array that contains some Repos, e.g. const mockedRepos;
- Mock the getter of the EntityManager with your stub instance returning your mocked data:
manager.getRepos.returns(mockedRepos);
This way you don't need the protected repositories var in your test.
I'm using JavaFX's WebEngine to display a web page, and on the page there's a script calling window.confirm. I already know how to set the confirm handler and how to show a modal-like dialog.
My question is: how can I get the user's choice before the handler returns?
webEngine.setConfirmHandler(new Callback<String, Boolean>() {
    @Override
    public Boolean call(String message) {
        // Show the dialog
        ...
        return true; // How can I get user's choice here?
    }
});
As described in javafx-jira.kenai.com/browse/RT-19783, we can use the new showAndWait method, available since JavaFX 2.2, to achieve this.
Stage class:
/**
 * Show the stage and wait for it to be closed before returning to the
 * caller. This must be called on the FX Application thread. The stage
 * must not already be visible prior to calling this method. This must not
 * be called on the primary stage.
 *
 * @throws IllegalStateException if this method is called on a thread
 * other than the JavaFX Application Thread.
 * @throws IllegalStateException if this method is called on the
 * primary stage.
 * @throws IllegalStateException if this stage is already showing.
 */
public void showAndWait();
@jewelsea created a sample at https://gist.github.com/2992072. Thanks!
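A minimal sketch of the idea inside the confirm handler; the dialog layout and button labels are illustrative only, and it assumes the handler runs on the FX Application Thread (it does for WebEngine callbacks) with the usual javafx.stage, javafx.scene and javafx.scene.control imports:

webEngine.setConfirmHandler(message -> {
    Stage dialog = new Stage(StageStyle.UTILITY);
    dialog.initModality(Modality.APPLICATION_MODAL);

    final boolean[] confirmed = { false };
    Button ok = new Button("OK");
    ok.setOnAction(e -> { confirmed[0] = true; dialog.close(); });
    Button cancel = new Button("Cancel");
    cancel.setOnAction(e -> { confirmed[0] = false; dialog.close(); });

    HBox box = new HBox(10);
    box.getChildren().addAll(new Label(message), ok, cancel);
    dialog.setScene(new Scene(box));

    dialog.showAndWait(); // blocks here until the dialog is closed
    return confirmed[0];  // the user's choice is known once showAndWait() returns
});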
I have a Spring BlazeDS integration application. I would like to log all the requests.
I planned to use a Filter. When I inspect the request parameters in my filter, they do not contain anything related to the client request. If I change the order of my filter (I have Spring Security), then it prints something related to Spring Security.
I am unable to log the user request.
Any help is appreciated.
I have implemented the same functionality by using AOP (AspectJ) to inject a logging statement into the communication endpoint methods. Maybe this is an alternative approach for you too.
/** Logger advice and pointcuts for flex remoting stuff, based on AspectJ. */
public aspect AspectJInvocationLoggerAspect {

    /** The name of the used logger. */
    public final static String LOGGER_NAME = "myPackage.FLEX_INVOCATION_LOGGER";

    /** Logger used to log messages. */
    private static final Logger LOGGER = Logger.getLogger(LOGGER_NAME);

    AspectJInvocationLoggerAspect() {
    }

    /**
     * Pointcut for all flex remoting methods.
     *
     * Flex remoting methods are determined by the following constraints:
     * <ul>
     * <li>they are public</li>
     * <li>they are located in a class named Remoting* within (implementing an interface in)
     * the {@link com.example.remote} package</li>
     * <li>they are located within a class with a {@link RemotingDestination} annotation</li>
     * </ul>
     */
    pointcut remotingServiceFunction()
        : (execution(public * com.example.remote.*.*Remote*.*(..)))
        && (within(@RemotingDestination *));

    before() : remotingServiceFunction() {
        if (LOGGER.isDebugEnabled()) {
            Signature sig = thisJoinPointStaticPart.getSignature();
            Object[] args = thisJoinPoint.getArgs();
            String location = sig.getDeclaringTypeName() + '.' + sig.getName() + ", args=" + Arrays.toString(args);
            LOGGER.debug(location + " - begin");
        }
    }

    /** Log flex invocation result at TRACE level. */
    after() returning (Object result): remotingServiceFunction() {
        if (LOGGER.isTraceEnabled()) {
            Signature sig = thisJoinPointStaticPart.getSignature();
            String location = sig.getDeclaringTypeName() + '.' + sig.getName();
            LOGGER.trace(location + " - end = " + result);
        }
    }
}