Destination resolver returned non-existent partition - spring-kafka

I am using Spring-Kafka to consume messages from Confluent Kafka, and I am using a RetryTopicConfiguration bean to configure the topics and backoff strategy. My application works fine, but I see a lot of warning logs like the one below and I am wondering if my configuration is incorrect.
DeadLetterPublishingRecovererFactory$1 : Destination resolver returned non-existent partition flow-events-retry-0-4, KafkaProducer will determine partition to use for this topic
Config Code
@Bean
public KafkaTemplate<String, Object> kafkaTemplate() {
    return new KafkaTemplate<>(producerFactory());
}

@Bean
public RetryTopicConfiguration myRetryableTopic(KafkaTemplate<String, Object> template) {
    return RetryTopicConfigurationBuilder
            .newInstance()
            .exponentialBackoff(BACKOFF_INITIAL_DELAY_10MINS, BACKOFF_EXPONENTIAL_MULTIPLIER_3, BACKOFF_MAX_DELAY_4HRS)
            .maxAttempts(5)
            .doNotAutoCreateRetryTopics()
            .setTopicSuffixingStrategy(TopicSuffixingStrategy.SUFFIX_WITH_INDEX_VALUE)
            .create(template);
}
The retry topics are created separately with 1 partition and a replication factor of 3.

By default, the same partition as the original topic is used; since your retry topics have only one partition, any record that arrives from a higher-numbered partition of the original topic triggers this warning and the producer picks the partition instead. You can override that behavior by overriding the DeadLetterPublishingRecovererFactory @Bean:
@Bean(RetryTopicInternalBeanNames.DEAD_LETTER_PUBLISHING_RECOVERER_FACTORY_BEAN_NAME)
DeadLetterPublishingRecovererFactory factory(DestinationTopicResolver resolver) {
    DeadLetterPublishingRecovererFactory factory = new DeadLetterPublishingRecovererFactory(resolver) {

        @Override
        protected TopicPartition resolveTopicPartition(ConsumerRecord<?, ?> cr, DestinationTopic nextDestination) {
            return new TopicPartition(nextDestination.getDestinationName(), -1); // Kafka chooses
            // return new TopicPartition(nextDestination.getDestinationName(), 0); // explicit
        }

    };
    factory.setDeadLetterPublishingRecovererCustomizer(dlpr -> {
        // ...
    });
    return factory;
}
As this example shows, you can also customize DLPR properties here.
/**
 * Creates and returns the {@link TopicPartition} where the original record should be forwarded.
 * By default, it will use the same partition as the original record's, in the next destination topic.
 *
 * <p>{@link DeadLetterPublishingRecoverer#checkPartition} has logic to check whether that partition exists,
 * and if it doesn't it sets -1, to allow the producer itself to assign a partition to the record.</p>
 *
 * <p>Subclasses can override this method to change the implementation, if necessary.</p>
 *
 * @param cr The original {@link ConsumerRecord}, which is to be forwarded to the DLT
 * @param nextDestination The next {@link DestinationTopic}, where the consumerRecord is to be forwarded
 * @return An instance of {@link TopicPartition}, specifying the topic and partition where the cr is to be sent
 */
protected TopicPartition resolveTopicPartition(final ConsumerRecord<?, ?> cr, final DestinationTopic nextDestination) {
    return new TopicPartition(nextDestination.getDestinationName(), cr.partition());
}

Related

Handle deserialisation errors and other exceptions separately

Using spring-cloud-stream from the spring-cloud Hoxton.SR12 release with the Kafka binder.
Boot version: 2.5.2
Problem statement:
I would like to handle deserialisation errors by pushing them to a poison-pill topic with no retries.
Handle any other exceptions by retrying and then pushing to a parkingLot topic.
Do not retry ValidationException.
This is my error handling code so far:
@Configuration
@Slf4j
public class ErrorHandlingConfig {

    @Value("${errorHandling.parkingLotDestination}")
    private String parkingLotDestination;

    @Value("${errorHandling.retryAttempts}")
    private long retryAttempts;

    @Value("${errorHandling.retryIntervalMillis}")
    private long retryIntervalMillis;

    @Bean
    public ListenerContainerCustomizer<AbstractMessageListenerContainer<byte[], byte[]>> customizer(SeekToCurrentErrorHandler errorHandler) {
        return (container, dest, group) -> {
            container.setErrorHandler(errorHandler);
        };
    }

    @Bean
    public SeekToCurrentErrorHandler errorHandler(DeadLetterPublishingRecoverer parkingLotPublisher) {
        SeekToCurrentErrorHandler seekToCurrentErrorHandler = new SeekToCurrentErrorHandler(parkingLotPublisher, new FixedBackOff(retryIntervalMillis, retryAttempts));
        seekToCurrentErrorHandler.addNotRetryableExceptions(ValidationException.class);
        return seekToCurrentErrorHandler;
    }

    @Bean
    public DeadLetterPublishingRecoverer parkingLotPublisher(KafkaOperations bytesTemplate) {
        DeadLetterPublishingRecoverer deadLetterPublishingRecoverer = new DeadLetterPublishingRecoverer(bytesTemplate, (cr, e) -> new TopicPartition(parkingLotDestination, cr.partition()));
        deadLetterPublishingRecoverer.setHeadersFunction((cr, e) -> cr.headers());
        return deadLetterPublishingRecoverer;
    }
}
I think what I have so far should cover the retryable exceptions being pushed to the parking lot. How do I now add the code to push failed deserialisation events to the poison topic?
I want to do this outside of the binder/binding configuration and at the container level due to the outstanding issue of not being able to send to a custom dlqName.
I could use an ErrorHandlingDeserializer and call setFailedDeserializationFunction() on it with a function that sends the message to the poison topic. Should I do this using a Source binding or raw KafkaOperations? I also need to work out how to hook this ErrorHandlingDeserializer into the ConsumerFactory.
Why are you using Hoxton with Boot 2.5? The proper cloud version for Boot 2.5.2 is 2020.0.3.
The SeekToCurrentErrorHandler already considers DeserializationExceptions to be fatal. See
/**
 * Add exception types to the default list. By default, the following exceptions will
 * not be retried:
 * <ul>
 * <li>{@link DeserializationException}</li>
 * <li>{@link MessageConversionException}</li>
 * <li>{@link ConversionException}</li>
 * <li>{@link MethodArgumentResolutionException}</li>
 * <li>{@link NoSuchMethodException}</li>
 * <li>{@link ClassCastException}</li>
 * </ul>
 * All others will be retried.
 * @param exceptionTypes the exception types.
 * @since 2.6
 * @see #removeNotRetryableException(Class)
 * @see #setClassifications(Map, boolean)
 */
@SafeVarargs
@SuppressWarnings("varargs")
public final void addNotRetryableExceptions(Class<? extends Exception>... exceptionTypes) {
The ErrorHandlingDeserializer (without a function) adds the exception to a header; the DeadLetterPublishingRecoverer automatically extracts the original payload from the header and sets it as the value() of the outgoing record (byte[]).
Since you are using native encoding, you will need two KafkaTemplates - one for the failed records that need to be re-serialized, and one for the DeserializationExceptions (which uses a ByteArraySerializer).
See
/**
 * Create an instance with the provided templates and destination resolving function,
 * that receives the failed consumer record and the exception and returns a
 * {@link TopicPartition}. If the partition in the {@link TopicPartition} is less than
 * 0, no partition is set when publishing to the topic. The templates map keys are
 * classes and the value the corresponding template to use for objects (producer
 * record values) of that type. A {@link java.util.LinkedHashMap} is recommended when
 * there is more than one template, to ensure the map is traversed in order. To send
 * records with a null value, add a template with the {@link Void} class as a key;
 * otherwise the first template from the map values iterator will be used.
 * @param templates the {@link KafkaOperations}s to use for publishing.
 * @param destinationResolver the resolving function.
 */
@SuppressWarnings("unchecked")
public DeadLetterPublishingRecoverer(Map<Class<?>, KafkaOperations<? extends Object, ? extends Object>> templates,
        BiFunction<ConsumerRecord<?, ?>, Exception, TopicPartition> destinationResolver) {
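To make that concrete, here is a minimal sketch of wiring two templates through that constructor; the MyEvent type, the bean wiring, and the parkingLot topic name are illustrative placeholders, not from the original question:
@Bean
public DeadLetterPublishingRecoverer publisher(
        KafkaTemplate<String, MyEvent> myEventTemplate, // re-serializes records that deserialized fine
        KafkaTemplate<byte[], byte[]> bytesTemplate) {  // ByteArraySerializer, for DeserializationExceptions
    // LinkedHashMap so the templates are consulted in insertion order, per the Javadoc above
    Map<Class<?>, KafkaOperations<? extends Object, ? extends Object>> templates = new LinkedHashMap<>();
    templates.put(MyEvent.class, myEventTemplate); // hypothetical domain type
    templates.put(byte[].class, bytesTemplate);    // raw payload restored by the recoverer
    return new DeadLetterPublishingRecoverer(templates,
            (cr, e) -> new TopicPartition("parkingLot", cr.partition()));
}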
I also need to work out how to hook this ErrorHandlingDeserializer into the ConsumerFactory.
Just set the appropriate properties - see the documentation.
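For reference, a minimal sketch of those consumer properties (the delegate deserializers here are placeholders for whatever your application actually uses):
Map<String, Object> props = new HashMap<>();
// Wrap the real deserializers in ErrorHandlingDeserializer
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ErrorHandlingDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ErrorHandlingDeserializer.class);
// Delegates that do the actual work; a failure is recorded in a header instead of breaking poll()
props.put(ErrorHandlingDeserializer.KEY_DESERIALIZER_CLASS, StringDeserializer.class);
props.put(ErrorHandlingDeserializer.VALUE_DESERIALIZER_CLASS, JsonDeserializer.class);
ConsumerFactory<String, Object> cf = new DefaultKafkaConsumerFactory<>(props);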

How can I send the records causing a DeserializationException to a DLT while consuming messages from a Kafka topic using the SeekToCurrentErrorHandler?

I'm using Spring Boot 2.1.7.RELEASE and spring-kafka 2.2.8.RELEASE. We are in the process of upgrading the Spring Boot version, but for now we are using this spring-kafka version.
I'm using the @KafkaListener annotation to create a consumer with all default settings, and I'm using the configuration below, as specified in the spring-kafka documentation.
// other props
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ErrorHandlingDeserializer2.class);
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ErrorHandlingDeserializer2.class);
props.put(ErrorHandlingDeserializer2.KEY_DESERIALIZER_CLASS, StringDeserializer.class);
props.put(ErrorHandlingDeserializer2.VALUE_DESERIALIZER_CLASS, AvroDeserializer.class.getName());
return new DefaultKafkaConsumerFactory<>(props);
Now I've implemented a custom SeekToCurrentErrorHandler by extending SeekToCurrentErrorHandler, to capture the records causing the deserialization exception and send them to a DLT.
The problem is, when I test this logic with 30 messages where every other message causes a deserialization exception, the list passed to the handle method contains all 30 messages instead of only the 15 that caused the exception. How can I get just the failing records? Please suggest.
Here is my custom SeekToCurrentErrorHandler code:
@Component
public class MySeekToCurrentErrorHandler extends SeekToCurrentErrorHandler {

    private final MyDeadLetterRecoverer deadLetterRecoverer;

    @Autowired
    public MySeekToCurrentErrorHandler(MyDeadLetterRecoverer deadLetterRecoverer) {
        super(-1);
        this.deadLetterRecoverer = deadLetterRecoverer;
    }

    @Override
    public void handle(Exception thrownException, List<ConsumerRecord<?, ?>> data, Consumer<?, ?> consumer, MessageListenerContainer container) {
        if (thrownException instanceof DeserializationException) {
            // TODO: improve to support multiple records
            DeserializationException deserializationException = (DeserializationException) thrownException;
            deadLetterRecoverer.accept(data.get(0), deserializationException);

            ConsumerRecord<?, ?> consumerRecord = data.get(0);
            System.out.println(consumerRecord.key());
            System.out.println(consumerRecord.value());
        } else {
            // Call the super method to let the SeekToCurrentErrorHandler do what it is actually designed for
            super.handle(thrownException, data, consumer, container);
        }
    }
}
We have to pass all the remaining records, so that the STCEH can re-seek all partitions for the records that weren't processed.
After you recover the failed record, use SeekUtils to seek the remaining records (remove the one that you have recovered from the list).
Set recoverable to false so that doSeeks() doesn't try to recover the new first record; a sketch follows the Javadoc below.
/**
 * Seek records to earliest position, optionally skipping the first.
 * @param records the records.
 * @param consumer the consumer.
 * @param exception the exception
 * @param recoverable true if skipping the first record is allowed.
 * @param skipper function to determine whether or not to skip seeking the first.
 * @param logger a {@link Log} for seek errors.
 * @return true if the failed record was skipped.
 */
public static boolean doSeeks(List<ConsumerRecord<?, ?>> records, Consumer<?, ?> consumer, Exception exception,
        boolean recoverable, BiPredicate<ConsumerRecord<?, ?>, Exception> skipper, Log logger) {
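Putting that together, here is a rough sketch of how the override could look with the 2.2.x signatures quoted above (the skipper lambda and logger wiring are illustrative, not the definitive implementation):
@Override
public void handle(Exception thrownException, List<ConsumerRecord<?, ?>> data,
        Consumer<?, ?> consumer, MessageListenerContainer container) {
    if (thrownException instanceof DeserializationException) {
        // Publish only the failed (first) record to the DLT...
        this.deadLetterRecoverer.accept(data.get(0), (DeserializationException) thrownException);
        // ...then re-seek the remaining records so they are redelivered on the next poll
        List<ConsumerRecord<?, ?>> remaining = new ArrayList<>(data.subList(1, data.size()));
        if (!remaining.isEmpty()) {
            // recoverable = false: don't try to recover the new first record
            SeekUtils.doSeeks(remaining, consumer, thrownException, false,
                    (rec, ex) -> false, LogFactory.getLog(getClass()));
        }
    } else {
        super.handle(thrownException, data, consumer, container);
    }
}
(SeekUtils is org.springframework.kafka.listener.SeekUtils; LogFactory is org.apache.commons.logging.LogFactory.)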
You won't need all this code when you move to a more recent version (Boot 2.1 and Spring for Apache Kafka 2.2 are no longer supported).

Out of range IDs in Symfony route

I have a common structure for a Symfony controller (using FOSRestBundle):
/**
 * @Get("users/{id}", requirements={"id" = "(\d+)"})
 */
public function getUserAction(User $user)
{
}
Now if I request http://localhost/users/1, everything is fine. But if I request http://localhost/users/11111111111111111, I get a 500 error and an exception:
ERROR: value "11111111111111111" is out of range for type integer
Is there a way to check the id before it is passed to the database?
As a solution I can limit the length of the id:
/**
 * @Get("users/{id}", requirements={"id" = "(\d{1,10})"})
 */
but then Symfony will say that there is no such route, instead of showing that the id is incorrect.
By telling Symfony that the getUserAction() argument is a User instance, it will take for granted that the {id} url parameter must be matched against the primary key, handing it over to the Doctrine ParamConverter to fetch the corresponding User.
There are at least two workarounds.
1. Use the ParamConverter repository_method config
In the controller function's comment, we can add the @ParamConverter annotation and tell it to use the repository_method option.
This way Symfony will hand the url parameter to a function in our entity repository, from which we'll be able to check the integrity of the url parameter.
In UserRepository, let's create a function that fetches an entity by primary key, first checking the integrity of the argument. That is, $id must not be larger than the largest integer that PHP can handle (the PHP_INT_MAX constant).
Please note: $id is a string, so it's safe to compare it to PHP_INT_MAX, because PHP will automatically typecast PHP_INT_MAX to a string and compare it to $id. If it were an integer, the test would always fail (by design, all integers are less than or equal to PHP_INT_MAX).
// ...
use Symfony\Component\Form\Exception\OutOfBoundsException;

class UserRepository extends ...
{
    // ...
    public function findSafeById($id) {
        if ($id > PHP_INT_MAX) {
            throw new OutOfBoundsException($id . " is too large to fit in an integer");
        }
        return $this->find($id);
    }
}
This is only an example: we can do anything we like before throwing the exception (for example logging the failed attempt).
Then, in our controller, let's include the ParamConverter annotation:
use Sensio\Bundle\FrameworkExtraBundle\Configuration\ParamConverter;
and modify the function comment adding the annotation:
@ParamConverter("id", class="App:User", options={"repository_method" = "findSafeById"})
Our controller function should look like:
/**
 * @Get("users/{id}")
 * @ParamConverter("id", class="App:User", options={"repository_method" = "findSafeById"})
 */
public function getUserAction(User $user) {
    // Return an "OK" response with the content you like
}
This technique allows customizing the exception, but does not give you control over the response - you'll still get a 500 error in production.
Documentation: see here.
2. Parse the route "the old way"
This way was the only viable one up to Symfony 3, and gives you more fine-grained control over the generated response.
Let's change the action prototype like this:
/**
 * @Get("users/{id}", requirements={"id" = "(\d+)"})
 */
public function getUserAction($id)
{
}
Now, in the action we'll receive the requested $id and we'll be able to check whether it's ok. If not, we throw an exception and/or return some error response (we can choose the HTTP status code, the format and anything else).
Below you find a sample implementation of this procedure.
use FOS\RestBundle\Controller\Annotations\Get;
use FOS\RestBundle\Controller\FOSRestController;
use Symfony\Component\Form\Exception\OutOfBoundsException;
use Symfony\Component\HttpFoundation\JsonResponse;

class MyRestController extends FOSRestController {

    /**
     * @Get("users/{id}", requirements={"id" = "(\d+)"})
     */
    public function getUserAction($id) {
        try {
            if ($id > PHP_INT_MAX) {
                throw new OutOfBoundsException($id . " is too large to fit in an integer");
            }
            // Replace App\Entity\User with your actual Entity alias
            $user = $this->getDoctrine()->getRepository('App\Entity\User')->find($id);
            if (!$user) {
                throw new \Doctrine\ORM\NoResultException("User not found");
            }
            // Return an "OK" response with the content you like
            return new JsonResponse(['key' => 123]);
        } catch (\Exception $e) {
            return new JsonResponse(['message' => $e->getMessage()], 400);
        }
    }
}

Error while sending multivalued custom data

I am facing the following error while sending a set of custom objects.
Oct 21, 2015 7:28:38 PM com.amazonaws.services.dynamodbv2.datamodeling.marshallers.ObjectSetToStringSetMarshaller marshall
WARNING: Marshaling a set of non-String objects to a DynamoDB StringSet. You won't be able to read these objects back out of DynamoDB unless you REALLY know what you're doing: it's probably a bug. If you DO know what you're doing feel free to ignore this warning, but consider using a custom marshaler for this instead.

com.amazonaws.AmazonServiceException: One or more parameter values were invalid: An string set may not be empty (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ValidationException; Request ID: S61P89234SI8PE61M8I9QTG8RNVV4KQNSO5AEMVJF66Q9ASUAAJG)
    at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1181)
    at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:766)
    at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:485)
    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:306)
    at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.invoke(AmazonDynamoDBClient.java:1799)
    at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.updateItem(AmazonDynamoDBClient.java:1614)
    at com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMapper$SaveObjectHandler.doUpdateItem(DynamoDBMapper.java:1241)
    at com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMapper$2.executeLowLevelRequest(DynamoDBMapper.java:937)
    at com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMapper$SaveObjectHandler.execute(DynamoDBMapper.java:1120)
    at com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMapper.save(DynamoDBMapper.java:966)
    at com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMapper.save(DynamoDBMapper.java:758)
    at .dao.impl.DeviceDaoImpl.addDevice(DeviceDaoImpl.java:57)
    at .admin.device.service.impl.DeviceServiceImpl.addDevice(DeviceServiceImpl.java:38)
This is the code snippet where this happens:
@DynamoDBTable(tableName="Device")
public class Device extends BaseEntity implements Serializable {
    private Set<AsscoiatedReviewer> asscoiatedReviewers;
    // getters, setters
}

// from DeviceDaoImpl
public Device addDevice(Device device) {
    device.setDeviceId(generateId());
    device.setCreatedTime(getCurrentTime());
    if (!StringUtils.isNullOrEmpty(device.getPincode())) {
        // get lat and long - do this for the review team as well
        // TODO: move it to a common place that can be accessed for the review team too
        setLatLong(device);
        // calculate the distances between this organization and the reviewers
        calculateAsscoiatedReviewersDistances(device);
    }
    mapper.save(device);
    Device device1 = mapper.load(Device.class, device.getDeviceId());
    return device1;
}
private void calculateAsscoiatedReviewersDistances(Device device) {
    if (StringUtils.isNullOrEmpty(device.getPincode())) return;
    try {
        List<ReviewTeam> reviewTeams = reviewTeamDao.getAllReviewTeams();
        Set<AsscoiatedReviewer> asscoiatedReviewers = new HashSet<AsscoiatedReviewer>();
        /*for (ReviewTeam reviewTeam2 : reviewTeams) {
            if (StringUtils.isNullOrEmpty(reviewTeam2.getPincode())) continue;
            AsscoiatedReviewer asscoiatedReviewer = new AsscoiatedReviewer();
            asscoiatedReviewer.setReviewTeamId(reviewTeam2.getReviewTeamId());
            double distance = distance(new Double(device.getLat()), new Double(device.getLongt()),
                    new Double(reviewTeam2.getLat()), new Double(reviewTeam2.getLongt()));
            asscoiatedReviewer.setDistance(new Double(distance).toString());
            asscoiatedReviewers.add(asscoiatedReviewer);
        }*/
        device.setAsscoiatedReviewers(asscoiatedReviewers);
    } catch (Exception e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
}
public class AsscoiatedReviewer implements Serializable, Comparable {

    private static final long serialVersionUID = -1906388056843006046L;

    private String distance;
    private String reviewTeamId;

    // getters, setters
}
It seems the error is happening at the following line:
device.setAsscoiatedReviewers(asscoiatedReviewers);
Should this be marshalled/unmarshalled before being sent to Amazon? Any help would be greatly appreciated.
UPDATE:
I am getting the following exception when the data is populated in asscoiatedReviewers:
WARNING: Marshaling a set of non-String objects to a DynamoDB StringSet. You won't be able to read these objects back out of DynamoDB unless you REALLY know what you're doing: it's probably a bug. If you DO know what you're doing feel free to ignore this warning, but consider using a custom marshaler for this instead.

com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMappingException: Cannot unmarshall to type class in.forus.foruscare.entity.AsscoiatedReviewer without a custom marshaler or @DynamoDBDocument annotation.
    at com.amazonaws.services.dynamodbv2.datamodeling.ConversionSchemas$StandardItemConverter.getObjectUnmarshaller(ConversionSchemas.java:590)
    at com.amazonaws.services.dynamodbv2.datamodeling.ConversionSchemas$StandardItemConverter.augment(ConversionSchemas.java:497)
    at com.amazonaws.services.dynamodbv2.datamodeling.ConversionSchemas$StandardItemConverter.getMemberUnmarshaller(ConversionSchemas.java:466)
    at com.amazonaws.services.dynamodbv2.datamodeling.ConversionSchemas$StandardItemConverter.getObjectSetUnmarshaller(ConversionSchemas.java:520)
Does this mean I need to write a custom marshaller or annotate with the @DynamoDBDocument annotation? If so, since this is a set of custom objects, can you point me to a simple example of a marshaller? I have looked at other marshallers, but they didn't seem to explain what I need.
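The second exception message itself points at the simpler fix: annotate the nested type with @DynamoDBDocument so DynamoDBMapper can marshal and unmarshal it as a document instead of a StringSet. A minimal sketch (the public getters/setters that the original elides are required by the mapper):
@DynamoDBDocument
public class AsscoiatedReviewer implements Serializable, Comparable<AsscoiatedReviewer> {

    private static final long serialVersionUID = -1906388056843006046L;

    private String distance;
    private String reviewTeamId;

    public String getDistance() { return distance; }
    public void setDistance(String distance) { this.distance = distance; }

    public String getReviewTeamId() { return reviewTeamId; }
    public void setReviewTeamId(String reviewTeamId) { this.reviewTeamId = reviewTeamId; }

    @Override
    public int compareTo(AsscoiatedReviewer other) { // comparison key is an assumption
        return this.reviewTeamId.compareTo(other.reviewTeamId);
    }
}
Separately, the "An string set may not be empty" ValidationException in the first trace comes from saving the empty HashSet left over when the loop is commented out - DynamoDB does not accept empty sets, so leave the attribute null (or skip setting it) when there are no associated reviewers.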

Log all requests from a Flex / BlazeDS client using a filter

I have a Spring BlazeDS integration application. I would like to log all the requests.
I planned to use a Filter. When I check the request parameters in my filter, they do not contain anything related to the client request. If I change the order of my filter (I have Spring Security), then it prints something related to Spring Security.
I am unable to log the user request.
Any help is appreciated.
I have implemented the same functionality by using AOP (AspectJ) to inject a logging statement into the communication endpoint methods. Maybe this is an alternative approach for you too.
/** Logger advice and pointcuts for flex remoting stuff, based on AspectJ. */
public aspect AspectJInvocationLoggerAspect {

    /** The name of the used logger. */
    public final static String LOGGER_NAME = "myPackage.FLEX_INVOCATION_LOGGER";

    /** Logger used to log messages. */
    private static final Logger LOGGER = Logger.getLogger(LOGGER_NAME);

    AspectJInvocationLoggerAspect() {
    }

    /**
     * Pointcut for all flex remoting methods.
     *
     * Flex remoting methods are determined by the following constraints:
     * <ul>
     * <li>they are public</li>
     * <li>they are located in a class of name Remoting* within (implement an interface of)
     * the {@link com.example.remote} package</li>
     * <li>they are located within a class with a {@link RemotingDestination} annotation</li>
     * </ul>
     */
    pointcut remotingServiceFunction()
        : (execution(public * com.example.remote.*.*Remote*.*(..)))
        && (within(@RemotingDestination *));

    before() : remotingServiceFunction() {
        if (LOGGER.isDebugEnabled()) {
            Signature sig = thisJoinPointStaticPart.getSignature();
            Object[] args = thisJoinPoint.getArgs();
            String location = sig.getDeclaringTypeName() + '.' + sig.getName() + ", args=" + Arrays.toString(args);
            LOGGER.debug(location + " - begin");
        }
    }

    /** Log flex invocation result at TRACE level. */
    after() returning (Object result) : remotingServiceFunction() {
        if (LOGGER.isTraceEnabled()) {
            Signature sig = thisJoinPointStaticPart.getSignature();
            String location = sig.getDeclaringTypeName() + '.' + sig.getName();
            LOGGER.trace(location + " - end = " + result);
        }
    }
}
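If AspectJ compile-time or load-time weaving is not an option in your build, a roughly equivalent proxy-based Spring AOP sketch is below; the @within pointcut targets spring-flex's @RemotingDestination annotation, and the logger name mirrors the aspect above (the details are illustrative, not from the original answer):
@Aspect
@Component
public class RemotingInvocationLoggerAspect {

    private static final Logger LOGGER = Logger.getLogger("myPackage.FLEX_INVOCATION_LOGGER");

    /** Matches public methods on Spring beans whose class is annotated with @RemotingDestination. */
    @Around("@within(org.springframework.flex.remoting.RemotingDestination) && execution(public * *(..))")
    public Object logInvocation(ProceedingJoinPoint pjp) throws Throwable {
        if (LOGGER.isDebugEnabled()) {
            LOGGER.debug(pjp.getSignature().toShortString()
                    + " - begin, args=" + Arrays.toString(pjp.getArgs()));
        }
        Object result = pjp.proceed();
        if (LOGGER.isTraceEnabled()) {
            LOGGER.trace(pjp.getSignature().toShortString() + " - end = " + result);
        }
        return result;
    }
}
Note that this variant needs aspect auto-proxying enabled (e.g. @EnableAspectJAutoProxy or <aop:aspectj-autoproxy/>) and only sees invocations that go through Spring-managed proxies.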
