spring-boot 2.5.2
spring-cloud Hoxton.SR12
spring-kafka 2.6.7 (downgraded due to issue: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1079)
I'm following this recipe to handle deserialisation errors: https://github.com/spring-cloud/spring-cloud-stream-samples/blob/main/recipes/recipe-3-handling-deserialization-errors-dlq-kafka.adoc
I created the beans mentioned in the recipe above as follows:
@Configuration
@Slf4j
public class ErrorHandlingConfig {

    @Bean
    public ListenerContainerCustomizer<AbstractMessageListenerContainer<byte[], byte[]>> customizer(SeekToCurrentErrorHandler errorHandler) {
        return (container, dest, group) -> {
            container.setErrorHandler(errorHandler);
        };
    }

    @Bean
    public SeekToCurrentErrorHandler errorHandler(DeadLetterPublishingRecoverer deadLetterPublishingRecoverer) {
        return new SeekToCurrentErrorHandler(deadLetterPublishingRecoverer);
    }

    @Bean
    public DeadLetterPublishingRecoverer publisher(KafkaOperations bytesTemplate) {
        return new DeadLetterPublishingRecoverer(bytesTemplate);
    }
}
configuration file:
spring:
  cloud:
    stream:
      default:
        producer:
          useNativeEncoding: true
        consumer:
          useNativeDecoding: true
      bindings:
        myInboundRoute:
          destination: some-destination.1
          group: a-custom-group
        myOutboundRoute:
          destination: some-destination.2
      kafka:
        binder:
          brokers: localhost
          defaultBrokerPort: 9092
          configuration:
            application:
              security: PLAINTEXT
        bindings:
          myInboundRoute:
            consumer:
              autoCommitOffset: true
              startOffset: latest
              enableDlq: true
              dlqName: my-dql.poison
              dlqProducerProperties:
                configuration:
                  value.serializer: myapp.serde.MyCustomSerializer
              configuration:
                value.deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
                spring.deserializer.value.delegate.class: myapp.serde.MyCustomSerializer
          myOutboundRoute:
            producer:
              configuration:
                key.serializer: org.apache.kafka.common.serialization.StringSerializer
                value.serializer: myapp.serde.MyCustomSerializer
I was expecting the DLT to be called my-dql.poison. That topic is in fact created fine; however, a second topic called some-destination.1.DLT is auto-created as well.
Why does it create this in addition to the one I have named in the config with dlqName?
What am I doing wrong? When I poll for messages, the message is in the auto-created some-destination.1.DLT and not in my dlqName topic.
You should not configure DLT processing in the binding if you configure the SeekToCurrentErrorHandler (STCEH) in the container; that is, remove enableDlq and dlqName from the binding. Also set maxAttempts=1 there to disable the binder's retries.
You need to configure a destination resolver in the DeadLetterPublishingRecoverer (DLPR) to use a different name.
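For the maxAttempts part, a minimal sketch using the binding name from the question:

spring:
  cloud:
    stream:
      bindings:
        myInboundRoute:
          consumer:
            maxAttempts: 1

For the destination resolver, the relevant DeadLetterPublishingRecoverer constructor is shown below: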
/**
 * Create an instance with the provided template and destination resolving function,
 * that receives the failed consumer record and the exception and returns a
 * {@link TopicPartition}. If the partition in the {@link TopicPartition} is less than
 * 0, no partition is set when publishing to the topic.
 * @param template the {@link KafkaOperations} to use for publishing.
 * @param destinationResolver the resolving function.
 */
public DeadLetterPublishingRecoverer(KafkaOperations<? extends Object, ? extends Object> template,
        BiFunction<ConsumerRecord<?, ?>, Exception, TopicPartition> destinationResolver) {

    this(Collections.singletonMap(Object.class, template), destinationResolver);
}
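So, a sketch of the publisher bean from the question, rewritten to always target the dlqName topic (the topic name is copied from the question's config; this replaces the single-argument constructor used above):

@Bean
public DeadLetterPublishingRecoverer publisher(KafkaOperations<byte[], byte[]> bytesTemplate) {
    // Route every failed record to the fixed DLQ topic; a partition value
    // below 0 means no partition is set when publishing.
    return new DeadLetterPublishingRecoverer(bytesTemplate,
            (consumerRecord, exception) -> new TopicPartition("my-dql.poison", -1));
}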
See https://docs.spring.io/spring-kafka/docs/current/reference/html/#dead-letters
There is an open issue to configure the DLPR with the binding's DLT name.
https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1031
We had a rogue producer setting a Kafka header __TypeId__ to a class that was part of the producer, but not of a consumer implemented within a Spring Cloud Stream application using the Kafka Streams binder. It resulted in an exception:
java.lang.IllegalArgumentException: The class 'com.bad.MyClass' is not in the trusted packages: [java.util, java.lang, de.datev.pws.loon.dcp.foreignmodels.*]. If you believe this class is safe to deserialize, please provide its name. If the serialization is only done by a trusted source, you can also enable trust all (*).
How can we ensure within the consumer that this TypeId header is ignored?
Some Stack Overflow answers point to spring.json.use.type.headers=false, but it seems to be an "old" property that is no longer valid.
application.yaml:
spring:
  json.use.type.headers: false
  application:
    name: dcp-all
  kafka:
    bootstrap-servers: 'xxxxx.kafka.dev.dvint.de:9093'
  cloud:
    stream:
      kafka:
        streams:
          binder:
            required-acks: -1 # all in-sync-replicas
  ...
Stack trace:
at org.springframework.kafka.support.mapping.DefaultJackson2JavaTypeMapper.getClassIdType(DefaultJackson2JavaTypeMapper.java:129)
at org.springframework.kafka.support.mapping.DefaultJackson2JavaTypeMapper.toJavaType(DefaultJackson2JavaTypeMapper.java:103)
at org.springframework.kafka.support.serializer.JsonDeserializer.deserialize(JsonDeserializer.java:569)
at org.apache.kafka.streams.processor.internals.SourceNode.deserializeValue(SourceNode.java:58)
at org.apache.kafka.streams.processor.internals.RecordDeserializer.deserialize(RecordDeserializer.java:66)
at org.apache.kafka.streams.processor.internals.RecordQueue.updateHead(RecordQueue.java:176)
at org.apache.kafka.streams.processor.internals.RecordQueue.addRawRecords(RecordQueue.java:112)
at org.apache.kafka.streams.processor.internals.PartitionGroup.addRawRecords(PartitionGroup.java:304)
at org.apache.kafka.streams.processor.internals.StreamTask.addRecords(StreamTask.java:960)
at org.apache.kafka.streams.processor.internals.TaskManager.addRecordsToTasks(TaskManager.java:1068)
at org.apache.kafka.streams.processor.internals.StreamThread.pollPhase(StreamThread.java:962)
at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:751)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:604)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:576)
Here is a unit test
@Test
void consumeWorksEvenWithBadTypesHeader() throws JsonProcessingException, InterruptedException {
    Map<String, Object> producerProps = KafkaTestUtils.producerProps(embeddedKafka);
    producerProps.put("key.serializer", StringSerializer.class.getName());
    DefaultKafkaProducerFactory<String, String> pf = new DefaultKafkaProducerFactory<>(producerProps);
    List<Header> headers = Arrays.asList(new RecordHeader("__TypeId__", "com.bad.MyClass".getBytes()));
    ProducerRecord<String, String> p = new ProducerRecord<>(TOPIC1, 0, "any-key",
            "{ ... some valid JSON ...}", headers);
    try {
        KafkaTemplate<String, String> template = new KafkaTemplate<>(pf, true);
        template.send(p);
        ConsumerRecord<String, String> consumerRecord = KafkaTestUtils.getSingleRecord(consumer, TOPIC2, DEFAULT_CONSUMER_POLL_TIME);
        // Assertions ...
    } finally {
        pf.destroy();
    }
}
You have two options:
On the producer side, set the property to omit adding the type info headers.
On the consumer side, set the property to not use the type info headers.
https://docs.spring.io/spring-kafka/docs/current/reference/html/#json-serde
It is not an "old" property.
/**
 * Kafka config property for using type headers (default true).
 * @since 2.2.3
 */
public static final String USE_TYPE_INFO_HEADERS = "spring.json.use.type.headers";
It needs to be set in the consumer properties.
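For the Kafka Streams binder used in the question, a minimal sketch of where that consumer property can go (assuming the binder's configuration block, which is passed through to the Kafka Streams consumer):

spring:
  cloud:
    stream:
      kafka:
        streams:
          binder:
            configuration:
              spring.json.use.type.headers: false

On the producer side, the corresponding switch to stop adding the header in the first place is spring.json.add.type.headers=false.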
We are using Spring Kafka 2.2.2.RELEASE to retrieve records from Kafka using @KafkaListener and the ConcurrentKafkaListenerContainerFactory. We have configured max-poll-records to 5; however, the consumer always gets only 1 record in the list instead of 5.
With the same configuration, it works in Spring Kafka 2.1.4.RELEASE.
Here is our application.yml configuration:
spring:
  kafka:
    bootstrap-servers: localhost:9092
    consumer:
      enable-auto-commit: false
      max-poll-records: 5
      group-id: group_id
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: com.gap.cascade.li.data.xx.xx.CustomDeserialiser
Here is our ConcurrentKafkaListenerContainerFactory:
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    factory.setBatchListener(true);
    return factory;
}
Are we missing any configuration which needs to be done for Spring Kafka 2.2.2 Release?
Assuming you have a listener
@KafkaListener(...)
public void listen(List<...> data) {
    ...
}
Setting factory.setBatchListener(true); should work for you (as long as there is more than one record ready).
You can also use the boot property
spring:
  kafka:
    listener:
      type: batch
to do the same thing; avoiding the need to declare your own factory.
If you turn on DEBUG logging, the container will log how many records are returned by the poll. You can also set fetch.min.bytes and fetch.max.wait.ms to influence how many records are returned if only one is immediately ready...
spring:
  kafka:
    consumer:
      auto-offset-reset: earliest
      enable-auto-commit: false
      properties:
        fetch.min.bytes: 10000
        fetch.max.wait.ms: 2000
    listener:
      type: batch
BTW, the current 2.2.x release is 2.2.7 (boot 2.1.6).
I'm trying to test an application with the following binding configured:
spring:
  cloud:
    stream:
      bindings:
        accountSource:
          destination: account
          producer:
            useNativeEncoding: true
      kafka:
        binder:
          brokers: ${KAFKA_BOOTSTRAP_ADDRESSES}
          producer-properties:
            schema.registry.url: ${KAFKA_SCHEMA_REGISTRY_URL}
            value.subject.name.strategy: io.confluent.kafka.serializers.subject.RecordNameStrategy
        bindings:
          accountSource:
            producer:
              configuration:
                key.serializer: org.apache.kafka.common.serialization.StringSerializer
                value.serializer: io.confluent.kafka.streams.serdes.avro.SpecificAvroSerializer
When running the application normally, AbstractMessageChannel.interceptorList is empty and sending messages to the broker works fine.
When running the test (with the spring-cloud-stream-test-support binder), AbstractMessageChannel.interceptorList gets populated with a MessageConverterConfigurer interceptor and the message is converted using the content-type serialization mechanism (the Avro object is converted to JSON). This is the test code:
@RunWith(SpringRunner.class)
public class AccountServiceImplTest {

    @Autowired
    private AccountService accountService;

    @Autowired
    private MessageCollector messageCollector;

    @Autowired
    private MessageChannel accountSource;

    @Test
    public void create() {
        // Simplified code
        AccountCreationRequest accountCreationRequest = AccountCreationRequest.builder().company(company).subscription(subscription).user(user).build();
        accountCreationRequest = accountService.create(accountCreationRequest);
        Message<?> message = messageCollector.forChannel(accountSource).poll();
        // execute asserts on message
    }

    @TestConfiguration
    @ComponentScan(basePackageClasses = TestSupportBinderAutoConfiguration.class)
    protected static class AccountServiceImplTestConfiguration {

        @EnableBinding({KafkaConfig.AccountBinding.class})
        public interface AccountBinding {

            @Output("accountSource")
            MessageChannel accountSource();
        }
    }
}
Am I missing something to disable spring-cloud-stream serialization mechanisms?
Don't use the test binder; use the Kafka binder with an embedded kafka broker instead.
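A minimal sketch of that approach with spring-kafka-test on the test classpath (JUnit 4 to match the question; the topic name comes from the question's binding, the rest is illustrative):

@RunWith(SpringRunner.class)
@SpringBootTest(properties = {
        // point the binder at the broker started by @EmbeddedKafka
        "spring.cloud.stream.kafka.binder.brokers=${spring.embedded.kafka.brokers}"
})
@EmbeddedKafka(topics = "account")
public class AccountServiceEmbeddedKafkaTest {

    @Autowired
    private AccountService accountService;

    @Test
    public void create() {
        // Send through the real binding, then read the record back with a plain
        // test consumer (e.g. KafkaTestUtils.getSingleRecord(consumer, "account"))
        // and assert on the natively-encoded Avro payload.
    }
}

This exercises the real Kafka binder, including useNativeEncoding and the configured serializers, which the test-support binder deliberately bypasses.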
I'm using the Spring Cloud Stream Kafka binder, Edgware.SR4 release.
I have set custom headers on a message payload and published it, but I can't see those headers on the consumer end.
I used a Message object to bind the payload and headers. I tried adding the property spring.cloud.stream.kafka.binder.headers, but it did not work.
Producer:
Application.yml
spring:
  cloud:
    stream:
      bindings:
        sampleEvent:
          destination: sample-event
          content-type: application/json
      kafka:
        binder:
          brokers: localhost:9092
          zkNodes: localhost:2181
          autoCreateTopics: false
          zkConnectionTimeout: 36000
MessageChannelConstants.java
public class MessageChannelConstants {

    public static final String SAMPLE_EVENT = "sampleEvent";

    private MessageChannelConstants() {}
}

SampleMessageChannels.java

public interface SampleMessageChannels {

    @Output(MessageChannelConstants.SAMPLE_EVENT)
    MessageChannel sampleEvent();
}
SampleEventPublisher.java
@Service
@EnableBinding(SampleMessageChannels.class)
public class SampleEventPublisher {

    @Autowired
    private SampleMessageChannels sampleMessageChannels;

    public void publishSampleEvent(SampleEvent sampleEvent) {
        final Message<SampleEvent> message = MessageBuilder.withPayload(sampleEvent).setHeader("appId", "Demo").build();
        // use the injected instance (the original called the interface method statically)
        MessageChannel messageChannel = sampleMessageChannels.sampleEvent();
        if (messageChannel != null) {
            messageChannel.send(message);
        }
    }
}
Consumer:
application.yml
spring:
  cloud:
    stream:
      bindings:
        sampleEvent:
          destination: sample-event
          content-type: application/json
      kafka:
        binder:
          brokers: localhost:9092
          zkNodes: localhost:2181
          autoCreateTopics: false
          zkConnectionTimeout: 36000
MessageChannelConstants.java
public class MessageChannelConstants {

    public static final String SAMPLE_EVENT = "sampleEvent";

    private MessageChannelConstants() {}
}

SampleMessageChannels.java

public interface SampleMessageChannels {

    @Output(MessageChannelConstants.SAMPLE_EVENT)
    MessageChannel sampleEvent();
}
SampleEventListener.java
@Service
@EnableBinding(SampleMessageChannels.class)
public class SampleEventListener {

    @StreamListener(MessageChannelConstants.SAMPLE_EVENT)
    public void listenSampleEvent(@Payload SampleEvent event,
            @Header(required = true, name = "appId") String appId) {
        // do something
    }
}
Below is the exception I got:
org.springframework.messaging.MessageHandlingException: Missing header 'appId' for method parameter type [class java.lang.String]
at org.springframework.messaging.handler.annotation.support.HeaderMethodArgumentResolver.handleMissingValue(HeaderMethodArgumentResolver.java:100)
at org.springframework.messaging.handler.annotation.support.AbstractNamedValueMethodArgumentResolver.resolveArgument(AbstractNamedValueMethodArgumentResolver.java:103)
at org.springframework.messaging.handler.invocation.HandlerMethodArgumentResolverComposite.resolveArgument(HandlerMethodArgumentResolverComposite.java:112)
at org.springframework.messaging.handler.invocation.InvocableHandlerMethod.getMethodArgumentValues(InvocableHandlerMethod.java:135)
at org.springframework.messaging.handler.invocation.InvocableHandlerMethod.invoke(InvocableHandlerMethod.java:107)
at org.springframework.cloud.stream.binding.StreamListenerMessageHandler.handleRequestMessage(StreamListenerMessageHandler.java:55)
at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.handleMessageInternal(AbstractReplyProducingMessageHandler.java:109)
at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:127)
at org.springframework.integration.dispatcher.AbstractDispatcher.tryOptimizedDispatch(AbstractDispatcher.java:116)
at org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch(UnicastingDispatcher.java:148)
at org.springframework.integration.dispatcher.UnicastingDispatcher.dispatch(UnicastingDispatcher.java:121)
at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:89)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:425)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:375)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:115)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:45)
at org.springframework.messaging.core.AbstractMessageSendingTemplate.send(AbstractMessageSendingTemplate.java:105)
at org.springframework.integration.handler.AbstractMessageProducingHandler.sendOutput(AbstractMessageProducingHandler.java:360)
at org.springframework.integration.handler.AbstractMessageProducingHandler.produceOutput(AbstractMessageProducingHandler.java:271)
at org.springframework.integration.handler.AbstractMessageProducingHandler.sendOutputs(AbstractMessageProducingHandler.java:188)
at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.handleMessageInternal(AbstractReplyProducingMessageHandler.java:115)
at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:127)
at org.springframework.integration.channel.FixedSubscriberChannel.send(FixedSubscriberChannel.java:70)
at org.springframework.integration.channel.FixedSubscriberChannel.send(FixedSubscriberChannel.java:64)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:115)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:45)
at org.springframework.messaging.core.AbstractMessageSendingTemplate.send(AbstractMessageSendingTemplate.java:105)
at org.springframework.integration.endpoint.MessageProducerSupport.sendMessage(MessageProducerSupport.java:188)
at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter.access$200(KafkaMessageDrivenChannelAdapter.java:63)
at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter$IntegrationRecordMessageListener.onMessage(KafkaMessageDrivenChannelAdapter.java:372)
at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter$IntegrationRecordMessageListener.onMessage(KafkaMessageDrivenChannelAdapter.java:352)
at org.springframework.kafka.listener.adapter.RetryingAcknowledgingMessageListenerAdapter$1.doWithRetry(RetryingAcknowledgingMessageListenerAdapter.java:79)
at org.springframework.kafka.listener.adapter.RetryingAcknowledgingMessageListenerAdapter$1.doWithRetry(RetryingAcknowledgingMessageListenerAdapter.java:73)
at org.springframework.retry.support.RetryTemplate.doExecute(RetryTemplate.java:287)
at org.springframework.retry.support.RetryTemplate.execute(RetryTemplate.java:180)
at org.springframework.kafka.listener.adapter.RetryingAcknowledgingMessageListenerAdapter.onMessage(RetryingAcknowledgingMessageListenerAdapter.java:73)
at org.springframework.kafka.listener.adapter.RetryingAcknowledgingMessageListenerAdapter.onMessage(RetryingAcknowledgingMessageListenerAdapter.java:39)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeRecordListener(KafkaMessageListenerContainer.java:792)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeListener(KafkaMessageListenerContainer.java:736)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.access$2100(KafkaMessageListenerContainer.java:246)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer$ListenerInvoker.run(KafkaMessageListenerContainer.java:1025)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
Note: I am using the Spring Cloud Sleuth and Zipkin dependencies as well.
With Edgware (Spring Cloud Stream Ditmars), you have to specify which headers will be transferred.
See Kafka Binder Properties.
This is because Edgware was based on a version of Kafka from before it supported headers natively, so the binder encodes the headers into the payload.
spring.cloud.stream.kafka.binder.headers
The list of custom headers that will be transported by the binder.
Default: empty.
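A minimal sketch for the appId header used in the question (set this in both the producer and the consumer application, since the header is encoded into and decoded from the payload):

spring:
  cloud:
    stream:
      kafka:
        binder:
          headers: appId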
You should also be sure to upgrade spring-kafka to 1.3.9.RELEASE and kafka-clients to 0.11.0.2.
Preferably, though, upgrade to Finchley or Greenwich; those versions support headers natively.
I am starting to work with services in Symfony and therefore created the example service from the Symfony documentation:
namespace AppBundle\Service;

use Psr\Log\LoggerInterface;

class MessageGenerator
{
    private $logger;

    public function __construct(LoggerInterface $logger)
    {
        $this->logger = $logger; // the assignment was missing, leaving $this->logger null
    }

    public function getMessage()
    {
        $this->logger->info('Success!');
    }
}
I call that service in my controller (I also have the use statement use AppBundle\Service\MessageGenerator;):
$messageGenerator = $this->get(MessageGenerator::class);
$message = $messageGenerator->getMessage();
$this->addFlash('success', $message);
My service is defined in the services.yml file:
services:
    app.message_generator:
        class: AppBundle\Service\MessageGenerator
        public: true
so in my eyes I did everything exactly as described in the documentation, and when calling
php app/console debug:container app.message_generator
on my command line I get my service:
Option             Value
------------------ ------------------------------------
Service ID         app.message_generator
Class              AppBundle\Service\MessageGenerator
Tags               -
Scope              container
Public             yes
Synthetic          no
Lazy               no
Synchronized       no
Abstract           no
Autowired          no
Autowiring Types   -
Now when I execute the controller function where I call my service I still get the error:
You have requested a non-existent service "appbundle\service\messagegenerator".
Any ideas?
Symfony's naming is a bit confusing: you retrieve the service by requesting it under the name it was defined with: app.message_generator.
$messageGenerator = $this->get('app.message_generator');
Symfony has recently suggested switching from a given name (app.message_generator) that you define the service as, to the class name (AppBundle\Service\MessageGenerator). They are both just 'a name' used to look the service up.
You are trying to use both, when only the given name is defined.
In the long term, it's suggested to use the ::class-based name, and quite possibly to allow the framework to find the classes itself and configure them itself too. This means that, by default, all services are private and are handled by the framework and its service container, as sketched below.
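A sketch of what that modern setup looks like in services.yml (the _defaults block matches the Symfony 3.3+ defaults; the resource path is an assumption based on the question's AppBundle layout):

services:
    _defaults:
        autowire: true       # inject dependencies by type hint
        autoconfigure: true
        public: false        # services are private by default

    # register every class under Service/ with its class name as the service id
    AppBundle\Service\:
        resource: '../../src/AppBundle/Service/*'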
In the meantime, while you are learning, you can either fetch it by its given name:

$messageGenerator = $this->get('app.message_generator');

or explicitly define the service under its class name and make it public, so it can be fetched with ->get(...) from the container:
# services.yml
AppBundle\Service\MessageGenerator:
    class: AppBundle\Service\MessageGenerator
    public: true

# php controller
$messageGenerator = $this->get(MessageGenerator::class);
or have it injected automatically into the controller when the controller is requested:

public function __construct(LoggerInterface $logger, MessageGenerator $msgGen)
{
    $this->logger = $logger;
    $this->messageGenerator = $msgGen;
}

public function getMessage()
{
    $result = $this->messageGenerator->do_things(....);
    $this->logger->info('Success!');
}