How to test bindings with useNativeEncoding=true in Spring Cloud Stream - spring-kafka

I'm trying to test an application with the following binding configured:
spring:
  cloud:
    stream:
      bindings:
        accountSource:
          destination: account
          producer:
            useNativeEncoding: true
      kafka:
        binder:
          brokers: ${KAFKA_BOOTSTRAP_ADDRESSES}
          producer-properties:
            schema.registry.url: ${KAFKA_SCHEMA_REGISTRY_URL}
            value.subject.name.strategy: io.confluent.kafka.serializers.subject.RecordNameStrategy
        bindings:
          accountSource:
            producer:
              configuration:
                key:
                  serializer: org.apache.kafka.common.serialization.StringSerializer
                value:
                  serializer: io.confluent.kafka.streams.serdes.avro.SpecificAvroSerializer
When running the application normally, AbstractMessageChannel.interceptorList is empty and sending messages to the broker works fine.
When running the test (with the spring-cloud-stream-test-support binder), AbstractMessageChannel.interceptorList gets populated with a MessageConverterConfigurer, and the message is converted using the content-type serialization mechanism (the Avro object is converted to JSON). This is the test code:
@RunWith(SpringRunner.class)
public class AccountServiceImplTest {

    @Autowired
    private AccountService accountService;

    @Autowired
    private MessageCollector messageCollector;

    @Autowired
    private MessageChannel accountSource;

    @Test
    public void create() {
        // Simplified code
        AccountCreationRequest accountCreationRequest = AccountCreationRequest.builder()
                .company(company).subscription(subscription).user(user).build();
        accountCreationRequest = accountService.create(accountCreationRequest);
        Message<?> message = messageCollector.forChannel(accountSource).poll();
        // execute asserts on message
    }

    @TestConfiguration
    @ComponentScan(basePackageClasses = TestSupportBinderAutoConfiguration.class)
    protected static class AccountServiceImplTestConfiguration {

        @EnableBinding({KafkaConfig.AccountBinding.class})
        public interface AccountBinding {

            @Output("accountSource")
            MessageChannel accountSource();
        }
    }
}
Am I missing something to disable spring-cloud-stream serialization mechanisms?

Don't use the test binder; use the Kafka binder with an embedded Kafka broker instead.
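For example, a minimal sketch of such a test using spring-kafka-test's embedded broker (the class name and assertions are illustrative; the spring.embedded.kafka.brokers property is populated by @EmbeddedKafka):

@RunWith(SpringRunner.class)
@SpringBootTest(properties = {
        // point the real Kafka binder at the embedded broker
        "spring.cloud.stream.kafka.binder.brokers=${spring.embedded.kafka.brokers}"
})
@EmbeddedKafka(partitions = 1, topics = "account")
public class AccountServiceImplEmbeddedKafkaTest {

    @Autowired
    private AccountService accountService;

    @Test
    public void create() {
        accountService.create(AccountCreationRequest.builder().build());
        // consume from the "account" topic with a plain KafkaConsumer
        // (e.g. via KafkaTestUtils) and assert on the natively-serialized record
    }
}

With the real binder in play, useNativeEncoding is honored and no content-type conversion interceptor is applied.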

Related

Prevent __TypeId__ from being used in Spring Cloud Stream

We had a rogue producer setting the Kafka header __TypeId__ to a class that was part of the producer, but not of a consumer implemented within a Spring Cloud Stream application using the Kafka Streams binder. It resulted in an exception:
java.lang.IllegalArgumentException: The class 'com.bad.MyClass' is not in the trusted packages: [java.util, java.lang, de.datev.pws.loon.dcp.foreignmodels.*]. If you believe this class is safe to deserialize, please provide its name. If the serialization is only done by a trusted source, you can also enable trust all (*).
How can we ensure within the consumer that this TypeId header is ignored?
Some Stack Overflow answers point to spring.json.use.type.headers=false, but it seems to be an "old" property that is no longer valid.
application.yaml:
spring:
  json.use.type.headers: false
  application:
    name: dcp-all
  kafka:
    bootstrap-servers: 'xxxxx.kafka.dev.dvint.de:9093'
  cloud:
    stream:
      kafka:
        streams:
          binder:
            required-acks: -1 # all in-sync-replicas
...
Stack trace:
at org.springframework.kafka.support.mapping.DefaultJackson2JavaTypeMapper.getClassIdType(DefaultJackson2JavaTypeMapper.java:129)
at org.springframework.kafka.support.mapping.DefaultJackson2JavaTypeMapper.toJavaType(DefaultJackson2JavaTypeMapper.java:103)
at org.springframework.kafka.support.serializer.JsonDeserializer.deserialize(JsonDeserializer.java:569)
at org.apache.kafka.streams.processor.internals.SourceNode.deserializeValue(SourceNode.java:58)
at org.apache.kafka.streams.processor.internals.RecordDeserializer.deserialize(RecordDeserializer.java:66)
at org.apache.kafka.streams.processor.internals.RecordQueue.updateHead(RecordQueue.java:176)
at org.apache.kafka.streams.processor.internals.RecordQueue.addRawRecords(RecordQueue.java:112)
at org.apache.kafka.streams.processor.internals.PartitionGroup.addRawRecords(PartitionGroup.java:304)
at org.apache.kafka.streams.processor.internals.StreamTask.addRecords(StreamTask.java:960)
at org.apache.kafka.streams.processor.internals.TaskManager.addRecordsToTasks(TaskManager.java:1068)
at org.apache.kafka.streams.processor.internals.StreamThread.pollPhase(StreamThread.java:962)
at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:751)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:604)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:576)
Here is a unit test
@Test
void consumeWorksEvenWithBadTypesHeader() throws JsonProcessingException, InterruptedException {
    Map<String, Object> producerProps = KafkaTestUtils.producerProps(embeddedKafka);
    producerProps.put("key.serializer", StringSerializer.class.getName());
    DefaultKafkaProducerFactory<String, String> pf = new DefaultKafkaProducerFactory<>(producerProps);
    List<Header> headers = Arrays.asList(new RecordHeader("__TypeId__", "com.bad.MyClass".getBytes()));
    ProducerRecord<String, String> p = new ProducerRecord<>(TOPIC1, 0, "any-key",
            "{ ... some valid JSON ...}", headers);
    try {
        KafkaTemplate<String, String> template = new KafkaTemplate<>(pf, true);
        template.send(p);
        ConsumerRecord<String, String> consumerRecord = KafkaTestUtils.getSingleRecord(consumer, TOPIC2, DEFAULT_CONSUMER_POLL_TIME);
        // Assertions ...
    } finally {
        pf.destroy();
    }
}
You have 2 options:
On the producer side, set the property to omit adding the type info headers.
On the consumer side, set the property to not use the type info headers.
https://docs.spring.io/spring-kafka/docs/current/reference/html/#json-serde
It is not an "old" property.
/**
 * Kafka config property for using type headers (default true).
 * @since 2.2.3
 */
public static final String USE_TYPE_INFO_HEADERS = "spring.json.use.type.headers";
It needs to be set in the consumer properties.
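For a Kafka Streams binder application like the one above, a sketch of where the consumer-side property could go (assuming the default binder; the exact location may vary with your setup). The producer-side counterpart to omit the headers entirely is spring.json.add.type.headers=false:

spring:
  cloud:
    stream:
      kafka:
        streams:
          binder:
            configuration:
              # passed through to the JsonDeserializer used by the binder's serdes
              spring.json.use.type.headers: false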

Spring cloud stream handling poison pills with Kafka DLT

spring-boot 2.5.2
spring-cloud Hoxton.SR12
spring-kafka 2.6.7 (downgraded due to issue: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1079)
I'm following this recipe to handle deserialisation errors: https://github.com/spring-cloud/spring-cloud-stream-samples/blob/main/recipes/recipe-3-handling-deserialization-errors-dlq-kafka.adoc
I created the beans mentioned in the recipe above as:
@Configuration
@Slf4j
public class ErrorHandlingConfig {

    @Bean
    public ListenerContainerCustomizer<AbstractMessageListenerContainer<byte[], byte[]>> customizer(SeekToCurrentErrorHandler errorHandler) {
        return (container, dest, group) -> {
            container.setErrorHandler(errorHandler);
        };
    }

    @Bean
    public SeekToCurrentErrorHandler errorHandler(DeadLetterPublishingRecoverer deadLetterPublishingRecoverer) {
        return new SeekToCurrentErrorHandler(deadLetterPublishingRecoverer);
    }

    @Bean
    public DeadLetterPublishingRecoverer publisher(KafkaOperations bytesTemplate) {
        return new DeadLetterPublishingRecoverer(bytesTemplate);
    }
}
configuration file:
spring:
  cloud:
    stream:
      default:
        producer:
          useNativeEncoding: true
        consumer:
          useNativeDecoding: true
      bindings:
        myInboundRoute:
          destination: some-destination.1
          group: a-custom-group
        myOutboundRoute:
          destination: some-destination.2
      kafka:
        binder:
          brokers: localhost
          defaultBrokerPort: 9092
          configuration:
            application:
              security: PLAINTEXT
        bindings:
          myInboundRoute:
            consumer:
              autoCommitOffset: true
              startOffset: latest
              enableDlq: true
              dlqName: my-dql.poison
              dlqProducerProperties:
                configuration:
                  value.serializer: myapp.serde.MyCustomSerializer
              configuration:
                value.deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
                spring.deserializer.value.delegate.class: myapp.serde.MyCustomSerializer
          myOutboundRoute:
            producer:
              configuration:
                key.serializer: org.apache.kafka.common.serialization.StringSerializer
                value.serializer: myapp.serde.MyCustomSerializer
I was expecting the DLT to be called my-dql.poison. That topic is in fact created fine, but I also get a second, auto-created topic called some-destination.1.DLT.
Why does it create this as well as the one I named in the config with dlqName?
What am I doing wrong? When I poll for messages, the message is in the auto-created some-destination.1.DLT, not in my dlqName topic.
You should not configure DLT processing in the binding if you configure the SeekToCurrentErrorHandler in the container; also set maxAttempts=1 there to disable the binder's retries.
You need to configure a destination resolver in the DeadLetterPublishingRecoverer to use a different name:
/**
 * Create an instance with the provided template and destination resolving function,
 * that receives the failed consumer record and the exception and returns a
 * {@link TopicPartition}. If the partition in the {@link TopicPartition} is less than
 * 0, no partition is set when publishing to the topic.
 * @param template the {@link KafkaOperations} to use for publishing.
 * @param destinationResolver the resolving function.
 */
public DeadLetterPublishingRecoverer(KafkaOperations<? extends Object, ? extends Object> template,
        BiFunction<ConsumerRecord<?, ?>, Exception, TopicPartition> destinationResolver) {

    this(Collections.singletonMap(Object.class, template), destinationResolver);
}
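For example, the publisher bean from the question could pass a resolver that always targets the configured DLT name (a sketch; partition -1 means no partition is set, per the Javadoc above):

@Bean
public DeadLetterPublishingRecoverer publisher(KafkaOperations<byte[], byte[]> bytesTemplate) {
    // route every failed record to the custom DLT instead of the default <topic>.DLT
    return new DeadLetterPublishingRecoverer(bytesTemplate,
            (consumerRecord, exception) -> new TopicPartition("my-dql.poison", -1));
}

Together with removing enableDlq/dlqName from the binding and setting maxAttempts: 1 on the consumer, this makes the error handler the single place where dead letters are published.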
See https://docs.spring.io/spring-kafka/docs/current/reference/html/#dead-letters
There is an open issue to configure the DLPR with the binding's DLT name.
https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1031

@Header and Spring Cloud Stream functional programming model

Is there a way to use @Header inside the following Kafka consumer code? I am using Spring Cloud Stream (Kafka Streams binder), and my implementation uses the functional programming model, for example:
@Bean
public Consumer<KStream<String, Pojo>> process() {
    return messages -> messages.foreach((k, v) -> process(v));
}
If using Spring for Apache Kafka, this can be as simple as:
@KafkaListener(topics = "${mytopicname}", clientIdPrefix = "${myprefix}", errorHandler = "customEventErrorHandler")
public void processEvent(@Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) String key,
        @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int partition,
        @Header(KafkaHeaders.RECEIVED_TOPIC) String topic,
        @Header(KafkaHeaders.RECEIVED_TIMESTAMP) long ts,
        @Valid Pojo pojo) {
    ...
    // use headers here
    ...
}
No; the Kafka Streams binder is not based on Spring Messaging.
You can access headers, the topic, and so on in a Transformer (via the ProcessorContext) added to your stream, as in the sketch below.
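A minimal sketch of that approach, reusing the function from the question (Pojo and process(v) are the question's own placeholders):

@Bean
public Consumer<KStream<String, Pojo>> process() {
    return messages -> messages
            .transformValues(() -> new ValueTransformerWithKey<String, Pojo, Pojo>() {

                private ProcessorContext context;

                @Override
                public void init(ProcessorContext context) {
                    this.context = context;
                }

                @Override
                public Pojo transform(String key, Pojo value) {
                    Headers headers = context.headers(); // record headers
                    String topic = context.topic();      // source topic
                    // inspect headers/topic here before continuing
                    return value;
                }

                @Override
                public void close() {
                }
            })
            .foreach((k, v) -> process(v));
}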
You can use the Kafka Message Channel binder with:
@Bean
public Consumer<Message<Pojo>> process() {
    return message -> ...
}
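Headers are then available through the standard Message API, for instance:

@Bean
public Consumer<Message<Pojo>> process() {
    return message -> {
        String topic = (String) message.getHeaders().get(KafkaHeaders.RECEIVED_TOPIC);
        Pojo payload = message.getPayload();
        // handle the payload, using any headers you need
    };
}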

Building a reactive long-polling API with WebFlux

I'm trying to use Spring WebFlux to build a reactive long-polling API service. I have implemented a service that produces a data stream from my data source (Kafka), and the caller of this long-polling API is supposed to get/wait for the first data with the requested key. But somehow the API does not return the data if it arrives after the request has been received. Am I using the correct Reactor methods? Thanks so much!
My service:
@Service
public class KafkaService {

    // the processor is an in-memory cache that keeps track of old messages
    @Autowired private KafkaMessageProcessor kafkaMessageProcessor;

    @Autowired private ApplicationEventService applicationEventService;

    private Flux<KafkaMessage> eventFlux;

    @PostConstruct
    public void setEventFlux() {
        this.eventFlux = Flux.create(applicationEventService).share();
    }

    public Flux<KafkaMessage> listenToKeyFlux(String key) {
        return kafkaMessageProcessor
                .getAll()
                .concatWith(this.eventFlux.share())
                .filter(x -> x.getKey().equals(key));
    }
}
My handler:
@RestController
@RequestMapping("/longpolling")
public class LongPollingController {

    private static final String MSG_PREFIX = "key_";

    @Autowired private KafkaService kafkaService;

    @GetMapping("/message/{key}")
    @CrossOrigin(allowedHeaders = "*", origins = "*")
    public CompletableFuture<KafkaMessage> getMessage(@PathVariable String key) {
        return kafkaService.listenToKeyFlux(MSG_PREFIX + key).shareNext().toFuture();
    }
}

Spring Cloud Stream Kafka binder custom headers are not received in consumer

I'm using the Spring Cloud Stream Kafka binder, Edgware.SR4 release.
I have set custom headers on a message payload and published it, but I can't see those headers on the consumer end.
I have used a Message object to bind the payload and headers. I have tried adding the property spring.cloud.stream.kafka.binder.headers, but it did not work.
Producer:
Application.yml
spring:
  cloud:
    stream:
      bindings:
        sampleEvent:
          destination: sample-event
          content-type: application/json
      kafka:
        binder:
          brokers: localhost:9092
          zkNodes: localhost:2181
          autoCreateTopics: false
          zkConnectionTimeout: 36000
MessageChannelConstants.java
public class MessageChannelConstants {

    public static final String SAMPLE_EVENT = "sampleEvent";

    private MessageChannelConstants() {}
}
SampleMessageChannels.java
public interface SampleMessageChannels {

    @Output(MessageChannelConstants.SAMPLE_EVENT)
    MessageChannel sampleEvent();
}
SampleEventPublisher.java
@Service
@EnableBinding(SampleMessageChannels.class)
public class SampleEventPublisher {

    @Autowired
    private SampleMessageChannels sampleMessageChannels;

    public void publishSampleEvent(SampleEvent sampleEvent) {
        final Message<SampleEvent> message = MessageBuilder.withPayload(sampleEvent)
                .setHeader("appId", "Demo").build();
        MessageChannel messageChannel = sampleMessageChannels.sampleEvent();
        if (messageChannel != null) {
            messageChannel.send(message);
        }
    }
}
Consumer:
application.yml
spring:
  cloud:
    stream:
      bindings:
        sampleEvent:
          destination: sample-event
          content-type: application/json
      kafka:
        binder:
          brokers: localhost:9092
          zkNodes: localhost:2181
          autoCreateTopics: false
          zkConnectionTimeout: 36000
MessageChannelConstants.java
public class MessageChannelConstants {

    public static final String SAMPLE_EVENT = "sampleEvent";

    private MessageChannelConstants() {}
}
SampleMessageChannels.java
public interface SampleMessageChannels {

    @Output(MessageChannelConstants.SAMPLE_EVENT)
    MessageChannel sampleEvent();
}
SampleEventListener.java
@Service
@EnableBinding(SampleMessageChannels.class)
public class SampleEventListener {

    @StreamListener(MessageChannelConstants.SAMPLE_EVENT)
    public void listenSampleEvent(@Payload SampleEvent event,
            @Header(required = true, name = "appId") String appId) {
        // do something
    }
}
Below is the exception I got:
org.springframework.messaging.MessageHandlingException: Missing header 'appId' for method parameter type [class java.lang.String]
at org.springframework.messaging.handler.annotation.support.HeaderMethodArgumentResolver.handleMissingValue(HeaderMethodArgumentResolver.java:100)
at org.springframework.messaging.handler.annotation.support.AbstractNamedValueMethodArgumentResolver.resolveArgument(AbstractNamedValueMethodArgumentResolver.java:103)
at org.springframework.messaging.handler.invocation.HandlerMethodArgumentResolverComposite.resolveArgument(HandlerMethodArgumentResolverComposite.java:112)
at org.springframework.messaging.handler.invocation.InvocableHandlerMethod.getMethodArgumentValues(InvocableHandlerMethod.java:135)
at org.springframework.messaging.handler.invocation.InvocableHandlerMethod.invoke(InvocableHandlerMethod.java:107)
at org.springframework.cloud.stream.binding.StreamListenerMessageHandler.handleRequestMessage(StreamListenerMessageHandler.java:55)
at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.handleMessageInternal(AbstractReplyProducingMessageHandler.java:109)
at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:127)
at org.springframework.integration.dispatcher.AbstractDispatcher.tryOptimizedDispatch(AbstractDispatcher.java:116)
at org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch(UnicastingDispatcher.java:148)
at org.springframework.integration.dispatcher.UnicastingDispatcher.dispatch(UnicastingDispatcher.java:121)
at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:89)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:425)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:375)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:115)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:45)
at org.springframework.messaging.core.AbstractMessageSendingTemplate.send(AbstractMessageSendingTemplate.java:105)
at org.springframework.integration.handler.AbstractMessageProducingHandler.sendOutput(AbstractMessageProducingHandler.java:360)
at org.springframework.integration.handler.AbstractMessageProducingHandler.produceOutput(AbstractMessageProducingHandler.java:271)
at org.springframework.integration.handler.AbstractMessageProducingHandler.sendOutputs(AbstractMessageProducingHandler.java:188)
at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.handleMessageInternal(AbstractReplyProducingMessageHandler.java:115)
at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:127)
at org.springframework.integration.channel.FixedSubscriberChannel.send(FixedSubscriberChannel.java:70)
at org.springframework.integration.channel.FixedSubscriberChannel.send(FixedSubscriberChannel.java:64)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:115)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:45)
at org.springframework.messaging.core.AbstractMessageSendingTemplate.send(AbstractMessageSendingTemplate.java:105)
at org.springframework.integration.endpoint.MessageProducerSupport.sendMessage(MessageProducerSupport.java:188)
at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter.access$200(KafkaMessageDrivenChannelAdapter.java:63)
at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter$IntegrationRecordMessageListener.onMessage(KafkaMessageDrivenChannelAdapter.java:372)
at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter$IntegrationRecordMessageListener.onMessage(KafkaMessageDrivenChannelAdapter.java:352)
at org.springframework.kafka.listener.adapter.RetryingAcknowledgingMessageListenerAdapter$1.doWithRetry(RetryingAcknowledgingMessageListenerAdapter.java:79)
at org.springframework.kafka.listener.adapter.RetryingAcknowledgingMessageListenerAdapter$1.doWithRetry(RetryingAcknowledgingMessageListenerAdapter.java:73)
at org.springframework.retry.support.RetryTemplate.doExecute(RetryTemplate.java:287)
at org.springframework.retry.support.RetryTemplate.execute(RetryTemplate.java:180)
at org.springframework.kafka.listener.adapter.RetryingAcknowledgingMessageListenerAdapter.onMessage(RetryingAcknowledgingMessageListenerAdapter.java:73)
at org.springframework.kafka.listener.adapter.RetryingAcknowledgingMessageListenerAdapter.onMessage(RetryingAcknowledgingMessageListenerAdapter.java:39)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeRecordListener(KafkaMessageListenerContainer.java:792)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeListener(KafkaMessageListenerContainer.java:736)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.access$2100(KafkaMessageListenerContainer.java:246)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer$ListenerInvoker.run(KafkaMessageListenerContainer.java:1025)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
Note: I am using the Spring Cloud Sleuth and Zipkin dependencies as well.
With Edgware (SCSt Ditmars), you have to specify which headers will be transferred.
See Kafka Binder Properties.
This is because Edgware was based on a Kafka version that did not yet support headers natively, so we encode the headers into the payload.
spring.cloud.stream.kafka.binder.headers
The list of custom headers that will be transported by the binder.
Default: empty.
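For the appId header from the question, that would be (a sketch; list any other custom headers the same way):

spring:
  cloud:
    stream:
      kafka:
        binder:
          headers: appId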
You should also be sure to upgrade spring-kafka to 1.3.9.RELEASE and kafka-clients to 0.11.0.2.
Preferably, though, upgrade to Finchley or Greenwich; those versions support headers natively.
