Spring Cloud Kafka Streams Dynamic Message Conversion based on Header info - spring-kafka

I'm trying to use Spring Cloud Kafka Streams to process messages off of a Kafka topic that contains different types of messages. For instance, we receive a JSON message from the topic which can be either a Type A or a Type B message. The producer adds the message type in a header. Is there a way to read that header info within the functional binder and convert the message accordingly? Alternatively, is there a "Choice" option for branching as messages come in, to route each message to the right converter?

If you configure the binding to use nativeDecoding, the deserialization is done by Kafka (via the value.deserializer consumer property).
spring-kafka provides a JsonDeserializer which looks for type information in specific headers (set by a corresponding JsonSerializer).
It also provides a DelegatingDeserializer which allows you to select which deserializer to use based on the value in a spring.kafka.serialization.selector header.
See the Spring for Apache Kafka Reference Manual for more information.
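For illustration only, a consumer using the DelegatingDeserializer might be configured like the sketch below; the property constant comes from spring-kafka, while the selector values and delegate deserializer classes are hypothetical.

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.kafka.support.serializer.DelegatingDeserializer;
import org.springframework.kafka.support.serializer.DelegatingSerializer;

public class DelegatingConsumerConfig {

    public static Map<String, Object> consumerProps() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        // The delegate is chosen per record, based on the
        // spring.kafka.serialization.selector header set by the producer
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, DelegatingDeserializer.class);
        // Map selector header values to delegate deserializers (hypothetical classes)
        props.put(DelegatingSerializer.VALUE_SERIALIZATION_SELECTOR_CONFIG,
                "typeA:com.example.TypeADeserializer,typeB:com.example.TypeBDeserializer");
        return props;
    }
}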

Related

Biztalk 2013r2 - Checking for multiple return schemas

I'm sending data to a REST service, but need to make sure I can handle response issues generated from the XML-RPC end of the service.
The problem is the return schema for a fault is completely different to the standardised response schema.
I've changed my response types to System.Xml.XmlDocument, but I'm hoping there's an easier way of me checking for a specific schema being returned (so that I can then suspend the instance to be investigated from the console end).
Can I return multiple schemas from one response, and if so - what's the best way to achieve this?
Yes, you can return multiple different schemas on the same response port.
What you can do is create an internal schema, create maps from all the external schemas that map to it, and have those on your Inbound Maps in your send port. You could also set a field value, e.g. Succeeded, to True or False, which you can either use for routing (by promoting it and having subscriptions use the promoted property) or use in an Orchestration (where distinguishing it would be enough).

Pact Request That Depends on the Response from A Previous Request

I am using the Pact framework to test some APIs from a service. I have one API that initiates some backend execution. Let's call it request A; the response returns a unique execution ID. The second API (request B) sends the execution ID returned from request A to pull the execution status. How do I set up the Pact test in this case? The problem is that the execution ID is generated dynamically. I know a provider can inject some provider state to the consumer, so potentially the execution ID could be injected. But I am not sure how to make the injection from the provider side. It requires access to the response from request A and injecting the execution ID (with the provider state callback, perhaps) for the second request.
You need to have a lot of control over what is happening in your provider for Pact to work for you.
Each interaction is verified individually (and in some frameworks, in a random order), and all state should be cleared between interactions, so you need to use provider states to set up any data that would have been created by the initial request. Regarding the execution IDs, you could use a different implementation of the ID-generating code that you only use for Pact tests.
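As an illustration only (the state name, endpoint, and field names below are assumptions, not from this question), a pact-jvm consumer test can let the provider-state callback supply the dynamic execution ID through an injected value:

import au.com.dius.pact.consumer.dsl.PactDslJsonBody;
import au.com.dius.pact.consumer.dsl.PactDslWithProvider;
import au.com.dius.pact.core.model.RequestResponsePact;
import au.com.dius.pact.core.model.annotations.Pact;

public class ExecutionStatusPact {

    // Request B's path is resolved at verification time from the value the
    // provider-state callback returns for "executionId"; "abc123" is only
    // the example used when the consumer-side pact file is generated.
    @Pact(provider = "ExecutionService", consumer = "StatusConsumer")
    public RequestResponsePact statusPact(PactDslWithProvider builder) {
        return builder
                .given("an execution exists")
                .uponReceiving("a request for execution status")
                .pathFromProviderState("/executions/${executionId}/status",
                        "/executions/abc123/status")
                .method("GET")
                .willRespondWith()
                .status(200)
                .body(new PactDslJsonBody().stringType("status", "RUNNING"))
                .toPact();
    }
}

On the provider side, the state callback for "an execution exists" would create (or stub) an execution and return a map containing the real executionId, which Pact substitutes into the path.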

How a Kafka consumer with @KafkaListener annotation handles max.poll.records

I'm using Spring Boot 2.1.7.RELEASE and spring-kafka 2.2.7.RELEASE, and I'm using the @KafkaListener annotation to create a consumer with all default settings.
As per the Apache Kafka documentation, the default value for max.poll.records is 500.
Here I'm trying to understand how Spring handles the record processing. My question is: if we have already published 500 messages onto a Topic A and have a consumer (using @KafkaListener) subscribed to this topic,
does this Spring listener get all 500 records and do some kind of caching before passing the records one by one to the method annotated with @KafkaListener, or does it pull only one record at a time and pass it to the listener method?
The @KafkaListener is based on the KafkaMessageListenerContainer, which, in turn, is fully based on the ConsumerRecords<K, V> org.apache.kafka.clients.consumer.Consumer.poll(Duration timeout) API.
The option you mention has nothing to do with Spring for Apache Kafka; you would see the same behavior even without Spring.
See the returned ConsumerRecords for more info on how records are fetched from Kafka.
With Kafka it really doesn't matter how we fetch records; only an offset commit matters.
But that's a different story. The point to understand is that Spring for Apache Kafka is just a wrapper around the standard Kafka client; it is not opinionated about how records are polled from topics.
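In short, the container receives the whole batch from poll() but hands records to your method one at a time. A minimal sketch (topic, group, and class names are made up for illustration):

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class DemoListener {

    // The container polls a batch (bounded by max.poll.records, default 500)
    // and then invokes this method once for each record in that batch.
    @KafkaListener(topics = "topic-a", groupId = "demo")
    public void listen(ConsumerRecord<String, String> record) {
        System.out.println(record.offset() + ": " + record.value());
    }
}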

How to create command by consuming message from kafka topic rather than through Rest API

I'm using Axon version 3.3, which seamlessly supports Kafka, with an annotation in the Spring Boot main class:
@SpringBootApplication(exclude = KafkaAutoConfiguration.class)
In our use case, the command-side microservice needs to pick up messages from a Kafka topic rather than exposing a REST API. It will store the event in the event store and then move it to another Kafka topic for the query-side microservice to consume.
Since KafkaAutoConfiguration is disabled, I cannot use spring-kafka configuration to write a consumer. How can I consume a normal message in Axon?
I tried writing a normal Spring Kafka consumer, but since KafkaAutoConfiguration is disabled, the initial trigger for the command is not picked up from the Kafka topic.
I think I can help you out with this.
The Axon Kafka Extension is solely meant for Events.
Thus, it is not intended to dispatch Commands or Queries from one node to another.
This is very intentional, as Event messages have different routing needs compared to Command and Query messages.
Axon views Kafka as a fine fit for an Event Bus, and as such this is supported through the framework.
It is, however, not ideal for Command messages (which should always be routed to a single handler) or Query messages (which can be routed to a single handler, to several handlers, or follow a subscription model).
Thus, if you want to "abuse" Kafka for different types of messages in conjunction with Axon, you will have to write your own component/service for it.
I would however stick to the messaging paradigm and separate these concerns.
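If you do go that route, a rough sketch of such a component is below (all class, topic, and command names are hypothetical, and it assumes you declare the spring-kafka consumer/listener-container beans yourself, since KafkaAutoConfiguration is excluded):

import org.axonframework.commandhandling.TargetAggregateIdentifier;
import org.axonframework.commandhandling.gateway.CommandGateway;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class CommandForwardingListener {

    private final CommandGateway commandGateway;

    public CommandForwardingListener(CommandGateway commandGateway) {
        this.commandGateway = commandGateway;
    }

    // Translate each incoming record into a command and dispatch it;
    // the aggregate's command handler then applies and stores the event.
    @KafkaListener(topics = "incoming-commands")
    public void onMessage(String orderId) {
        commandGateway.send(new CreateOrderCommand(orderId));
    }

    static class CreateOrderCommand {

        @TargetAggregateIdentifier
        private final String orderId;

        CreateOrderCommand(String orderId) {
            this.orderId = orderId;
        }
    }
}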
To greatly simplify routing messages between Axon applications, I'd highly recommend trying out Axon Server.
Additionally, here you can hear/see Allard Buijze point out the different routing needs per message type (thus the reason why Axon's Kafka Extension only deals with Event messages).

Biztalk client defined subscription items

I am designing a Biztalk solution which requires client applications to subscribe and receive only a certain subset of event messages depending on their user permissions. Subscription will be done through topic or content based routing. The client will subscribe once and receive many messages until they choose to unsubscribe.
Client applications will number in the 100s and subscribed topics could change on a regular basis, so defining an individual send port from BizTalk for each receiver isn't a viable solution.
I have thought I could build an additional message broker service which holds the individual client subscriptions and distributes messages sent from a BizTalk port.
I have also seen that a recipient list pattern can be built using orchestrations. This appears to me to still follow a request-response pattern though, and I am after a one-way subscribe message that results in many returned event messages.
My message broker solution seems to me to be doubling up on what BizTalk should be good at, so I imagine I am missing some important functionality somewhere. Has anyone tried such an application before and can give some pointers? Should I be investigating the ESB Toolkit as a solution? I have had a look on the net but nothing makes it very clear for this type of topic-subscription model.
Thanks,
Phil
Do take a look at the ESB Toolkit. You can use the itinerary functionality that it adds to BizTalk, either with one of the built-in resolvers (e.g., UDDI) or with your own custom resolver. This allows you to route messages based on configuration (stored in Business Rules or elsewhere).
You will find a developer-oriented overview video of the ESB Toolkit on MSDN, which is a decent introduction to the design process and tooling. There are several other helpful videos there as well.
Your specific scenario can be accomplished with a single itinerary, as described here. Use a receive pipeline with the ESB Dispatch Disassembler component, configure multiple resolvers, and for each resolver a new message is produced.
There are also two samples to look at:
The Itinerary On-Ramp Sample - builds a set of SOAP headers that contain the itinerary that you create in the test client, loads the specific message file from disk, appends the itinerary headers to the message, and submits it to the ESB through an Itinerary on-ramp for processing.
The Scatter-Gather Sample - Also appends SOAP headers containing the itinerary to the message, which is submitted to the ESB through an on-ramp for processing. A Broker orchestration analyzes the settings for its itinerary step, retrieves a collection of resolvers associated with the itinerary step, and for each of those resolvers resolves the service endpoint. After that, the orchestration activates the proper ServiceDispatcher orchestration instances to dispatch the outbound request messages.
You should also look at "How to: Route a Single Message to Multiple Recipients Using an Itinerary Routing Slip" or perhaps look into creating a custom itinerary message service (documentation is here).
