Prevent __TypeId__ from being used in Spring Cloud Stream - spring-kafka

We had a rogue producer setting the Kafka header __TypeId__ to a class that was part of the producer but not of the consumer, which is implemented as a Spring Cloud Stream application using the Kafka Streams binder. It resulted in this exception:
java.lang.IllegalArgumentException: The class 'com.bad.MyClass' is not in the trusted packages: [java.util, java.lang, de.datev.pws.loon.dcp.foreignmodels.*]. If you believe this class is safe to deserialize, please provide its name. If the serialization is only done by a trusted source, you can also enable trust all (*).
How can we ensure within the consumer that this __TypeId__ header is ignored?
Some Stack Overflow answers point to spring.json.use.type.headers=false, but it seems to be an "old" property that is no longer valid.
application.yaml:
spring:
  json.use.type.headers: false
  application:
    name: dcp-all
  kafka:
    bootstrap-servers: 'xxxxx.kafka.dev.dvint.de:9093'
  cloud:
    stream:
      kafka:
        streams:
          binder:
            required-acks: -1 # all in-sync-replicas
...
Stack trace:
at org.springframework.kafka.support.mapping.DefaultJackson2JavaTypeMapper.getClassIdType(DefaultJackson2JavaTypeMapper.java:129)
at org.springframework.kafka.support.mapping.DefaultJackson2JavaTypeMapper.toJavaType(DefaultJackson2JavaTypeMapper.java:103)
at org.springframework.kafka.support.serializer.JsonDeserializer.deserialize(JsonDeserializer.java:569)
at org.apache.kafka.streams.processor.internals.SourceNode.deserializeValue(SourceNode.java:58)
at org.apache.kafka.streams.processor.internals.RecordDeserializer.deserialize(RecordDeserializer.java:66)
at org.apache.kafka.streams.processor.internals.RecordQueue.updateHead(RecordQueue.java:176)
at org.apache.kafka.streams.processor.internals.RecordQueue.addRawRecords(RecordQueue.java:112)
at org.apache.kafka.streams.processor.internals.PartitionGroup.addRawRecords(PartitionGroup.java:304)
at org.apache.kafka.streams.processor.internals.StreamTask.addRecords(StreamTask.java:960)
at org.apache.kafka.streams.processor.internals.TaskManager.addRecordsToTasks(TaskManager.java:1068)
at org.apache.kafka.streams.processor.internals.StreamThread.pollPhase(StreamThread.java:962)
at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:751)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:604)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:576)
Here is a unit test:
@Test
void consumeWorksEvenWithBadTypesHeader() throws JsonProcessingException, InterruptedException {
    Map<String, Object> producerProps = KafkaTestUtils.producerProps(embeddedKafka);
    producerProps.put("key.serializer", StringSerializer.class.getName());
    DefaultKafkaProducerFactory<String, String> pf = new DefaultKafkaProducerFactory<>(producerProps);
    List<Header> headers = Arrays.asList(new RecordHeader("__TypeId__", "com.bad.MyClass".getBytes()));
    ProducerRecord<String, String> p = new ProducerRecord<>(TOPIC1, 0, "any-key",
            "{ ... some valid JSON ...}", headers);
    try {
        KafkaTemplate<String, String> template = new KafkaTemplate<>(pf, true);
        template.send(p);
        ConsumerRecord<String, String> consumerRecord = KafkaTestUtils.getSingleRecord(consumer, TOPIC2, DEFAULT_CONSUMER_POLL_TIME);
        // Assertions ...
    } finally {
        pf.destroy();
    }
}

You have two options (both shown in the sketch below):
On the producer side, set the property to omit adding the type info headers.
On the consumer side, set the property to not use the type info headers.
https://docs.spring.io/spring-kafka/docs/current/reference/html/#json-serde
It is not an "old" property.
/**
 * Kafka config property for using type headers (default true).
 * @since 2.2.3
 */
public static final String USE_TYPE_INFO_HEADERS = "spring.json.use.type.headers";
It needs to be set in the consumer properties.
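As a minimal sketch, assuming plain JsonSerializer/JsonDeserializer properties (with the Kafka Streams binder, the same keys would go under spring.cloud.stream.kafka.streams.binder.configuration; the class name here is illustrative):

import java.util.HashMap;
import java.util.Map;
import org.springframework.kafka.support.serializer.JsonDeserializer;
import org.springframework.kafka.support.serializer.JsonSerializer;

public class TypeHeaderProps {

    public static void main(String[] args) {
        // Option 1, producer side: never write __TypeId__ headers.
        Map<String, Object> producerProps = new HashMap<>();
        producerProps.put(JsonSerializer.ADD_TYPE_INFO_HEADERS, false); // "spring.json.add.type.headers"

        // Option 2, consumer side: ignore __TypeId__ headers when deserializing.
        Map<String, Object> consumerProps = new HashMap<>();
        consumerProps.put(JsonDeserializer.USE_TYPE_INFO_HEADERS, false); // "spring.json.use.type.headers"
    }
}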

Related

Unable to set some producer settings for kafka with spring boot

I'm trying to set the retry.backoff.ms setting for kafka in my producer using the DefaultKafkaProducerFactory from org.springframework.kafka.core. Here's what I got:
public class KafkaProducerFactory<K, V> extends DefaultKafkaProducerFactory<K, V> {
    public KafkaProducerFactory(Map<String, Object> config) {
        super(config);
    }
}

@Configuration
public class MyAppProducerConfig {
    @Value("${myApp.delivery-timeout-ms:#{120000}}")
    private int deliveryTimeoutMs;

    @Value("${myApp.retry-backoff-ms:#{30000}}")
    private int retryBackoffMs;

    Producer<MyKey, MyValue> myAppProducer() {
        Map<String, Object> config = new HashMap<>();
        config.put(org.apache.kafka.clients.producer.ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, deliveryTimeoutMs);
        config.put(org.apache.kafka.clients.producer.ProducerConfig.RETRY_BACKOFF_MS_CONFIG, retryBackoffMs);
        final var factory = new KafkaProducerFactory<MyKey, MyValue>(config);
        return factory.createProducer(); // calls DefaultKafkaProducerFactory
    }
}
Now when I add the following to my application.yaml:
myApp:
  retry-backoff-ms: 50
  delivery-timeout-ms: 1000
This is what I see in the logging when I start the application:
o.a.k.clients.producer.ProducerConfig : ProducerConfig values:
delivery.timeout.ms = 1000
retry.backoff.ms = 1000
So delivery.timeout.ms was set, but retry.backoff.ms wasn't, even though I did the exact same thing for both.
I did find "how to set application properties to default kafka producer template without setting from kafka producer config bean", but I didn't see either property listed under the integrated properties.
So hopefully someone can give me some pointers.
After an intense debugging session I found the issue. The factory is in a library shared between teams, and since this was my first time touching the class I'm not super familiar with it.
It turns out the createProducer() call in DefaultKafkaProducerFactory calls another method that is overridden in KafkaProducerFactory, which then creates an AxualProducer.
And AxualProducerConfig always sets retry.backoff.ms to 1000 ms.
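A minimal sketch of that failure mode (the Axual classes are not public, so the override below is illustrative, not the actual library code):

import java.util.Map;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;

public class KafkaProducerFactory<K, V> extends DefaultKafkaProducerFactory<K, V> {

    public KafkaProducerFactory(Map<String, Object> config) {
        super(config);
    }

    @Override
    protected Producer<K, V> createRawProducer(Map<String, Object> rawConfigs) {
        // A subclass that hard-codes a value here silently wins over application.yaml,
        // which is what the Axual config did with retry.backoff.ms.
        rawConfigs.put(ProducerConfig.RETRY_BACKOFF_MS_CONFIG, 1000);
        return super.createRawProducer(rawConfigs);
    }
}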

contractVerifierMessaging.receive is null

I'm setting up contract tests for Kafka messaging with Testcontainers in the way described in spring-cloud-contract-samples/producer_kafka_middleware/. It works fine with Embedded Kafka, but not with Testcontainers.
When I try to run the generated ContractVerifierTest:
public void validate_shouldProduceKafkaMessage() throws Exception {
    // when:
    triggerMessageSent();
    // then:
    ContractVerifierMessage response = contractVerifierMessaging.receive("kafka-messages",
            contract(this, "shouldProduceKafkaMessage.yml"));

this exception is thrown:
Cannot invoke "org.springframework.messaging.Message.getPayload()" because "receive" is null
The Kafka container is running and the topic is created. When debugging the receive method, I see the message is null in message(destination).
Contract itself:
label("triggerMessage")
input {
triggeredBy("triggerMessageSent()")
}
outputMessage {
sentTo "kafka-messages"
body(file("kafkaMessage.json"))
Base test configuration:
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.NONE, classes = {TestConfig.class, ServiceApplication.class})
@Testcontainers
@AutoConfigureMessageVerifier
@ActiveProfiles("test")
public abstract class BaseClass {
What am I missing? Maybe a point of communication between the container and the ContractVerifierMessage methods?
Resolved the issue by adding a specific topic name to the listen() method in the KafkaMessageVerifier implementation class.
So instead of @KafkaListener(id = "listener", topicPattern = ".*"), it works with:
@KafkaListener(topics = {"my-messages-topic"})
public void listen(ConsumerRecord<?, ?> payload, @Header(KafkaHeaders.RECEIVED_TOPIC) String topic) {
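A minimal sketch of such a KafkaMessageVerifier listener, assuming a queue-backed implementation (class, field, and topic names are illustrative):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.KafkaHeaders;
import org.springframework.messaging.Message;
import org.springframework.messaging.handler.annotation.Header;
import org.springframework.messaging.support.MessageBuilder;

public class KafkaMessageVerifierSketch {

    // Messages received from the topic, later polled by the verifier's receive(...) side.
    private final BlockingQueue<Message<?>> received = new LinkedBlockingQueue<>();

    @KafkaListener(topics = {"my-messages-topic"})
    public void listen(ConsumerRecord<?, ?> payload,
            @Header(KafkaHeaders.RECEIVED_TOPIC) String topic) {
        received.add(MessageBuilder.withPayload(payload.value()).build());
    }
}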

How to start with PACT contract testing in java for a newbie

I have to do a POC on contract testing using Pact, but I couldn't find anything helpful for a newbie. Can someone help me with working code and how to install and execute it? I would be grateful.
I have tried to explain it below.
Consumer: the contract is created by the consumer.
Provider: the contracts are tested by the provider.
Pact Broker: after the contracts are created under a location you define (like target/pacts), you must publish them to a common platform where both consumer and provider can see them.
Consumer side - Create contract for provider
public class CreateContractForProvider {

    // Provider, host interface, and port are defined with the @Rule annotation (uses PactProviderRuleMk2)
    @Rule
    public PactProviderRuleMk2 pactProviderRuleMk2 = new PactProviderRuleMk2(
            "ProviderName", // Provider application name
            "localhost",    // Mock server host
            8112,           // Mock server port
            this);

    // Consumer application name (our application) is defined with the @Pact annotation
    @Pact(consumer = "ConsumerName")
    public RequestResponsePact createPact(PactDslWithProvider builder) {
        Map<String, String> headers = new HashMap<>();
        headers.put("Content-Type", "application/json"); // Defined headers

        // Defined the expected response with PactDslJsonBody()
        DslPart expectedResultBodyWhenGetPayments = new PactDslJsonBody()
                .integerType("id", 308545)
                .integerType("contractNo", 854452)
                .numberType("amount", 3312.5)
                .stringType("status", "UNPAID")
                .asBody();

        // Response body and headers used in the returned builder;
        // we can define more than one endpoint with .uponReceiving or .given
        return builder
                .uponReceiving("A request for all payments")
                .path("/payments")
                .method("GET")
                .willRespondWith()
                .status(200)
                .headers(headers)
                .body(expectedResultBodyWhenGetPayments).toPact();
    }

    // Then we have to run the test, because the contracts are created at the test stage.
    // With @PactVerification, the mock server described above starts up (localhost:8112).
    // A GET on localhost:8112/payments (the defined path) returns expectedResultBodyWhenGetPayments.
    // If the test is successful, the contract is created.
    @Test
    @PactVerification()
    public void pactVerification() {
        int contractNo = ((Integer) new ContractTestUtil(pactProviderRuleMk2.getPort())
                .getContractResponse("/payments", "contractNo")).intValue();
        assertTrue(contractNo == 854452);
    }
}
Test Util
public class ContractTestUtil {

    int port = 8111;

    public ContractTestUtil(int port) {
        this.port = port;
        System.out.println("Custom port " + port);
    }

    public Object getContractResponse(String path, String object) {
        try {
            System.setProperty("pact.rootDir", "./target/pacts");
            String url = String.format("http://localhost:%d" + path, port);
            System.out.println("using url: " + url);
            HttpResponse httpResponse = Request.Get(url).execute().returnResponse();
            String json = EntityUtils.toString(httpResponse.getEntity());
            System.out.println("json=" + json);
            JSONObject jsonObject = new JSONObject(json);
            return jsonObject.get(object);
        } catch (Exception e) {
            System.out.println("Unable to get object=" + e);
            return null;
        }
    }
}
Define Pact Broker
The pactBrokerUrl must be defined in the pom before publishing.
<plugin>
    <!-- mvn pact:publish -->
    <groupId>au.com.dius</groupId>
    <artifactId>pact-jvm-provider-maven_2.11</artifactId>
    <version>3.5.10</version>
    <configuration>
        <pactDirectory>./target/pacts</pactDirectory> <!-- Defaults to ${project.build.directory}/pacts -->
        <pactBrokerUrl>http://yourmachine:8113</pactBrokerUrl>
        <projectVersion>1.1</projectVersion> <!-- Defaults to ${project.version} -->
        <trimSnapshot>true</trimSnapshot> <!-- Defaults to false -->
    </configuration>
</plugin>
Now we can publish with the pact:publish command.
Provider Side - Call contracts created by consumer
At this stage you can run the verification with the failsafe plugin, because these are integration tests.
@RunWith(PactRunner.class) // Tell JUnit to run the tests with a custom runner
@Provider("ProviderName") // Name of the tested provider (the provider application name)
@Consumer("ConsumerName") // Name of the consumer application
@PactBroker(port = "8113", host = "yourmachine")
public class VerifyContractsWhichCreatedForProviderIT {

    private static ConfigurableWebApplicationContext configurableWebApplicationContext;

    @BeforeClass
    public static void start() {
        configurableWebApplicationContext = (ConfigurableWebApplicationContext)
                SpringApplication.run(Application.class);
    }

    @TestTarget // Denotes the target that will be used for the tests
    public final Target target = new HttpTarget(8080);
}
Finally, you can create contracts and verify the contracts created for you with the clean test pact:publish verify command.

Configure kafka topic retention policy during creation in spring-mvc?

Configure retention policy of all topics during creation
Trying to configure retention.ms using Spring, as I get this error:
Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.PolicyViolationException: Invalid retention.ms specified. The allowed range is [3600000..2592000000]
From what I've read, the new value is -1 (infinite), so it is outside that range.
Following what was in
How to configure kafka topic retention policy during creation in spring-mvc?, I added the code below, but it seems to have no effect.
Any ideas/hints on how I might solve this?
ApplicationConfigurationTest.java
@Test
public void kafkaAdmin() {
    KafkaAdmin admin = configuration.admin();
    assertThat(admin, instanceOf(KafkaAdmin.class));
}
ApplicationConfiguration.java
@Bean
public KafkaAdmin admin() {
    Map<String, Object> configs = new HashMap<>();
    configs.put(TopicConfig.RETENTION_MS_CONFIG, "1680000");
    return new KafkaAdmin(configs);
}
Found the solution by setting the value
spring.kafka.streams.topic.retention.ms: 86400000
in application.yml.
Our application uses Spring MVC, hence the spring notation.
topic.retention.ms is the value that needs to be set in the streams config.
86400000 is a random value, used only because it is in the range [3600000..2592000000].
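One likely reason the KafkaAdmin bean in the question had no effect: retention.ms is a per-topic setting, while the KafkaAdmin configs map holds AdminClient-level settings. A sketch of setting it at topic creation with spring-kafka's TopicBuilder (not from the original answer; topic name and values are illustrative):

import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.config.TopicConfig;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.config.TopicBuilder;

// Inside a @Configuration class, alongside the KafkaAdmin bean:
@Bean
public NewTopic retainedTopic() {
    return TopicBuilder.name("my-topic")
            .partitions(1)
            .replicas(1)
            .config(TopicConfig.RETENTION_MS_CONFIG, "86400000") // in [3600000..2592000000]
            .build();
}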

Spring Kafka bean return types

The documentation for spring kafka stream support shows something like:
@Bean
public KStream<Integer, String> kStream(StreamsBuilder kStreamBuilder) {
    KStream<Integer, String> stream = kStreamBuilder.stream("streamingTopic1");
    // ... stream config
    return stream;
}
However, I might want a topology dependent on multiple streams or tables. Can I do:
@Bean
public KStream<Integer, String> kStream(StreamsBuilder kStreamBuilder) {
    KStream<Integer, String> stream1 = kStreamBuilder.stream("streamingTopic1");
    KStream<Integer, String> stream2 = kStreamBuilder.stream("streamingTopic2");
    // ... stream config
    return stream1;
}
In other words, is the bean returned relevant, or is it only important that kStreamBuilder is being mutated?
It depends.
If you don't need a reference to the KStream elsewhere, there is no need to define it as a bean at all; you can autowire the StreamsBuilder, which is created by the factory bean.
If you need a reference, then each one must be its own bean.
For example, Spring Cloud Stream builds a partial stream which the application then modifies. See here.
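A minimal sketch of the autowire-the-builder option, assuming @EnableKafkaStreams is active (class and topic names are illustrative):

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Configuration;

@Configuration
public class TopologyConfig {

    // No KStream beans are needed: mutating the injected StreamsBuilder is what
    // registers the topology; the builder comes from the StreamsBuilderFactoryBean.
    @Autowired
    public void buildTopology(StreamsBuilder builder) {
        KStream<Integer, String> stream1 = builder.stream("streamingTopic1");
        KStream<Integer, String> stream2 = builder.stream("streamingTopic2");
        // ... join, transform, and write the streams here
    }
}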
