Unable to set some producer settings for kafka with spring boot - spring-kafka

I'm trying to set the retry.backoff.ms setting for Kafka in my producer using the DefaultKafkaProducerFactory from org.springframework.kafka.core. Here's what I have:
public class KafkaProducerFactory<K, V> extends DefaultKafkaProducerFactory<K, V> {

    public KafkaProducerFactory(Map<String, Object> config) {
        super(config);
    }
}

@Configuration
public class MyAppProducerConfig {

    @Value("${myApp.delivery-timeout-ms:#{120000}}")
    private int deliveryTimeoutMs;

    @Value("${myApp.retry-backoff-ms:#{30000}}")
    private int retryBackoffMs;

    @Bean
    public Producer<MyKey, MyValue> myAppProducer() {
        Map<String, Object> config = new HashMap<>();
        config.put(org.apache.kafka.clients.producer.ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, deliveryTimeoutMs);
        config.put(org.apache.kafka.clients.producer.ProducerConfig.RETRY_BACKOFF_MS_CONFIG, retryBackoffMs);
        final var factory = new KafkaProducerFactory<MyKey, MyValue>(config);
        return factory.createProducer(); // calls DefaultKafkaProducerFactory
    }
}
Now when I add the following to my application.yaml:
myApp:
  retry-backoff-ms: 50
  delivery-timeout-ms: 1000
This is what I see in the logging when I start the application:
o.a.k.clients.producer.ProducerConfig : ProducerConfig values:
delivery.timeout.ms = 1000
retry.backoff.ms = 1000
So delivery.timeout.ms was set, but retry.backoff.ms wasn't, even though I did exactly the same for both.
I did find "how to set application properties to default kafka producer template without setting from kafka producer config bean", but I didn't see either property listed under the integration properties.
So hopefully someone can give me some pointers.

After an intense debugging session I found the issue. The KafkaProducerFactory subclass lives in a library shared between teams, and I'm not super familiar with the class since it's my first time touching it.
It turns out the createProducer() call in DefaultKafkaProducerFactory calls another method that is overridden in KafkaProducerFactory, which then creates an AxualProducer.
And the AxualProducerConfig always sets retry.backoff.ms to 1000 ms.
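In other words, the subclass effectively does something like this (a purely hypothetical sketch; the overridden method name and the Axual classes' API are assumptions, not the actual shared-library code):
// Hypothetical sketch only - the real method and Axual API may differ
public class KafkaProducerFactory<K, V> extends DefaultKafkaProducerFactory<K, V> {

    public KafkaProducerFactory(Map<String, Object> config) {
        super(config);
    }

    @Override
    protected Producer<K, V> createRawProducer(Map<String, Object> rawConfigs) {
        // The Axual config builder ignores the passed-in retry.backoff.ms
        // and hard-codes it to 1000 ms before building the producer.
        return new AxualProducer<>(new AxualProducerConfig(rawConfigs));
    }
}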

How to test annotation-based role-specific security in a test annotated with `@WebMvcTest` and `@WithMockUser`

My problem is that I cannot test the role-based security of my annotated REST endpoints. The specified roles seem to make no difference.
I am using annotation-based security configuration on my REST controllers, like this:
@RestController
@RequestMapping("rest/person")
class PersonRestController(
    private val securityService: SecurityService,
    private val personService: PersonService,
) {
    @GetMapping("/list")
    @Secured(UserRole.Role.ROLE_COMPANY)
    fun list(): List<Person> {
        val companyId = securityService.currentUser.companyId
        return personService.findByCompany(companyId)
    }
}
In my web layer tests I am using @WebMvcTest with a shared config class that provides all required beans (we have some shared ExceptionHandlers and Interceptors that I would like to test together with my controllers).
My test looks like this:
@WebMvcTest(PersonRestController::class)
@Import(RestControllerTestConfig::class)
class GroupedScheduleRestControllerTest {

    @Autowired
    private lateinit var mvc: MockMvc

    @MockBean
    private lateinit var personService: PersonService

    // This bean is provided by RestControllerTestConfig
    @Autowired
    private lateinit var someSharedService: SomeSharedService

    @Test
    @WithMockUser(value = "user@company.at")
    fun testReturnsEmptyList() {
        val response = mvc.perform(MockMvcRequestBuilders.get("/rest/person/list"))
        response.andExpect(status().isOk)
            .andExpect(jsonPath("$.length()").value(0))
    }
}
Now I would like to add a unit test that verifies that only a user with the role COMPANY can access this endpoint, but I can't get this to work. My test always passes when I add @WithMockUser, regardless of what I pass in for the roles.
It always fails with a 401 Unauthorized when I remove @WithMockUser, so some security setup does seem to be happening, but the @Secured on my REST endpoint seems to have no effect.
Am I missing some configuration here to make @WebMvcTest pick up the security annotations from the instantiated controller?
Okay, in order for the @Secured annotations to be picked up, I added @EnableGlobalMethodSecurity(securedEnabled = true) to my configuration class and it worked like a charm 🙈
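For reference, a minimal sketch of where that annotation can live so the @WebMvcTest slice picks it up (shown in Java; the config class name is made up, and in the question it was simply added to the existing shared RestControllerTestConfig):
// Sketch: a test-side configuration that turns on @Secured handling.
// Import it (or merge the annotation into RestControllerTestConfig) so the
// method-security interceptor wraps the controller created by the test slice.
@TestConfiguration
@EnableGlobalMethodSecurity(securedEnabled = true)
public class MethodSecurityTestConfig {
}
With that in place, a request made with a @WithMockUser that lacks the COMPANY role should come back as 403 instead of the test silently passing.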

Options to reduce processing rate using Spring Kafka

I'm using Spring Boot 2.2.7 and Spring Kafka. I have a KafkaListener which is continuously processing stats data from a topic and writing the data into MongoDB and Elasticsearch (using Spring Data).
My configuration is as follows:
@Configuration
public class StatListenerConfig {

    @Autowired
    private KafkaConfig kafkaConfig;

    @Bean
    public ConsumerFactory<String, StatsRequestDto> statsConsumerFactory() {
        return new DefaultKafkaConsumerFactory<>(kafkaConfig.statsConsumerConfigs());
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, StatsRequestDto> kafkaStatsListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, StatsRequestDto> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(statsConsumerFactory());
        factory.getContainerProperties().setAckMode(AckMode.RECORD);
        return factory;
    }
}

@Service
public class StatListener {

    private static final Logger LOGGER = LoggerFactory.getLogger(StatListener.class);

    @Autowired
    private StatsService statsService;

    @KafkaListener(topics = "${kafka.topic.stats}", containerFactory = "kafkaStatsListenerContainerFactory")
    public void receive(@Payload StatsRequestDto data) {
        Stat stats = statsService.convertToStats(data);
        statsService.save(stats).get();
    }
}
The save method is an async method.
The problem I am having is that when the queue is being processed, Elasticsearch CPU consumption is around 250%. This leads to sporadic timeout errors across the application. I am looking into how I can optimise Elasticsearch, as indexing can cause CPU spikes.
I wanted to check that if I used an async method (like above), the next message from the topic would not be processed until the previous one had completed. If that is correct, what options are there in Spring Kafka that I could use to relieve pressure on a downstream operation that might take time to complete?
Any advice would be much appreciated.
In version 2.3, we added the idleBetweenPolls container property.
With earlier versions, you could simulate that by, say, sleeping in the consumer for some time after some number of records.
You just need to be sure the sleep + processing time for the records returned by a poll does not exceed max.poll.interval.ms, to avoid a rebalance.
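For example, applied to the factory from the question it might look roughly like this (the 5-second pause is purely illustrative; setIdleBetweenPolls requires Spring Kafka 2.3+):
@Bean
public ConcurrentKafkaListenerContainerFactory<String, StatsRequestDto> kafkaStatsListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, StatsRequestDto> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(statsConsumerFactory());
    factory.getContainerProperties().setAckMode(AckMode.RECORD);
    // Pause 5 seconds between polls to throttle how fast records reach the
    // downstream Mongo/Elasticsearch writes (value is illustrative only).
    factory.getContainerProperties().setIdleBetweenPolls(5000L);
    return factory;
}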

Configure kafka topic retention policy during creation in spring-mvc?

Configure retention policy of all topics during creation
Trying to configure retention.ms using Spring, as I get an error of:
Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.PolicyViolationException: Invalid retention.ms specified. The allowed range is [3600000..2592000000]
From what I've read, the value being applied is -1 (infinite retention), which is outside that range.
Following what was in "How to configure kafka topic retention policy during creation in spring-mvc?", I added the code below, but it seems to have no effect.
Any ideas/hints on how I might solve this?
ApplicationConfigurationTest.java
@Test
public void kafkaAdmin() {
    KafkaAdmin admin = configuration.admin();
    assertThat(admin, instanceOf(KafkaAdmin.class));
}
ApplicationConfiguration.java
@Bean
public KafkaAdmin admin() {
    Map<String, Object> configs = new HashMap<>();
    configs.put(TopicConfig.RETENTION_MS_CONFIG, "1680000");
    return new KafkaAdmin(configs);
}
Found the solution by setting the value
spring.kafka.streams.topic.retention.ms: 86400000
in application.yml.
Our application uses Spring MVC, hence the spring.* notation.
topic.retention.ms is the value that needs to be set in the streams config.
86400000 is an arbitrary value, used only because it is within the allowed range of [3600000..2592000000].
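As a side note, retention.ms is a per-topic setting, so putting it into the KafkaAdmin configs (which are AdminClient properties) has no effect. If the application creates its own topics, the retention can instead be set on the NewTopic bean itself. A minimal sketch, assuming spring-kafka 2.3+ and its org.springframework.kafka.config.TopicBuilder (topic name and values are made up):
@Bean
public NewTopic statsTopic() {
    // Topic-level config applied when KafkaAdmin creates the topic on startup.
    return TopicBuilder.name("my-topic")
            .partitions(3)
            .replicas(1)
            .config(TopicConfig.RETENTION_MS_CONFIG, "86400000") // within [3600000..2592000000]
            .build();
}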

From the consumer end, is there an option to create a topic with custom configurations?

I'm writing a Kafka consumer using the 'org.springframework.kafka.annotation.KafkaListener' (@KafkaListener) annotation. This annotation expects the topic to already exist at the time of subscribing, and the topic gets created if it is not present.
In my case, I don't want the consumer to create a topic with the default configuration; it should create a topic with custom configurations (like the number of partitions, cleanup policy, etc.). Is there any option for this in spring-kafka?
See the documentation on configuring topics.
If you define a KafkaAdmin bean in your application context, it can automatically add topics to the broker. To do so, you can add a NewTopic @Bean for each topic to the application context. The following example shows how to do so:
@Bean
public KafkaAdmin admin() {
    Map<String, Object> configs = new HashMap<>();
    configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
            StringUtils.arrayToCommaDelimitedString(embeddedKafka().getBrokerAddresses()));
    return new KafkaAdmin(configs);
}

@Bean
public NewTopic topic1() {
    return new NewTopic("thing1", 10, (short) 2);
}

@Bean
public NewTopic topic2() {
    return new NewTopic("thing2", 10, (short) 2);
}
By default, if the broker is not available, a message is logged, but the context continues to load. You can programmatically invoke the admin’s initialize() method to try again later. If you wish this condition to be considered fatal, set the admin’s fatalIfBrokerNotAvailable property to true. The context then fails to initialize.
If the broker supports it (1.0.0 or higher), the admin increases the number of partitions if it is found that an existing topic has fewer partitions than the NewTopic.numPartitions.
If you are using Spring Boot, you don't need an admin bean because boot will automatically configure one for you.
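To cover the custom settings mentioned in the question (cleanup policy and so on), the NewTopic bean can also carry topic-level configs; a minimal sketch (topic name and values are made up):
@Bean
public NewTopic compactedTopic() {
    // 10 partitions, replication factor 2, plus per-topic configs
    Map<String, String> topicConfigs = new HashMap<>();
    topicConfigs.put(TopicConfig.CLEANUP_POLICY_CONFIG, TopicConfig.CLEANUP_POLICY_COMPACT);
    return new NewTopic("thing3", 10, (short) 2).configs(topicConfigs);
}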

How to use ResourceProcessorHandlerMethodReturnValueHandler in a spring-hateoas project

When using spring-data-rest, there is post-processing of Resource classes returned from controllers (e.g. RepositoryRestControllers). The proper ResourceProcessor is called in this post-processing.
The class responsible for this is ResourceProcessorHandlerMethodReturnValueHandler, which is part of spring-hateoas.
I now have a project that only uses spring-hateoas, and I wonder how to configure ResourceProcessorHandlerMethodReturnValueHandler in such a scenario. It looks like the auto-configuration part of it still resides in spring-data-rest.
Any hints on how to enable ResourceProcessorHandlerMethodReturnValueHandler in a spring-hateoas context?
I've been looking at this recently too, and documentation on how to achieve this is non-existent. If you create a bean of type ResourceProcessorInvokingHandlerAdapter, you seem to lose the auto-configured RequestMappingHandlerAdapter and all its features. As such, I wanted to avoid using this bean or losing the WebMvcAutoConfiguration, since all I really wanted was the ResourceProcessorHandlerMethodReturnValueHandler.
You can't just add a ResourceProcessorHandlerMethodReturnValueHandler via WebMvcConfigurer.addReturnValueHandlers, because what we need to do is actually override the entire list, which is what happens in ResourceProcessorInvokingHandlerAdapter.afterPropertiesSet:
@Override
public void afterPropertiesSet() {
    super.afterPropertiesSet();

    // Retrieve actual handlers to use as delegate
    HandlerMethodReturnValueHandlerComposite oldHandlers = getReturnValueHandlersComposite();

    // Set up ResourceProcessingHandlerMethodResolver to delegate to originally configured ones
    List<HandlerMethodReturnValueHandler> newHandlers = new ArrayList<HandlerMethodReturnValueHandler>();
    newHandlers.add(new ResourceProcessorHandlerMethodReturnValueHandler(oldHandlers, invoker));

    // Configure the new handler to be used
    this.setReturnValueHandlers(newHandlers);
}
So, without a better solution available, I added a BeanPostProcessor to handle setting the List of handlers on an existing RequestMappingHandlerAdapter:
@Component
@RequiredArgsConstructor
@ConditionalOnBean(ResourceProcessor.class)
public class ResourceProcessorHandlerMethodReturnValueHandlerConfigurer implements BeanPostProcessor {

    private final Collection<ResourceProcessor<?>> resourceProcessors;

    @Override
    public Object postProcessAfterInitialization(Object bean, String beanName)
            throws BeansException {
        if (bean instanceof RequestMappingHandlerAdapter) {
            RequestMappingHandlerAdapter requestMappingHandlerAdapter = (RequestMappingHandlerAdapter) bean;
            List<HandlerMethodReturnValueHandler> handlers =
                    requestMappingHandlerAdapter.getReturnValueHandlers();
            HandlerMethodReturnValueHandlerComposite delegate =
                    handlers instanceof HandlerMethodReturnValueHandlerComposite ?
                            (HandlerMethodReturnValueHandlerComposite) handlers :
                            new HandlerMethodReturnValueHandlerComposite().addHandlers(handlers);
            requestMappingHandlerAdapter.setReturnValueHandlers(Arrays.asList(
                    new ResourceProcessorHandlerMethodReturnValueHandler(delegate,
                            new ResourceProcessorInvoker(resourceProcessors))));
            return requestMappingHandlerAdapter;
        }
        else {
            return bean;
        }
    }
}
This has seemed to work so far...
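For context, the kind of bean this wiring makes work is a plain ResourceProcessor, for example (a sketch using the pre-1.0 spring-hateoas API; the Person type and the link it adds are made up):
@Component
public class PersonResourceProcessor implements ResourceProcessor<Resource<Person>> {

    @Override
    public Resource<Person> process(Resource<Person> resource) {
        // Enrich the resource after the controller returns it, e.g. add a link.
        resource.add(new Link("/rest/person/" + resource.getContent().getId(), "self"));
        return resource;
    }
}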
