How can I change this [kafka-consumer-1] thread name? - spring-kafka

I set the concurrency to 10 and I can see 10 different thread IDs, but the thread names are all the same. How can I set the listener thread name? I tried container.setBeanName but had no luck. Please help. By the way, I am using version 1.1.2.

The thread names are unique; it's just that Boot's logging configuration truncates the name by default. We will fix the default thread naming, but in the meantime you can either change the logging configuration or use named executors. Call setConsumerTaskExecutor(execC()) and setListenerTaskExecutor(execL()) on the container's ContainerProperties ...
@Bean
public AsyncListenableTaskExecutor execC() {
    ThreadPoolTaskExecutor tpte = new ThreadPoolTaskExecutor();
    tpte.setCorePoolSize(15);
    return tpte;
}

@Bean
public AsyncListenableTaskExecutor execL() {
    ThreadPoolTaskExecutor tpte = new ThreadPoolTaskExecutor();
    tpte.setCorePoolSize(15);
    return tpte;
}
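For reference, here is a minimal sketch of wiring those executors into the container factory's ContainerProperties; the factory bean and its generic types are illustrative assumptions, not part of the original answer:

```java
// Sketch only: wiring the named executors into the listener container factory.
// Bean names and generic types here are assumptions for illustration.
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
        ConsumerFactory<String, String> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    factory.setConcurrency(10);
    // Named executors make the consumer and listener threads easy to tell apart in logs
    factory.getContainerProperties().setConsumerTaskExecutor(execC());
    factory.getContainerProperties().setListenerTaskExecutor(execL());
    return factory;
}
```

Setting a thread name prefix on each executor (for example, tpte.setThreadNamePrefix("kafka-consumer-")) also makes the threads easier to distinguish.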

Related

Combining blocking and non-blocking retries in Spring Kafka

I am trying to implement non-blocking retries with single-topic fixed back-off.
I am able to do so, thanks to the documentation https://docs.spring.io/spring-kafka/reference/html/#single-topic-fixed-delay-retries.
Now I also need to perform a few blocking/local retries on the main topic. I have been trying to implement this using DefaultErrorHandler, as below:
@Bean
public DefaultErrorHandler retryErrorHandler() {
    return new DefaultErrorHandler(new FixedBackOff(2000, 3));
}
This does not seem to work with RetryableTopic.
I have also tried the approach described at retry-topic-combine-blocking https://docs.spring.io/spring-kafka/reference/html/#retry-topic-combine-blocking using ListenerContainerFactoryConfigurer,
but the issue I am facing there is creating the beans KafkaConsumerBackoffManager, DeadLetterPublishingRecovererFactory and especially KafkaConsumerBackoffManager.
I need to know whether there is another way to achieve this with the Spring Kafka framework, or whether there is a way to construct the above beans.
We're currently working on improving configuration of the non-blocking retries components.
For now, as documented there, you should inject these beans as follows:
@Bean(name = RetryTopicInternalBeanNames.LISTENER_CONTAINER_FACTORY_CONFIGURER_NAME)
public ListenerContainerFactoryConfigurer lcfc(KafkaConsumerBackoffManager kafkaConsumerBackoffManager,
        DeadLetterPublishingRecovererFactory deadLetterPublishingRecovererFactory,
        @Qualifier(RetryTopicInternalBeanNames.INTERNAL_BACKOFF_CLOCK_BEAN_NAME) Clock clock) {
    ListenerContainerFactoryConfigurer lcfc =
            new ListenerContainerFactoryConfigurer(kafkaConsumerBackoffManager, deadLetterPublishingRecovererFactory, clock);
    lcfc.setBlockingRetryableExceptions(MyBlockingRetryException.class, MyOtherBlockingRetryException.class);
    lcfc.setBlockingRetriesBackOff(new FixedBackOff(500, 5)); // Optional
    return lcfc;
}
Also, there's a known issue: if you try to inject the beans before the first @KafkaListener bean with a retryable topic is processed, the feature's component beans won't be present in the context yet and an error will be thrown.
Does that happen to you?
We're currently working on a fix for this, but we should be able to work around it if that's your problem.
EDIT: Since the problem is that the components are not instantiated yet, the most reliable workaround is to provide the components yourself.
Here's a sample of how to do that; adjust it as needed if you require further customization.
@Configuration
public static class SO71705876Configuration {

    @Bean(name = RetryTopicInternalBeanNames.LISTENER_CONTAINER_FACTORY_CONFIGURER_NAME)
    public ListenerContainerFactoryConfigurer lcfc(KafkaConsumerBackoffManager kafkaConsumerBackoffManager,
            DeadLetterPublishingRecovererFactory deadLetterPublishingRecovererFactory) {
        ListenerContainerFactoryConfigurer lcfc =
                new ListenerContainerFactoryConfigurer(kafkaConsumerBackoffManager, deadLetterPublishingRecovererFactory, Clock.systemUTC());
        lcfc.setBlockingRetryableExceptions(IllegalArgumentException.class, IllegalStateException.class);
        lcfc.setBlockingRetriesBackOff(new FixedBackOff(500, 5)); // Optional
        return lcfc;
    }

    @Bean(name = RetryTopicInternalBeanNames.KAFKA_CONSUMER_BACKOFF_MANAGER)
    public KafkaConsumerBackoffManager backOffManager(ApplicationContext context) {
        PartitionPausingBackOffManagerFactory managerFactory =
                new PartitionPausingBackOffManagerFactory();
        managerFactory.setApplicationContext(context);
        return managerFactory.create();
    }

    @Bean(name = RetryTopicInternalBeanNames.DEAD_LETTER_PUBLISHING_RECOVERER_FACTORY_BEAN_NAME)
    public DeadLetterPublishingRecovererFactory dlprFactory(DestinationTopicResolver resolver) {
        return new DeadLetterPublishingRecovererFactory(resolver);
    }

    @Bean(name = RetryTopicInternalBeanNames.DESTINATION_TOPIC_CONTAINER_NAME)
    public DestinationTopicResolver destinationTopicResolver(ApplicationContext context) {
        return new DefaultDestinationTopicResolver(Clock.systemUTC(), context);
    }
}
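For context, a listener that combines the non-blocking @RetryableTopic retries with the blocking retries configured above might look like the following sketch; the topic name, class name and exception behavior described in the comments are assumptions for illustration, not part of the original answer:

```java
// Sketch only: a listener using single-topic, fixed-delay non-blocking retries.
// Exceptions registered via setBlockingRetryableExceptions(...) above are retried
// locally (blocking) before the record is forwarded to the retry topic.
@Component
public class MyRetryableListener {

    @RetryableTopic(
            attempts = "4",
            backoff = @Backoff(delay = 2000),
            fixedDelayTopicStrategy = FixedDelayStrategy.SINGLE_TOPIC)
    @KafkaListener(topics = "my-topic") // hypothetical topic name
    public void listen(String message) {
        // Business logic; an IllegalArgumentException thrown here would first be
        // retried with the blocking FixedBackOff(500, 5) configured in the lcfc bean
        process(message);
    }

    private void process(String message) {
        // ...
    }
}
```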
In the next release this should not be a problem anymore. Please let me know if that works for you, or if any further adjustment to this workaround is necessary.
Thanks.

From the consumer end, is there an option to create a topic with custom configurations?

I'm writing a Kafka consumer using the 'org.springframework.kafka.annotation.KafkaListener' (@KafkaListener) annotation. This annotation expects the topic to already exist at the time of subscribing, and the topic is created if it is not present.
In my case, I don't want the consumer to create a topic with the default configuration; it should create a topic with custom configurations (like the number of partitions, clean-up policy, etc.). Is there any option for this in spring-kafka?
See the documentation on configuring topics.
If you define a KafkaAdmin bean in your application context, it can automatically add topics to the broker. To do so, you can add a NewTopic #Bean for each topic to the application context. The following example shows how to do so:
@Bean
public KafkaAdmin admin() {
    Map<String, Object> configs = new HashMap<>();
    configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
            StringUtils.arrayToCommaDelimitedString(embeddedKafka().getBrokerAddresses()));
    return new KafkaAdmin(configs);
}

@Bean
public NewTopic topic1() {
    return new NewTopic("thing1", 10, (short) 2);
}

@Bean
public NewTopic topic2() {
    return new NewTopic("thing2", 10, (short) 2);
}
By default, if the broker is not available, a message is logged, but the context continues to load. You can programmatically invoke the admin’s initialize() method to try again later. If you wish this condition to be considered fatal, set the admin’s fatalIfBrokerNotAvailable property to true. The context then fails to initialize.
If the broker supports it (1.0.0 or higher), the admin increases the number of partitions if it is found that an existing topic has fewer partitions than the NewTopic.numPartitions.
If you are using Spring Boot, you don't need an admin bean because boot will automatically configure one for you.
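Since the question specifically asks about custom settings such as the clean-up policy, here is a hedged sketch using TopicBuilder (available in more recent spring-kafka versions); the topic name, partition count and settings are just examples:

```java
// Sketch only: declaring a topic with custom configuration via TopicBuilder.
// Topic name, partition count and settings here are illustrative assumptions.
@Bean
public NewTopic compactedTopic() {
    return TopicBuilder.name("thing3")
            .partitions(10)
            .replicas(2)
            .config(TopicConfig.CLEANUP_POLICY_CONFIG, TopicConfig.CLEANUP_POLICY_COMPACT)
            .build();
}
```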

How to use ResourceProcessorHandlerMethodReturnValueHandler in a spring-hateoas project

When using spring-data-rest, there is post-processing of Resource classes returned from controllers (e.g. RepositoryRestControllers). The proper ResourceProcessor is called during this post-processing.
The class responsible for this is ResourceProcessorHandlerMethodReturnValueHandler which is part of spring-hateoas.
I now have a project that only uses spring-hateoas and I wonder how to configure ResourceProcessorHandlerMethodReturnValueHandler in such a scenario. It looks like the auto configuration part of it still resides in spring-data-rest.
Any hints on how to enable ResourceProcessorHandlerMethodReturnValueHandler in a spring-hateoas context?
I've been looking at this recently too, and documentation on how to achieve this is non-existent. If you create a bean of type ResourceProcessorInvokingHandlerAdapter, you seem to lose the auto-configured RequestMappingHandlerAdapter and all its features. As such, I wanted to avoid using this bean or losing the WebMvcAutoConfiguration, since all I really wanted was the ResourceProcessorHandlerMethodReturnValueHandler.
You can't just add a ResourceProcessorHandlerMethodReturnValueHandler via WebMvcConfigurer.addReturnValueHandlers, because what we actually need to do is override the entire list, as happens in ResourceProcessorInvokingHandlerAdapter.afterPropertiesSet:
@Override
public void afterPropertiesSet() {
    super.afterPropertiesSet();

    // Retrieve actual handlers to use as delegate
    HandlerMethodReturnValueHandlerComposite oldHandlers = getReturnValueHandlersComposite();

    // Set up ResourceProcessingHandlerMethodResolver to delegate to originally configured ones
    List<HandlerMethodReturnValueHandler> newHandlers = new ArrayList<HandlerMethodReturnValueHandler>();
    newHandlers.add(new ResourceProcessorHandlerMethodReturnValueHandler(oldHandlers, invoker));

    // Configure the new handler to be used
    this.setReturnValueHandlers(newHandlers);
}
So, without a better solution available, I added a BeanPostProcessor to handle setting the List of handlers on an existing RequestMappingHandlerAdapter:
@Component
@RequiredArgsConstructor
@ConditionalOnBean(ResourceProcessor.class)
public class ResourceProcessorHandlerMethodReturnValueHandlerConfigurer implements BeanPostProcessor {

    private final Collection<ResourceProcessor<?>> resourceProcessors;

    @Override
    public Object postProcessAfterInitialization(Object bean, String beanName)
            throws BeansException {
        if (bean instanceof RequestMappingHandlerAdapter) {
            RequestMappingHandlerAdapter requestMappingHandlerAdapter = (RequestMappingHandlerAdapter) bean;
            List<HandlerMethodReturnValueHandler> handlers =
                    requestMappingHandlerAdapter.getReturnValueHandlers();
            HandlerMethodReturnValueHandlerComposite delegate =
                    handlers instanceof HandlerMethodReturnValueHandlerComposite ?
                            (HandlerMethodReturnValueHandlerComposite) handlers :
                            new HandlerMethodReturnValueHandlerComposite().addHandlers(handlers);
            requestMappingHandlerAdapter.setReturnValueHandlers(Arrays.asList(
                    new ResourceProcessorHandlerMethodReturnValueHandler(delegate,
                            new ResourceProcessorInvoker(resourceProcessors))));
            return requestMappingHandlerAdapter;
        }
        else {
            return bean;
        }
    }
}
This has seemed to work so far...
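For completeness, a ResourceProcessor bean that the handler above would then invoke might look like this minimal sketch; the OrderResource content type and the added link are hypothetical:

```java
// Sketch only: a ResourceProcessor that the return value handler above will invoke.
// The Order type and the link relation are illustrative assumptions.
@Component
public class OrderResourceProcessor implements ResourceProcessor<Resource<Order>> {

    @Override
    public Resource<Order> process(Resource<Order> resource) {
        // Enrich the outgoing representation with an extra link
        resource.add(new Link("/orders/" + resource.getContent().getId() + "/invoice", "invoice"));
        return resource;
    }
}
```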

How to get HttpContext.Current.GetOwinContext() in Startup

I have read a lot about this problem but I could not fix it, so I decided to create a new question on this site.
HttpContext.Current.GetOwinContext();
I want to get the GetOwinContext values with the above code. The code is in my Startup.cs:
[assembly: OwinStartupAttribute(typeof(OwinTest.Startup))]
public partial class Startup
{
    public void Configuration(IAppBuilder app)
    {
        ConfigureAuth(app);
        var c = HttpContext.Current.GetOwinContext();
    }
}
and I get this error:
// No owin.Environment item was found in the context
However, var c = HttpContext.Current.GetOwinContext(); works fine for me in a HomeController.
I just want to get GetOwinContext in my Startup.cs class.
Thanks.
You can't do that. The OWIN context does not exist without a request, and the Startup class only runs once for the application, not for each request. Your Startup class should initialize your middleware and your application, and the middleware and the application should access the OWIN context when needed.
As mentioned, what you are asking isn't possible. However, depending on your requirements, the following is possible and gives you access within the context of creating object instances. This is something I needed in order to check whether an instance was already added elsewhere (I have multiple startup classes in different projects).
public void Configuration(IAppBuilder app)
{
    ConfigureAuth(app);

    // Ensure we have our "main" access setup
    app.CreatePerOwinContext<DataAccessor>(
        (options, owinContext) =>
        {
            // Check that an instance hasn't already been added to
            // the OwinContext in another plugin
            return owinContext.Get<DataAccessor>() ?? DataAccessor.CreateInstance(options, owinContext);
        }
    );
}
Within the CreatePerOwinContext we have access to the OwinContext, so we can access it at the point of creating a new type. This might not help everyone as it's a little more specific to a person's needs, but is useful to know.

Can't use a session EJB in my managed bean because I get a NullPointerException

First of all, I want to say I'm pretty new to programming with EJB and JSF, and I'm trying to complete a project started by a friend of mine.
I'm getting a NullPointerException caused by invoking the method utenteSessionBean.CheckUtentebyId(username) on the session bean object called utenteSessionBean, declared inside the managed bean called Neo4jMBean.
I learned that it's not necessary to create and initialize a session bean in a managed bean (as you must do with a normal Java object); it's enough to declare it.
Here is the code of the session bean, which retrieves data from a DB:
@Stateless
@LocalBean
public class UtenteSessionBean {

    EntityManagerFactory emf = Persistence.createEntityManagerFactory("EnterpriseApplication2-ejbPU");

    public boolean CheckUtentebyId(String username) {
        EntityManager em = emf.createEntityManager();
        Query query = em.createNamedQuery("Utente.findByUsername");
        query.setParameter("username", username);
        List<Utente> Res = query.getResultList();
        // complete function (ctrl+space)
        System.out.println("pre");
        System.out.println("pre" + Res.isEmpty());
        em.close();
        System.out.println("post");
        System.out.println("post" + Res.isEmpty());
        if (Res.size() >= 1) {
            return true;
        }
        else {
            return false;
        }
    }
}
Here's the code of the managed bean:
@ManagedBean
@ViewScoped
public class Neo4jMBean {

    @EJB
    private UtenteSessionBean utenteSessionBean;

    static String SERVER_ROOT_URI = "http://localhost:7474/db/data/";

    public Neo4jMBean() {
    }

    public boolean getUser(String username) {
        return utenteSessionBean.CheckUtentebyId(username);
    }
}
I've searched StackOverflow many times for a solution to this problem, but I haven't found anything that works for me yet.
I fixed it by accessing the EJB components using JNDI.
In short, if I use an EJB in a managed bean method, I need to add the following lines of code:
InitialContext ic = new InitialContext();
SessionBeanName = (SessionBeanClass) ic.lookup("java:global/NameOfTheApplication/NameOfTheEJBpackage/NameOfTheSessionBean");
They must be surrounded by a try-catch statement, as in the sketch below.
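Here is a minimal sketch of that lookup wrapped in a try-catch and applied to the beans above; the application and module names in the JNDI path are hypothetical placeholders, not taken from the original answer:

```java
// Sketch only: JNDI lookup of the session bean inside the managed bean method.
// The application/module segments of the JNDI path are hypothetical placeholders.
public boolean getUser(String username) {
    try {
        InitialContext ic = new InitialContext();
        UtenteSessionBean utenteSessionBean = (UtenteSessionBean) ic.lookup(
                "java:global/EnterpriseApplication2/EnterpriseApplication2-ejb/UtenteSessionBean");
        return utenteSessionBean.CheckUtentebyId(username);
    }
    catch (NamingException e) {
        throw new RuntimeException("EJB lookup failed", e);
    }
}
```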
Create an empty beans.xml file in your WEB-INF folder to enable CDI.
