Retrying Kafka errors using @RetryableTopic - spring-kafka

Is there a way to specify the DLT used when retrying with spring-kafka's @RetryableTopic?
I use a listener with the following configuration:
@RetryableTopic(
        attempts = "4",
        backoff = @Backoff(delay = 1000),
        autoCreateTopics = "false",
        topicSuffixingStrategy = TopicSuffixingStrategy.SUFFIX_WITH_DELAY_VALUE,
        fixedDelayTopicStrategy = FixedDelayStrategy.SINGLE_TOPIC)
@KafkaListener(topics = "${spring.kafka.template.default-topic}")
This retries using a single retry topic, but for exhausted retries it uses my main topic + "_dlt", even though I have a dead letter topic with a different name configured at:
spring:
  kafka:
    consumer:
      template:
        dead-letter-topic: its_dead_jim
I've used a DeadLetterPublishingRecoverer in the past and have implemented the DLT resolver function, but I don't see a way to override the default behavior in the documentation for @RetryableTopic. I've looked at RetryTopicConfigurationBuilder and RetryTopicConfigurer, but nothing seems applicable for changing the DLT name.

I am not sure why you think there is such a property on the template.
See the documentation.
Extend RetryTopicConfigurationSupport in a @Configuration class and...
Custom naming strategies
More complex naming strategies can be accomplished by registering a bean that implements RetryTopicNamesProviderFactory. The default implementation is SuffixingRetryTopicNamesProviderFactory and a different implementation can be registered in the following way:
@Override
protected RetryTopicComponentFactory createComponentFactory() {
    return new RetryTopicComponentFactory() {

        @Override
        public RetryTopicNamesProviderFactory retryTopicNamesProviderFactory() {
            return new CustomRetryTopicNamesProviderFactory();
        }

    };
}
As an example, the following implementation, in addition to the standard suffix, adds a prefix to retry/DLT topic names:
public class CustomRetryTopicNamesProviderFactory implements RetryTopicNamesProviderFactory {

    @Override
    public RetryTopicNamesProvider createRetryTopicNamesProvider(
            DestinationTopic.Properties properties) {

        if (properties.isMainEndpoint()) {
            return new SuffixingRetryTopicNamesProvider(properties);
        }
        else {
            return new SuffixingRetryTopicNamesProvider(properties) {

                @Override
                public String getTopicName(String topic) {
                    return "my-prefix-" + super.getTopicName(topic);
                }

            };
        }
    }

}
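Putting the pieces together, a minimal sketch of the configuration class the answer points at could look like the following (the class name RetryTopicConfig is hypothetical; everything else reuses what is quoted above, and the names provider could be adapted to return the DLT name from the question):

@Configuration
public class RetryTopicConfig extends RetryTopicConfigurationSupport {

    @Override
    protected RetryTopicComponentFactory createComponentFactory() {
        return new RetryTopicComponentFactory() {

            @Override
            public RetryTopicNamesProviderFactory retryTopicNamesProviderFactory() {
                // CustomRetryTopicNamesProviderFactory is the class from the example above;
                // its non-main branch could return "its_dead_jim" for the dead letter topic
                // instead of adding a prefix.
                return new CustomRetryTopicNamesProviderFactory();
            }

        };
    }

}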

Related

'RoutingSlipCompleted' does not contain a definition for 'GetVariable'

After using MassTransit (8.0.8) I got the following error:
'RoutingSlipCompleted' does not contain a definition for 'GetVariable' and the best extension method overload 'RoutingSlipEventExtensions.GetVariable(ConsumeContext, string, Guid)' requires a receiver of type 'ConsumeContext'
Here is my code:
using MassTransit;
using MassTransit.Courier.Contracts;
using MassTransit.Courier;

public class CheckInventoriesConsumer : IConsumer<ICheckInventoryRequest>
    , IConsumer<RoutingSlipCompleted>
    , IConsumer<RoutingSlipFaulted>
{
    private readonly IEndpointNameFormatter _formatter;

    public CheckInventoriesConsumer(IEndpointNameFormatter formatter)
    {
        _formatter = formatter;
    }

    public async Task Consume(ConsumeContext<ICheckInventoryRequest> context)
    {
        var routingSlip = CreateRoutingSlip(context);
        await context.Execute(routingSlip);
    }

    private RoutingSlip CreateRoutingSlip(ConsumeContext<ICheckInventoryRequest> context)
    {
        // lot of code here
    }

    public async Task Consume(ConsumeContext<RoutingSlipCompleted> context)
    {
        // error is here
        context.Message.GetVariable<Guid>(nameof(ConsumeContext.RequestId));
        throw new NotImplementedException();
    }
}
It cannot find the GetVariable method from MassTransit.Courier, and I get this error.
As you've already found based upon your comments:
context.GetVariable<Guid>(nameof(ConsumeContext.RequestId));
is the right solution.
MassTransit Version 8 has more extensive serialization support, and the SerializationContext (from ConsumeContext) is needed to properly deserialize the variable from the routing slip event.

How to get a ThreadLocal for a concurrent consumer?

I am developing a Spring Kafka consumer. Due to the message volume, I need to use concurrency to ensure throughput, and because of that concurrency I use a ThreadLocal object to store per-thread data. Now I need to remove this ThreadLocal value after using it.
The Spring documentation linked below suggests implementing an EventListener that listens for the ConsumerStoppedEvent, but it does not show any sample listener code that gets the ThreadLocal object and removes the value. Could you let me know how to get the ThreadLocal instance in this case?
Code samples will be appreciated.
https://docs.spring.io/spring-kafka/docs/current/reference/html/#thread-safety
Something like this:
@SpringBootApplication
public class So71884752Application {

    public static void main(String[] args) {
        SpringApplication.run(So71884752Application.class, args);
    }

    @Bean
    public NewTopic topic2() {
        return TopicBuilder.name("topic1").partitions(2).build();
    }

    @Component
    static class MyListener implements ApplicationListener<ConsumerStoppedEvent> {

        private static final ThreadLocal<Long> threadLocalState = new ThreadLocal<>();

        @KafkaListener(topics = "topic1", groupId = "my-consumer", concurrency = "2")
        public void listen() {
            long id = Thread.currentThread().getId();
            System.out.println("set thread id to ThreadLocal: " + id);
            threadLocalState.set(id);
        }

        @Override
        public void onApplicationEvent(ConsumerStoppedEvent event) {
            System.out.println("Remove from ThreadLocal: " + threadLocalState.get());
            threadLocalState.remove();
        }

    }

}
So, I have two concurrent listener containers for the two partitions of the topic. Each of them calls my @KafkaListener method. I store the thread id into the ThreadLocal, just to keep the use case simple and to test the feature.
Then I implement ApplicationListener<ConsumerStoppedEvent>, which is emitted on the appropriate consumer thread. That is what lets me extract the ThreadLocal value and clean it up at the end of the consumer's life.
The test against embedded Kafka looks like this:
@SpringBootTest
@EmbeddedKafka(bootstrapServersProperty = "spring.kafka.bootstrap-servers")
@DirtiesContext
class So71884752ApplicationTests {

    @Autowired
    KafkaTemplate<String, String> kafkaTemplate;

    @Autowired
    KafkaListenerEndpointRegistry kafkaListenerEndpointRegistry;

    @Test
    void contextLoads() throws InterruptedException {
        this.kafkaTemplate.send("topic1", "1", "foo");
        this.kafkaTemplate.send("topic1", "2", "bar");
        this.kafkaTemplate.flush();
        Thread.sleep(1000); // Give it a chance to consume data
        this.kafkaListenerEndpointRegistry.stop();
    }

}
Right. It doesn't verify anything, but it demonstrates how that event can happen.
I see something like this in log output:
set thread id to ThreadLocal: 125
set thread id to ThreadLocal: 127
...
Remove from ThreadLocal: 125
Remove from ThreadLocal: 127
So, what the doc says is correct.

Spring Cloud Stream - Autowiring underlying Consumer for a given PollableMessageSource

Is it possible to get hold of the underlying KafkaConsumer for a defined PollableMessageSource?
I have a binding defined as:
public interface TestBindings {

    String TEST_SOURCE = "test";

    @Input(TEST_SOURCE)
    PollableMessageSource testTopic();

}
and a config class:
@EnableBinding(TestBindings.class)
public class TestBindingsPoller {

    @Bean
    public ApplicationRunner testPoller(PollableMessageSource testTopic) {
        // Get kafka consumer for PollableMessageSource
        KafkaConsumer kafkaConsumer = getConsumer(testTopic);

        return args -> {
            while (true) {
                if (!testTopic.poll(...)) {
                    Thread.sleep(500);
                }
            }
        };
    }

}
The question is, how can I get the KafkaConsumer that corresponds to testTopic? Is there any way to get it from the beans that are wired up in Spring Cloud Stream?
The KafkaMessageSource populates a KafkaConsumer into the headers, so it is available where you receive messages: https://github.com/spring-projects/spring-kafka/blob/master/spring-kafka/src/main/java/org/springframework/kafka/support/converter/MessageConverter.java#L57.
If you are going to do stuff like poll yourself, I would suggest injecting a ConsumerFactory and using a consumer from there instead.
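Based on those two suggestions, a rough sketch of the poller could look like the following. This is not from the answer: the class name is made up, and whether the kafka_consumer header is actually populated depends on your binder/converter configuration, so treat that part as an assumption to verify:

import org.apache.kafka.clients.consumer.Consumer;
import org.springframework.boot.ApplicationRunner;
import org.springframework.cloud.stream.binder.PollableMessageSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.support.KafkaHeaders;

@Configuration
public class TestBindingsPollerSketch {

    @Bean
    public ApplicationRunner testPoller(PollableMessageSource testTopic,
            ConsumerFactory<byte[], byte[]> consumerFactory) {
        // Option 2: create a dedicated consumer from the injected ConsumerFactory
        // Consumer<byte[], byte[]> myConsumer = consumerFactory.createConsumer();
        return args -> {
            while (true) {
                boolean processed = testTopic.poll(message -> {
                    // Option 1: read the consumer that received this record from the header
                    // (assumes the kafka_consumer header is present, as described above)
                    Consumer<?, ?> consumer =
                            message.getHeaders().get(KafkaHeaders.CONSUMER, Consumer.class);
                    // ... use consumer / message here
                });
                if (!processed) {
                    Thread.sleep(500);
                }
            }
        };
    }

}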

Spring OAuth2: Making the `state` param at least 32 characters long

I am attempting to authorize against an external identity provider. Everything seems set up fine, but I keep getting a validation error from my identity provider because the state parameter automatically tacked onto my authorization request is not long enough.
For example:
&state=uYG5DC
The requirements of my IDP say that this state param must be at least 32 characters long. How can I programmatically increase the size of this auto-generated value?
Even if I could generate this value myself, it is not possible to override it with the other methods I have seen suggested. The following attempt fails because my manual setting of ?state=abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz is superseded by the auto-generated param placed after it during the actual request:
@Bean
public OAuth2ProtectedResourceDetails loginGovOpenId() {
    AuthorizationCodeResourceDetails details = new AuthorizationCodeResourceDetails() {

        @Override
        public String getUserAuthorizationUri() {
            return super.getUserAuthorizationUri() + "?state=abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz";
        }

    };
    details.setClientId(clientId);
    details.setAccessTokenUri(accessTokenUri);
    details.setUserAuthorizationUri(userAuthorizationUri);
    details.setScope(Arrays.asList("openid", "email"));
    details.setPreEstablishedRedirectUri(redirectUri);
    details.setUseCurrentUri(true);
    return details;
}
The 6-character default seems to be set here; is there a way to override it?
https://github.com/spring-projects/spring-security-oauth/blob/master/spring-security-oauth2/src/main/java/org/springframework/security/oauth2/common/util/RandomValueStringGenerator.java
With the help of this post:
spring security StateKeyGenerator custom instance
I was able to come up with a working solution.
In my configuration class marked with these annotations:
@Configuration
@EnableOAuth2Client
I configured the following beans:
@Bean
public OAuth2ProtectedResourceDetails loginGovOpenId() {
    AuthorizationCodeResourceDetails details = new AuthorizationCodeResourceDetails();
    details.setClientId(clientId);
    details.setClientSecret(clientSecret);
    details.setAccessTokenUri(accessTokenUri);
    details.setUserAuthorizationUri(userAuthorizationUri);
    details.setScope(Arrays.asList("openid", "email"));
    details.setPreEstablishedRedirectUri(redirectUri);
    details.setUseCurrentUri(true);
    return details;
}

@Bean
public StateKeyGenerator stateKeyGenerator() {
    return new CustomStateKeyGenerator();
}

@Bean
public AccessTokenProvider accessTokenProvider() {
    AuthorizationCodeAccessTokenProvider accessTokenProvider = new AuthorizationCodeAccessTokenProvider();
    accessTokenProvider.setStateKeyGenerator(stateKeyGenerator());
    return accessTokenProvider;
}

@Bean
public OAuth2RestTemplate loginGovOpenIdTemplate(final OAuth2ClientContext clientContext) {
    final OAuth2RestTemplate template = new OAuth2RestTemplate(loginGovOpenId(), clientContext);
    template.setAccessTokenProvider(accessTokenProvider());
    return template;
}
Where my CustomStateKeyGenerator implementation class looks as follows:
public class CustomStateKeyGenerator implements StateKeyGenerator {

    // login.gov requires state to be at least 32 characters long
    private static int length = 32;

    private RandomValueStringGenerator generator = new RandomValueStringGenerator(length);

    @Override
    public String generateKey(OAuth2ProtectedResourceDetails resource) {
        return generator.generate();
    }

}

Quartz.NET and Ninject: how to bind an implementation to my job using Ninject

I am working on an ASP.NET MVC 4 web application where we are using Ninject for dependency injection. We are also using UnitOfWork and repositories based on Entity Framework.
We would like to use Quartz.NET in our application to run some custom jobs periodically, and I would like Ninject to automatically bind the services that we need in our jobs.
It could be something like this:
public class DispatchingJob : IJob
{
    private readonly IDispatchingManagementService _dispatchingManagementService;

    public DispatchingJob(IDispatchingManagementService dispatchingManagementService)
    {
        _dispatchingManagementService = dispatchingManagementService;
    }

    public void Execute(IJobExecutionContext context)
    {
        LogManager.Instance.Info(string.Format("Dispatching job started at: {0}", DateTime.Now));
        _dispatchingManagementService.DispatchAtomicChecks();
        LogManager.Instance.Info(string.Format("Dispatching job ended at: {0}", DateTime.Now));
    }
}
So far, in our NinjectWebCommon, the binding is configured like this (using request scope):
kernel.Bind<IDispatchingManagementService>().To<DispatchingManagementService>();
Is it possible to inject the correct implementation into our custom job using Ninject, and how would I do it? I have already read a few posts on Stack Overflow, but I need some advice and an example using Ninject.
Use a JobFactory in your Quartz scheduler, and resolve your job instance there.
So, in your Ninject config, set up the job (I'm guessing at the correct Ninject syntax here):
// Assuming you only have one IJob
kernel.Bind<IJob>().To<DispatchingJob>();
Then, create a JobFactory: [edit: this is a modified version of @BatteryBackupUnit's answer here]
public class NInjectJobFactory : IJobFactory
{
    private readonly IResolutionRoot resolutionRoot;

    public NInjectJobFactory(IResolutionRoot resolutionRoot)
    {
        this.resolutionRoot = resolutionRoot;
    }

    public IJob NewJob(TriggerFiredBundle bundle, IScheduler scheduler)
    {
        // If you have multiple jobs, specify the name as
        // bundle.JobDetail.JobType.Name, or pass the type, whatever
        // Ninject wants..
        return this.resolutionRoot.Get<IJob>();
    }

    public void ReturnJob(IJob job)
    {
        this.resolutionRoot.Release(job);
    }
}
Then, when you create the scheduler, assign the JobFactory to it:
private IScheduler GetSchedule(IResolutionRoot root)
{
    var schedule = new StdSchedulerFactory().GetScheduler();
    schedule.JobFactory = new NInjectJobFactory(root);
    return schedule;
}
Quartz will then use the JobFactory to create the job, and Ninject will resolve the dependencies for you.
Regarding scoping of the IUnitOfWork, as per a comment on the answer I linked, you can do:
// default for web requests
Bind<IUnitOfWork>().To<UnitOfWork>()
    .InRequestScope();

// fall back to `InCallScope()` when there's no web request.
Bind<IUnitOfWork>().To<UnitOfWork>()
    .When(x => HttpContext.Current == null)
    .InCallScope();
There's only one caveat that you should be aware of:
With incorrect usage of async in a web request, you may mistakenly resolve an IUnitOfWork on a worker thread where HttpContext.Current is null. Without the fallback binding, this would fail with an exception showing you that you've done something wrong. With the fallback binding, however, the issue may present itself in an obscured way: it may work sometimes and sometimes not, because there will be two (or even more) IUnitOfWork instances for the same request.
To remedy this, we can make the binding more specific. For that, we need some parameter to tell us to use something other than InRequestScope(). Have a look at:
public class NonRequestScopedParameter : Ninject.Parameters.IParameter
{
    public bool Equals(IParameter other)
    {
        if (other == null)
        {
            return false;
        }

        return other is NonRequestScopedParameter;
    }

    public object GetValue(IContext context, ITarget target)
    {
        throw new NotSupportedException("this parameter does not provide a value");
    }

    public string Name
    {
        get { return typeof(NonRequestScopedParameter).Name; }
    }

    // this is very important
    public bool ShouldInherit
    {
        get { return true; }
    }
}
Now adapt the job factory as follows:
public class NInjectJobFactory : IJobFactory
{
    private readonly IResolutionRoot resolutionRoot;

    public NInjectJobFactory(IResolutionRoot resolutionRoot)
    {
        this.resolutionRoot = resolutionRoot;
    }

    public IJob NewJob(TriggerFiredBundle bundle, IScheduler scheduler)
    {
        return (IJob)this.resolutionRoot.Get(
            bundle.JobDetail.JobType,
            new NonRequestScopedParameter()); // parameter goes here
    }

    public void ReturnJob(IJob job)
    {
        this.resolutionRoot.Release(job);
    }
}
and adapt the IUnitOfWork bindings:
Bind<IUnitOfWork>().To<UnitOfWork>()
    .InRequestScope();

Bind<IUnitOfWork>().To<UnitOfWork>()
    .When(x => x.Parameters.OfType<NonRequestScopedParameter>().Any())
    .InCallScope();
This way, if you use async wrong, there'll still be an exception, but IUnitOfWork scoping will still work for quartz tasks.
For any users who might be interested, here is the solution that finally worked for me.
I made it work with some adjustments to match my project. Please note that in the NewJob method, I replaced the call to Kernel.Get with _resolutionRoot.Get, as you can see here:
public class JobFactory : IJobFactory
{
    private readonly IResolutionRoot _resolutionRoot;

    public JobFactory(IResolutionRoot resolutionRoot)
    {
        this._resolutionRoot = resolutionRoot;
    }

    public IJob NewJob(TriggerFiredBundle bundle, IScheduler scheduler)
    {
        try
        {
            return (IJob)_resolutionRoot.Get(
                bundle.JobDetail.JobType, new NonRequestScopedParameter()); // parameter goes here
        }
        catch (Exception ex)
        {
            LogManager.Instance.Info(string.Format("Exception raised in JobFactory"));
            throw; // rethrow so the method always either returns a job or throws
        }
    }

    public void ReturnJob(IJob job)
    {
    }
}
And here is the call to schedule my job:
public static void RegisterScheduler(IKernel kernel)
{
    try
    {
        var scheduler = new StdSchedulerFactory().GetScheduler();
        scheduler.JobFactory = new JobFactory(kernel);
        ....
    }
}
Thank you very much for your help
Thanks so much for your response. I have implemented something like that and the binding is working :):
public IJob NewJob(TriggerFiredBundle bundle, IScheduler scheduler)
{
    var resolver = DependencyResolver.Current;
    var myJob = (IJob)resolver.GetService(typeof(IJob));
    return myJob;
}
As I said before, I am using in my project a service and a unit of work (based on EF) that are both injected with Ninject.
public class DispatchingManagementService : IDispatchingManagementService
{
    private readonly IUnitOfWork _unitOfWork;

    public DispatchingManagementService(IUnitOfWork unitOfWork)
    {
        _unitOfWork = unitOfWork;
    }
}
Please find here how I am binding the implementations:
kernel.Bind<IUnitOfWork>().To<EfUnitOfWork>();
kernel.Bind<IDispatchingManagementService>().To<DispatchingManagementService>();
kernel.Bind<IJob>().To<DispatchingJob>();
To sum up, the binding of IUnitOfWork is done:
- every time a new request comes into my ASP.NET MVC application: request scope
- every time the job runs: InCallScope
What are the best practices given the behavior of EF? I have found information suggesting InCallScope. Is it possible to tell Ninject to use request scope every time a new request comes into the application, and InCallScope every time my job runs? How do I do that?
Thank you very much for your help
