In a Spring Boot (version 2.1.4) application, there is a requirement to migrate from apache-kafka to spring-kafka.
The current Kafka consumer does the following:
1) The KafkaConsumer bean is initialized at application startup
2) It is assigned topic partition "0"
3) It polls data into ConsumerRecords using the Apache KafkaConsumer
4) The application has its own retry mechanism to wait and poll again until max_retry
The legacy code looks like this:
while (!done.get()) {
    ConsumerRecords<byte[], byte[]> records = kafkaConsumer.poll(<MAX_VALUE>);
    if (records.isEmpty()) {
        retryCount++;
        Thread.sleep(<some_time>);
    } else {
        // Process records
    }
    if (retryCount > <max_retry_count>) {
        done.set(true);
    }
}
I tried the approaches below:
1) Using the spring-kafka annotation (@KafkaListener), but it does not let us have control over polling.
2) Creating a ConcurrentMessageListenerContainer and using setupMessageListener to add records into a queue for polling. This gives us control over the consumer.
I wanted to know: am I heading in the correct direction?
What would be a better solution to achieve the above requirement using spring-kafka?
It's not clear what you mean by "control over the consumer". Creating a container is the same as using a @KafkaListener (a container is created under the covers).
Spring uses a "message-driven" approach.
You can set the idleEventInterval and the container will publish a ListenerContainerIdleEvent if no records are received during that time. You can listen for these events with an ApplicationListener bean, or an @EventListener method.
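For example, a minimal sketch (assuming a hypothetical topic "my-topic", and that you have set the interval via factory.getContainerProperties().setIdleEventInterval(...) on your container factory):

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.context.event.EventListener;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.event.ListenerContainerIdleEvent;
import org.springframework.stereotype.Component;

@Component
public class IdleAwareListener {

    @KafkaListener(id = "myListener", topics = "my-topic") // hypothetical topic
    public void listen(ConsumerRecord<byte[], byte[]> record) {
        // process the record here
    }

    // Published by the container whenever no records arrive within
    // the configured idleEventInterval
    @EventListener
    public void onIdle(ListenerContainerIdleEvent event) {
        // e.g. increment a retry counter here and stop the container once
        // it exceeds max_retry; event.getListenerId() identifies the container
    }
}

This replaces the hand-rolled retryCount/Thread.sleep loop: the container does the polling, and the idle events give you the "no records received" signal to count against max_retry.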
I am working on a microservice-based application in Azure. My requirement is that I have a Service Bus, and I need to consume its messages in a Web API. Currently I have implemented this through Azure Functions, but my company asked me to use an API. Is it possible? If possible, please show me how to do it.
You can create a background service to listen for messages from the Service Bus queue.
Below are a few key points that need to be noted:
1) Background task that runs on a timer.
2) Hosted service that activates a scoped service. The scoped service can use dependency injection (DI).
3) Queued background tasks that run sequentially.
App Settings:
{
  "AppSettings": {
    "QueueConnectionString": "<replace your RootManageSharedAccessKey here>",
    "QueueName": "order-queue"
  }
}
You can refer to the c-sharpcorner blog for a step-by-step process.
There is also a simple way to receive a single message:
// Connect to the namespace and create a receiver for the queue
ServiceBusClient client = new ServiceBusClient("Endpoint=sb://yourservicesbusnamespace.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=Your_SharedAccess");
var receiver = client.CreateReceiver("Your Queue");
// ReceiveMessagesAsync returns a batch; here we ask for at most one message
var messages = await receiver.ReceiveMessagesAsync(1);
string ascii = Encoding.ASCII.GetString(messages.FirstOrDefault().Body);
Console.WriteLine("Received Single Message: " + ascii);
// Complete the message so it is removed from the queue
await receiver.CompleteMessageAsync(messages.FirstOrDefault());
I made some modifications based on this post:
https://ciaranodonnell.dev/posts/receiving-from-azure-servicebus/
I have a spring-boot Kafka project which is a web service exposing an API that returns a Kafka message in the response.
What I want is that whenever I call the REST endpoint, Kafka should start searching from the beginning. It does this because I used earliest in the auto.offset.reset config, but I have to restart the server again and again to make it listen to Kafka from the start.
@KafkaListener(topics = {"topic"})
public void storeMessagesMessages(ConsumerRecord record) {
    if (record.value().toString().contains(uuid)) {
        this.messageToBeReturnedByApi = record.value();
    }
}
In other words, I want this listener to be invoked only when I call the web service endpoint.
Your listener should extend AbstractConsumerSeekAware; you can then perform arbitrary seek operations. See https://docs.spring.io/spring-kafka/docs/2.6.2/reference/html/#seek
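A minimal sketch, reusing the listener from the question plus a hypothetical rewind() method for the REST endpoint to call:

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.listener.AbstractConsumerSeekAware;
import org.springframework.stereotype.Component;

@Component
public class ReplayingListener extends AbstractConsumerSeekAware {

    @KafkaListener(topics = {"topic"})
    public void storeMessagesMessages(ConsumerRecord record) {
        // same uuid-matching logic as in the question
    }

    // Hypothetical method for the REST endpoint to call before waiting
    // for the listener to capture the matching record
    public void rewind() {
        seekToBeginning(); // seek all assigned partitions back to offset 0
    }
}

That way the consumer re-reads the topic on demand instead of only after a server restart.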
I have been reading this book (page 58) to understand how to do asynchronous event integration between microservices.
Using RabbitMQ and publish/subscribe patterns facilitates pushing events out to subscribers. However, given microservice architectures and Docker usage, I expect to have more than one instance of a microservice 'type' running. From what I understand, all instances will subscribe to the event and would therefore all receive it.
The book doesn't clearly explain how to ensure only one of the instances handles the event.
I have looked into the duplication section, but that describes a pattern for deduplicating within a service instance, not necessarily across them...
Each microservice instance would subscribe using something similar to:
public void Subscribe<T, TH>()
    where T : IntegrationEvent
    where TH : IIntegrationEventHandler<T>
{
    var eventName = _subsManager.GetEventKey<T>();
    var containsKey = _subsManager.HasSubscriptionsForEvent(eventName);
    if (!containsKey)
    {
        if (!_persistentConnection.IsConnected)
        {
            _persistentConnection.TryConnect();
        }
        using (var channel = _persistentConnection.CreateModel())
        {
            channel.QueueBind(queue: _queueName,
                              exchange: BROKER_NAME,
                              routingKey: eventName);
        }
    }
    _subsManager.AddSubscription<T, TH>();
}
I need to understand how multiple microservice instances of the same 'type' can deduplicate without losing the message if the service goes down while processing.
"From what I understand all instances will subscribe to the event and therefore would all receive it."
Only one instance of the subscriber will process each message/event. When you have multiple instances of a service running and subscribed to the same subscription, the first one to pick up the message makes it invisible on the subscription (called the visibility timeout). If the service instance is able to process the message in the given time, it will tell the queue to delete the message; if it is not able to process the message in time, the message will reappear in the queue for any instance to pick up again.
All standard message brokers (RabbitMQ, SQS, Azure Service Bus, etc.) provide this feature out of the box.
By the way, I have read this book and used the above code from eShopOnContainers, and it works the way I described.
You should look into the following pattern as well:
Competing Consumers pattern
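To make that concrete, here is a minimal sketch using the plain RabbitMQ Java client (the queue name "order-events" and host "localhost" are assumptions). Every instance of the service runs the same code against the same queue, and the broker delivers each message to exactly one of them:

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;

public class CompetingConsumer {

    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumed broker host
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        // All instances bind to the SAME named queue
        channel.queueDeclare("order-events", true, false, false, null);
        channel.basicQos(1); // at most one unacknowledged message per consumer

        DeliverCallback callback = (consumerTag, delivery) -> {
            // process the message, then ack; if this instance crashes before
            // acking, the broker redelivers the message to another instance
            channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
        };
        channel.basicConsume("order-events", false, callback, consumerTag -> { });
    }
}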
Hope that helps!
Given that I have a web/SOAP service, how do I set up and tear down a proper transaction context for Rebus (the messaging bus)? When Rebus calls into a message handler this is not a problem, since Rebus will set up the transaction context before calling the handler - but what about the opposite case, where a web service handler needs to send/publish a message via Rebus?
I am not interested in how to implement an HTTP module or similar - only the basics around Rebus: what is needed to prepare Rebus for sending a message?
The web service code has its own transaction going on when talking to the application database. I need to be able to set up Rebus when setting up the database transaction, and commit/roll back Rebus when doing the same with the database.
I have a similar problem with standalone command line programs that need to both interact with a database and send Rebus messages.
Rebus will automatically enlist send and publish operations in its own "ambient transaction context", which is accessed via the static(*) AmbientTransactionContext.Current property.
You could implement ITransactionContext yourself if you wanted to, but Rebus comes with DefaultTransactionContext in the box.
You use it like this:
using (var context = new DefaultTransactionContext())
{
    AmbientTransactionContext.Current = context;

    // send and publish things in here

    // complete the transaction
    await context.Complete();
}
which could easily be put e.g. in an OWIN middleware or something similar.
(*) The property is static, but the underlying value is bound to the current execution context (by using CallContext.LogicalGet/SetData), which means that you can think of it as thread-bound, with the nice property that it flows as expected to continuations.
In Rebus 2.0.2 it is possible to customize the accessors used to get/set the context by calling AmbientTransactionContext.SetAccessors(...) with an Action<ITransactionContext> and a Func<ITransactionContext>, e.g. like this:
AmbientTransactionContext.SetAccessors(
    context => {
        if (HttpContext.Current == null) {
            throw new InvalidOperationException("Can't set the transaction context when there is no HTTP context");
        }
        HttpContext.Current.Items["current-rbs-context"] = context;
    },
    () => HttpContext.Current?.Items["current-rbs-context"] as ITransactionContext
);
which in this case makes it work in a way that flows properly even when using old school HTTP modules ;)
I am trying to implement a delayed queue with overriding of messages using ActiveMQ.
Each message is scheduled to be delivered with a delay of x (say, 60 seconds).
In between, if the same message is received again, it should override the previous message.
So even if I receive 10 messages within x seconds, only one message should be processed.
Is there a clean way to accomplish this?
The question has two parts that need to be addressed separately:
Can a message be delayed in ActiveMQ?
Yes - see Delay and Schedule Message Delivery. You need to set <broker ... schedulerSupport="true"> in your ActiveMQ config, as well as setting the AMQ_SCHEDULED_DELAY property of the JMS message saying how long you want the message to be delayed (60000 in your case, for a 60 second delay).
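For example, a minimal sketch of the sending side, assuming an already-created JMS Session and MessageProducer:

import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ScheduledMessage;

public class DelayedSender {

    public void sendDelayed(Session session, MessageProducer producer, String payload) throws JMSException {
        TextMessage message = session.createTextMessage(payload);
        // deliver after 60 seconds (the x from the question)
        message.setLongProperty(ScheduledMessage.AMQ_SCHEDULED_DELAY, 60000L);
        producer.send(message);
    }
}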
Is there any way to prevent the same message being consumed more than once?
Yes, but that's an application concern rather than an ActiveMQ one. It's often referred to as de-duplication or idempotent consumption. The simplest way, if you only have one consumer, is to keep track of messages received in a map, and check that map whenever you receive a message. If it has been seen, discard it.
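A minimal sketch of that single-consumer case, assuming each message carries a unique business key in a hypothetical "messageKey" property:

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import javax.jms.JMSException;
import javax.jms.Message;

public class DeduplicatingListener {

    // Keys of all messages seen so far (in-memory, so single consumer only)
    private final Set<String> seen = ConcurrentHashMap.newKeySet();

    public void onMessage(Message message) throws JMSException {
        String key = message.getStringProperty("messageKey"); // hypothetical property
        // add() returns false if the key was already present, i.e. a duplicate
        if (!seen.add(key)) {
            return; // already processed; discard
        }
        // process the message here
    }
}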
For more complex use cases where you have multiple consumers on different machines, or you want that state to survive application restart, you will need to keep a table of messages seen in a database, and query it each time.
Also, according to this method from the ActiveMQ BrokerService class, you should configure persistence in order to be able to use the scheduler functionality:
public boolean isSchedulerSupport() {
    return this.schedulerSupport && (isPersistent() || jobSchedulerStore != null);
}
You can configure the ActiveMQ broker to enable schedulerSupport with the following entry in your activemq.xml file, located in the conf directory of your ActiveMQ home directory:
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost" dataDirectory="${activemq.data}" schedulerSupport="true">
You can override the BrokerService in your configuration:
@Configuration
@EnableJms
public class JMSConfiguration {

    @Bean
    public BrokerService brokerService() throws Exception {
        BrokerService brokerService = new BrokerService();
        brokerService.setSchedulerSupport(true);
        return brokerService;
    }
}