Is it possible to check that the Kafka broker and the topic I'm sending messages to are healthy before sending - spring-kafka

I have a batch application which needs to send messages to two different Kafka topics on two different clusters. I want to make sure the Kafka brokers my producer app connects to are healthy before sending any messages.
Is it possible to do this kind of check programmatically in Spring Boot?

See KafkaAdmin.describeTopics() API:
/**
* Obtain {@link TopicDescription}s for these topics.
* @param topicNames the topic names.
* @return a map of name:topicDescription.
*/
Map<String, TopicDescription> describeTopics(String... topicNames);
It does connect to the configured cluster and requests the info for topics and their partitions.
See docs for more info: https://docs.spring.io/spring-kafka/docs/current/reference/html/#configuring-topics
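A minimal sketch of using this as a pre-send health check might look like the following (the class and method names are assumptions, not part of spring-kafka; describeTopics() throws if the cluster is unreachable or a topic does not exist, so catching that exception is the actual check):

```java
import java.util.Map;

import org.apache.kafka.clients.admin.TopicDescription;
import org.springframework.kafka.core.KafkaAdmin;

public class TopicHealthCheck {

    private final KafkaAdmin kafkaAdmin;

    public TopicHealthCheck(KafkaAdmin kafkaAdmin) {
        this.kafkaAdmin = kafkaAdmin;
    }

    /**
     * Returns true if the broker is reachable and the topic exists;
     * describeTopics() throws otherwise.
     */
    public boolean isHealthy(String topic) {
        try {
            Map<String, TopicDescription> descriptions = kafkaAdmin.describeTopics(topic);
            return descriptions.containsKey(topic);
        } catch (Exception ex) {
            return false;
        }
    }
}
```

For two clusters you would configure two KafkaAdmin beans, one per bootstrap-servers list, and check each before the batch run.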

Related

Azure Service Bus Topic trigger in Function App - limiting the number of messages read

I have a Service Bus trigger in an Azure Function App which reads the messages (which are in JSON format) coming from the subscription. I would like to know if there is a way to limit the number of messages processed at a time. So, for example, if my function gets triggered and there are 20 messages to be processed, I would like only the first 10 to be processed, and then the next 10. How can I achieve that?
I am asking this because I am doing some manipulation with the received messages: first I create a list of the information and run some SQL queries over it in C#, and I would prefer my code to NOT handle all the messages at once.
You can configure this in host.json. Here's the documentation:
learn.microsoft.com
Just add "maxConcurrentCalls": 10 to the messageHandlerOptions; then it will process at most 10 messages simultaneously.
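In a Functions v2+ host.json this setting sits under the Service Bus extension, for example (only the maxConcurrentCalls line is the relevant change; the rest is standard host.json scaffolding):

```json
{
  "version": "2.0",
  "extensions": {
    "serviceBus": {
      "messageHandlerOptions": {
        "maxConcurrentCalls": 10
      }
    }
  }
}
```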

Keep messages when I use the Messenger component with Doctrine

I am using the Messenger component configured with Doctrine to store the messages in a database table.
I see that in the default configuration the table "messenger_messages" is automatically created. In it there is a "delivered_at" field.
I want to keep all the messages that are dispatched but when the messages are consumed the corresponding records are automatically deleted.
When I run the process via php bin/console messenger:consume async -vv I see that a timestamp is written to the "delivered_at" field but then the entire record is deleted.
Is there a way that the records are not erased and that the date and time of sending the message is recorded?
Giving an answer to the original question and kind of ignoring the following clarifications:
Register a new transport that should hold the sent messages (named 'sent' here):
# config/packages/messenger.yaml
framework:
    messenger:
        transports:
            sent: 'doctrine://default?queue_name=sent'
Then create a new EventSubscriber that forwards the sent messages to the 'sent' transport:
<?php

namespace App\EventSubscriber;

use Symfony\Component\EventDispatcher\EventSubscriberInterface;
use Symfony\Component\Messenger\Event\WorkerMessageHandledEvent;
use Symfony\Component\Messenger\Transport\Sender\SenderInterface;

class MessengerMessageConsumedSubscriber implements EventSubscriberInterface
{
    private SenderInterface $sentSender;

    public function __construct(SenderInterface $sentSender)
    {
        $this->sentSender = $sentSender;
    }

    public static function getSubscribedEvents(): array
    {
        return [
            WorkerMessageHandledEvent::class => [
                ['onWorkerMessageHandled', 10],
            ],
        ];
    }

    public function onWorkerMessageHandled(WorkerMessageHandledEvent $event)
    {
        $this->sentSender->send($event->getEnvelope());
    }
}
Hint the constructor to pick the appropriate sender for the 'sent' transport:
# config/services.yaml
services:
    App\EventSubscriber\MessengerMessageConsumedSubscriber:
        arguments: ['@messenger.transport.sent']
This way the messages (well, actually copies of your messages) will be kept and messenger_messages.created_at will hold the time of sending.
I agree with yivi though in that you should probably NOT keep the messages in messenger_messages but instead log the data somewhere else...
(Verified on symfony/messenger v5.4.7)
Maybe I did not express the problem correctly. My application dispatches emails using the Messenger component.
Every email dispatched by the application is audited in a file, so I can know the number of emails the application sends in a period of time.
However, the audited number is not the real one. The application counts everything that is dispatched, not only the messages that actually reach their destination.
Messenger processes the queue and does not know whether the mail was actually sent by the mail server; it just dispatches.
What happens? Email addresses are obtained from an HTML form. Malformed domains are counted by the application and by Messenger as sent emails.
What I want to obtain is an indicator of how many emails the mail server has successfully processed.
I suppose the solution to my problem lies not in the application or the Messenger component, but in obtaining some kind of audit from the mail server itself.
I tried what Jakumi suggested, but the trigger captures all the messages that get to the queue, even those with malformed domains like foo@hotmai or bar@aaa.com. The count in this table matches my audit file that records sent emails.
My problem is counting the emails that were effectively sent.
Thank you very much for the comments and suggestions.
PS: I apologize for my English; I have used Google's translation services. I hope you understand.

how to handle Virtual Topic with Spring Cloud Contract

I'm trying to set up spring-cloud-contract with ActiveMQ virtual topics as the messaging system. The problem is that a virtual topic uses different names for sending and receiving messages, but in a contract we can define just one output channel in the sentTo part of outputMessage. Does anyone know how to handle this scenario?
outputMessage {
    sentTo "verifications"
    body('''
    ''')
}
Well, it looks like we have consumer and producer methods in the contract DSL which let us configure different values for the consumer and producer ends.
So, doing something like
sentTo $(consumer('VirtualTopic.A'), producer('Consumer.B.VirtualTopic.A'))
will do the trick. Here we are sending to the virtual topic and receiving from the backing queue.
Of course, we need to use the correct destination in our tests: sending at the producer end and receiving at the consumer end. This makes the auto-generated test verify the received message on the producer side and send the message to the consumer.
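Put together, the contract might look like the sketch below (the input trigger name is hypothetical; the topic/queue names are the ones from above, and the body is left empty as in the question):

```groovy
Contract.make {
    input {
        triggeredBy('requestVerification()')
    }
    outputMessage {
        // consumer side listens on the virtual topic,
        // producer-side tests send to / verify against the backing queue
        sentTo $(consumer('VirtualTopic.A'), producer('Consumer.B.VirtualTopic.A'))
        body('''
        ''')
    }
}
```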
Hope this helps

"messages_ready" from RabbitMQ queue using Pika

I need to get the number of messages that are ready. A queue has three message counts: 1. Total 2. Unacked 3. Ready
Ready messages are the ones that are in the queue but haven't been consumed yet.
Currently I use requests
url = "http://<RABBITHOST>:15672/api/queues/%2f/{}".format(q)
res = requests.get(url, auth=("<user>","<password>")).json()
messages_in_queue = res.get("messages_ready")
The problem here is that I have to pass in the username and password. Using Pika, I believe you can get the total number of messages. Is there any way to get the other two counts (unacked and ready) using Pika?
No, the AMQP protocol doesn't support getting the count of unacknowledged messages; you will still have to use the HTTP API for that. If you do a passive queue declaration, the message count returned is the number of ready messages.
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
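For reference, the management API response carries all three counters, so with the HTTP approach the other values come from the same payload. A stdlib-only sketch of pulling them out (the payload below is an abbreviated, illustrative sample of what /api/queues/%2f/<queue> returns, not a real response):

```python
import json

# Abbreviated, illustrative sample of a /api/queues/<vhost>/<queue> response.
payload = '''
{
  "name": "my-queue",
  "messages": 12,
  "messages_ready": 10,
  "messages_unacknowledged": 2
}
'''

stats = json.loads(payload)
total = stats["messages"]                   # ready + unacked
ready = stats["messages_ready"]             # deliverable, not yet consumed
unacked = stats["messages_unacknowledged"]  # delivered, awaiting ack

assert total == ready + unacked
```

In your existing requests-based code, res.get("messages_unacknowledged") therefore gives the unacked count alongside res.get("messages_ready").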

How to deduplicate events when using RabbitMQ Publish/Subscribe Microservice Event Bus

I have been reading This Book on page 58 to understand how to do asynchronous event integration between microservices.
Using RabbitMQ and publish/subscribe patterns facilitates pushing events out to subscribers. However, given microservice architectures and Docker usage, I expect to have more than one instance of a microservice 'type' running. From what I understand, all instances will subscribe to the event and therefore would all receive it.
The book doesn't clearly explain how to ensure only one of the instances handle the request.
I have looked into the deduplication section, but that describes a pattern for deduplicating within a service instance, not necessarily across instances...
Each microservice instance would subscribe using something similar to:
public void Subscribe<T, TH>()
    where T : IntegrationEvent
    where TH : IIntegrationEventHandler<T>
{
    var eventName = _subsManager.GetEventKey<T>();
    var containsKey = _subsManager.HasSubscriptionsForEvent(eventName);
    if (!containsKey)
    {
        if (!_persistentConnection.IsConnected)
        {
            _persistentConnection.TryConnect();
        }
        using (var channel = _persistentConnection.CreateModel())
        {
            channel.QueueBind(queue: _queueName,
                              exchange: BROKER_NAME,
                              routingKey: eventName);
        }
    }
    _subsManager.AddSubscription<T, TH>();
}
I need to understand how multiple microservice instances of the same 'type' can deduplicate without losing the message if a service goes down while processing.
From what I understand all instances will subscribe to the event and
therefore would all receive it.
Only one instance of a subscriber will process a given message/event. When you have multiple instances of a service running and subscribed to the same subscription, the first one to pick up the message makes it invisible to the others (for the so-called visibility timeout). If that service instance processes the message within the given time, it tells the queue to delete the message; if it cannot process the message in time, the message reappears in the queue for any instance to pick up again.
All standard message brokers (RabbitMQ, SQS, Azure Service Bus, etc.) provide this feature out of the box.
By the way, I have read this book and used the above code from eShopOnContainers, and it works the way I described.
You should look into the following pattern as well:
Competing Consumers pattern
Hope that helps!
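The competing-consumers behavior described above can be illustrated with an in-memory sketch: a single shared queue stands in for the broker queue, several workers stand in for the service instances, and each message is handled exactly once (all names here are illustrative, not from the book):

```python
import queue
import threading

# The shared queue plays the role of the broker queue that all
# instances of the same microservice 'type' consume from.
work = queue.Queue()
processed = []
lock = threading.Lock()

def worker(name):
    """One competing consumer: takes whatever messages it can get."""
    while True:
        try:
            msg = work.get(timeout=0.2)
        except queue.Empty:
            return  # queue drained, instance goes idle
        with lock:
            processed.append((name, msg))
        work.task_done()  # analogous to acking the message

for i in range(20):
    work.put(f"event-{i}")

instances = [threading.Thread(target=worker, args=(f"instance-{n}",))
             for n in range(3)]
for t in instances:
    t.start()
for t in instances:
    t.join()

# Every event was handled by exactly one instance, none twice, none lost.
assert sorted(m for _, m in processed) == sorted(f"event-{i}" for i in range(20))
```

The broker equivalent is simply binding all instances to the same named queue (as the Subscribe code above does with _queueName): the broker then distributes each message to only one consumer, which is what makes deduplication across instances unnecessary in the happy path.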
