Keep messages when I use the Messenger component with Doctrine

I am using the Messenger component configured with Doctrine to store the messages in a database table.
I see that in the default configuration the table "messenger_messages" is automatically created. In it there is a "delivered_at" field.
I want to keep all the messages that are dispatched, but when the messages are consumed the corresponding records are automatically deleted.
When I run the worker via php bin/console messenger:consume async -vv, I see that a timestamp is written to the "delivered_at" field, but then the entire record is deleted.
Is there a way to keep the records from being deleted, and to record the date and time at which each message was sent?

Giving an answer to the original question and kind of ignoring the following clarifications:
Register a new transport that should hold the sent messages (named 'sent' here):
# config/packages/messenger.yaml
framework:
    messenger:
        transports:
            sent: 'doctrine://default?queue_name=sent'
Then create a new EventSubscriber that forwards the sent messages to the 'sent' transport:
<?php

namespace App\EventSubscriber;

use Symfony\Component\EventDispatcher\EventSubscriberInterface;
use Symfony\Component\Messenger\Event\WorkerMessageHandledEvent;
use Symfony\Component\Messenger\Transport\Sender\SenderInterface;

class MessengerMessageConsumedSubscriber implements EventSubscriberInterface
{
    private SenderInterface $sentSender;

    public function __construct(SenderInterface $sentSender)
    {
        $this->sentSender = $sentSender;
    }

    public static function getSubscribedEvents(): array
    {
        return [
            WorkerMessageHandledEvent::class => [
                ['onWorkerMessageHandled', 10],
            ],
        ];
    }

    public function onWorkerMessageHandled(WorkerMessageHandledEvent $event): void
    {
        // Forward a copy of the handled message to the 'sent' transport.
        $this->sentSender->send($event->getEnvelope());
    }
}
Hint the constructor to pick the appropriate sender for the 'sent' transport:
# config/services.yaml
services:
    App\EventSubscriber\MessengerMessageConsumedSubscriber:
        arguments: ['@messenger.transport.sent']
This way the messages (well, actually copies of your messages) will be kept and messenger_messages.created_at will hold the time of sending.
I agree with yivi though in that you should probably NOT keep the messages in messenger_messages but instead log the data somewhere else...
(Verified on symfony/messenger v5.4.7)

I may not have expressed the problem correctly. My application dispatches emails using the Messenger component.
Every email that is dispatched by the application is audited in a file, so I can tell how much mail the application sends in a given period of time.
However, the audited number is not accurate. The application counts everything that is dispatched; it does not count what actually reaches its destination.
Messenger processes the queue and does not know whether the mail is actually sent by the Mail Server. It just dispatches.
What happens? Email addresses are obtained from an HTML form, so addresses with malformed domains are counted by the application and by Messenger as sent emails.
What I want to obtain is an indicator of how many emails the Mail Server has successfully processed.
I suppose that the solution to my problem is not through the application or the Messenger component, but rather by obtaining some kind of audit from the Mail Server itself.
I tried what Jakumi suggested, but it captures all the messages that reach the queue, even ones with malformed domains like foo@hotmai or bar@aaa.com. The count in this table matches my audit file that records sent emails.
My problem is counting the ones that were effectively sent.
Thank you very much for the comments and suggestions.
PS: I apologize for my English; I used Google's translation services. I hope you understand.

Related

Restoring messages from database

I am thinking of a way to manage failed messages in Rebus.
In my second level retry strategy I want to save the message and exception details into the database so that I can later review the error details and decide whether to resend the message to be reprocessed, or ignore and delete it.
In the handler I am capturing details as follows:
public async Task Handle(IFailed<StudentCreated> failedMessage)
{
    // Logic to defer the message with rebus_defer_count not shown

    DictionarySerializer dictionarySerializer = new DictionarySerializer();
    ObjectSerializer objectSerializer = new ObjectSerializer();

    string headers = dictionarySerializer.SerializeToString(failedMessage.Headers);
    string message = objectSerializer.SerializeToString(failedMessage.Message);

    Exception lastException = failedMessage.Exceptions.Last();
    string exception = objectSerializer.SerializeToString(lastException);

    // Logic to save the message and error details in the database not shown
}
This will enable me to save the message and error details into the database, where I can create a dashboard to view the messages and resolve them as I wish, rather than in the broker queue such as RabbitMQ.
Now my question is: how can I return them to the handler where the error was raised, using the information provided in the headers?
What is the best way to do this with Rebus, provided I have all the details from the failed message as shown in my code snippet?
Regards
What you're trying to achieve will be much easier if you make a small change to your application. You see, Rebus already has a built-in service in place for handling failed messages called IErrorHandler.
You can register your own error handler like this:
Configure.With(...)
    .(...)
    .Options(o => o.Register<IErrorHandler>(c => new MyCustomErrorHandler()))
    .Start();
thus replacing the default error handler (which btw. is PoisonQueueErrorHandler)
The error handler gets to handle the message in the form of the raw TransportMessage (i.e. simply headers and a byte[]) when all retries have failed, so this is the perfect place to save the message to your database.
If you then look here, you can see how Rebus' default error handler adds its own queue name as the rbs2-source-queue header, meaning that the message can later be sent back to that queue.
With this information, it should be fairly easy to write some code that inspects the message for its source queue and sends a RabbitMQ message to that queue.
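If you go the IErrorHandler route, the re-delivery side does not even need Rebus: any client of the broker can send the message back. Here is a minimal sketch using the plain RabbitMQ Java client (the host is an assumption; it presumes you stored the raw body bytes and can deserialize the saved headers back into a map, with the rbs2-source-queue header intact). Publishing to the default exchange with the queue name as the routing key delivers the message straight to that queue:
import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

import java.util.Map;

public class Redeliverer {
    public static void redeliver(Map<String, Object> headers, byte[] body) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumption: broker location

        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {
            // Rebus stamps the originating queue into this header before
            // the message is moved to the error queue.
            String sourceQueue = headers.get("rbs2-source-queue").toString();

            AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
                    .headers(headers) // keep the Rebus headers intact
                    .build();

            // Default exchange + queue name as routing key = direct delivery
            // to that queue.
            channel.basicPublish("", sourceQueue, props, body);
        }
    }
}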
This will only work if the re-delivery service has access to the RabbitMQ instance where all of your Rebus endpoints are running, of course. It's less straightforward if you want to implement this in a general way: e.g. if you were using Fleet Manager, each Rebus instance would use a long-polling protocol to query the server for commands, which enables Fleet Manager to tell any Rebus instance to e.g. send a previously failed message to any queue it has access to.

How to deduplicate events when using RabbitMQ Publish/Subscribe Microservice Event Bus

I have been reading This Book on page 58 to understand how to do asynchronous event integration between microservices.
Using RabbitMQ and publish/subscribe patterns facilitates pushing events out to subscribers. However, given microservice architectures and Docker usage, I expect to have more than one instance of a microservice 'type' running. From what I understand, all instances will subscribe to the event and therefore would all receive it.
The book doesn't clearly explain how to ensure that only one of the instances handles the request.
I have looked into the duplication section, but that describes a pattern for deduplicating within a single service instance, not necessarily across several of them...
Each microservice instance would subscribe using something similar to:
public void Subscribe<T, TH>()
    where T : IntegrationEvent
    where TH : IIntegrationEventHandler<T>
{
    var eventName = _subsManager.GetEventKey<T>();
    var containsKey = _subsManager.HasSubscriptionsForEvent(eventName);
    if (!containsKey)
    {
        if (!_persistentConnection.IsConnected)
        {
            _persistentConnection.TryConnect();
        }

        using (var channel = _persistentConnection.CreateModel())
        {
            channel.QueueBind(queue: _queueName,
                              exchange: BROKER_NAME,
                              routingKey: eventName);
        }
    }

    _subsManager.AddSubscription<T, TH>();
}
I need to understand how multiple microservice instances of the same 'type' can deduplicate without losing the message if a service goes down while processing.
From what I understand all instances will subscribe to the event and therefore would all receive it.
Only one instance of a subscriber will process each message/event. When you have multiple instances of a service running and subscribed to the same subscription, the first one to pick up the message makes it invisible on the subscription (a so-called visibility timeout). If that instance processes the message within the given time, it tells the queue to delete the message; if it does not manage to process it in time, the message reappears in the queue for any instance to pick up again.
All standard message brokers (RabbitMQ, SQS, Azure Service Bus, etc.) provide this feature out of the box.
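In RabbitMQ terms, the rough equivalent of that visibility timeout is manual acknowledgement: a message delivered to one consumer on a shared queue is not handed to the others, and it is only deleted once acknowledged. A minimal sketch with the RabbitMQ Java client (the queue name and host are assumptions, not from the book):
import com.rabbitmq.client.*;

public class Worker {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumption: broker location
        Connection conn = factory.newConnection();
        Channel channel = conn.createChannel();

        // All instances of this service type share one durable queue.
        channel.queueDeclare("user-updated", true, false, false, null);
        channel.basicQos(1); // hand each instance one message at a time

        DeliverCallback onDeliver = (tag, delivery) -> {
            try {
                process(delivery.getBody());
                // Ack only after success: the broker may now delete it.
                channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
            } catch (Exception e) {
                // Nack with requeue so another instance can retry.
                channel.basicNack(delivery.getEnvelope().getDeliveryTag(), false, true);
            }
        };
        // Manual ack mode; the connection stays open to keep consuming.
        channel.basicConsume("user-updated", false, onDeliver, tag -> {});
    }

    private static void process(byte[] body) { /* business logic */ }
}
If an instance crashes before acking, the broker redelivers the unacknowledged message to another instance, which covers the "without losing the message if the service goes down" requirement.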
By the way, I have read this book and used the above code from eShopOnContainers, and it works the way I described.
You should look into the following pattern as well:
Competing Consumers pattern
Hope that helps!

Skype returning empty CHATMESSAGES results through the DBus API

I am trying to use Skype's DBus API in order to retrieve the list of messages (message IDs) I've exchanged with a contact. However, both the SEARCH CHATMESSAGES <target> (protocol >= 3) and the SEARCH MESSAGES <target> (protocol < 3) commands return unexpectedly empty results.
Here is the trace of a few exchanges I had with the API. I used d-feet to send my requests, but the result is exactly the same when I send the request from my own program.
Bus name: com.Skype.API
Object: /com/Skype
Interface: com.Skype.API
Method used: Invoke(String request)
Trace:
-> NAME dfeet
<- OK
-> PROTOCOL 8
<- PROTOCOL 8
-> SEARCH CHATMESSAGES mycontact
<-
The same thing happens with two other SEARCH commands:
SEARCH MESSAGES <target> (with PROTOCOL 2).
SEARCH CHATS
Additionally, I also get an empty result when I try to request a message list based on a chat ID: GET CHAT <chat_id> GETMESSAGES.
However, commands such as SEARCH FRIENDS, SEARCH CALLS, or SEARCH ACTIVECHATS work just fine, and return their lists of IDs (contact IDs, call IDs, or chat IDs) as expected.
It might also be worth noting that this happens for all contacts, regardless of how many messages I've exchanged with them (I thought at first that there might be too many messages involved, but the result is the same, whether I've sent 3, or thousands of messages to the contact).
Is there anything that would explain why I get these empty responses through DBus, for these requests?
Skype will not use Invoke's return value when its reply is too heavy. As it so happens, when Skype has too much data to prepare and transfer after a request, it automatically returns an empty string to the Invoke call. The true, heavy reply is then prepared asynchronously by Skype, and the client program must be ready to receive it when it eventually arrives.
Whenever you are communicating with Skype over DBus, your application must act as both a client (calling Invoke), and a server (providing a DBus object for Skype to reach). This design was a little unexpected (I guess we could argue on its quality), but here is what it requires you to do:
Make your program a DBus "server" (providing objects to reach). On the connection you use to talk to Skype, register an object path called /com/Skype/Client implementing the com.Skype.API.Client interface.
Prepare a message handler for the only method of this interface: Notify(s). This is the method Skype will try to call to send you the heavy reply to one of your previous requests.
Program your own mechanism to match your Invoke request with the asynchronous Notify message coming in as an answer later on.
The creation of an object can be done through dbus_connection_register_object_path, the parameters for which are:
The DBusConnection structure representing your connection to the bus.
The object path you are registering, here /com/Skype/Client.
A table of message handlers (DBusObjectPathVTable) used to process all incoming requests.
Data to be sent to these handlers when they are called. This is additional data, not the actual message being received since you're just setting up the handler here.
For instance...
DBusHandlerResult notify_handler(DBusConnection *connection,
                                 DBusMessage *message,
                                 void *user_data)
{
    // Handle Skype's Notify(s) calls here.
    return DBUS_HANDLER_RESULT_HANDLED;
}

void unregister_handler(DBusConnection *connection,
                        void *user_data)
{
}

DBusObjectPathVTable vtable = {
    unregister_handler,
    notify_handler,  // the message handler declared above
    NULL
};

if (!dbus_connection_register_object_path(connection,
                                          "/com/Skype/Client",
                                          &vtable, NULL)) {
    // Error...
}
Note that this is just an object's definition. In order to actually hook on the Notify calls, you'll have to select() on a DBusWatch file descriptor, and dispatch the incoming DBusMessage in order to have your message handler called.
If you are working with other bindings, you'll probably find much faster ways to setup objects and start working as a client application. See:
GLib's g_dbus_connection_register_object
Exporting objects with dbus-python
QtDBus's QDBusConnection::registerObject
... (other bindings)

How to subscribe for RabbitMQ notification messages?

I am developing a Qt5 server application and I am using the QAMQP library.
What I want to do is the following:
Another server should send a message whenever something about a user should change.
My server, which is distributed among multiple machines and has multiple processes per machine, needs to be notified about these updates.
The thing is, I am not sure about the architecture that I should build. I just know that whenever something about some user changes, the server needs to send a message to the RabbitMQ broker, and all my processes that are interested in updates for that particular user should get the message. But should I create one queue per process, and bind it to a separate exchange for each user? Or maybe create in each process a separate queue for each user and bind that somehow to some exchange? Fanout exchanges come to mind, and one queue per process; I am just not sure about the queue-exchange relations, even though I've spent quite some time trying to figure it out.
Update, in order to clarify things and report on my progress:
I have a distributed application that needs to be notified for product changes. Those changes happen often and are tracked by another platform. I want to get those updates in my application.
In order to achieve that, each one of my application instances creates its own queue. Then, whenever an instance is interested in updates for a particular product, it creates an exchange for that product and binds it to the queue, like this:
Exchange type : 'direct'
Exchange name : 'product_update'
Routing key : 'PRODUCT_CODE'
Where PRODUCT_CODE is a string that represents the code of the product. In the platform that tracks the changes, I just publish messages to the corresponding exchanges.
The problem comes when i need to unsubscribe for a product update. I am using the QAMQP library, and in the destructor of the QAMQP::Exchange there's an unconditional remove() call.
When that function is called, I get an error in the RabbitMQ log that looks like this:
=ERROR REPORT==== 28-Jan-2014::08:41:35 ===
connection <0.937.0>, channel 7 - soft error:
{amqp_error,precondition_failed,
"exchange 'product_update' in vhost 'test-app' in use",
'exchange.delete'}
I am not sure how to properly unsubscribe. I know from the RabbitMQ web interface that I have only one exchange ('product_update'), which has bindings to multiple queues with different routing keys.
I can see that the call to remove() in QAMQP tries to delete the exchange, but since it's used by my other processes, it's still in use and cannot be removed, which I believe is OK.
But what should I do to delete the exchange object that I created? Should I first unbind it from the queue? I believe that I should be able to delete the object without calling remove(), but I may be mistaken or may be doing it wrong.
Also, if there's a better pattern for what I am trying to accomplish, please advise.
Here's some sample code, per request.
ProductUpdater::ProductUpdater(QObject* parent) : QObject(parent)
{
    mClient = new QAMQP::Client(this);
    mClient->setAutoReconnect(true);
    mClient->open(mConnStr);
    connect(mClient, SIGNAL(connected()), this, SLOT(amqp_connected()));
}

void ProductUpdater::amqp_connected()
{
    mQueue = mClient->createQueue();
    connect(mQueue, SIGNAL(declared()), this, SLOT(amqp_queue_declared()));
    connect(mQueue, SIGNAL(messageReceived(QAMQP::Queue*)),
            this, SLOT(message_received(QAMQP::Queue*)));
    mQueue->setNoAck(false);
    mQueue->declare(QString(), QAMQP::Queue::QueueOptions(QAMQP::Queue::AutoDelete));
}

void ProductUpdater::amqp_queue_declared()
{
    mQueue->consume();
}

void ProductUpdater::amqp_exchange_declared()
{
    QAMQP::Exchange* exchange = qobject_cast<QAMQP::Exchange*>(sender());
    if (mKeys.contains(exchange))
        mQueue->bind(exchange, mKeys.value(exchange));
}

void ProductUpdater::message_received(QAMQP::Queue* queue)
{
    while (queue->hasMessage())
    {
        const QAMQP::MessagePtr message = queue->getMessage();
        processMessage(message);
        if (!queue->noAck())
            queue->ack(message);
    }
}

bool ProductUpdater::subscribe(const QString& productId)
{
    if (!mClient)
        return false;

    foreach (const QString& id, mSubscriptions.keys()) {
        if (id == productId)
            return true; // already subscribed
    }

    QAMQP::Exchange* exchange = mClient->createExchange("product_update");
    mSubscriptions.insert(productId, exchange);
    connect(exchange, SIGNAL(declared()), this, SLOT(amqp_exchange_declared()));
    exchange->declare(QStringLiteral("direct"));
    return true;
}

void ProductUpdater::unsubscribe(const QString& productId)
{
    if (!mSubscriptions.contains(productId))
        return;

    QAMQP::Exchange* exchange = mSubscriptions.take(productId);
    if (exchange) {
        // This may even be unnecessary...?
        mQueue->unbind(exchange, productId);

        // This will produce an error in the RabbitMQ log
        // But if exchange isn't destroyed, we have a memory leak
        // if we do exchange->deleteLater(); it'll also produce an error...
        // exchange->remove();
    }
}
Amy,
I think your doubt is related to the message distribution style (or patterns) and the exchange types available for RabbitMQ. So, I'll try to cover them all with a short explanation and you can decide which will fit best for your scenario (RabbitMQ tutorials explained in another way).
Work Queue
Using the default exchange and a binding key you can post messages directly to a queue. Once a message arrives at a queue, the consumers "compete" to grab it, meaning a message is not delivered to more than one consumer. If there are multiple consumers listening to a single queue, the messages will be delivered in a round-robin fashion.
Use this approach when you have work to do and you want to scale across multiple servers/processes easily.
Publish/Subscribe
In this model, one single sent message may reach many consumers listening on their queues. For this scenario, where you must unselectively dispatch messages to all consumers, you can use a fanout exchange. These exchanges are "dumb" and act just as their name implies: like a fan. One thing enters and is replicated, without any intelligence, to all queues that are bound to the exchange. You could use direct exchanges as well, but only if you need to do any filtering or routing on the messages.
Use this scenario when you have something like an event, and you may need multiple servers, processes, and consumers to handle that event, each one performing a task of a different nature. If you do not need any filtering/routing, use a fanout exchange for this scenario.
Routing / Topic
A particular case of the Publish/Subscribe model, where queues can "listen" on the exchange using filters that may involve pattern matching (topics) or not (plain routing).
If you need pattern matching, use topic exchange type. If you don't, use direct.
When a queue "listens" to an exchange, a binding is used. In this binding, you may specify a binding key.
To deliver the message to the correct queues, the exchange examines the message's routing key. If it matches the binding key, the message is forwarded to that queue. The matching strategy depends on whether you are using a topic or a direct exchange, as said before.
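As a concrete illustration of that matching, here is a small sketch with the RabbitMQ Java client (the exchange, queue, and key names are made up for the example):
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class TopicBindingExample {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumption: broker location
        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {
            channel.exchangeDeclare("events", "topic");
            channel.queueDeclare("audit", true, false, false, null);

            // Topic patterns: '*' matches exactly one word, '#' matches
            // zero or more words.
            channel.queueBind("audit", "events", "user.*");

            // Routing key "user.created" matches the binding "user.*",
            // so this message is forwarded to the "audit" queue;
            // "order.created" would not be.
            channel.basicPublish("events", "user.created", null, "payload".getBytes());
        }
    }
}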
TL;DR:
For your scenario, if each process does something different with the user-change event, use a single exchange of the fanout type. Each class of handler declares the same queue name, bound to that exchange. This relates to the Publish/Subscribe model above. You can distribute work among consumers of the same class listening on the same queue name, even if they don't reside in the same process.
However, if all the consumers that are interested in the event perform the same task when handling it, use the work-queue model.
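The TL;DR as a minimal code sketch, using the RabbitMQ Java client rather than QAMQP to keep it short (the exchange and queue names are illustrative): one fanout exchange for the event, plus one named queue per handler class that every instance of that class shares:
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class FanoutSetup {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumption: broker location
        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {
            // One fanout exchange per event type; routing keys are ignored.
            channel.exchangeDeclare("user-changed", "fanout");

            // Every instance of the "mailer" handler class declares and
            // consumes the SAME queue: the event is broadcast per class,
            // but load-balanced across the instances of each class.
            channel.queueDeclare("user-changed.mailer", true, false, false, null);
            channel.queueBind("user-changed.mailer", "user-changed", "");
        }
    }
}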
Hope this helps,

ActiveMQ Override scheduled message

I am trying to implement a delayed queue with overriding of messages, using ActiveMQ.
Each message is scheduled to be delivered with a delay of x (say, 60 seconds).
If the same message is received again in the meantime, it should override the previous message.
So even if I receive, say, 10 messages within x seconds, only one message should be processed.
Is there a clean way to accomplish this?
The question has two parts that need to be addressed separately:
Can a message be delayed in ActiveMQ?
Yes - see Delay and Schedule Message Delivery. You need to set <broker ... schedulerSupport="true"> in your ActiveMQ config, as well as setting the AMQ_SCHEDULED_DELAY property of the JMS message, saying how long you want the message to be delayed (60000 in your case, for a 60-second delay).
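For illustration, a minimal JMS sketch of setting that property (the broker URL and queue name are assumptions; the constant lives in org.apache.activemq.ScheduledMessage):
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.ScheduledMessage;

public class DelayedSender {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(session.createQueue("orders"));

        TextMessage message = session.createTextMessage("payload");
        // Ask the broker to hold the message for 60 seconds before delivery.
        // Requires schedulerSupport="true" on the broker.
        message.setLongProperty(ScheduledMessage.AMQ_SCHEDULED_DELAY, 60_000L);
        producer.send(message);

        connection.close();
    }
}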
Is there any way to prevent the same message being consumed more than once?
Yes, but that's an application concern rather than an ActiveMQ one. It's often referred to as de-duplication or idempotent consumption. The simplest way, if you only have one consumer, is to keep track of messages received in a map, and check that map whenever you receive a message. If it has been seen before, discard it.
For more complex use cases where you have multiple consumers on different machines, or you want that state to survive application restart, you will need to keep a table of messages seen in a database, and query it each time.
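A minimal sketch of that map-based approach (how the message id is obtained is an assumption; in practice you would key on a business identifier or the JMS message id):
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class IdempotentConsumer {
    // Ids seen so far. This only survives as long as the process; for
    // multiple consumers or restart-safety, use the database table
    // mentioned above instead.
    private final Set<String> seen = ConcurrentHashMap.newKeySet();

    public void onMessage(String messageId, byte[] body) {
        // add() returns false if the id was already present, i.e. a duplicate.
        if (!seen.add(messageId)) {
            return; // already processed: discard
        }
        process(body);
    }

    private void process(byte[] body) {
        // business logic
    }
}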
Please vote this answer up if it helps, as it encourages people to help you out.
Also, judging from this method of ActiveMQ's BrokerService class, you need persistence (or a job scheduler store) configured for the scheduler functionality to be available:
public boolean isSchedulerSupport() {
    return this.schedulerSupport && (isPersistent() || jobSchedulerStore != null);
}
You can configure the ActiveMQ broker to enable "schedulerSupport" with the following entry in your activemq.xml file, located in the conf directory of your ActiveMQ home directory:
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost" dataDirectory="${activemq.data}" schedulerSupport="true">
You can also override the BrokerService in your configuration:
@Configuration
@EnableJms
public class JMSConfiguration {

    @Bean
    public BrokerService brokerService() throws Exception {
        BrokerService brokerService = new BrokerService();
        // Note: the broker must also be persistent (the default) or have a
        // job scheduler store, per isSchedulerSupport() above.
        brokerService.setSchedulerSupport(true);
        return brokerService;
    }
}
