Using senders independently in the Symfony Messenger component

I'm using Symfony 4.2 and have one message to dispatch via the Messenger component: a notification that should be sent via several channels (for example, SMS and email). I'm wondering how to make these senders independent: if the first channel fails and throws an exception, how can the second sender still attempt delivery? Currently, when one of the senders in the chain fails, the rest never get a chance to deliver the notification.
Catching the exception at the sender level doesn't seem like a good solution, because returning the envelope causes it to be stamped as sent, which is not true.
I've started to create one message per channel to keep the SentStamp convention, but it seems there should be one message with several channels listening for it (even the configuration suggests this with the senders keyword):
routing:
    'App\Messenger\Command\Notification\SendSomeInformation':
        senders:
            - App\Messenger\Sender\Notification\EmailSender
            - App\Messenger\Sender\Notification\SmsSender
Is there a good approach for such a problem?

One possibility would be to configure two different transports and assign each handler to a different transport, so that if one of them fails and dequeues the message, the other still has a chance to run.
# config/packages/messenger.yaml
transports:
    async1:
        # dsn
    async2:
        # dsn
...
routing:
    'App\Messenger\Command\Notification\SendSomeInformation': [async1, async2]
Restricting handlers to transports can be done either in code or in configuration; choose whichever works better for you.
In config:
# config/services.yaml
App\Messenger\Sender\Notification\SmsSender:
    tags:
        - { name: 'messenger.message_handler', from_transport: 'async1' }
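To complete the picture, the other handler would be pinned to the second transport in the same way. This is a sketch that just mirrors the SmsSender tag above; the transport name async2 is assumed from the earlier messenger.yaml example:

```yaml
# config/services.yaml (sketch: EmailSender restricted to the second transport)
App\Messenger\Sender\Notification\EmailSender:
    tags:
        - { name: 'messenger.message_handler', from_transport: 'async2' }
```

With each handler bound to its own transport, a failure in the SMS handler only requeues the message on async1; the copy on async2 is still consumed by the email handler independently.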

Related

Is it possible to use message Pact for ActiveSupport::Notification messages?

Message Pact is a non-HTTP approach; see for more details:
https://docs.pact.io/getting_started/how_pact_works#non-http-testing-message-pact
ActiveSupport::Notifications is part of Rails ActiveSupport; see for more details:
https://apidock.com/rails/ActiveSupport/Notifications
As I understand it, ActiveSupport::Notifications uses memory for its queue underneath, rather than the external requests that Pact expects, so this can probably only be done via some other message queue, such as Kafka. For example:
ActiveSupport::Notifications.subscribe("my_message") do |name, start, finish, id, payload|
  Kafka.produce(queue: 'test-queue', message: payload)
end
where Kafka.produce can be handled by Pact.
However, in that case it would make sense to remove ActiveSupport::Notifications and use only Kafka, but that is the next step.

Propagate OpenTracing TraceIds from publisher to consumer using MassTransit.RabbitMQ

Using MassTransit.RabbitMQ v5.3.2 and OpenTracing.Contrib.NetCore v0.5.0.
I'm able to publish and consume events to RabbitMQ using MassTransit, and I've got OpenTracing working with Jaeger, but I haven't managed to get my OpenTracing TraceIds propagated from my message publisher to my message consumer: the publisher and consumer traces have different TraceIds.
I've configured MassTransit with the following filter:
cfg.UseDiagnosticsActivity(new DiagnosticListener("test"));
I'm not actually sure what the listener name should be, hence "test". The documentation doesn't have an example for OpenTracing. Anyway, this adds a 'Publishing Message' span to the active trace on the publish side, and automatically sets up a 'Consuming Message' trace on the consumer side; however, they're separate traces. How would I go about consolidating these into a single trace?
I could set a TraceId header using:
cfg.ConfigureSend(configurator => configurator.UseExecute(context => context.Headers.Set("TraceId", GlobalTracer.Instance.ActiveSpan.Context.TraceId)))
but then how would I configure my message consumer so that this is the root TraceId? Interested to see how I might do this, or if there's a different approach...
Thanks!
If anyone is interested: I ended up solving this by creating publish and consume MassTransit middleware that does the trace propagation via trace injection and extraction, respectively.
I've put the solution up on GitHub - https://github.com/yesmarket/MassTransit.OpenTracing
Still interested to hear if there's a better way of doing this...
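The idea behind that middleware can be shown language-agnostically. The following is a hypothetical plain-Java sketch (names like SpanContext are stand-ins for the real OpenTracing types, and "uber-trace-id" is the header name used by Jaeger's propagation format): the publish side injects the active trace context into the message headers, and the consume side extracts it so the consumer's span joins the same trace.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of trace propagation over message headers.
// The publish middleware injects the active trace context; the consume
// middleware extracts it and uses it as the parent of the consumer span.
public class TracePropagationSketch {

    // Minimal stand-in for an OpenTracing SpanContext.
    static class SpanContext {
        final String traceId;
        final String spanId;
        SpanContext(String traceId, String spanId) {
            this.traceId = traceId;
            this.spanId = spanId;
        }
    }

    // Publish-side middleware: serialize the trace identifiers into headers.
    static void inject(SpanContext active, Map<String, String> headers) {
        headers.put("uber-trace-id", active.traceId + ":" + active.spanId);
    }

    // Consume-side middleware: rebuild the context from the headers; the
    // consumer then starts its span as a child of the extracted context,
    // so both sides end up in one trace.
    static SpanContext extract(Map<String, String> headers) {
        String value = headers.get("uber-trace-id");
        if (value == null) {
            return null; // nothing propagated; start a fresh trace
        }
        String[] parts = value.split(":");
        return new SpanContext(parts[0], parts[1]);
    }

    public static void main(String[] args) {
        Map<String, String> headers = new HashMap<>();
        inject(new SpanContext("abc123", "span1"), headers);
        SpanContext remote = extract(headers);
        System.out.println(remote.traceId); // prints abc123
    }
}
```

In the real OpenTracing API this corresponds to Tracer.inject/Tracer.extract with a TextMap carrier wrapped around the transport's message headers, which is exactly what the linked repository does on the MassTransit publish and consume pipelines.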

How to correctly send a custom message to a server after some event in OMNeT++

I have a problem with a message that I send from my custom TCP client app service to a server (also running my custom app-layer service) in an OMNeT++ simulation.
My TCPCustomClientApp service is derived from the TCPBasicClientApp service from the INET framework. I overrode some methods, such as initialize, handleMessage, and socketEstablished, and added some helper methods for my needs.
I have my custom message; now, after some trigger from the network, I would like to send this message to the server encapsulated in a GenericAppMsg.
This is my code:
...
if (trigger) {
    connect(); // connect to the server - 3-way TCP handshake
    auto customMsg = new MyCustomMessage();
    customMsg->set ...
    msgBuffer.push_back(customMsg); // list with messages
}
Then, in the method socketEstablished(int connId, void *ptr), I have this code for sending:
auto msg = new GenericAppMsg();
msg->setByteLength(requestLength);
msg->setExpectedReplyLength(replyLength);
msg->setServerClose(false);
msg->setKind(1); // set message kind to 1 = TCP_I_DATA (defined in enum TcpStatusInd in TCPCommand.msg)
msg->encapsulate(msgBuffer.front()); // encapsulate my custom message into GenericAppMsg
sendPacket(msg);
The problem is that when this message arrives at the server, its kind is 3 = ESTABLISHED.
What am I missing? Is this sending wrong?
The kind field is a freely usable field in messages that can be used for anything, but you should be aware that there is absolutely no guarantee that you will get the same value for the kind field on the receiving side. It is considered metadata that is bound to the actual message object. Lower down, in the various OSI layers, the packet can be aggregated or fragmented, so the identity of the message object is not kept.
In short, exchanging data in the kind field is safe only when it is used for communication between two modules that are directly connected. If there is anything between them, you cannot be sure whether the message is forwarded, recreated with the same content, or whether some module on the path decides to use the kind field for something else.
Anything that you want to pass to the other end must be encapsulated inside the message.
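The distinction can be illustrated with a conceptual sketch in plain Java (all names here are hypothetical, not OMNeT++ API): a lower layer that recreates the packet preserves the encapsulated payload, but not the per-object kind metadata, which is why any discriminator the receiver needs must live inside the payload itself.

```java
// Conceptual sketch: metadata on the message object (kind) is not
// guaranteed to survive transit, while encapsulated content is.
public class EncapsulationSketch {

    static class Payload {
        final int messageType; // travels inside the message: safe end to end
        Payload(int messageType) { this.messageType = messageType; }
    }

    static class Packet {
        final int kind;        // local metadata: may be rewritten in transit
        final Payload payload; // encapsulated content: delivered intact
        Packet(int kind, Payload payload) {
            this.kind = kind;
            this.payload = payload;
        }
    }

    // A lower layer fragments/reassembles: the arriving Packet is a new
    // object whose kind reflects the transport's own state (e.g. 3 =
    // ESTABLISHED), not whatever the sender set.
    static Packet transmit(Packet sent) {
        return new Packet(3, sent.payload);
    }

    public static void main(String[] args) {
        Packet received = transmit(new Packet(1, new Payload(42)));
        System.out.println(received.kind);                // prints 3: sender's kind is lost
        System.out.println(received.payload.messageType); // prints 42: payload survives
    }
}
```

Applied to the question above: put the message-type discriminator in a field of MyCustomMessage (or of GenericAppMsg) rather than in kind.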

How to dispatch a message to several destinations with Apache Camel?

My problem seems simple, but I haven't found a way to solve it yet...
I have a legacy system that works and a new system that will replace it. These are only REST web service calls, so I'm using a simple bridge endpoint on an HTTP service.
To verify that they run iso-functionally, I want to put them behind a Camel route that dispatches the message to both systems but returns only the response of the legacy one, and logs the responses of both systems to make sure they behave the same way...
I created this route:
from("servlet:proxy?matchOnUriPrefix=true")
    .streamCaching()
    .setHeader("CamelHttpMethod", header("CamelHttpMethod"))
    .to("log:com.mylog?showAll=true&multiline=true&showStreams=true")
    .multicast()
        .to(urlServer1 + "?bridgeEndpoint=true")
        .to(urlServer2 + "?bridgeEndpoint=true")
    .to("log:com.mylog?showAll=true&multiline=true&showStreams=true");
It works for calling each service and logging the messages, but the responses are a mess...
If the first server doesn't respond, the second is not called; if the second returns an error, only that error is sent back to the client...
Any idea?
You can find some more details in the multicast docs: http://camel.apache.org/multicast.html
The default behaviour of multicast (your case) is:
- parallelProcessing is false, so the routes are called one by one
To implement your case correctly, you probably need to:
- add error handling for each external service call, so that an exception does not stop correct processing
- configure or implement an aggregation strategy and put it in strategyRef, so you can combine the results from all calls into a single multicast result
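The aggregation step can be sketched in plain Java (hypothetical names; the real implementation would be a Camel AggregationStrategy registered via strategyRef and called with the two Exchange objects): both replies are recorded for the iso-functional comparison, but only the legacy reply is returned, so the client always sees the legacy system's response.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a "legacy wins" aggregation: log both replies, return the
// legacy one. In Camel this logic would live inside
// AggregationStrategy.aggregate(oldExchange, newExchange).
public class LegacyFirstAggregation {

    // Comparison log standing in for the route's log: endpoint.
    static final List<String> comparisonLog = new ArrayList<>();

    // Combine the legacy reply (first endpoint) with the new system's
    // reply (second endpoint).
    static String aggregate(String legacyReply, String newReply) {
        comparisonLog.add("legacy=" + legacyReply + " new=" + newReply);
        return legacyReply; // the client only ever sees the legacy response
    }

    public static void main(String[] args) {
        String toClient = aggregate("200 OK", "500 Internal Server Error");
        System.out.println(toClient); // prints 200 OK
    }
}
```

Combined with per-endpoint error handling (for example, treating a failed call as a logged "no reply" instead of letting the exception propagate), this gives the desired behaviour: both systems are always called, both responses are logged, and only the legacy response reaches the client.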

What could cause a message (from a polling receive location) to be ignored by subscribing orchestration?

I'll try to provide as much information as possible:
No error message.
The instance stays in the "ready service instances".
The receive location has the same parameters (except the URI, the three polling queries, the user account/password, and the receive pipeline) as another receive location that points to another database/table and works.
The pipeline is waiting for the correct schema.
The port surface and receive location are both waiting for the correct schema.
In my test example, there are only 10 lines being returned.
The message, which contains those 10 lines, validates against the schema.
I tried leaving the instance alone, to no avail: 30+ minutes and no change in its condition.
I also tried suspending and then resuming it, which places the instance in the "dehydrated orchestrations" list. Again, with no error message.
I'm able to get the message by looking at the body of the message in the "ready to run" service. (This is the message that validates against the schema I use in Visual Studio.)
How might something like this arise?
Stupid question, but I have to ask... Is the corresponding host instance running?
