I have an orchestration called MyUsefulOrch, hosted in an application MySharedApp.
MyUsefulOrch has an inbound messagebox-direct-bound port to receive requests, and after doing some useful work, an outbound messagebox-direct-bound port to send a message to the caller.
Now, I have another orchestration called MyCallerOrch which wants to benefit from the useful processing provided by MyUsefulOrch. However, MyCallerOrch is hosted in a different application, MyCallingApp.
I do not want to have any references to the assembly which contains MyUsefulOrch from MyCallerOrch.
My problem now is making sure I can send a message to MyUsefulOrch from MyCallerOrch and receive a response from it.
Ahah! Correlation should do the trick! But how do I go about getting correlation to work in this scenario?
For example:
Would I put a correlation id in a property schema and stuff a guid into the message context under this property from MyCallerOrch just before sending it to the messagebox?
How do I ensure that MyCallerOrch receives only the responses it needs to receive from MyUsefulOrch?
Do I need to put the correlation id value into the message body of the messages which are sent between the two orchestrations?
I would greatly appreciate any help, ideally as descriptive as possible, about how to achieve this.
Many thanks in advance.
If you use a two-way, request/response send port in the caller orchestration to send messages to the useful orchestration, then you can use correlation to route the responses from the useful orch back to the caller.
The trick is that you will need to modify the useful orch (to make it more useful, of course).
If you do not/cannot control whether callers of the useful orch expect a response back, then you would need to make the inbound (request) port a one-way port. The orchestration would then complete by sending to a one-way outbound (response) port.
To ensure that messages received from two-way/request-response callers are routed back properly, the construct shape of the outbound message inside your useful orch will need to set the following message properties to true using a message assignment shape:
BTS.RouteDirectToTP
BTS.IsRequestResponse
Before setting those two properties, though, also make sure to do something like msgOut(*) = msgIn(*); in the same message assignment shape to ensure that the other context properties get copied over. If the inbound and outbound messages are not the same type, then you have to manually set each of the required properties, one at a time.
These properties, in addition to the two above, are what help ensure that the result of the useful orch is properly routed back to the caller. They should be inside your correlation set and are:
BTS.CorrelationToken
BTS.EpmRRCorrelationToken
BTS.IsRequestResponse
BTS.ReqRespTransmitPipelineID
BTS.RouteDirectToTP
I'm getting a bit ahead of myself, however: you assign the correlation set to the outbound send shape only if BTS.EpmRRCorrelationToken exists msgIn. This is critical. I have used a decide shape in an orchestration, with the decision based upon that exact expression. If the result is true, then send the previously constructed message out and assign the correlation set from above as the Initializing correlation set. This will cause BizTalk to route the message back to the caller as its expected response.
If the result of the decision was false, then the caller of the useful orchestration was one-way. You will still likely want to send out a result (and just have someone else subscribe to it). You can even use the same send port as for two-way responses; just do not assign the correlation set.
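As a sketch, the shapes above might contain expressions like the following (BizTalk orchestration expression syntax; msgIn, msgOut and the correlation set contents follow the description above):

```
// Message Assignment shape, inside the Construct shape for msgOut:
msgOut(*) = msgIn(*);                 // copy the context properties first
msgOut(BTS.RouteDirectToTP) = true;
msgOut(BTS.IsRequestResponse) = true;

// Decide shape expression - was the caller two-way?
BTS.EpmRRCorrelationToken exists msgIn

// True branch:  Send msgOut with the correlation set above assigned as
//               the Initializing correlation set.
// False branch: Send msgOut on the same port, without the correlation set.
```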
You will want to thoroughly test this, of course. It does work for me in the one scenario in which I have used it, but that doesn't absolve others from doing their due diligence.
I think you are pretty much on the right track.
Since the two applications are going to send messages to each other, if you use strongly typed schemas both apps will need to know about the schemas.
In this case I recommend that you separate the common schemas off into a separate assembly, and reference this from both your orchestration apps.
(Schemas registered on the Server must have unique XMLNS#ROOTs, even across multiple applications)
However, if you really can't stand even a shared schema assembly reference, you might need to resort to untyped messages.
Richard Seroter has an example here
His article also explains a technique for auto stamping a correlation GUID on the context properties.
Edit: Good point. It is possible to promote custom context properties on the message without a pipeline - see the tricks here and here - and this would suffice to send the context property to MyUsefulOrch; similarly, the custom context property could be promoted on the return message from within MyUsefulOrch (since MyUsefulOrch doesn't need any correlation). However, I can't think how, on the return to MyCallerOrch, the custom context property can be used to continue the "following correlation", unless you add a new correlating property into the return message.
I'll answer this myself; just adding it here for documentation in case anyone else encounters it.
We are using a dynamic WCF-SQL port. I had it working in one test orchestration, but when I copied the code to the real orchestration, it gave the error:
Wcf.Action Must be a message part property of message part ...
and similar for each of the lines below (in a Message Assignment shape in a BizTalk orchestration).
The fix was simply to remove the .Messagepart reference.
Example:
SQLRequestMessage(WCF.Action) = etc...
The built-in WCF related promoted fields are on the message, not the parts of the message.
My test orchestration didn't use multipart message types, but in the real orchestration that I modified, our standard is to use them.
Once I saw the issue it was obvious, but I was banging my head against it for a while.
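To make the fix concrete, the failing and working assignments looked roughly like this (the part name MessagePart and the action string are made up for illustration):

```
// Fails for the built-in WCF properties when the message is multipart:
SQLRequestMessage.MessagePart(WCF.Action) = "TableOp/Insert/dbo/MyTable";

// Works - the WCF.* promoted properties live on the message, not on a part:
SQLRequestMessage(WCF.Action) = "TableOp/Insert/dbo/MyTable";
```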
I'm maintaining an event-sourced application that went far off the road I'm afraid.
In one case a command is received by an aggregate root that publishes an event that is handled by an event handler that needs to do 2 things:
send a command (cmd1) to another aggregate root that will publish an event that will create a number of sagas, each firing off some commands that are eventually handled by a number of aggregates
send a second command (cmd2) that will also lead to all sorts of command/event/command sequences.
In schematic form:
cmd0 -> AR0 -> evt0 -> evtHandler -> cmd1 -> AR1 -> evt1 -> saga stuff and more cmds and evts
|-> cmd2 -> AR2 -> evt2 -> more saga stuff, cmds and evts
Everything happens in the same thread and everything happens in 1 transaction started at the first command handling.
Now the goal: all events, saga's, aggregate calls originated from the first command (cmd1) should happen first and then all events, saga's and aggregate calls originated from the second command (cmd2) should happen.
Here's the observation: cmd1 calls AR1, which publishes evt1, but after that cmd2 calls AR2, publishing evt2. All other events and commands originating from cmd1 are mingled with those from cmd2.
First I thought I could get away with it using the UnitOfWork but even explicitly creating a separate unit of work for handling cmd1 didn't solve the problem. Looking at the implementation in AbstractEventBus I see that the events are simply merged in the parent unit of work and thus end up being merged with the ones originating from cmd2.
Here's the question: Is there a way to first call cmd1 and wait until all effects originating from that command are handled before calling cmd2 while still preserving the transactional atomicity that I currently have?
To be completely honest with you Jan, the best would be if the components within your application didn't rely too much on that order.
It essentially means you have distinct message handling components, which in essence could be different microservices, but they are all tied together because the order is important.
Ideally, you'd set up your components to work on their own.
So, aggregates handle a command and publish the result, done.
Sagas react to events, regardless of where they come from, and react on them with actions (e.g. command dispatching).
Embracing eventual consistency would help here, as it drops the entire requirement of waiting for one process to complete.
From a theoretical stance, that would be my response.
From a more pragmatic corner, looking at your question, I'd like to point out that it sounds like a rabbit hole you are going into. You don't only want cmd1 handling to be done; you want event handling on all sagas to be resolved, including the commands coming out of that too, correct? Who's to tell what the number of sagas is? Or whether the number of commands those sagas dispatch needs to be taken into account? These criteria will likely change over time, adding more and more stuff which needs to happen "in a single transaction".
Well, yes, there are ways to wait for processing from some parts, to pull them all into a single transaction. But to be honest with you, I wouldn't recommend taking that route, as it will only make using such a message based system more and more complex.
The crux is what "all effects" means. From the point of dispatching that command, you should only care whether that exact command handles successfully, yes or no, and that's where the concerns should end.
I know this does not give you a simple programmatic solution, as you need to adjust the design. But I think decoupling is the only right way to go here.
That's my two cents to the situation, hope this helps you further in any way Jan.
Message Anticipation explanation update
In essence, the messages you'd use in an Axon application form a boundary, after which the components essentially don't have a clue what is going to handle those messages. The behaviour per message type differs a little, but might clarify what options you have too:
Commands - Commands are consistently routed to a single handler, on a single instance. Furthermore, you can anticipate a response, in the form of an OK or NOK. An OK means the handler returned void or the identifier of a created entity (like the aggregate itself). NOKs are typically the exceptions you throw from your command handling methods, which signal that something went wrong or that the command simply couldn't be executed, and that this should be made known to the dispatching end.
Events - Events will be broadcast to any component which has subscribed itself to the EventBus as capable of handling a given event. Note that event handling is segregated in time from the actual publication point of the event. This means there is no way for results from event handling to be returned to the dispatcher of an event.
Queries - Query messages can be routed in several forms. Either a single component best suited to answer the query handles it (called point-to-point queries), or you dispatch a query to several handlers and aggregate the results (called scatter-gather queries). Lastly, you can subscribe to query models by doing a "subscription query", which is essentially a point-to-point query followed up by a Flux of updates. Clearly, query dispatching means you receive a result from some component; it's just that you have freedom in the type of query you do. If any assurance is required about the "up-to-date"-ness of a query response, that should be part of the implementation of the query being sent and how it is handled by a @QueryHandler annotated method.
Hope this provides some additional clarity at what each of the messages do in an Axon application!
Currently I use the AcknowledgingMessageListener to implement a Kafka consumer using spring-kafka. This implementation helps me listen on a specific topic and process messages with a manual ack.
I now need to build the following capability:
Let us assume that, due to some environmental exception or some entry of bad data via this topic, I need to replay data on a topic from and to a specific offset. This would be a manual trigger (mostly via the execution of a Java class).
It would be ideal if I could retrieve the messages between those offsets and feed them into a replay topic so that a new consumer can process those messages, thus keeping the offsets intact on the original topic.
ConsumerSeekAware interface - if this is the answer, how can I trigger this externally? Via, let's say, a mvn -Dexec? I am not sure if this is even possible.
Also, let's say that I have a crash timestamp; is it possible to introspect the topic to find the offset corresponding to the crash so that I can replay from that offset?
Can I find offsets corresponding to some specific data so that I can replay those specific offsets?
All of these requirements are towards building a resilience layer around our Kafka capabilities. I need all of these to be managed by a separate executable class that can be triggered manually providing the relevant data (like time stamps etc). This class should determine offsets and then seek to that offset, retrieve the messages corresponding to those offsets and post them to a separate topic. Can someone please point me in the right direction? I’m afraid I’m going around in circles.
so that a new consumer can process those messages thus keeping the offsets intact on the original topic.
Just create a new listener container with a different group id (new consumer) and use a ConsumerAwareRebalanceListener (or ConsumerSeekAware) to perform the seeks when the partitions are assigned.
Here is a sample CARL that seeks all assigned topics based on a timestamp.
You will need some mechanism to know when the new consumer should stop consuming (at which time you can stop() the new container). Maybe set max.poll.records=1 on the new consumer so it doesn't prefetch past the failure point.
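To make the stop condition concrete, here is a plain-Java sketch of the replay-window bookkeeping (the class and method names are my own invention; the actual seeking would happen in the ConsumerAwareRebalanceListener/ConsumerSeekAware callback, and the broker interaction is omitted):

```java
// Sketch: decide which offsets fall inside the replay window [start, end]
// and when the replay consumer's container should be stopped.
public class ReplayWindow {
    private final long startOffset;
    private final long endOffset; // inclusive

    public ReplayWindow(long startOffset, long endOffset) {
        this.startOffset = startOffset;
        this.endOffset = endOffset;
    }

    // Forward a record to the replay topic only if it is inside the window.
    public boolean shouldForward(long offset) {
        return offset >= startOffset && offset <= endOffset;
    }

    // Once the end of the window is reached, stop() the listener container.
    public boolean shouldStop(long offset) {
        return offset >= endOffset;
    }

    public static void main(String[] args) {
        ReplayWindow window = new ReplayWindow(100, 105);
        System.out.println(window.shouldForward(99));  // false: before the window
        System.out.println(window.shouldForward(102)); // true: inside the window
        System.out.println(window.shouldStop(105));    // true: end reached
    }
}
```

The replay listener would call shouldForward for each record before publishing it to the replay topic, and stop the container once shouldStop returns true.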
I am not sure what you mean by #3.
I have a Java program which connects to the internet and send files (emails with attachments, SSL, javamail).
It sends only one email at a time.
Is there a way that my program could track the network traffic it itself is generating?
That way I could track progress of emails being sent...
It would also be nice if it was cross-platform solution...
Here's another approach that only works for sending messages...
The data for a message to be sent is produced by the Message.writeTo method and filtered through various streams that send it directly out the socket. You could subclass MimeMessage, override the writeTo method, wrap the OutputStream with your own OutputStream that counts the data flowing through it (similar to my other suggestion), and reports that to your program. In code...
public class MyMessage extends MimeMessage {
    ...
    public void writeTo(OutputStream os, String[] ignoreList)
            throws IOException, MessagingException {
        super.writeTo(new MyCountingStream(os), ignoreList);
    }
}
If you want percent completion, you could first use Message.writeTo to write the message to a stream that does nothing but count the amount of data being written, while throwing away the data. Then you know how big the message really is, so while the message is being sent you can tell what percentage has gone out so far.
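The counting idea can be sketched like this (CountingOutputStream and CountDemo are my own names, not JavaMail classes; java.io.OutputStream.nullOutputStream() plays the role of the stream that throws the data away during the pre-count pass):

```java
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// A stream wrapper that counts the bytes flowing through it, as described
// above for both the pre-count pass and the real send.
class CountingOutputStream extends FilterOutputStream {
    private long count;

    CountingOutputStream(OutputStream out) { super(out); }

    @Override public void write(int b) throws IOException {
        out.write(b);
        count++;
    }

    @Override public void write(byte[] b, int off, int len) throws IOException {
        out.write(b, off, len);
        count += len;
    }

    long getCount() { return count; }
}

public class CountDemo {
    public static void main(String[] args) throws IOException {
        // Pre-count pass: write to a stream that discards the data,
        // just to learn the total size for the percent-completion math.
        CountingOutputStream counter =
                new CountingOutputStream(OutputStream.nullOutputStream());
        counter.write("From: a@example.com\r\n\r\nHello".getBytes());
        System.out.println(counter.getCount()); // prints 28
    }
}
```

For the real send, you would wrap the stream passed to writeTo with the same class and compare its running count against the total from the pre-count pass.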
Hope that helps...
Another user's approach is here:
Using JProgressBar with Java Mail ( knowing the progress after transport.send() )
At a lower level, if you want to monitor how many bytes are being sent, you should be able to write your own SocketFactory that produces Sockets that produce wrapped InputStreams and OutputStreams that monitor the amount of data passing through them. It's a bit of work, and perhaps lower level than you really want, but it's another approach.
I've been meaning to do this myself for some time, but I'm still waiting for that round tuit... :-)
Anyway, here's just a bit more detail. There might be gotchas I'm not aware of once you get into it...
You need to create your own SocketFactory class. There's a trivial example in the JavaMail SSLNOTES.txt file that delegates to another factory to do the work. Instead of factory.createSocket(...), you need to use "new MySocket(factory.createSocket(...))", where MySocket is a class you write that overrides all the methods to delegate to the Socket that's passed in the constructor - except the getInputStream and getOutputStream methods, which have to use a similar approach to wrap the returned streams with stream classes you create yourself.

Those stream classes then have to override all the read and write methods to keep track of how much data is being transferred, and make that information available however you want to the code that monitors progress. Before you do an operation that you want to monitor, you reset the count; then, as the operation progresses, the count is updated.

What it won't give you is a "percent completion" measure, since you have no idea how much low-level data needs to be sent to complete the operation.
I have a situation where a main orchestration is responsible for processing a convoy of messages. These messages belong to a set of customers, the orchestration will read the messages as they come in, and for each new customer id it finds, it will spin up a new orchestration that is responsible for processing the messages of a particular customer. I have to preserve the order of messages as they come in, so the newly created orchestrations should process the message it has and wait for additional messages from the main orchestration.
I tried different ways to tackle this, but was not able to successfully implement it.
I would like to hear your opinions on how this could be done.
Thanks.
It sounds like what you want is a set of nested convoys. While it might be possible to get that working, it's going to... well, hurt. In particular, my first worry would be maintenance: any changes to the process would be a pain in the neck to make, and, much worse, deployment would really, really suck.
Personally, I would really try to find an alternative way to implement this and avoid the convoys if possible, but that would depend a lot on your specific scenario.
A few questions, if you don't mind:
What are your ordering requirements? For example, do you only need ordered processing for each customer within a single incoming batch, or across batches? If the latter, could you make do without the master orchestration and just force a single convoyed instance per customer? Still not great, but it would likely simplify things a lot.
What are your failure requirements with respect to ordering? Should a failure completely stop processing? Save the message and keep going? What about retries?
Is ordering based purely on the arrival time of the message? Is there anything in the message that you could use to force ordering internally instead of relying purely on the arrival time?
What does the processing of the individual messages do? Is the ordering requirement only there to ensure that certain preconditions are met when a specific message is processed (for example, the messages represent some tree structure that requires parents to be processed before children)?
I don't think you need a master orchestration to start up the sub-orchestrations. I am assuming you are not talking about the master orchestration implementing a convoy pattern. So, if that's the case, here's what I might do.
There is a brief example here on how to implement a singleton orchestration. The example shows you how to set up an orchestration that will only ever exist once: all the messages going to it will be lined up in order of receipt and processed one at a time. Your case differs in that you want this done per customer ID, which is pretty simple: promote the customer ID in the inbound message and add it to the correlation type. Now there will only ever be one instance of the orchestration per customer.
The problem with singletons is this: you have to kill them at some point or they will live forever as dehydrated orchestrations. So, you need to have them end. You can do this if there is a way for the last message for a given customer to signal the orchestration that it's time to die, through an attribute or such. If this is not possible, then you need to set a timer: if no messages are received in x seconds, terminate the orch. This is all easy to do, but it can introduce zombies. Zombies occur when the orchestration is in the process of being shut down just as another message for that customer comes in. This can usually be mitigated by tweaking the time to wait; regardless, it will cause the occasional zombie.
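The timer-based termination is typically a Listen shape inside the singleton's receive loop, sketched here in outline (the IsLast property and the 30-second timeout are placeholders for whatever your message set and SLA allow):

```
// While shape: !done
//   Listen shape:
//     Branch 1: Receive msgIn (follows the CustomerId correlation set)
//               process the message, then:
//               done = msgIn(MyProps.IsLast);  // last-message signal, if one exists
//     Branch 2: Delay shape: new System.TimeSpan(0, 0, 30)
//               done = true;                   // idle timeout - end the orchestration
```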
A note from the field: we've done this and it's really not a great long-term solution. We were receiving customer info updates and had to ensure ordered processing. We took this singleton approach and it has been problematic because of both the zombie issue and the exception issue: if the singleton orchestration throws an exception, it blocks processing of all future messages for that customer. So - handle every single possible exception. The real solution would have been to have the far-end system check the timestamps on the update messages and discard any older than the last update. We wanted to go this way, but the receiving system didn't want to do the extra work.