We have a parent orchestration where we call a child orchestration.
However, once the child orchestration has completed, we would like to return a message to the parent orchestration.
What is the best / most standard way to do this?
A) Just publish the message from the child orchestration to the MessageBox and correlate it in the parent with a receive shape.
B) Can we use a message as a C#-style ref/out parameter when passing it from the parent to the child orchestration?
Other?
Thanks
Stuart
You can go either way...
B is the easiest solution: you use a return parameter with the Call Orchestration shape. This approach has two major cons. First, you will be processing synchronously, so if the two orchestrations aren't related, or live in separate business processes, you will have a lot of orchestrations waiting. Second, you are coupling the parent and child, creating a dependency between them: any time you need to stop the parent, you will have to stop the child as well.
If your situation doesn't fit the above and you want to stay decoupled and run asynchronously, go for the Start Orchestration shape or MessageBox direct-bound ports. MessageBox direct-bound ports are the only truly decoupled option and are the most scalable.
Let's say I have a stage controller and I want to write a method to move the stage. I want the method to return either after the stage has physically completed the move, or as soon as it has started the move. For any kind of external hardware control, I typically write async methods with a Task return. That way, users can await the completion of the task, e.g. await the stage finishing its move, or just call the move method and await the returned task at a later point if necessary.
Is this the right approach for controlling external hardware? Should these kinds of methods be written synchronously, with separate methods used to determine when an operation has completed? People I talk to seem to have an issue with using async methods, mostly because they feel it is too indeterminate for hardware control.
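A minimal sketch of the pattern I mean, using Python's asyncio in place of C# Tasks for brevity (the Stage class, method names, and timings here are made up for illustration):

```python
import asyncio

class Stage:
    """Hypothetical stage controller: move() returns once the move has
    started; the task it returns completes when the stage has settled."""

    def __init__(self):
        self.position = 0.0

    async def _do_move(self, target):
        await asyncio.sleep(0.01)   # stand-in for the physical move time
        self.position = target

    async def move(self, target):
        # Returns after the move has *started*; the caller chooses
        # when (or whether) to await physical completion.
        return asyncio.create_task(self._do_move(target))

async def main():
    stage = Stage()
    done = await stage.move(42.0)   # returns right after the move starts
    # ... do other work while the stage is moving ...
    await done                      # now block until it has settled
    return stage.position

print(asyncio.run(main()))
```

The caller decides between "fire and await later" and "await immediately" without the driver needing two separate methods.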
Is this the right approach for controlling external hardware? Should these kinds of methods be written synchronously, with separate methods used to determine when an operation has completed?
I hesitate to use async for any kind of system that is driven by external forces. One pattern I've seen a lot is people trying to use tasks to represent "the user pressed this button." Your example reminds me of that, but with external hardware in place of a person.
The problem with these kinds of approaches is twofold. First, it restricts you to a very linear logic flow. Second, it doesn't easily provide results other than success/fail. What if the hardware does something other than what was instructed? How easy is it to do logic that tries to do A but then times out waiting for state A' to be reached so it tries to do B?
Bear in mind that tasks must be completed. While it's possible to handle this using something like task cancellation (or hardware-specific exceptions), that can considerably complicate the logic code. Particularly when you consider timeouts, retries, and fallback logic.
So, I generally avoid using tasks for modeling that kind of domain. Something like an observable may be a better fit, or even just a Channel of state updates. Both of those permit the hardware to "push" its state and allow the logic code to respond appropriately, usually with a state machine of its own.
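A minimal sketch of that channel-of-state-updates idea, in Python asyncio (the state names and the fake driver are invented for illustration; in C#, an IObservable or a System.Threading.Channels channel would play the same role as the queue):

```python
import asyncio

async def fake_stage_driver(states):
    # Made-up driver: pushes every state transition instead of
    # resolving one success/fail task.
    for state in ("accelerating", "at_speed", "settling", "settled"):
        await asyncio.sleep(0)          # stand-in for hardware time
        await states.put(state)

async def controller():
    states = asyncio.Queue()
    driver = asyncio.create_task(fake_stage_driver(states))
    seen = []
    # The logic side is its own small state machine: it can time out
    # on any single transition and fall back to plan B, instead of
    # being locked into one linear await on a single task.
    while True:
        try:
            state = await asyncio.wait_for(states.get(), timeout=1.0)
        except asyncio.TimeoutError:
            seen.append("timed_out")    # a fallback action would go here
            break
        seen.append(state)
        if state == "settled":
            break
    await driver
    return seen

print(asyncio.run(controller()))
```

Because every intermediate state is visible, "tried A, timed out waiting for A', so try B" becomes one more branch in the loop rather than a cancellation dance.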
I want to implement a use case where two responder flows (different classes) are initiated by the same parent flow.
I get the following exception:
java.lang.IllegalArgumentException: com.flow.initialFlows.InitialFlow has been specified as the initiating flow by both com.flow.responder.Responder1 and com.flow.responder2.Responder2
In "How can I test two different responder flows in the same CorDapp?" the suggestion was to use setCordappPackages(). That method is part of the Corda test package and is used in test scenarios. What can you use outside of test scenarios?
A single node cannot have two responder flows registered for the same initiating flow.
This is by design. Otherwise, the responding node would not know which of the two flows to invoke.
Suppose, I launch a parent process, which launches a subprocess, but then the parent receives a SIGINT. I want the parent to exit, but I don't want the child process to linger and/or become a zombie. I need to make sure it dies.
If I can determine that the child also received a SIGINT, then it is probably cleaning up on its own. In that case, I'd prefer to briefly wait while it finishes and exits on its own. But if it did not receive a SIGINT, then I will send it a SIGTERM (or SIGKILL) immediately and let the parent proceed with its own cleanup.
How can I figure out if the child received the SIGINT? (Leaving aside the fact that it might not even respond to SIGINT...) Do I just have to guess, based on whether or not the parent is running in the foreground process group? What if the SIGINT was sent programmatically, not via Ctrl+C?
How can I figure out if the child received the SIGINT?
Perhaps you can't. And what should matter to you is whether the child handled the SIGINT (it could have ignored it). See my answer to your other question.
However, in many cases, Ctrl+C sends the signal to the whole foreground process group. In that case, the child may have received the same signal you did.
In pathological cases, your entire system is thrashing and the child process has not even been scheduled yet to process the signal.
I want the parent to exit, but I don't want the child process to linger and/or become a zombie. I need to make sure it dies.
Maybe you want to use daemon(3) somewhere?
BTW, I don't fully understand your question, because I have to guess its (unstated) motivations. Do you care about job control, or are you implementing a shell? In what concrete cases do you really care that the child got the SIGINT, and what does that mean to you?
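For the "wait briefly, then escalate" behavior the question describes, one common shape is sketched below in Python (the helper name and grace period are arbitrary choices, not a standard API):

```python
import subprocess
import sys

def reap_child(child, grace=2.0):
    """Give the child a short grace period to exit on its own (it may
    have received the SIGINT too if it shares our process group),
    then escalate: SIGTERM first, SIGKILL as a last resort."""
    try:
        child.wait(timeout=grace)
        return                        # exited on its own
    except subprocess.TimeoutExpired:
        pass
    child.terminate()                 # polite: SIGTERM
    try:
        child.wait(timeout=grace)
    except subprocess.TimeoutExpired:
        child.kill()                  # forceful: SIGKILL
    child.wait()                      # always reap, so no zombie is left

# Demo: a child that just sleeps and will not exit by itself.
child = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])
reap_child(child, grace=0.5)
print(child.returncode)
```

This sidesteps the unanswerable "did it receive SIGINT?" question: the parent simply bounds how long it is willing to wait before forcing the issue.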
I saw a tutorial video explain the chain of responsibility design pattern, and I think I understand how it works but I'm not sure when I would really use it. What are some common usages of the chain of responsibility?
From the GoF:
Known Uses
Several class libraries use the Chain of Responsibility pattern to handle user events. They use different names for the Handler class, but the idea is the same: when the user clicks the mouse or presses a key, an event gets generated and passed along the chain. MacApp [App89] and ET++ [WGM88] call it "EventHandler," Symantec's TCL library [Sym93b] calls it "Bureaucrat," and NeXT's AppKit [Add94] uses the name "Responder."
The Unidraw framework for graphical editors defines Command objects that encapsulate requests to Component and ComponentView objects [VL90]. Commands are requests in the sense that a component or component view may interpret a command to perform an operation. This corresponds to the "requests as objects" approach described in Implementation. Components and component views may be structured hierarchically. A component or a component view may forward command interpretation to its parent, which may in turn forward it to its parent, and so on, thereby forming a chain of responsibility.
ET++ uses Chain of Responsibility to handle graphical update. A graphical object calls the InvalidateRect operation whenever it must update a part of its appearance. A graphical object can't handle InvalidateRect by itself, because it doesn't know enough about its context. For example, a graphical object can be enclosed in objects like Scrollers or Zoomers that transform its coordinate system. That means the object might be scrolled or zoomed so that it's partially out of view. Therefore the default implementation of InvalidateRect forwards the request to the enclosing container object. The last object in the forwarding chain is a Window instance. By the time Window receives the request, the invalidation rectangle is guaranteed to be transformed properly. The Window handles InvalidateRect by notifying the window system interface and requesting an update.
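That ET++ InvalidateRect chain can be sketched in a few lines. This is illustrative Python, not ET++'s actual API; the class and method names are made up to mirror the description above:

```python
class View:
    """Default behavior: forward the request up the chain, because
    this object doesn't know enough about its context to handle it."""

    def __init__(self, parent=None):
        self.parent = parent

    def invalidate_rect(self, rect):
        if self.parent is not None:
            return self.parent.invalidate_rect(rect)
        raise RuntimeError("request fell off the end of the chain")

class Scroller(View):
    """Transforms the rectangle into the enclosing coordinate system,
    then forwards it, like the Scrollers/Zoomers described above."""

    def __init__(self, parent, dx, dy):
        super().__init__(parent)
        self.dx, self.dy = dx, dy

    def invalidate_rect(self, rect):
        x, y, w, h = rect
        return super().invalidate_rect((x + self.dx, y + self.dy, w, h))

class Window(View):
    """End of the chain: by now the rectangle is in window
    coordinates, so this handler can actually process the request."""

    def invalidate_rect(self, rect):
        return ("update", rect)

win = Window()
leaf = View(Scroller(win, dx=10, dy=5))
print(leaf.invalidate_rect((0, 0, 100, 50)))
```

The leaf never needs to know how many transforming containers sit between it and the window; each link either handles the request or enriches and forwards it.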
I am supposed to write a C application on Unix in which N child processes are forked from the parent process; the parent will send messages to these children, and the children will send messages to each other.
However, the problem is that I need to send messages to a specific target child process, i.e. the parent sends to child 1, child 1 sends to child 2, ..., and child n sends to child 1 (circularly).
The problem is, if I create only one message queue, any of the n children may dequeue the message (since any of them may run after the parent process, depending on the kernel scheduler), so the message may be dequeued in the wrong process!
In my application, there will be at most one message in the queue at a time. The only solution that comes to mind is to create n different message queues and post each message to the appropriate queue so that the specific target process receives it. But I think there must be a more legitimate solution.
Any ideas?
Constraints: pipes between processes are not allowed. I know message queues are inefficient here; I'll implement both, since both are required. P.S. This is kind of homework (yes, I am the creator of http://canyoudomyhomework.com), but it's not just homework; it's a challenging question, IMHO.
Depending on the performance requirements, a brokered (router) solution feels most appropriate.
The parent could act as the router, or could spawn a specific process to do this job.
Define a simple message structure whose first element is the intended target; we can designate the parent process as target zero.
Each process has only one queue, between itself and the broker. All messages are processed and routed in one place, avoiding the N×N fan-out you mention.
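A sketch of the broker idea follows. Python threads and queue.Queue stand in here for the forked processes and the SysV message queues the assignment requires; the message layout is the (target, payload) pair described above:

```python
import queue
import threading

def broker(inbox, outboxes):
    # The broker dequeues (target, payload) pairs from its single
    # inbox and delivers each payload to the target's private queue,
    # so only the intended child can ever dequeue it.
    while True:
        target, payload = inbox.get()
        if target is None:              # sentinel: shut down
            break
        outboxes[target].put(payload)

# One shared inbox into the broker, one private outbox per child.
inbox = queue.Queue()
outboxes = {i: queue.Queue() for i in range(1, 4)}
t = threading.Thread(target=broker, args=(inbox, outboxes))
t.start()

inbox.put((1, "token for child 1"))
inbox.put((2, "token for child 2"))
inbox.put((None, None))
t.join()

msg1 = outboxes[1].get()
msg2 = outboxes[2].get()
print(msg1, "/", msg2)
```

In the real C version, the per-child "outbox" could be the same single message queue but with distinct message types (the mtype field), letting each child msgrcv() only messages addressed to it.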
Good luck!