I am considering using deferred messages as a delay/timeout alerting mechanism within my Saga. I would do this by sending a deferred message at the same time as sending a message to a handler to do some time-consuming work.
If that handler fails to publish a response within the deferred timespan, the Saga is alerted to the timeout/delay by the arrival of the deferred message. This is different from a handler failure, as the handler is still running, just slower than expected.
The issue comes if everything runs as expected: it's possible that the Saga will complete all of its steps, and you'd find many deferred messages waiting to be delivered to a Saga that no longer exists. Is there a way to clean up the deferred messages you know are no longer required?
Perhaps there is a nicer way of implementing this functionality in Rebus?
Once sent, deferred messages cannot be cancelled.
But Rebus happens to ignore messages that cannot be correlated with a saga instance, so as long as the saga handler does not allow that particular message type to initiate a new saga, the message will simply be swallowed once the saga instance is gone.
That's the difference between using IHandleMessages<CanBeIgnored> and IAmInitiatedBy<CannotBeIgnored> on your saga. 🙂
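For illustration only (Rebus is a .NET library, and this is not its API), here is a self-contained Java sketch of the behaviour described above: a deferred "timeout" message that may only correlate with an existing saga instance is silently dropped once that instance has completed, whereas a message type allowed to initiate a saga would start a fresh instance instead. All names here are hypothetical.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical illustration of saga correlation, not Rebus code.
class SagaStore {
    private final Map<String, String> activeSagas = new ConcurrentHashMap<>();

    void start(String sagaId)    { activeSagas.put(sagaId, "running"); }
    void complete(String sagaId) { activeSagas.remove(sagaId); }

    // A timeout message may only correlate with an existing instance
    // (the IHandleMessages<...> case): if the saga is gone, swallow it.
    void handleTimeout(String sagaId) {
        if (!activeSagas.containsKey(sagaId)) {
            System.out.println("Saga " + sagaId + " already completed; timeout ignored");
            return;
        }
        System.out.println("Saga " + sagaId + " timed out; escalating");
    }
}

public class Demo {
    public static void main(String[] args) {
        SagaStore store = new SagaStore();
        store.start("order-42");
        store.complete("order-42");      // work finished before the deferred timeout
        store.handleTimeout("order-42"); // deferred message arrives late and is swallowed
    }
}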
Related
In my Java servlet, I want to use a coroutine to send emails. I would like to launch the coroutine in a non-blocking fashion. It might happen, however, that sending the email takes longer than expected. If the request to the servlet completes before the coroutine completes, I am not sure whether that would result in terminating the coroutine.
The use-case of using a coroutine for sending emails is not really important since you could just as well use a coroutine to carry out any long task.
Does anyone have any idea whether a coroutine gets terminated as soon as the servlet request completes, or whether it will continue its task until it is finished?
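For concreteness, here is a minimal Java sketch of the scenario being asked about (names and the servlet mapping are illustrative). It uses a plain application-scoped executor rather than a Kotlin coroutine, to show the lifecycle point in question: a task handed off to a thread pool owned by the application keeps running on its own thread, independently of the request thread that submitted it.

import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative only: the email work is handed off to an executor owned by the
// application, not by the individual request.
@WebServlet("/signup")
public class SignupServlet extends HttpServlet {
    // In a real application this would typically be created and shut down via a
    // ServletContextListener; a static field keeps the sketch short.
    private static final ExecutorService emailExecutor = Executors.newFixedThreadPool(2);

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) {
        String address = req.getParameter("email");

        // Fire-and-forget: the request thread returns immediately, while the
        // submitted task keeps running on one of the executor's threads.
        emailExecutor.submit(() -> sendWelcomeEmail(address));

        resp.setStatus(HttpServletResponse.SC_ACCEPTED);
    }

    private void sendWelcomeEmail(String address) {
        // Potentially slow work (SMTP round trips, retries, ...)
    }
}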
I'm writing a wrapper around gRPC unary calls, but I'm having an issue: let's say I have a ClientAsyncResponseReader object which is created and starts a request like so
response_reader_ = std::unique_ptr<grpc::ClientAsyncResponseReader<ResponseType>>(
    grpc::internal::ClientAsyncResponseReaderFactory<ResponseType>::Create(
        channel.get(), completion_queue, rpc_method, &client_context_, request, true
    )
);
response_reader_->Finish(
    response_sharedptr_.get(), status_sharedptr_.get(), static_cast<void*>(some_tag)
);
// Set a breakpoint here
where all of the arguments are valid.
I was under the impression that when the Finish call returned, the request object was guaranteed to have been sent out over the wire. However by setting a breakpoint after that Finish() call (in the client program, to be clear) and inspecting my server's logs, I've discovered that the server does not log the request until after I resume from the breakpoint.
This would seem to indicate that there's something else I need to wait on in order to ensure that the request is really sent out: and moreover, that the thread executing the code above still has some sort of role in sending out the request which appears post-breakpoint.
Of course, perhaps my assumptions are wrong and the server isn't actually logging the request as soon as it comes in. If they aren't wrong, though, then clearly I don't understand gRPC's semantics as well as I should, so I was hoping for some more experienced insight.
You can see the code for my unary call abstraction here. It should be sufficient, but if anything else is required I'm happy to provide it.
EDIT: The plot thickens. After setting a breakpoint on the server's handler for incoming requests, it looks like the call to Finish generally does "ensure" that the request has been sent out, except for the first request sent by the process. I guess there is some state maintained either in grpc::channel or maybe even in grpc::completion_queue which delays the initial request.
From the documentation
response_reader_ = std::unique_ptr<grpc::ClientAsyncResponseReader<ResponseType>>(
    grpc::internal::ClientAsyncResponseReaderFactory<ResponseType>::Create(
        channel.get(), completion_queue, rpc_method, &client_context_, request, true
    )
);
This will start the call and write the request out (start=true). This factory function does not take a tag parameter, so there is no way for the completion queue to notify you when starting the call has finished. Issuing an RPC is a bit involved: it essentially means building the network packets and putting them on the wire. It can fail if there is a transient failure of the transport, if the channel is gone completely, or if the user did something wrong. Another reason the tag notification matters is that the completion queue is a real contention point: all RPC objects talk to it, so it can happen that the completion queue is not free and the request is still pending.
response_reader_->Finish(
    response_sharedptr_.get(), status_sharedptr_.get(), static_cast<void*>(some_tag)
);
This asks the RPC runtime to receive the server's response. When the server's response arrives, the completion queue will notify the client. At this point we assume that there is no error on the client side, everything is okay, and the request is already in flight, so the status of the Finish call will never be false for a unary RPC.
This would seem to indicate that there's something else I need to wait on in order to ensure that the request is really sent out: and moreover, that the thread executing the code above still has some sort of role in sending out the request which appears post-breakpoint.
Perhaps you want to reuse the request object (I did some experiments on that). For my part, I keep the request object in memory until the response arrives; there is no way to guarantee that the request object won't be needed after the Create call.
According to the docs...
The call must be executed in a BLOCKING way. Flows don't currently support suspending to await the response to a call to an external resource. For this reason, the call should be provided with a timeout to prevent the flow from suspending forever. If the timeout elapses, this should be treated as a soft failure and handled by the flow's business logic.
How do I create an initiator flow that times out if it does not receive a response in an allotted time? Are there any examples of this?
As of Corda 3, there is no mechanism for causing a flow to time out. When the docs say "the call should be provided with a timeout", this refers to the HTTP call.
The only alternative currently is to check how long the HTTP call has taken when the response is received, and throw an error in the flow if the time window is exceeded.
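As a rough sketch of that workaround (in Java, with hypothetical names and endpoint; a real CorDapp's flow and HTTP handling will differ), the blocking HTTP call is given explicit connect/read timeouts, the elapsed time is checked when the response arrives, and the flow fails with a FlowException if the budget is exceeded:

import co.paralleluniverse.fibers.Suspendable;
import net.corda.core.flows.FlowException;
import net.corda.core.flows.FlowLogic;
import net.corda.core.flows.InitiatingFlow;
import net.corda.core.flows.StartableByRPC;

import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

// Hypothetical initiator flow: the external call is made in a blocking way,
// but with timeouts so the flow cannot suspend forever.
@InitiatingFlow
@StartableByRPC
public class CallExternalServiceFlow extends FlowLogic<Integer> {

    private static final int TIMEOUT_MS = 10_000;  // illustrative time budget

    @Suspendable
    @Override
    public Integer call() throws FlowException {
        try {
            HttpURLConnection conn =
                    (HttpURLConnection) new URL("https://example.org/api").openConnection();
            conn.setConnectTimeout(TIMEOUT_MS);
            conn.setReadTimeout(TIMEOUT_MS);

            long started = System.currentTimeMillis();
            int status = conn.getResponseCode();   // blocks until a response or a timeout

            // Treat an overlong call as a soft failure, per the docs quoted above.
            if (System.currentTimeMillis() - started > TIMEOUT_MS) {
                throw new FlowException("External call exceeded the allotted time");
            }
            return status;
        } catch (IOException e) {  // includes SocketTimeoutException
            throw new FlowException("External call failed or timed out", e);
        }
    }
}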
Is Javamail asynchronous or synchronous? That is, if I send off an email, do I continue processing immediately afterwards, or do I wait until it's complete?
Furthermore, are there any ways that I could catch that an email failed to be delivered for any reason?
I'd also like to know these answers for Spring's MailSender abstraction.
Thanks.
It is synchronous, since it transfers the message to the server and processes the server's response before returning. The send docs explain this in further detail. The send will throw a SendFailedException, or another MessagingException, if it fails immediately. But "success does not imply that the message was delivered to the ultimate recipient, as failures may occur in later stages of delivery."
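To make both points concrete, here is a minimal JavaMail sketch (the SMTP host and addresses are placeholders): Transport.send blocks until the server has accepted or rejected the message, and an immediate failure surfaces as a SendFailedException that lists the affected recipients.

import java.util.Properties;
import javax.mail.Message;
import javax.mail.MessagingException;
import javax.mail.SendFailedException;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;

public class MailExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("mail.smtp.host", "smtp.example.org");  // placeholder host

        Session session = Session.getInstance(props);
        try {
            Message msg = new MimeMessage(session);
            msg.setFrom(new InternetAddress("noreply@example.org"));
            msg.setRecipients(Message.RecipientType.TO,
                    InternetAddress.parse("someone@example.org"));
            msg.setSubject("Hello");
            msg.setText("Sent synchronously; this call blocks until the server answers.");

            Transport.send(msg);  // returns only after the SMTP dialogue completes
            // Note: acceptance by the server does not guarantee final delivery.
        } catch (SendFailedException e) {
            // Immediate failure: inspect which recipients were rejected or unsent.
            System.err.println("Invalid: "  + java.util.Arrays.toString(e.getInvalidAddresses()));
            System.err.println("Not sent: " + java.util.Arrays.toString(e.getValidUnsentAddresses()));
        } catch (MessagingException e) {
            e.printStackTrace();
        }
    }
}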
Can anyone explain what the difference is between the following methods of WorkflowApplication:
Abort
Cancel
Terminate
After further investigating this issue, I want to summarize the differences:
Terminate:
the Completed event of the workflow application will be triggered
the CompletionState (WorkflowApplicationCompletedEventArgs) is Faulted
the Unloaded event of the workflow application will be triggered
the workflow completes
OnBodyCompleted on the activity will be called
Cancel:
the Completed event of the workflow application will be triggered
the CompletionState (WorkflowApplicationCompletedEventArgs) is Cancelled
the Unloaded event of the workflow application will be triggered
the workflow completes
OnBodyCompleted on the activity will be called
Abort:
the Aborted event of the workflow application will be triggered
the workflow does not complete
An unhandled exception
triggers OnUnhandledException
in this event handler, the return value (of type UnhandledExceptionAction) determines what will happen next:
UnhandledExceptionAction.Terminate will terminate the workflow instance
UnhandledExceptionAction.Cancel will cancel the workflow instance
UnhandledExceptionAction.Abort will abort the workflow instance
Each will trigger the corresponding events explained above
Update: Abort does not seem to trigger unloading of the instance in the SQL persistence store. So it seems to me you'd better use Cancel or Terminate, and if you have to perform some action based upon the completion status, you can check the CompletionState in the Completed event.
First, hats off to Steffen Opel (and his comments below). I failed to catch that my original post linked documentation that was WF 3.5 specific. Did a little more digging around.
For posterity's sake, I have left my previous response below, labelled as WF3.5. Please see WF4.0 for a few notes regarding Canceling, Abort, and Terminate in WF4.0.
WF4.0
Unfortunately, there is little explicit documentation discussing the differences between Cancel, Abort, and Terminate in WF4.0. However, from the member method documentation:
On Abort, a) activity is immediately halted, b) Aborted handler is invoked, and c) Completed handler is not invoked.
On Cancel, a) activity is given a grace period to stop gracefully after which a TimeoutException is thrown, b) Completed handler is invoked.
On Terminate, a) activity is given a grace period to stop gracefully after which a TimeoutException is thrown, b) Completed handler is invoked.
The difference between Abort and Cancel/Terminate is quite striking. Simply call Abort to kill a Workflow outright. The difference between Cancel and Terminate is more nuanced: Cancel does not require any sort of reason (it is a void parameterless method), whereas Terminate requires a reason (in either string or Exception form).
In all cases, the Workflow runtime will not perform any implicit action on your behalf (i.e. Workflows will not auto-self-destruct à la WF3.5 Terminate). However, with the highly customizable exception/event handling exposed by the runtime, any such feature may be implemented with relative ease.
WF3.5
Canceling
According to the MSDN documentation:
An activity is put into the Canceling state by a parent activity explicitly, or because an exception was thrown during the execution of that activity.
While Canceling may be used to stop an entire Workflow (ie invoked on root Activity), it is typically used to stop discrete portions of a Workflow (ie either as error-recovery or an explicit action on part of parent). In short, Canceling is a means of Workflow control-flow.
Abort and Terminate
Again, according to the MSDN documentation:
Abort is different from Terminate in that while Abort simply clears the in-memory workflow instance and can be restarted from the last persistence point, Terminate clears the in-memory workflow instance and informs the persistence service that the instance has been cleared from memory. For the SqlWorkflowPersistenceService, this means that all state information for that workflow instance is deleted from the database upon termination. You will not be able to reload the workflow instance from a previously stored persistence point.
Which is pretty clear in itself. Abort merely stops in-memory execution whereas Terminate stops in-memory execution and destroys any persisted state.