Corda - flow timeout

According to the docs...
The call must be executed in a BLOCKING way. Flows don't currently support suspending to await the response to a call to an external resource. For this reason, the call should be provided with a timeout to prevent the flow from suspending forever. If the timeout elapses, this should be treated as a soft failure and handled by the flow's business logic.
How do I create an initiator flow that times out if it does not receive a response in an allotted time? Are there any examples of this?

As of Corda 3, there is no mechanism for causing a flow to time out. When the docs say "the call should be provided with a timeout", this refers to the HTTP call.
The only alternative currently is to check how long the HTTP call has taken when the response is received, and throw an error in the flow if the time window is exceeded.
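A minimal sketch of that approach, assuming a hypothetical blocking callExternalService() helper (the flow class, timeout value, and helper are illustrative, not part of Corda's API):

import co.paralleluniverse.fibers.Suspendable;
import net.corda.core.flows.FlowException;
import net.corda.core.flows.FlowLogic;

public class TimedExternalCallFlow extends FlowLogic<String> {
    private static final long TIMEOUT_MS = 10_000; // illustrative time window

    @Suspendable
    @Override
    public String call() throws FlowException {
        long start = System.currentTimeMillis();
        // Blocking call; the flow cannot suspend while it runs.
        String response = callExternalService();
        long elapsed = System.currentTimeMillis() - start;
        if (elapsed > TIMEOUT_MS) {
            // Soft failure: handle the overrun in the flow's business logic.
            throw new FlowException("External call took " + elapsed
                    + " ms (limit " + TIMEOUT_MS + " ms)");
        }
        return response;
    }

    private String callExternalService() {
        // Hypothetical stand-in for the real blocking HTTP request.
        throw new UnsupportedOperationException("stub for this sketch");
    }
}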

Related

How to cancel a deferred Rebus message?

I am considering using deferred messages as a delay/timeout alerting mechanism within my Saga. I would do this by sending a deferred message at the same time as sending a message to a handler to do some time-consuming work.
If that handler fails to publish a response within the deferred timespan, the Saga is alerted to the timeout/delay by the deferred message's arrival. This is different from a handler failure, as the handler is still running, just more slowly than expected.
The issue comes if everything runs as expected: it's possible that the Saga will complete all of its steps, and you'd find many deferred messages waiting to be delivered to a Saga that no longer exists. Is there a way to clean up the deferred messages you know are no longer required?
Perhaps there is a nicer way of implementing this functionality in Rebus?
Once sent, deferred messages cannot be cancelled.
But Rebus happens to ignore messages that cannot be correlated with a saga instance, provided the saga handler does not allow that particular message type to initiate a new saga. So if the saga instance is gone, the message will simply be swallowed.
That's the difference between using IHandleMessages<CanBeIgnored> and IAmInitiatedBy<CannotBeIgnored> on your saga. 🙂

Grpc C++: How to wait until a unary request has been sent?

I'm writing a wrapper around gRPC unary calls, but I'm having an issue: let's say I have a ClientAsyncResponseReader object which is created and starts a request like so
response_reader_ = std::unique_ptr<grpc::ClientAsyncResponseReader<ResponseType>>(
    grpc::internal::ClientAsyncResponseReaderFactory<ResponseType>::Create(
        channel.get(), completion_queue, rpc_method, &client_context_, request, true
    )
);
response_reader_->Finish(
    response_sharedptr_.get(), status_sharedptr_.get(), static_cast<void*>(some_tag)
);
// Set a breakpoint here
where all of the arguments are valid.
I was under the impression that when the Finish call returned, the request object was guaranteed to have been sent out over the wire. However by setting a breakpoint after that Finish() call (in the client program, to be clear) and inspecting my server's logs, I've discovered that the server does not log the request until after I resume from the breakpoint.
This would seem to indicate that there's something else I need to wait on in order to ensure that the request is really sent out: and moreover, that the thread executing the code above still has some sort of role in sending out the request which appears post-breakpoint.
Of course, perhaps my assumptions are wrong and the server isn't logging the request as soon as it comes in. If not though, then clearly I don't understand gRPC's semantics as well as I should, so I was hoping for some more experienced insight.
You can see the code for my unary call abstraction here. It should be sufficient, but if anything else is required I'm happy to provide it.
EDIT: The plot thickens. After setting a breakpoint on the server's handler for incoming requests, it looks like the call to Finish generally does "ensure" that the request has been sent out, except for the first request sent by the process. I guess there is some state maintained either in grpc::Channel or maybe even in grpc::CompletionQueue which delays the initial request.
From the documentation:
response_reader_ = std::unique_ptr<grpc::ClientAsyncResponseReader<ResponseType>>(
    grpc::internal::ClientAsyncResponseReaderFactory<ResponseType>::Create(
        channel.get(), completion_queue, rpc_method, &client_context_, request, true
    )
);
This will start the call and write the request out (start=true). Note that this function does not take a tag parameter, so there is no way for the completion queue to notify you when starting the call has finished. Issuing an RPC is a bit involved: it basically means creating the network packet and putting it on the wire. It can fail if there is a transient failure of the transport, the channel is completely gone, or the user passed something invalid. This is also why the tag notification matters: the completion queue is a real contention point, since all RPC objects talk to it, and it can happen that the completion queue is not free and the request is still pending.
response_reader_->Finish(
    response_sharedptr_.get(), status_sharedptr_.get(), static_cast<void*>(some_tag)
);
This asks the RPC runtime to receive the server's response. When the server's response arrives, the completion queue will notify the client via the tag. At that point we assume there was no error on the client side, everything is okay, and the request is already in flight, so the status of the Finish call will never be false for a unary RPC.
This would seem to indicate that there's something else I need to wait on in order to ensure that the request is really sent out: and moreover, that the thread executing the code above still has some sort of role in sending out the request which appears post-breakpoint.
Perhaps you want to reuse the request object (I did some experiments on that). For my part, I keep the request object in memory until the response arrives, since there is no way to guarantee that the request object won't still be needed after the Create call.
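For what it's worth, the same semantics show up in gRPC's Java async API: the client is notified only when the response (or an error) arrives, never when the request bytes are written out. A sketch, assuming the generated Greeter stub and message classes from the standard helloworld example:

// Channel address and stub classes here are assumptions taken from the
// gRPC helloworld example, not from the question's code.
ManagedChannel channel = ManagedChannelBuilder.forAddress("localhost", 50051)
        .usePlaintext()
        .build();
GreeterGrpc.GreeterFutureStub stub = GreeterGrpc.newFutureStub(channel);
ListenableFuture<HelloReply> future =
        stub.sayHello(HelloRequest.newBuilder().setName("world").build());
// The future completes when the server's response or an error arrives;
// there is no completion event for "request written to the wire".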

Not able to do async dispatch to consumer and understand how "prefetch limit" is relevant

My understanding was that the default behavior of ActiveMQ is to dispatch messages to consumers asynchronously, but when I tried to test it by doing a Thread.sleep(60000); in my MessageListener#onMessage(), the broker was not able to send queued messages until it received the acknowledgment for the previously dispatched message.
So then I tried to explicitly set the async flag, just in case, using ((ActiveMQConnectionFactory) connectionFactory).setDispatchAsync(true); as mentioned here, but saw the same behavior.
Is there a way I can make sure that my ActiveMQ broker doesn't get blocked if one of the consumers is taking a long time? Please note that I know about and have read about "slow consumers", but that is not what I want; I want a truly async dispatch, wherein the broker sends the message and doesn't wait for any acknowledgement/response.
EDIT:
I just read about what-is-the-prefetch-limit-for, and I am wondering: if the broker is sending messages synchronously to the consumer, then what's the point of the "prefetch limit"?
With the default configuration, ActiveMQ is configured to use a dispatch thread per queue - you can set the optimizedDispatch property on the destination policy entry - see configuring Queues.
Set optimizedDispatch="true" in activemq.xml:
optimizedDispatch
Default value: false
Description: Don't use a separate thread for dispatching from a Queue.
Note that by doing a Thread.sleep(60000); in MessageListener#onMessage() with a single consumer, that consumer's dispatcher cannot send any further messages.
UPDATE
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <policyEntry queue=">" optimizedDispatch="true"/>
    </policyEntries>
  </policyMap>
</destinationPolicy>
queue=">" means all queues
EDIT by OP (hagrawal): To help future visitors catch the concept quickly, I am putting the core concept in a nutshell below; please feel free to read all the comments below to learn more. Many thanks to @HassenBennour for clarifying all this.
If there are 2 consumers connected and messages are being produced, the broker will dispatch messages round-robin to those consumers. But suppose no consumer is connected and the broker has 4 messages enqueued: if a consumer then connects with a prefetch limit of 3, the broker will deliver 3 messages to that consumer and wait. If some other consumer connects in the meantime, the broker will immediately deliver the 4th message to it; otherwise it will wait for the acknowledgment of the 1st message before delivering the 4th message to the same consumer.
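To experiment with this, the prefetch limit can be set per connection through the broker URL, or per consumer through a destination option. A minimal Java sketch, assuming a local broker on the default port and a queue named TEST.QUEUE:

import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class PrefetchExample {
    public static void main(String[] args) throws Exception {
        // Queue prefetch of 3 for every consumer created on this connection.
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "tcp://localhost:61616?jms.prefetchPolicy.queuePrefetch=3");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        // Alternatively, override the prefetch per consumer with a destination option.
        Queue queue = session.createQueue("TEST.QUEUE?consumer.prefetchSize=3");
        MessageConsumer consumer = session.createConsumer(queue);
        // With 4 messages enqueued, this consumer is handed 3 up front; the 4th
        // goes to a second consumer if one connects, or waits for an ack here.
        System.out.println(consumer.receive(5000));
        connection.close();
    }
}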

Apache Camel Architecture

I am working on prototyping a new web service for my company and we are considering Apache Camel as our integration framework. Here is a quick run-down of the high-level architecture:
- IBM WebSphere MQ as the queuing solution
1) we receive http request
2) asynchronously persist this request
3a) do some processing on the request
3b) send to another tier for further processing
4) asynchronously update the request record in DB
5) respond to caller
What I want to do is:
When an HTTP request comes in, put it on a queue to be processed and wait n seconds. If the web handler doesn't get a response in n seconds, reply to the caller with a custom message.
Once the request is on the processing queue, a Camel route listening on that queue picks it up. When it pulls a message from the queue, it puts a copy of the request on a different queue to be persisted asynchronously, does some processing on the request, then sends it to another queue for further processing and waits for a response. Then it puts it back on the persist queue to be asynchronously updated.
Then it responds to the web listener, and the web listener responds to the web caller.
I am reading everything I can about Apache Camel, and there is a lot of information out there. I might be on a little bit of information overload, and any help on the following concerns would be greatly appreciated:
1)
If the web listeners use an InOut exchange (with the first processing tier) without a replyTo queue defined, Camel will create a temporary queue for the response. What happens if this request times out? I understand I can set a requestTimeout on the exchange and, if it times out, catch that exception and set a custom message. But will that temporary queue be killed, or will such queues build up over time as requests time out?
2)
When it comes to scaling the processing tiers (adding more instances of those same routes on different machines), is it customary that, if the instance that picks up the response (using a fixed reply-to queue) is different from the instance that picked up the request, all the information about the original request is inside the message, so there is no need to share data across instances (unless of course there is data that is shared, like aggregates and such)?
Any other tips and tricks when building a system like this would be very helpful.
Thanks!
I would say this solution is too complicated, and there are too many areas that will be hard in terms of both maintenance and complexity. There are too many steps mixing async and sync communication.
Why not simplify the solution to the following steps (sketched in the route below):
Receive the HTTP request synchronously.
Put a message on MQ with a reply-to header.
The message is picked up and sent to the backend.
If a reply is not received within a given time, the transaction is terminated.
The reply-to queue is removed.
The requestor is notified.
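As a rough sketch of that simplified flow in Camel's Java DSL (the endpoint URIs and the 5-second timeout are illustrative assumptions; camel-jms manages the temporary reply-to queue and raises the timeout exception for you):

import org.apache.camel.ExchangeTimedOutException;
import org.apache.camel.builder.RouteBuilder;

public class FrontendRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Steps 4 and 6: if no reply arrives within the timeout, answer the
        // HTTP caller with a custom message instead of an error.
        onException(ExchangeTimedOutException.class)
                .handled(true)
                .transform(constant("Request accepted, still processing"));

        from("jetty:http://0.0.0.0:8080/service")       // step 1: synchronous HTTP request
                .to("jms:queue:processing"
                        + "?exchangePattern=InOut"       // step 2: request/reply with reply-to header
                        + "&requestTimeout=5000");       // step 4: give up after a given time
        // Steps 3 and 5 (backend processing, reply-queue cleanup) are handled
        // by the backend route and by camel-jms's reply manager.
    }
}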

Is Javamail asynchronous or synchronous?

Is Javamail asynchronous or synchronous? That is, if I send off an email, do I continue processing immediately afterwards, or do I wait until it's complete?
Furthermore, are there any ways that I could catch that an email failed to be delivered for any reason?
I'd also like to know these answers for Spring's MailSender abstraction.
Thanks.
It is synchronous: it transfers the message to the server and processes the server's response before returning. The send docs explain this in further detail. The send will throw a SendFailedException, or another MessagingException, if it fails immediately. But "success does not imply that the message was delivered to the ultimate recipient, as failures may occur in later stages of delivery."
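A minimal sketch of that synchronous send with immediate-failure handling (the SMTP host and addresses are illustrative):

import java.util.Properties;
import javax.mail.Message;
import javax.mail.MessagingException;
import javax.mail.SendFailedException;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;

public class SendExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("mail.smtp.host", "smtp.example.com"); // assumed SMTP host
        Session session = Session.getInstance(props);
        try {
            Message msg = new MimeMessage(session);
            msg.setFrom(new InternetAddress("from@example.com"));
            msg.setRecipients(Message.RecipientType.TO,
                    InternetAddress.parse("to@example.com"));
            msg.setSubject("Hello");
            msg.setText("Hi");
            Transport.send(msg); // blocks until the server accepts or rejects it
        } catch (SendFailedException e) {
            // Immediate failure: one or more addresses were rejected up front.
            System.err.println("Send failed: " + e.getMessage());
        } catch (MessagingException e) {
            System.err.println("Transport problem: " + e.getMessage());
        }
        // A clean return still doesn't guarantee delivery to the ultimate recipient.
    }
}

Spring's MailSender abstraction (JavaMailSenderImpl) blocks in the same way; to continue processing immediately, hand the send off to your own executor or async mechanism.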
