Notify a controller action method from another independent handler within a specific time - asynchronous

I have a situation where I need to wait for a response from a device (via an MQTT broker, which doesn't matter in the context of this question).
Whenever I get an API call on one specific endpoint,
I need to wait (2-5 seconds, depending on the need) for a response from the device in the other handler (the MQTT handler => https://github.com/gausby/tortoise).
That handler needs to notify me somehow that it received the message for the particular device id (if the message arrived within that time).
If the device matches and the controller action method gets notified, we send back a success response; otherwise we send a failure response.
Any message received before or after the wait time doesn't matter (just consider it unsubscribed).
I am not really sure what the best way to achieve the above requirement is. Any help is welcome, thanks.

spawn() a process for the first handler. In the first handler, spawn() another process for the second handler, passing self() as one of the arguments. Then enter a receive clause with a 2-5 second timeout specified in the after clause. Have the second handler send() a message to the first handler with the data that the second handler acquires.
If the receive in the first handler times out, do whatever you want to do in the after clause; if the receive reads a message before it times out, do whatever you need to do with the data.
Then, if you let the process running the first handler die, you won't have to worry about junk messages in its mailbox.
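Here is a minimal sketch of that pattern in Elixir. The module and function names (DeviceWait, await_reply/2) and the message shape are made up for illustration, and the spawned function merely stands in for the real Tortoise handler:

defmodule DeviceWait do
  # Returns {:ok, payload} if a matching reply arrives in time, :timeout otherwise.
  def await_reply(device_id, timeout_ms \\ 5_000) do
    waiter = self()

    # Stand-in for the MQTT (Tortoise) handler: in real code it would send
    # this message to `waiter` when a publish for `device_id` comes in.
    spawn(fn ->
      send(waiter, {:device_reply, device_id, "some payload"})
    end)

    receive do
      {:device_reply, ^device_id, payload} -> {:ok, payload}
    after
      timeout_ms -> :timeout
    end
    # Once the waiting process exits, any late replies in its mailbox vanish with it.
  end
end

# DeviceWait.await_reply("device-123", 3_000)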

Related

How to cancel a deferred Rebus message?

I am considering using deferred messages as a delay/timeout alerting mechanism within my Saga. I would do this by sending a deferred message at the same time as sending a message to a handler that does some time-consuming work.
If that handler fails to publish a response within the deferred timespan, the Saga is alerted to the timeout/delay by the arrival of the deferred message. This is different from a handler failure, since the handler is still running, just more slowly than expected.
The issue comes if everything runs as expected: it's possible that the Saga will complete all of its steps, and you'd find many deferred messages waiting to be delivered to a Saga that no longer exists. Is there a way to clean up the deferred messages you know are no longer required?
Or perhaps there is a nicer way of implementing this functionality in Rebus?
Once sent, deferred messages cannot be cancelled.
But Rebus happens to ignore messages that cannot be correlated with a saga instance, and if the saga handler does not allow that particular message type to initiate a new saga, then once the saga instance is gone, the message will simply be swallowed.
That's the difference between using IHandleMessages<CanBeIgnored> and IAmInitiatedBy<CannotBeIgnored> on your saga. 🙂
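For illustration, here is a rough sketch of that distinction using Rebus's saga API; the saga, saga data, and message types (WorkSaga, WorkSagaData, StartWork, WorkTimedOut) are invented for the example:

using System.Threading.Tasks;
using Rebus.Handlers;
using Rebus.Sagas;

public class StartWork { public string WorkId { get; set; } }
public class WorkTimedOut { public string WorkId { get; set; } }

public class WorkSagaData : SagaData
{
    public string WorkId { get; set; }
}

public class WorkSaga : Saga<WorkSagaData>,
    IAmInitiatedBy<StartWork>,      // allowed to create a new saga instance
    IHandleMessages<WorkTimedOut>   // dropped if no existing instance correlates
{
    protected override void CorrelateMessages(ICorrelationConfig<WorkSagaData> config)
    {
        config.Correlate<StartWork>(m => m.WorkId, d => d.WorkId);
        config.Correlate<WorkTimedOut>(m => m.WorkId, d => d.WorkId);
    }

    public Task Handle(StartWork message)
    {
        Data.WorkId = message.WorkId;
        // kick off the time-consuming work and defer a WorkTimedOut here
        return Task.CompletedTask;
    }

    public Task Handle(WorkTimedOut message)
    {
        // Only runs while the saga instance still exists; once the saga has
        // completed, the deferred message is simply swallowed by Rebus.
        MarkAsComplete();
        return Task.CompletedTask;
    }
}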

Grpc C++: How to wait until a unary request has been sent?

I'm writing a wrapper around gRPC unary calls, but I'm having an issue. Let's say I have a ClientAsyncResponseReader object which is created and starts a request like so:
response_reader_ = std::unique_ptr<grpc::ClientAsyncResponseReader<ResponseType>>(
    grpc::internal::ClientAsyncResponseReaderFactory<ResponseType>::Create(
        channel.get(), completion_queue, rpc_method, &client_context_, request, true
    )
);
response_reader_->Finish(
    response_sharedptr_.get(), status_sharedptr_.get(), static_cast<void*>(some_tag)
);
// Set a breakpoint here
where all of the arguments are valid.
I was under the impression that when the Finish call returned, the request object was guaranteed to have been sent out over the wire. However by setting a breakpoint after that Finish() call (in the client program, to be clear) and inspecting my server's logs, I've discovered that the server does not log the request until after I resume from the breakpoint.
This would seem to indicate that there's something else I need to wait on in order to ensure that the request is really sent out: and moreover, that the thread executing the code above still has some sort of role in sending out the request which appears post-breakpoint.
Of course, perhaps my assumptions are wrong and the server isn't logging the request as soon as it comes in. If not though, then clearly I don't understand gRPC's semantics as well as I should, so I was hoping for some more experienced insight.
You can see the code for my unary call abstraction here. It should be sufficient, but if anything else is required I'm happy to provide it.
EDIT: The plot thickens. After setting a breakpoint on the server's handler for incoming requests, it looks like the call to Finish generally does "ensure" that the request has been sent out, except for the first request sent by the process. I guess there is some state maintained either in grpc::Channel or maybe even in grpc::CompletionQueue which delays the initial request.
From the documentation:
response_reader_ = std::unique_ptr<grpc::ClientAsyncResponseReader<ResponseType>>(
    grpc::internal::ClientAsyncResponseReaderFactory<ResponseType>::Create(
        channel.get(), completion_queue, rpc_method, &client_context_, request, true
    )
);
This will start the call and write the request out (start=true). This function does not have a tag parameter, so there is no way for the completion queue to notify you when the call start has finished. Issuing an RPC is a bit complicated: it basically involves creating the network packet and putting it on the wire. It can fail if there is a transient failure of the transport, if the channel is completely gone, or if the user did something wrong. Another thing: the reason we need the tag notification is that the completion queue is really a contention point. All RPC objects talk to it, and it can happen that the completion queue is not free and the request is still pending.
response_reader_->Finish(
    response_sharedptr_.get(), status_sharedptr_.get(), static_cast<void*>(some_tag)
);
This one asks the RPC runtime to receive the server's response. The outcome is that when the server's response arrives, the completion queue notifies the client. At this point we assume there is no error on the client side, everything is okay, and the request is already in flight. So the status of the Finish tag will never be false for a unary RPC.
This would seem to indicate that there's something else I need to wait on in order to ensure that the request is really sent out: and moreover, that the thread executing the code above still has some sort of role in sending out the request which appears post-breakpoint.
Perhaps you want to reuse the request object (I did some experiments on that). For me, I keep the request object in memory until the response arrives. There is no way to guarantee that the request object won't be needed after the Create call.
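For completeness, here is a sketch of what the client typically does after Finish(), reusing the names from the snippets above and assuming completion_queue is a grpc::CompletionQueue*. You block on the queue until the tag you passed comes back; that notification is the only signal you get about the call.

// Drain the completion queue after Finish(). Next() blocks until an operation
// associated with this queue completes.
void* got_tag = nullptr;
bool ok = false;
while (completion_queue->Next(&got_tag, &ok)) {
  if (got_tag == some_tag) {
    // For a unary call, 'ok' should be true here; whether the RPC itself
    // succeeded is reported through *status_sharedptr_.
    if (ok && status_sharedptr_->ok()) {
      // response_sharedptr_ now holds the server's reply.
    }
    break;
  }
}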

Caliburn Micro unsubscribe message type from event aggregator

I like the Event Aggregator, but I've run into a situation where I subscribe to and publish the same message type. This potentially makes the code run twice. I thought I could write an easy extension method to unsubscribe from the message, publish, and then re-subscribe to the message.
Is this possible or is there a better pattern (maybe use a GUID for each message to ignore handling a message you sent)?
One idea is to pass the sender in the message and check that it was received by a different instance before performing the action.
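As a rough sketch of that idea, assuming the classic synchronous IHandle<T>/IEventAggregator API from Caliburn.Micro 3.x (the message and view model types here are invented):

using Caliburn.Micro;

// Hypothetical message type carrying a reference to its sender.
public class ItemChanged
{
    public object Sender { get; set; }
    public string Payload { get; set; }
}

public class ItemViewModel : PropertyChangedBase, IHandle<ItemChanged>
{
    private readonly IEventAggregator _events;

    public ItemViewModel(IEventAggregator events)
    {
        _events = events;
        _events.Subscribe(this);
    }

    public void Publish(string payload)
    {
        _events.PublishOnUIThread(new ItemChanged { Sender = this, Payload = payload });
    }

    public void Handle(ItemChanged message)
    {
        // Ignore messages this instance published itself.
        if (ReferenceEquals(message.Sender, this))
            return;

        // ...react to changes coming from other instances...
    }
}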

Understanding how netty works

I'm trying to understand how Netty works, and after reading some of the documentation I wanted to check whether I understand things at a high level.
Basically Netty has an event cycle: whenever you make a call it gets serialized and the request gets pushed down to the OS level, which uses epoll and waits for an event to send back to Netty.
When the operating system generates an event that Netty subscribed to, Netty's event loop gets triggered.
Now the interesting part here is that the event that gets triggered has to be parsed, and the client code (or custom code) has to figure out who this event is actually for.
So for example, if this were a chat application, when a message is sent it is up to the client code to figure out how to send this message via Ajax to the correct user.
Is this, at a high level, a correct overview of how Netty works?
BTW, when Netty listens for events sent via epoll, is this event loop single-threaded or does it work from a pool of threads?
Sounds correct to me.
There is more than one event loop thread in Netty, but that does not mean a single Channel's events are handled by multiple event loop threads. Netty picks one thread and assigns it to a Channel. Once assigned, all events related to that Channel are handled by the picked thread.
It also does not necessarily mean that an event loop thread handles only one Channel. An event loop thread can handle multiple Channels.
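For illustration, here is a minimal Netty 4 server bootstrap showing where those threads come from; the port and the inline echo handler are arbitrary:

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.*;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class NettyThreadingSketch {
    public static void main(String[] args) throws InterruptedException {
        // One thread is enough to accept connections; the worker group
        // (which defaults to 2 * available cores) runs the NIO/epoll event loops.
        EventLoopGroup bossGroup = new NioEventLoopGroup(1);
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(bossGroup, workerGroup)
             .channel(NioServerSocketChannel.class)
             .childHandler(new ChannelInitializer<SocketChannel>() {
                 @Override
                 protected void initChannel(SocketChannel ch) {
                     // Every handler on this channel runs on the single
                     // event loop thread the channel was assigned to.
                     ch.pipeline().addLast(new ChannelInboundHandlerAdapter() {
                         @Override
                         public void channelRead(ChannelHandlerContext ctx, Object msg) {
                             ctx.writeAndFlush(msg); // echo back on the same thread
                         }
                     });
                 }
             });
            ChannelFuture f = b.bind(8080).sync();
            f.channel().closeFuture().sync();
        } finally {
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }
}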

Can a request be handled and ended prematurely, early in the pipeline?

I have an HttpModule that has bound an event handler to EndRequest.
Is there any way to handle the request inside the event handler? Meaning, I don't just want to run code and keep the request moving; I want to stop it dead in its tracks, return a 200 status code, and call it a day, without the request continuing to the next step in the pipeline.
HttpContext.Current.ApplicationInstance.CompleteRequest();
Documentation
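For context, a minimal sketch of an IHttpModule that short-circuits the pipeline this way; the module name and response body are arbitrary:

using System.Web;

public class ShortCircuitModule : IHttpModule
{
    public void Init(HttpApplication context)
    {
        context.BeginRequest += (sender, e) =>
        {
            var app = (HttpApplication)sender;
            app.Response.StatusCode = 200;
            app.Response.Write("handled early");
            // Skips the remaining pipeline events and jumps straight to EndRequest.
            app.CompleteRequest();
        };
    }

    public void Dispose() { }
}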
