gRPC C++: How to wait until a unary request has been sent?

I'm writing a wrapper around gRPC unary calls, but I'm having an issue: let's say I have a ClientAsyncResponseReader object which is created and starts a request like so
response_reader_ = std::unique_ptr<grpc::ClientAsyncResponseReader<ResponseType>>(
    grpc::internal::ClientAsyncResponseReaderFactory<ResponseType>::Create(
        channel.get(), completion_queue, rpc_method, &client_context_, request, true
    )
);
response_reader_->Finish(
    response_sharedptr_.get(), status_sharedptr_.get(), static_cast<void*>(some_tag)
);
// Set a breakpoint here
where all of the arguments are valid.
I was under the impression that when the Finish call returned, the request object was guaranteed to have been sent out over the wire. However, by setting a breakpoint after that Finish() call (in the client program, to be clear) and inspecting my server's logs, I've discovered that the server does not log the request until after I resume from the breakpoint.
This would seem to indicate that there's something else I need to wait on in order to ensure that the request is really sent out, and moreover that the thread executing the code above still plays some role in sending the request after the breakpoint.
Of course, perhaps my assumptions are wrong and the server isn't logging the request as soon as it comes in. If that's not the case, though, then clearly I don't understand gRPC's semantics as well as I should, so I was hoping for some more experienced insight.
You can see the code for my unary call abstraction here. It should be sufficient, but if anything else is required I'm happy to provide it.
EDIT: The plot thickens. After setting a breakpoint on the server's handler for incoming requests, it looks like the call to Finish generally does "ensure" that the request has been sent out, except for the first request sent by the process. I suspect there is some state maintained either in grpc::Channel or maybe even in grpc::CompletionQueue which delays the initial request.

From the documentation
response_reader_ = std::unique_ptr<grpc::ClientAsyncResponseReader<ResponseType>>(
    grpc::internal::ClientAsyncResponseReaderFactory<ResponseType>::Create(
        channel.get(), completion_queue, rpc_method, &client_context_, request, true
    )
);
This will start the call and write the request out (start=true). This function does not take a tag parameter, so there is no way for the completion queue to notify you when the call start has finished. Issuing an RPC is a bit involved: it means building the network packets and putting them on the wire, and it can fail if there is a transient transport failure, if the channel has gone away completely, or if the caller misused the API. The other reason the tag notification matters is that the completion queue is a real contention point: every RPC object talks to it, so it can happen that the completion queue is busy and the request is still pending.
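For comparison, here is a minimal sketch of the same start step through the generated stub API (the Greeter service, its SayHello unary method, and the stub variable are assumed purely for illustration). Note that StartCall() takes no tag either, so even there the send step never produces a completion-queue event for a unary call:
// Sketch only; Greeter/SayHello and 'stub' are assumed for illustration.
grpc::ClientContext context;
grpc::CompletionQueue cq;
HelloRequest request;
std::unique_ptr<grpc::ClientAsyncResponseReader<HelloReply>> rpc(
    stub->PrepareAsyncSayHello(&context, request, &cq));
rpc->StartCall();   // starts the call and writes the request; no tag to wait on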
response_reader_->Finish(
    response_sharedptr_.get(), status_sharedptr_.get(), static_cast<void*>(some_tag)
);
This asks the RPC runtime to receive the server's response. When the server's response arrives, the completion queue notifies the client with the tag you passed in. At that point we assume there was no error on the client side, everything is okay, and the request is already in flight; that is why the ok result delivered for the Finish tag will never be false for a unary RPC.
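In other words, the only event a unary client can actually wait on is the Finish tag. A minimal sketch of blocking until it comes back, reusing the variables from the question (by the time the tag is delivered, the response has arrived, which of course implies the request went out):
// Block until the Finish tag is delivered by the completion queue.
void* got_tag = nullptr;
bool ok = false;
if (completion_queue->Next(&got_tag, &ok) &&
    got_tag == static_cast<void*>(some_tag) && ok) {
    // *status_sharedptr_ and *response_sharedptr_ are now populated.
}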
This would seem to indicate that there's something else I need to wait on in order to ensure that the request is really sent out, and moreover that the thread executing the code above still plays some role in sending the request after the breakpoint.
Perhaps you want to reuse the request object (I did some experiments on that). For my part, I keep the request object in memory until the response arrives: there is no guarantee that the request object won't still be needed after the Create call.
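One common way to do that is to bundle everything the call needs into a per-call object that owns the request and is destroyed only once the Finish tag has been delivered. A rough sketch (the struct and field names are illustrative only):
// Per-call state, kept alive until the completion queue returns its tag.
struct AsyncUnaryCall {
    grpc::ClientContext context;
    RequestType request;      // owned for the whole lifetime of the call
    ResponseType response;
    grpc::Status status;
    std::unique_ptr<grpc::ClientAsyncResponseReader<ResponseType>> reader;
};
// The AsyncUnaryCall* itself can serve as the tag passed to Finish, and is
// deleted only after completion_queue->Next() hands it back.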

Related

Corda - flow timeout

According to the docs...
The call must be executed in a BLOCKING way. Flows don't currently support suspending to await the response to a call to an external resource. For this reason, the call should be provided with a timeout to prevent the flow from suspending forever. If the timeout elapses, this should be treated as a soft failure and handled by the flow's business logic.
How do I create an initiator flow that times out if it does not receive a response in an allotted time? Are there any examples of this?
As of Corda 3, there is no mechanism for causing a flow to time out. When the docs say "the call should be provided with a timeout", this refers to the HTTP call.
The only alternative currently is to check how long the HTTP call has taken when the response is received, and throw an error in the flow if the time window is exceeded.
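The timing check itself is language-agnostic (a real flow would of course be Kotlin or Java); as a rough sketch of the pattern, with the call name and the 30-second window made up:
// Illustration only: measure the blocking call, treat an overrun as the
// "soft failure" the docs mention.
#include <chrono>
#include <stdexcept>
#include <string>

std::string makeBlockingHttpCall();   // assumed blocking call to the external resource

void callExternalServiceWithWindow() {
    auto started = std::chrono::steady_clock::now();
    std::string response = makeBlockingHttpCall();
    auto elapsed = std::chrono::steady_clock::now() - started;
    if (elapsed > std::chrono::seconds(30)) {
        throw std::runtime_error("external call exceeded the allowed window");
    }
    // ... continue the flow's business logic with 'response' ...
}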

Apache Camel Architecture

I am working on prototyping a new web service for my company and we are considering Apache Camel as our integration framework. Here is a quick run-down of the high-level architecture:
- IBM WebSphere MQ as the queuing solution
1) we receive http request
2) asynchronously persist this request
3a) do some processing on the request
3b) send to another tier for further processing
4) asynchronously update the request record in DB
5) respond to caller
What I want to do is:
When an HTTP request comes in, put it on a queue to be processed and wait n seconds. If the web handler doesn't get a response within n seconds, reply to the caller with a custom message.
Once the request is on the processing queue, a Camel route listening to this queue processes it. When it pulls a message from the queue, it puts a copy of the request on a different queue to be persisted asynchronously, does some processing on the request, then sends it to another queue to be further processed and waits for a response. It then puts it back on the persist queue to be asynchronously updated.
Then it responds to the web listener, and the web listener responds to the web caller.
I am reading everything I can about Apache Camel and there is a lot of information out there. I might be on a little bit of information overload, and any help on the following concerns would be greatly appreciated:
1)
If the web listeners use an InOut exchange (with the first processing tier) without a replyTo queue defined, it will create a temporary queue for the response. What happens if this request times out? I understand I can set a requestTimeout on the exchange and, if it times out, catch that exception and set a custom message. But, will that temporary queue be killed? Or will they build up over time as requests time out?
2)
When it comes to scaling the processing tiers (adding more instances of those same routes on different machines), is it customary that, if the instance that picks up the response (using a fixed reply-to queue) is different from the instance that picked up the request, all the information about the original request is inside the message, so there is no need to share data across instances (unless of course there is data that must be shared, like aggregates and such)?
Any other tips and tricks when building a system like this would be very helpful.
Thanks!
I would say this solution is too complicated, and there are too many areas that are hard both in terms of maintenance and complexity. There are too many steps mixing async and sync communication.
Why not simplify the solution to the following steps:
Synchronous HTTP request.
Put the message on MQ with a reply-to header.
The message is picked up and sent to the backend.
If a reply is not received within a given time, the transaction is terminated.
The reply-to queue is removed.
The requestor is notified.

Implementing a robust and efficient RPC system

I need to have a server which is able to call functions on the client. I have always used RPCs in various networking game APIs, but I have never implemented one myself.
How would I do it?
My naive approach would be:
connect client to the server:
server
fn update_position_client() {
    unique_id = 1;
    send.to_client(unique_id);
}
client
while recv_messages {
    if id == 1
        update_position();
}
Is this how I would do it?
This works if you only have a few messages that you want to send and the data is basically known ahead of time. To be more robust, you would want the ability to dynamically add/remove messages that can be called, and a way to look up the method to be called when an RPC arrives.
Assuming you want this to be completely transparent to the user, what typically happens is that when a message is sent, the RPC library will wait until there's a response back. Assuming bi-directional capabilities, there is normally a single thread that listens for data. If an RPC message comes in, this thread figures out what to do with it, i.e. what method to call in your (local) address space and with what parameters to call it. When you send an RPC message out, the thread that you sent the message on is blocked (probably on a semaphore) until the return message comes back, at which point your local thread is unblocked and allowed to continue.
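A minimal sketch of those two pieces, a dispatch table for incoming messages and an outbound call that blocks until its reply arrives (all type and function names here are made up; a real library also needs serialization, error handling, and thread-safe access to the shared maps):
#include <cstdint>
#include <functional>
#include <future>
#include <unordered_map>
#include <vector>

using Payload = std::vector<uint8_t>;
struct Message { uint32_t id; uint32_t call_id; bool is_reply; Payload payload; };

// Assumed transport primitives (blocking socket read/write).
Message recv_message();
void send_message(const Message& msg);

std::unordered_map<uint32_t, std::function<void(const Payload&)>> handlers;   // id -> local method
std::unordered_map<uint32_t, std::promise<Payload>> pending;                  // call id -> blocked caller
uint32_t next_call_id = 0;
bool running = true;

// Single listener thread: incoming requests are dispatched to a handler,
// incoming replies wake up whichever caller is blocked on that call id.
void listen_loop() {
    while (running) {
        Message msg = recv_message();
        if (msg.is_reply)
            pending.at(msg.call_id).set_value(msg.payload);
        else
            handlers.at(msg.id)(msg.payload);
    }
}

// Outbound RPC: send the request, then block until the listener delivers the reply.
Payload call_remote(uint32_t msg_id, const Payload& args) {
    uint32_t call_id = next_call_id++;
    std::future<Payload> reply = pending[call_id].get_future();
    send_message({msg_id, call_id, /*is_reply=*/false, args});
    return reply.get();   // blocks the calling thread until set_value() runs
}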
A Linux-specific RPC library you could look at would be DBus.

Network protocol implementation like QNAM with delayed processing of requests

I need to implement a network protocol over TCP that basically works the following way:
There are requests that are pushed and answers that are read (only one party can initiate requests).
I want to implement it in a way similar to QNetworkAccessManager: when a request is sent, QNAM returns a pointer to a reply; once the request is served, a signal is emitted and the result can be read from the "reply" object.
I want to implement it without multithreading.
The major problem is this:
If the socket is not connected, I have 3 options:
1) return an error (returning a null pointer instead of a reply object is like returning an error)
2) emit "finish" from inside "sendRequest" (this is the most evil approach)
3) return a "reply" from "sendRequest" and later emit a signal that the request failed (most wanted)
I really like the 3rd option, but the only way I see right now is to use a timer with a 1 ms one-shot call, which basically looks like the wrong way to implement such a thing.
How can I make delayed execution of a slot (passing some parameter, like a cookie, along with the request)?
It would also be good if there were a way to send the request itself in a delayed fashion (push the request onto a queue, return from the call with the "reply" object, and only afterwards send the actual request over the network).
All of this looks like ordinary event handling, but I am not sure how best to deal with it.
What is the best practice to implement such protocol?
Any advice?
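For option 3, the usual Qt trick is to create and return the reply object, then defer the failure signal to the next event-loop iteration, for example with a zero-timeout QTimer::singleShot (or QMetaObject::invokeMethod with Qt::QueuedConnection). That way the caller already holds the reply pointer and can connect to its signals before they fire. A rough sketch, with made-up Manager/Request/Reply types and a made-up failWithError() helper that stores the error and emits the reply's finished signal:
Reply* Manager::sendRequest(const Request& req)
{
    Reply* reply = new Reply(this);                       // handed back immediately
    if (m_socket->state() != QAbstractSocket::ConnectedState) {
        // Defer the failure to the next event-loop iteration.
        QTimer::singleShot(0, reply, [reply]() { reply->failWithError("not connected"); });
        return reply;
    }
    m_pending.enqueue(qMakePair(req, reply));             // actual send happens later
    QTimer::singleShot(0, this, &Manager::processQueue);  // delayed send, same idea
    return reply;
}
The same zero-timeout trick covers the delayed sending you mention: enqueue the request, return the reply, and let the queued slot push the bytes onto the socket on the next event-loop pass.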

How to send status information from a web service while it is being executed?

I'm new to web development so I'm not sure what's the best option for the problem that I'm having.
Basically I have a web application that calls a web service for processing some data.
This process may take a long time (hours), and I would like to know if there is an easy way to send some status information to the client from time to time.
Right now, the client makes the request from the browser and it just waits there until it finishes.
How can I send some information from the web service? I would like to send a percentage and some additional text specifying what is being done.
Thanks
WCF service operations can be marked one-way (IsOneWay = true on the operation contract) so that they don't return a response.
Or you could have the service kick off the process asynchronously and just return to the client whether the process has or hasn't started.
Then the client can poll another method, as the other user has suggested.
If your process takes hours, you definitely can't use a synchronous service, because you'll hit the execution timeout, or rather the connection timeout on the client.
Maybe you can poll another method for status?
If I were you, I would make the original request asynchronous: instead of waiting for the response, it just starts the task and returns immediately. Then I would have a separate method on your web service that the app can poll periodically to get the status of the job. Once it completes, it can display the data like the original request was doing.
If you want to do it synchronously, you can turn off Response.Buffer and write directly to the response.
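The start-then-poll shape itself isn't tied to WCF; as a rough, language-agnostic sketch of the idea (C++ here, with made-up names and the shared state deliberately kept trivial):
#include <atomic>
#include <thread>

std::atomic<int> g_percentDone{0};      // progress the client polls for

// "Start" endpoint: kick off the long job in the background and return at once.
void StartJob() {
    std::thread([] {
        for (int step = 1; step <= 100; ++step) {
            // ... do one unit of the hours-long work ...
            g_percentDone = step;
        }
    }).detach();
}

// "Status" endpoint: the client calls this periodically to update its UI.
int GetStatus() { return g_percentDone; }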
