Network protocol implementation like QNAM with delayed processing of requests - qt

I need to implement a network protocol that works over TCP and basically behaves as follows:
There are requests that are pushed and answers that are read (only one party can initiate a request).
I want to implement it the way QNetworkAccessManager does: when a request is sent, QNAM returns a pointer to a reply; once the request is served, a signal is emitted and the result can be read from the "reply" object.
I want to implement it without multithreading.
The major problem is this:
If the socket is not connected, I have 3 options:
1) return an error (returning a null pointer instead of a reply object amounts to returning an error)
2) emit "finished" from inside "sendRequest" (the most evil approach)
3) return a "reply" from "sendRequest" and emit a signal later that the request failed (most wanted)
I really like the 3rd option, but the only way I see now is a one-shot timer with a 1 ms timeout, which looks like the wrong way to implement such a thing.
How can I make delayed execution of a slot (passing some parameter, like a cookie, to the request)?
It would also be good if there were a way to send the request itself in a delayed fashion (push the request to a queue, return from the call with a "reply" object, and only then send the actual request over the network).
All this looks like event handling, but I am not sure how best to approach it.
What is the best practice to implement such protocol?
Any advice?
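For reference, the idiomatic Qt way to get "delayed execution" is a zero-timeout single-shot timer or a queued invocation: the slot fires on the next event-loop iteration, after sendRequest() has already returned. A minimal sketch of option 3, where Client, Reply, and Reply::finished are hypothetical names, not an established API:

#include <QObject>
#include <QTcpSocket>
#include <QTimer>

class Reply : public QObject {
    Q_OBJECT
public:
    explicit Reply(QObject *parent = nullptr) : QObject(parent) {}
    void fail() { emit finished(false); }  // lets the client report a deferred error
signals:
    void finished(bool ok);  // emitted once the request is served or has failed
};

class Client : public QObject {
    Q_OBJECT
public:
    Reply *sendRequest(const QByteArray &payload) {
        Reply *reply = new Reply(this);  // always hand back a reply object
        if (socket_.state() != QAbstractSocket::ConnectedState) {
            // Zero-timeout single shot: runs on the next event-loop
            // iteration, i.e. after sendRequest() has returned.
            QTimer::singleShot(0, reply, &Reply::fail);
            return reply;
        }
        socket_.write(payload);  // or enqueue here and flush from the event loop
        return reply;
    }
private:
    QTcpSocket socket_;
};

QMetaObject::invokeMethod with Qt::QueuedConnection achieves the same deferral (the target method must be a slot or Q_INVOKABLE), and the same trick covers delayed sending: push the request onto a queue, return the reply, and let the event loop perform the actual write.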

Related

Grpc C++: How to wait until a unary request has been sent?

I'm writing a wrapper around gRPC unary calls, but I'm having an issue: let's say I have a ClientAsyncResponseReader object which is created and starts a request like so
response_reader_ = std::unique_ptr<grpc::ClientAsyncResponseReader<ResponseType>>(
    grpc::internal::ClientAsyncResponseReaderFactory<ResponseType>::Create(
        channel.get(), completion_queue, rpc_method, &client_context_, request, true
    )
);
response_reader_->Finish(
    response_sharedptr_.get(), status_sharedptr_.get(), static_cast<void*>(some_tag)
);
// Set a breakpoint here
where all of the arguments are valid.
I was under the impression that when the Finish call returned, the request object was guaranteed to have been sent out over the wire. However by setting a breakpoint after that Finish() call (in the client program, to be clear) and inspecting my server's logs, I've discovered that the server does not log the request until after I resume from the breakpoint.
This would seem to indicate that there's something else I need to wait on in order to ensure that the request is really sent out: and moreover, that the thread executing the code above still has some sort of role in sending out the request which appears post-breakpoint.
Of course, perhaps my assumptions are wrong and the server isn't logging the request as soon as it comes in. If not though, then clearly I don't understand gRPC's semantics as well as I should, so I was hoping for some more experienced insight.
You can see the code for my unary call abstraction here. It should be sufficient, but if anything else is required I'm happy to provide it.
EDIT: The plot thickens. After setting a breakpoint in the server's handler for incoming requests, it looks like the call to Finish generally does "ensure" that the request has been sent out, except for the first request sent by the process. I guess there is some state maintained either in grpc::channel or maybe even in grpc::completion_queue that is delaying the initial request.
From the documentation:
response_reader_ = std::unique_ptr<grpc::ClientAsyncResponseReader<ResponseType>>(
    grpc::internal::ClientAsyncResponseReaderFactory<ResponseType>::Create(
        channel.get(), completion_queue, rpc_method, &client_context_, request, true
    )
);
This will start the call and write the request out (start=true). This function does not have a tag parameter, so there is no way for the completion queue to notify you when the call start has finished. Issuing an RPC is a bit involved: it basically means building the network packet and putting it on the wire. It can fail if there is a transient transport failure, if the channel is completely gone, or if the user did something wrong. The other reason we need tag notification is that the completion queue is a real contention point: every RPC object talks to it, and it can happen that the completion queue is not free and the request is still pending.
response_reader_->Finish(
    response_sharedptr_.get(), status_sharedptr_.get(), static_cast<void*>(some_tag)
);
This requests that the RPC runtime receive the server's response. When the server's response arrives, the completion queue notifies the client. At this point we assume there is no error on the client side, everything is okay, and the request is already in flight, so the status of the Finish call will never be false for a unary RPC.
This would seem to indicate that there's something else I need to wait on in order to ensure that the request is really sent out: and moreover, that the thread executing the code above still has some sort of role in sending out the request which appears post-breakpoint.
Perhaps you want to reuse the request object (I did some experiments on that). For my part, I keep the request object in memory until the response arrives; there is no guarantee that the request object won't still be needed after the Create call.
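For illustration, a minimal sketch of the client-side loop that drains the completion queue, reusing completion_queue and some_tag from the snippets above. Next() only hands back the tag once the whole unary exchange tied to it has completed:

void* tag = nullptr;
bool ok = false;
// Block until an event is available; ok reports whether it completed successfully.
while (completion_queue->Next(&tag, &ok)) {
    if (tag == some_tag && ok) {
        // The request went out and the response/status have been filled in.
        break;
    }
}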

Write to a list of websockets in tornado from another url handler

I am creating a web application which has a websocket handler. On each successful connection I append the websocket handler object to a list.
Another handler class called PostResultHandler accepts POST data. This PostResultHandler will be called by a background process that sends JSON data. Once this JSON data is received by PostResultHandler, I want to write it to the list of websockets.
Currently I just iterate through the list of websockets and write the JSON data to each one. But I think this may be a blocking call, and the background process that calls the PostResultHandler would be blocked until the result is written to all websockets.
Is there any way to make this piece of code non-blocking so that the background process keeps running without any delay?
The Tornado examples folder includes a chat demo with websockets; it simply does:
for waiter in cls.waiters:
    try:
        waiter.write_message(chat)
    except:
        logging.error("Error sending message", exc_info=True)
This is not a blocking call. The message is buffered on the server immediately and your code continues executing.
The best option is to add a callback on the IOLoop to send the data for each websocket handler instance. You can do something like the following:
from functools import partial
tornado.ioloop.IOLoop.instance().add_callback(partial(websocket_handler_instance.write_message, msg))

Implementing a robust and efficient RPC system

I need to have a server which is able to call functions on the client. I have always used RPCs in various networking game APIs, but I have never implemented one myself.
How would I do it?
My naive approach would be:
connect the client to the server:
server:
fn update_position_client() {
    unique_id = 1;
    send.to_client(unique_id);
}
client:
while recv_messages {
    if id == 1
        update_position();
}
Is this how I would do it?
This works if you only have a few messages to send and the data is basically known in advance. To be more robust, you would want the ability to dynamically add and remove messages that can be called, and a way to look up the method to invoke when an RPC arrives (see the sketch after the next paragraph).
Assuming you want this to be completely transparent to the user, what typically happens is that when a message is sent, the RPC library waits until there's a response back. Assuming bi-directional capabilities, there is normally a single thread that listens for data. If an RPC message comes in, this thread figures out what to do with it, i.e. what method to call in your (local) address space and with what parameters to call it. When you send an RPC message out, the thread you sent it on is blocked (probably with a semaphore) until the return message comes back, at which point your local thread is unblocked and allowed to continue.
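To make the dynamic lookup described above concrete, here is a minimal sketch of such a dispatch table; Dispatcher, register_handler, and the message-id scheme are hypothetical names, not from any particular library:

#include <cstdint>
#include <functional>
#include <unordered_map>
#include <vector>

// Hypothetical dispatcher: maps a message id to a handler that
// decodes the payload and calls the right local function.
class Dispatcher {
public:
    using Handler = std::function<void(const std::vector<uint8_t>& payload)>;

    void register_handler(uint32_t id, Handler h) { handlers_[id] = std::move(h); }
    void unregister_handler(uint32_t id) { handlers_.erase(id); }

    // Called by the receive loop for every incoming message.
    void dispatch(uint32_t id, const std::vector<uint8_t>& payload) {
        auto it = handlers_.find(id);
        if (it != handlers_.end())
            it->second(payload);  // decode and invoke the local method
        // else: unknown message id -- log it or reply with an error
    }

private:
    std::unordered_map<uint32_t, Handler> handlers_;
};

The receive loop then reduces to reading (id, payload) pairs off the socket and calling dispatch(); the blocking request/response behavior described above can be layered on top by having the response handler release a semaphore that the sending thread waits on.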
A Linux-specific RPC library you could look at would be DBus.

Synchronous Jersey Rest service that initiates a background task?

This is the issue I have encountered; it is both design- and implementation-related:
I have a REST web service that accepts POST requests. Nothing special about it. It currently responds synchronously.
However, this web service is going to initiate a background process that may take some long time.
I do not want this service to respond 30 minutes later.
Instead, it should immediately return an ack response to the client, and nothing more (even after 30 minutes, there will be no more information to send).
How do I implement such behavior with Jersey?
I read the page https://jersey.java.net/nonav/documentation/2.0/async.html#d0e6914.
Though it was an interesting read, I did not find a way to send only an ACK-type response (something like an HTTP 200 code).
Maybe I am confusing asynchronous processing with the behavior I want to implement.
I just realized that I could create a new Thread within my @POST method to handle the background process and immediately return the ACK response.
But does this newly created thread live on after the response has been sent back to the client?
How would you implement this web service?
I hope you can help me clarify this point.
I think the Jersey 2 Asynchronous Server API you linked would still hold the client connection until the processing completes. The asynchronous processing is really internal to Jersey and does not affect the client experience.
If you want to return an ACK, you can use a regular Jersey method, delegate the work to another thread and then return immediately. I'd recommend HTTP 202 for this use case.
You may create a Thread to do so, just as in the Jersey 2 example, and it will outlive the Jersey resource method invocation:
@POST
public Response asyncPost(String data) {
    new Thread(...).start();
    return Response.status(Response.Status.ACCEPTED).build();
}
This being said, creating threads is generally not recommended within app servers.
If you're using EE7, I'd recommend you look at JSR-236 http://docs.oracle.com/javaee/7/api/javax/enterprise/concurrent/package-summary.html
If you're using EE6, you can consider sending a message to a queue to be processed by a Message-Driven Bean (MDB) in the background.

Tornado Web & Persistent Connections

How can I write an HTTP server in Tornado that supports persistent connections?
I mean one that is able to receive many requests and answer them without closing the connection.
How does this actually work asynchronously?
I just want to know how to write a handler that handles a persistent connection.
How would it actually work?
I have a handler like this:
class MainHandler(RequestHandler):
    count = 0

    @asynchronous
    def post(self):
        # Get the Content-Type header
        content_type = self.request.headers.get('Content-Type')
        if content_type not in ACCEPTED_CONTENT:
            raise HTTPError(403, 'Incorrect content type')
        text = self.request.body
        self.count += 1
        command = CommandObject(text, self.count, callback=self.async_callback(self.on_response))
        command.execute()

    def on_response(self, response):
        if response.error:
            raise HTTPError(500)
        body = response.body
        self.write(body)
        self.flush()
execute calls the callback when it finishes.
Is my assumption right that, with things set up this way, post will be called many times,
and that for one connection count will increase with each HTTP request from the client,
but that each connection will have its own count value?
I don't think that your assumption is correct. My understanding of the way the Tornado server works is that each request from the client will produce a new RequestHandler. The purpose of the @tornado.web.asynchronous decorator is to prevent the server from automatically closing the connection when your handler function (post, get, etc.) returns. But at the end of the day, I think there is just one response for each request.
I don't believe additional requests from the client will go to the same instance of the RequestHandler class. Instead, my understanding is that Tornado is set up to allow for the long-polling paradigm. Here is an example of the flow of communications:
1. The client makes a POST request to the Tornado server.
2. The Tornado server checks whether a response is ready; if not, you could add the RequestHandler to some kind of stack or queue (depending on your application architecture).
3. The server comes up with a response (maybe another user added a message to a queue that needs to be distributed to open connections, etc.), distributes it back to the RequestHandler, and then calls finish() to close the connection.
4. The client makes another POST request to repeat the process.
I think if you want to implement true persistent connections you'll want to look into tornado.websocket (http://www.tornadoweb.org/documentation/websocket.html). I haven't experimented with that module yet so I'm afraid I can't give any input there.
Best of luck!
The Tornado web framework actually does come with its own server implementation, which supports persistent connections, so there should be no need to write your own server. There is a section in the documentation on how to use it in production (behind nginx).
From the source for the tornado.web module, you can see that a new handler is always instantiated; I don't think there is any way to have handlers reused.
