Async server does not process requests while a request is stuck - grpc

I am new to gRPC, so please let me know if I am doing something wrong here. I am looking at the greeter_async_server.cc example code. It seems to work fine for normal requests, but I wanted to simulate a request getting stuck on the server, so I added a sleep in the processing logic, right before Finish is called on the responder. While the server thread is sleeping, the server will not accept any new requests until the thread is free. I attempted to create another client request while the original request on the server was sleeping, but the gRPC server would not process it; the client seemed to be stuck until the server came out of the sleep.
I also broke into the process with a debugger, but the only request I saw was the one that was sleeping. The other threads were waiting on the completion queue.
If I am doing this wrong, please let me know what I need to do to handle requests while another request is stuck.
void Proceed() {
  if (status_ == CREATE) {
    // Make this instance progress to the PROCESS state.
    status_ = PROCESS;

    // As part of the initial CREATE state, we *request* that the system
    // start processing SayHello requests. In this request, "this" acts as
    // the tag uniquely identifying the request (so that different CallData
    // instances can serve different requests concurrently), in this case
    // the memory address of this CallData instance.
    service_->RequestSayHello(&ctx_, &request_, &responder_, cq_, cq_,
                              this);
  } else if (status_ == PROCESS) {
    // Spawn a new CallData instance to serve new clients while we process
    // the one for this CallData. The instance will deallocate itself as
    // part of its FINISH state.
    new CallData(service_, cq_);

    // The actual processing.
    std::string prefix("Hello ");
    reply_.set_message(prefix + request_.name());

    // Simulate a request that gets stuck in its processing logic.
    Sleep((DWORD)-1);

    // And we are done! Let the gRPC runtime know we've finished, using the
    // memory address of this instance as the uniquely identifying tag for
    // the event.
    status_ = FINISH;
    responder_.Finish(reply_, Status::OK, this);
  } else {
    GPR_ASSERT(status_ == FINISH);
    // Once in the FINISH state, deallocate ourselves (CallData).
    delete this;
  }
}
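For context: in greeter_async_server.cc a single thread drives the completion queue (the HandleRpcs() loop calling cq_->Next()), and Proceed() runs on that same thread, so sleeping inside the PROCESS branch blocks the only thread that can dispatch new CallData tags. Two common ways around this are to run the completion-queue loop on several threads, or to hand the blocking work to another thread and only call Finish() once it is done. Below is a minimal, hypothetical sketch of the second option; it assumes the rest of the CallData class from the example is unchanged, uses std::thread purely for illustration (it needs #include <thread>), and leaves out error handling.

} else if (status_ == PROCESS) {
  // Spawn a new CallData instance so new requests can be accepted while
  // this one is being processed.
  new CallData(service_, cq_);

  // Move the slow, "stuck" work off the completion-queue thread.
  std::thread([this] {
    std::string prefix("Hello ");
    reply_.set_message(prefix + request_.name());
    Sleep((DWORD)-1);  // now only this worker thread is blocked

    // Finish() may be called from another thread; its completion event
    // is delivered back through the completion queue as usual.
    status_ = FINISH;
    responder_.Finish(reply_, Status::OK, this);
  }).detach();
}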

Related

Vue front end to Go server (HTTP) and clients connected to Go server (TCP) error

I'm currently creating a Go TCP server that handles file sharing between multiple Go clients, and that works fine. However, I'm also building a front end using Vue.js that shows some server stats like the number of users, bytes sent, etc.
The problem occurs when I include the 'http.ListenAndServe(":3000", nil)' call that handles the requests from the front end of the server. Is it impossible to have a TCP and an HTTP server in the same Go file?
If so, how can I link the three (front end, Go server, clients)?
Here is the code of 'server.go':
func main() {
    // Create TCP server
    serverConnection, error := net.Listen("tcp", ":8085")
    // Check if an error occurred
    // Note: because 'go' forces you to use each variable you declare, error
    // checking is not optional, and maybe that's good
    if error != nil {
        fmt.Println(error)
        return
    }
    // Create server Hub
    serverHb := newServerHub()
    // Close the server just before the program ends
    defer serverConnection.Close()
    // Handle Front End requests
    http.HandleFunc("/api/thumbnail", requestHandler)
    fs := http.FileServer(http.Dir("../../tcp-server-frontend/dist"))
    http.Handle("/", fs)
    fmt.Println("Server listening on port 3000")
    http.ListenAndServe(":3000", nil)
    // Each client sends data, that data is received in the server by a client struct
    // the client struct then sends the data, which is a request to a 'go' channel, which is similar to a queue
    // Somehow this for loop runs only when a new connection is detected
    for {
        // Accept a new connection if a request is made
        // serverConnection.Accept() blocks the for loop
        // until a connection is accepted, then it blocks the for loop again!
        connection, connectionError := serverConnection.Accept()
        // Check if an error occurred
        if connectionError != nil {
            fmt.Println("1: Woah, there's a mistake here :/")
            fmt.Println(connectionError)
            fmt.Println("1: Woah, there's a mistake here :/")
            // return
        }
        // Create new user
        var client *Client = newClient(connection, "Unregistered_User", serverHb)
        fmt.Println(client)
        // Add client to serverHub
        serverHb.addClient(client)
        serverHb.listClients()
        // go client.receiveFile()
        go client.handleClientRequest()
    }
}

How to Perform Concurrent Request-Reply for Asynchronous Tasks with ZeroMQ?

Intention
I want to allow a client to send a task to some server at a fixed address.
The server may take that task and perform it at some arbitrary point in the future, but may still take requests from other clients before then.
After performing the task, the server will reply to the client, which may have been running a blocking wait on the reply.
The work and clients come dynamically, so there can't be a fixed initial number.
The work is done in a non-thread-safe context, so workers can't live on different threads; all work should take place on a single thread.
Implementation
The following example [1] is not a complete implementation of the server, only a compilable section of the sequence that should be able to take place (but is in reality hanging).
Two clients send an integer each, and the server takes one request, then the next request, echo replies to the first request, then echo replies to the second request.
The intention isn't to get the responses ordered, only to allow the server to hold multiple requests simultaneously.
What actually happens here is that the second worker hangs waiting on the request - this is what confuses me, as DEALER sockets should route outgoing messages in a round-robin strategy.
#include <unistd.h>
#include <stdio.h>
#include <zmq.h>
#include <sys/wait.h>

int client(int num)
{
    void *context, *client;
    int buf[1];

    context = zmq_ctx_new();
    client = zmq_socket(context, ZMQ_REQ);
    zmq_connect(client, "tcp://localhost:5559");

    *buf = num;
    zmq_send(client, buf, 1, 0);
    *buf = 0;
    zmq_recv(client, buf, 1, 0);
    printf("client %d receiving: %d\n", num, *buf);

    zmq_close(client);
    zmq_ctx_destroy(context);
    return 0;
}

void multipart_proxy(void *from, void *to)
{
    zmq_msg_t message;
    while (1) {
        zmq_msg_init(&message);
        zmq_msg_recv(&message, from, 0);
        int more = zmq_msg_more(&message);
        zmq_msg_send(&message, to, more ? ZMQ_SNDMORE : 0);
        zmq_msg_close(&message);
        if (!more) break;
    }
}

int main(void)
{
    int status;

    if (fork() == 0) {
        client(1);
        return(0);
    }
    if (fork() == 0) {
        client(2);
        return 0;
    }

    /* SERVER */
    void *context, *frontend, *backend, *worker1, *worker2;
    int wbuf1[1], wbuf2[1];

    context = zmq_ctx_new();
    frontend = zmq_socket(context, ZMQ_ROUTER);
    backend = zmq_socket(context, ZMQ_DEALER);
    zmq_bind(frontend, "tcp://*:5559");
    zmq_bind(backend, "inproc://workers");

    worker1 = zmq_socket(context, ZMQ_REP);
    zmq_connect(worker1, "inproc://workers");
    multipart_proxy(frontend, backend);
    *wbuf1 = 0;
    zmq_recv(worker1, wbuf1, 1, 0);
    printf("worker1 receiving: %d\n", *wbuf1);

    worker2 = zmq_socket(context, ZMQ_REP);
    zmq_connect(worker2, "inproc://workers");
    multipart_proxy(frontend, backend);
    *wbuf2 = 0;
    zmq_recv(worker2, wbuf2, 1, 0);
    printf("worker2 receiving: %d\n", *wbuf2);

    zmq_send(worker1, wbuf1, 1, 0);
    multipart_proxy(backend, frontend);
    zmq_send(worker2, wbuf2, 1, 0);
    multipart_proxy(backend, frontend);

    wait(&status);

    zmq_close(frontend);
    zmq_close(backend);
    zmq_close(worker1);
    zmq_close(worker2);
    zmq_ctx_destroy(context);
    return 0;
}
Other Options
I have looked at CLIENT and SERVER sockets, and they appear capable on paper; however, in practice they're new enough that the system version of ZeroMQ I have doesn't yet support them.
If it is not possible to perform this in ZeroMQ, any alternative suggestions are very welcome.
[1] Based on the Shared Queue section of the ZeroMQ guide.
Let me share a view on how ZeroMQ could meet the above-defined Intention.
Let's rather use the ZeroMQ Scalable Formal Communication Pattern archetypes as they are available right now, not as we might wish them to behave at some uncertain point in a possible future evolution of the library.
We need not hesitate to use several ZeroMQ-based connections between the server and a herd of dynamically arriving/leaving client instances.
For example (a minimal server-side sketch in code follows these lists):
- Client .connect()-s a REQ-socket to Server-address:port-A to ask for a "job"-ticket to be processed over this connection
- Client .connect()-s a SUB-socket to Server-address:port-B to listen (if present) for published announcements of already completed "job"-tickets whose results the Server is ready to deliver
- Client exposes another REQ-socket: upon hearing a "job"-ticket completion announcement over the SUB-socket, it requests the results on this socket, proving its right to receive them by providing a proper/matching job-ticket-AUTH-key, and uses this same socket to deliver a POSACK-message to the Server once it has the "job"-ticket results correctly "in hand"
- Server exposes a REP-socket to respond to each client ad hoc upon a "job"-ticket request, notifying it that the job-ticket was "accepted" and delivering a job-ticket-AUTH-key for later pickup of the results
- Server exposes a PUB-socket to announce any and all not-yet-picked-up "finished" job-tickets
- Server exposes another REP-socket to receive any attempt to request delivery of "job"-ticket results. Upon verifying the delivered job-ticket-AUTH-key, the Server decides whether the respective REQ-message had a matching job-ticket-AUTH-key and should indeed receive a proper message with the results, or whether no match happened, in which case the reply carries some other payload data (the exact logic is left for further thought, so as to prevent brute-forcing, eavesdropping and similar, less primitive attacks aimed at stealing results)
- Clients need not stay waiting for results live/online and can survive certain amounts of LoS, L2/L3 errors or network-storm stresses
- Clients just need to keep some kind of job-ticket-ID and job-ticket-AUTH-key for later retrieval of the Server-processed/maintained/auth-ed results
- Server will keep listening for new jobs
- Server will accept new job-tickets, providing a privately added job-ticket-AUTH-key
- Server will process job-tickets whenever it decides to do so
- Server will maintain a circular-buffer of completed job-tickets to be announced
- Server will announce, in due time and repeatedly as decided, the job-tickets that are ready for client-initiated retrieval
- Server will accept new retrieval requests
- Server will verify client requests against any announced job-ticket-ID and test whether the job-ticket-AUTH-key matches as well
- Server will respond to both matching and non-matching job-ticket-ID retrieval request(s)
- Server will remove a job-ticket-ID from the circular-buffer only upon both a POSACK-ed AUTH-match before retrieval and a POSACK-message re-confirming delivery to the client
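A minimal, single-threaded sketch of the server side of such a layout could look roughly like the following. It is only an outline under the assumptions above: the port numbers and payload strings are made up, enqueue_job() / lookup_result() / process_one_pending_job() are hypothetical placeholders for the ticket bookkeeping and the actual work, and AUTH-key generation, the PUB announcements and error handling are all left out.

#include <zmq.h>
#include <stdio.h>

int main(void)
{
    void *context = zmq_ctx_new();

    /* REP on port-A: clients ask here for a "job"-ticket to be accepted. */
    void *job_intake = zmq_socket(context, ZMQ_REP);
    zmq_bind(job_intake, "tcp://*:5560");

    /* PUB on port-B: announce completed job-tickets (only hinted at below). */
    void *announce = zmq_socket(context, ZMQ_PUB);
    zmq_bind(announce, "tcp://*:5561");

    /* Second REP: clients present a job-ticket-AUTH-key to pick up results. */
    void *pickup = zmq_socket(context, ZMQ_REP);
    zmq_bind(pickup, "tcp://*:5562");

    zmq_pollitem_t items[] = {
        { job_intake, 0, ZMQ_POLLIN, 0 },
        { pickup,     0, ZMQ_POLLIN, 0 },
    };

    while (1) {
        zmq_poll(items, 2, 100);               /* wait up to 100 ms */

        if (items[0].revents & ZMQ_POLLIN) {   /* a new "job"-ticket request */
            char task[256];
            zmq_recv(job_intake, task, sizeof(task), 0);
            /* enqueue_job(task);  -- hypothetical: store the work, mint a
               job-ticket-ID and a job-ticket-AUTH-key for this client      */
            zmq_send(job_intake, "TICKET-ID+AUTH-KEY", 18, 0);
        }

        if (items[1].revents & ZMQ_POLLIN) {   /* a results-pickup attempt */
            char key[256];
            zmq_recv(pickup, key, sizeof(key), 0);
            /* lookup_result(key); -- hypothetical: verify the AUTH-key and
               fetch the finished result, or prepare a non-matching payload */
            zmq_send(pickup, "RESULT-OR-MISMATCH", 18, 0);
        }

        /* process_one_pending_job();  -- hypothetical: do one slice of work
           on this same thread, then publish the finished ticket-ID:
           zmq_send(announce, "TICKET-ID", 9, 0);                           */
    }

    /* never reached in this sketch */
    zmq_close(job_intake);
    zmq_close(announce);
    zmq_close(pickup);
    zmq_ctx_destroy(context);
    return 0;
}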

Sending TCP data without receiving (boost asio)

I'm working my way through Boost's Asio tutorial, looking into their chat example. More specifically, I'm trying to split their chat client from a sender+receiver into just a sender and just a receiver, but I'm seeing some behaviour that I can't explain.
The setup consists of:
boost::asio::io_service io_service;
tcp::resolver::iterator endpoint = resolver.resolve(...);
boost::thread t(boost::bind(&boost::asio::io_service::run, &io_service));
boost::asio::async_connect(socket, endpoint, bind(handle_connect, ... ));
The sending portion effectively consists of:
while (std::cin.getline(str))
    io_service.post( do_write, str );
and
void do_write (string str)
{
    boost::asio::async_write(socket, str, bind( handle_write, ... ));
}
The receive section consists of:
void handle_connect(...)
{
    boost::asio::async_read(socket, read_msg_, bind(handle_read, ...));
}

void handle_read(...)
{
    std::cout << read_msg_;
    boost::asio::async_read(socket, read_msg_, bind(handle_read, ...));
}
If I comment out the content of handle_connect to isolate the send portion, my other client (compiled using the original code) does not receive anything. If I revert, then comment out the content of handle_read, my other client only receives the first message.
Why is it necessary to call async_read() in order to be able to post() an async_write()?
The full unmodified code is linked above.
The problem here is that your io_service is running out of work and stops processing requests even before you start sending your chat messages.
If you comment out the body of handle_connect, then the only work it has to do is dispatch the handle_connect handler and then execute it once the connection is established.
std::size_t scheduler::run(asio::error_code& ec)
{
  .....
  mutex::scoped_lock lock(mutex_);

  std::size_t n = 0;
  for (; do_run_one(lock, this_thread, ec); lock.lock())
    if (n != (std::numeric_limits<std::size_t>::max)())
      ++n;
  return n;
}
So, you have to provide it with something in its operation queue. In the original code this was done with the handle_read_header handler, as that handler would always be in need of servicing until the client got something from the server.
You can do what you want to do by providing work to the io_service.
asio::io_context io_context;
asio::io_context::work wrk(io_context); // make `run` run forever
tcp::resolver resolver(io_context);
tcp::resolver::results_type endpoints = resolver.resolve(argv[1], argv[2]);
chat_client c(io_context, endpoints);
asio::thread t(boost::bind(&asio::io_context::run, &io_context));
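As a side note, newer Asio / Boost.Asio releases deprecate io_context::work; assuming a version that ships make_work_guard (Boost 1.66 or later), the same effect can be had with an executor work guard:

// keeps io_context::run() from returning while its queue is momentarily empty
auto work_guard = asio::make_work_guard(io_context);
// ... and later, to let run() exit once the remaining handlers finish:
// work_guard.reset();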

Wait for async thrift requests to complete

I am invoking multiple async Thrift calls from my code. I would like to wait for all of them to complete before going on with my next stage.
for (...) {
    TNonblockingTransport transport = new TNonblockingSocket(host, port);
    TAsyncClientManager clientManager = new TAsyncClientManager();
    TProtocolFactory protocolFactory = new TBinaryProtocol.Factory();
    AsyncClient c = new AsyncClient(protocolFactory, clientManager, transport);
    c.function(params, callback);
}
// I would like to wait for all the calls to be complete here.
I can have a countdown in the callback, like wait/notify, and get this done. But does the Thrift system offer a way for me to wait on my async function call, preferably with a timeout?
I didn't see anything in TAsyncClientManager or in AsyncClient. Please help.
Given that it was not possible to do this, I used the sync API client and managed the launch and wait using executors and launchAll. I am leaving this as my answer so people have an alternative.

When I close an HTTP server, why do I get a socket hang up?

I implemented a graceful stop to our node.js server. Basically something like this:
var shutDown = function () {
    server.on('close', function () {
        console.log('Server ' + process.pid + ' closed.');
        process.exit();
    });
    console.log('Shutting down ' + process.pid + '...');
    server.close();
}
However, when I close the server like this, I get an Error: socket hang up error in my continuous requests.
I thought that server.close() would make the server stop listening and accepting new requests, but keep processing all pending/open requests. However, that should result in an Error: connect ECONNREFUSED.
What am I doing wrong?
Additional info: The server consists of a master and three forked children/workers. However, the master is not listening or binding to a port, only the children are, and they are shut down as stated above.
Looking at the docs, it sounds like server.close() only stops new connections from coming in, so I'm pretty sure the error is with the already-open connections.
Maybe your shutDown() can check server.connections and wait until there are no more?
var shutDown = function(){
    if(server.connections) return setTimeout(shutDown, 1000);
    // Your shutDown code here
}
Slightly uglier (and much less scalable), but without the wait: you can keep track of connections as they occur and close them yourself
server.on('connection', function(e){
    // Keep track of e.connection in a list.
    // You'll want to remove e.connection
    // from the list if it closes on its own
});

var shutDown = function(){
    // Close all remaining connections in the list
    // Your shutDown code here
}
