MPI and request/reply

Does the MPI standard implement the request-reply communication pattern?
Reading about MPI I found that there are point-to-point routines like:
Synchronous send
Blocking send / blocking receive
Non-blocking send / non-blocking receive
Buffered send
Combined send/receive
"Ready" send
Maybe a developer can implement the request-reply communication pattern using these routines, but it seems that MPI does not directly implement it.
Edit: For clarification, request-reply (request-response) is a message exchange pattern in which a requestor sends a request message to a replier system, which receives and processes the request, ultimately returning a message in response. This is a simple but powerful messaging pattern that allows two applications to have a two-way conversation with one another over a channel. This pattern is especially common in client–server architectures. It may be synchronous or asynchronous.

This is not available as-is.
That being said, this is trivial to implement.
The requestor can MPI_Sendrecv() and the replier can MPI_Recv() the request and then MPI_Send() the answer.
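A minimal sketch of that exchange, assuming two ranks (rank 0 as the requestor, rank 1 as the replier); the tags and the integer payload are illustrative assumptions:

    /* Request-reply built from MPI point-to-point calls (sketch). */
    #include <mpi.h>
    #include <stdio.h>

    #define TAG_REQUEST 1
    #define TAG_REPLY   2

    int main(int argc, char **argv)
    {
        int rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {                  /* requestor */
            int request = 42, reply = 0;
            /* Send the request and wait for the reply in one call. */
            MPI_Sendrecv(&request, 1, MPI_INT, 1, TAG_REQUEST,
                         &reply,   1, MPI_INT, 1, TAG_REPLY,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("requestor got reply %d\n", reply);
        } else if (rank == 1) {           /* replier */
            int request, reply;
            MPI_Recv(&request, 1, MPI_INT, 0, TAG_REQUEST,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            reply = request + 1;          /* "process" the request */
            MPI_Send(&reply, 1, MPI_INT, 0, TAG_REPLY, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }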

Related

Difference between ZeroMQ asynchronous http requests and Messages?

How is using asynchronous HTTP Requests different from using Messages when it comes to sending data in ZeroMQ?
An HTTP request is simply a use of the Hypertext Transfer Protocol over IP between two machines, a client and a server. It can be used for moving data in either direction, and there are no particular restrictions on what that data can be. An asynchronous request is simply one where the requester doesn't wait for the reply after making the request; it uses some mechanism to rendezvous with the reply later, whenever that happens to come in.
Sending a message through ZeroMQ can be somewhat similar, specifically with the REQ/REP pattern (request, reply). As with an HTTP request, the requester sends some sort of message and the replier replies in some way, strictly in that alternating pattern.
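A minimal sketch of that REQ/REP exchange with the libzmq C API, keeping both peers in one process purely for brevity; the endpoint and payload strings are assumptions:

    #include <zmq.h>
    #include <stdio.h>

    int main(void)
    {
        void *ctx = zmq_ctx_new();

        /* Replier: binds a REP socket and answers each request in turn. */
        void *rep = zmq_socket(ctx, ZMQ_REP);
        zmq_bind(rep, "tcp://127.0.0.1:5555");

        /* Requestor: connects a REQ socket; send/recv strictly alternate. */
        void *req = zmq_socket(ctx, ZMQ_REQ);
        zmq_connect(req, "tcp://127.0.0.1:5555");

        char buf[16];
        zmq_send(req, "ping", 4, 0);               /* the request          */
        zmq_recv(rep, buf, sizeof buf, 0);         /* replier receives it  */
        zmq_send(rep, "pong", 4, 0);               /* ... and replies      */
        int n = zmq_recv(req, buf, sizeof buf, 0); /* requestor gets reply */
        printf("requestor got %.*s\n", n, buf);

        zmq_close(req);
        zmq_close(rep);
        zmq_ctx_destroy(ctx);
        return 0;
    }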
ZeroMQ uses its own protocol, ZMTP, to move messages around. Again, there's nothing really limiting what data is in a message. ZeroMQ is inherently asynchronous - it implements the Actor programming model (though I notice that some implementations in some languages have eroded ZeroMQ's simplicity in that respect, fitting into the language's own way of being asynchronous rather than using the poll function provided by ZeroMQ).
However, ZeroMQ builds many more data distribution patterns than req/rep on top of ZMTP, such as pub/sub and dealer/router, which HTTP simply has no equivalent of. Further differences are that ZeroMQ can use IP, interprocess comms, or in-memory transports; this makes it highly suited both for in-application use and for inter-machine distributed applications. I guess that a webserver could be contacted over ipc too, but I've never heard of anyone bothering to do that. HTTP is expected to be used over specific ports (e.g. port 80), whereas ZMQ gets used on whatever ports the developer wants (obeying the normal port allocation rules if they want a quiet life).

MPI asynchronous sending

I need asynchronous message sending. I read about non-blocking sending in MPI (e.g. MPI_Isend or MPI_Ibcast) and the functions to wait on or test such a send request (MPI_Wait*/MPI_Test*). However, I realize (through testing and web info) that a non-blocking send might never make progress on the receiver side unless the sender invokes MPI_Wait*/MPI_Test* after it sends. How can I get truly (and optimally) asynchronous sending?
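A sketch of the behaviour described above: the sender posts an MPI_Isend and then keeps polling MPI_Test while doing other work, since many implementations only progress the transfer from inside MPI calls. The ranks, tag, payload and the do_some_work() placeholder are assumptions:

    #include <mpi.h>
    #include <stdio.h>

    static void do_some_work(void) { /* application work would go here */ }

    int main(int argc, char **argv)
    {
        int rank;
        double payload = 3.14;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            MPI_Request req;
            int done = 0;
            MPI_Isend(&payload, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, &req);
            /* Poll MPI_Test to drive progress while overlapping computation. */
            while (!done) {
                do_some_work();
                MPI_Test(&req, &done, MPI_STATUS_IGNORE);
            }
        } else if (rank == 1) {
            MPI_Recv(&payload, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("received %f\n", payload);
        }

        MPI_Finalize();
        return 0;
    }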

Is this an intention of bidirectional streaming RPC in gRPC?

I am maxing out my CPU on my gRPC client doing unary RPC. I wonder if it makes any sense to try replacing unary RPC with a bidirectional streaming RPC (or set of them?) that basically lasts throughout the life of the application? I can't tell if bidirectional streaming RPC is intended for standard 1 request/1 response communication like this. The motivation would be to avoid creating new TCP connections.
I can't tell if bidirectional streaming RPC is intended for standard 1 request/1 response communication
If you are going to use a request-reply message pattern, just use unary request-reply (RPC). It is designed for that pattern, and the semantics for things like retries are well known.
The motivation would be to avoid creating new TCP connections.
gRPC uses HTTP/2, which multiplexes all requests over a single TCP connection, so all unary RPC requests already share the same TCP connection.
I am maxing out my CPU on my gRPC client doing unary RPC.
This sounds a bit rare. Can you change your communication pattern, e.g. stream multiple requests before waiting for a response? Alternatively, batch data and send bigger requests less often? Or can you elaborate more on your message pattern?
Like @Jonas suggested, using a bidirectional stream just for higher throughput would be a bad idea.
Google's gRPC team advises against it; nevertheless, one might argue that, theoretically, streams should have lower overhead. In practice, that does not seem to be true.
Probably because streams ensure that messages are delivered in the order they were sent, which creates a kind of bottleneck when there are concurrent messages.
For lower concurrency, both have comparable latencies. However, for higher loads, unary calls are much more performant.
A detailed analysis here: https://nshnt.medium.com/using-grpc-streams-for-unary-calls-cd64a1638c8a.

What is the difference between DEALER and ROUTER socket archetype in ZeroMQ?

What is the difference between the ROUTER and the DEALER socket archetypes in zmq?
And which should I use if I have a server that receives messages and a client that sends messages? The server will never send a message to a client.
EDIT: I forgot to say that there can be several instances of the client.
For details on the ROUTER/DEALER Formal Communication Pattern, do not hesitate to consult the API documentation. There are many features important for ROUTER/DEALER ( XREQ/XREP ) that offer nothing beneficial for your indicated use-case.
Many just send, just one just listens?
Given N-clients purely .send() messages to 1-server, which exclusively .recv() messages, but never sends any message back,
the design may benefit from a PUB/SUB Formal Communication Pattern.
In case some other preferences outweigh the trivial approach, one may set up a more complex "wiring" using another one-way type of infrastructure based on PUSH/PULL, plus a reverse PUB/SUB setup in which each new client (the PUB side) .connect()-s to the SUB side, given that a server-side .bind() access point sits on a known, static IP address. The client self-advertises on this signalling channel that it is alive (a keep-alive carrying its IP-address:port#), and the server side then initiates a new PUSH-to-PULL .connect() onto the client-advertised, .bind()-ready PULL-side access point.
Complex? Rather a limitless tool, only our imagination is our limit.
After some time, one realises all the powers of a multi-functional SIG/MSG-infrastructure, so do not hesitate to experiment and re-use the elementary archetypes in more complex, mutually cooperating distributed computing systems.
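A hedged sketch of the simple PUB/SUB suggestion above, using the libzmq C API, with the server's SUB side .bind()-ing and each client's PUB side .connect()-ing; the endpoint strings are assumptions:

    #include <zmq.h>
    #include <stdio.h>
    #include <string.h>

    /* Server: the one process that only receives. */
    void run_server(void *ctx)
    {
        void *sub = zmq_socket(ctx, ZMQ_SUB);
        zmq_setsockopt(sub, ZMQ_SUBSCRIBE, "", 0);   /* accept every message */
        zmq_bind(sub, "tcp://*:6000");

        char buf[256];
        for (;;) {
            int n = zmq_recv(sub, buf, sizeof buf - 1, 0);
            if (n < 0) break;
            if (n > (int)sizeof buf - 1) n = sizeof buf - 1;  /* truncated */
            buf[n] = '\0';
            printf("server got: %s\n", buf);
        }
        zmq_close(sub);
    }

    /* Client: one of the N processes that only send. */
    void run_client(void *ctx, const char *msg)
    {
        void *pub = zmq_socket(ctx, ZMQ_PUB);
        zmq_connect(pub, "tcp://server-host:6000");
        zmq_send(pub, msg, strlen(msg), 0);          /* fire-and-forget */
        zmq_close(pub);
    }

Note that a PUB socket silently drops messages until its connection to the subscriber is established (the slow-joiner behaviour), so a real client would connect once at startup and keep the socket open rather than connect, send and close per message.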

How to implement a bidirectional "mailbox service" over tcp?

The idea is to allow two peer processes to exchange messages (packets) over TCP as asynchronously as possible.
The way I'd like it to work is each process to have an outbox and an inbox. The send operation is just a push on the outbox. The receive operation is just a pop on the inbox. Underlying protocol would take care of the communication details.
Is there a way to implement such mechanism using a single TCP connection?
How would that be implemented using BSD sockets and modern OO Socket APIs (like Java or C# socket API)?
Yes, it can be done with a single TCP connection. For one obvious example (though a bit more elaborate than you really need), you could take a look at the NNTP protocol (RFC 3977). What you seem to want would be similar to retrieving and posting articles.
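A minimal sketch of the framing that could sit underneath such a mailbox, assuming an already-connected socket fd: TCP is full-duplex, so one thread can drain the outbox with send_message() while another fills the inbox with recv_message(). The 4-byte length prefix and the helper names are assumptions, not any standard API:

    #include <arpa/inet.h>
    #include <stdint.h>
    #include <unistd.h>

    /* Write exactly len bytes, looping over partial writes. */
    static int write_all(int fd, const void *p, size_t len)
    {
        const char *b = p;
        while (len > 0) {
            ssize_t n = write(fd, b, len);
            if (n <= 0) return -1;
            b += n; len -= (size_t)n;
        }
        return 0;
    }

    /* Read exactly len bytes, looping over partial reads. */
    static int read_all(int fd, void *p, size_t len)
    {
        char *b = p;
        while (len > 0) {
            ssize_t n = read(fd, b, len);
            if (n <= 0) return -1;
            b += n; len -= (size_t)n;
        }
        return 0;
    }

    /* Outbox side: frame one message with a length prefix and send it. */
    int send_message(int fd, const void *msg, uint32_t len)
    {
        uint32_t hdr = htonl(len);
        if (write_all(fd, &hdr, sizeof hdr)) return -1;
        return write_all(fd, msg, len);
    }

    /* Inbox side: receive one framed message into buf (at most cap bytes). */
    int recv_message(int fd, void *buf, uint32_t cap, uint32_t *len)
    {
        uint32_t hdr;
        if (read_all(fd, &hdr, sizeof hdr)) return -1;
        *len = ntohl(hdr);
        if (*len > cap) return -1;   /* message too large for the buffer */
        return read_all(fd, buf, *len);
    }

In Java or C#, the same idea maps onto a single Socket with one thread (or async task) writing and another reading, each backed by a concurrent queue that plays the role of the outbox and inbox.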

Resources