Confusion about synchronous sockets in the asynchronous ZeroMQ

This may appear to be a silly question, but I am really confused about ZeroMQ's terminology regarding synchronous sockets like REQ and REP.
By my understanding, synchronous communication occurs when a client sends a message and then blocks until the response arrives. If ZeroMQ implemented synchronous communication, a single blocking .send() method would be enough for a synchronous socket.
I think ZeroMQ's "synchronous socket" terminology refers only to the inability to send more messages until the response to the last message arrives, while the "sender" can still continue its processing ( doing more stuff ) asynchronously.
Is this true?
In that case, is there any straightforward way to implement a synchronous communication using ZeroMQ?
EDIT: Synchronous communication makes sense when I want to invoke a method in a remote process (as in RPC). If I want to execute a series of commands in a remote process, and each command needs the result of the previous one to do its job, then asynchronous communication is not the best option.

To use ZMQ for implementing a synchronous framework, you can very nearly do it using just ZMQ: you can set the high-water mark to 1. Unfortunately that's not quite it; what you want is an outgoing queue length of 0. Even more unfortunately, setting the high-water mark to 0 is interpreted by ZMQ as infinity...
So the only option is to implement a synchronous transfer protocol on top of ZMQ. That's not very difficult to do. The conversation between the two ends will be something like "can I send?", "yes, you can send now", "ok, here it is", "ok, I have received it" (both ends return to caller) (or at least the programmatic version of that). This sets up what is called an execution rendezvous - both ends know that they both reached a certain point of execution.
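A minimal sketch of that four-leg rendezvous, layered on a plain REQ/REP pair over an inproc transport (the endpoint name and message strings here are illustrative assumptions, not part of any ZMQ API):

```python
import threading
import zmq

ctx = zmq.Context.instance()

# REP side is bound before the worker thread starts, so there is no connect race
rep = ctx.socket(zmq.REP)
rep.bind("inproc://rendezvous")

received = []

def responder():
    # leg 1: grant permission to send
    assert rep.recv() == b"can-i-send"
    rep.send(b"send-now")
    # leg 2: accept the payload and acknowledge it
    received.append(rep.recv())
    rep.send(b"got-it")

t = threading.Thread(target=responder)
t.start()

req = ctx.socket(zmq.REQ)
req.connect("inproc://rendezvous")
req.send(b"can-i-send")            # "can I send?"
assert req.recv() == b"send-now"   # "yes, you can send now"
req.send(b"the-actual-message")    # "ok, here it is"
assert req.recv() == b"got-it"     # "ok, I have received it"
t.join()                           # both ends have passed the rendezvous point
```

Once the final acknowledgement arrives, both sides know the transfer completed, which is exactly the execution rendezvous described above.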
Technically speaking what you're doing is taking ZeroMQ (Actor Model) and turning it into something more like Communicating Sequential Processes.
RPC
Having said all that, from your edit I think you might like to consider Cap'n Proto. This is a C++ serialisation technology with a neat RPC trick: if the return value of one RPC call is the input to another, you can chain them all together in advance (see here).

Let's start with a first step: forget everything you know about sockets.
ZeroMQ is more a concept of thinking about distributed systems ( multi-agent-like ) and of how to design software with the use of such a smart signalling / messaging framework.
This is the core aim of ZeroMQ: to let designers keep thinking in the application domain and leave all the low-level dirty work to be handled without much need for the designers to care about it.
If one has just recently started with ZeroMQ, one may enjoy a short read about a ZeroMQ global view first, before discussing details.
Having read and understood the concept of the ZeroMQ hierarchy, it is way simpler to go into details:
given that a local Context() instance is a data-pumping engine, and having the REQ/REP Scalable Formal Communications Archetype pattern in mind, the story is now actually a story about a network of distributed Finite-State-Automata.
A local process, operating just one side of the distributed REQ/REP communication archetype, has zero power to influence whether the remote process receives the message that the local process handed, in fair belief, to the ZeroMQ delivery services for the intended recipient(s). Even less can the local process influence the remote process' intent to respond at all, so welcome to the realms of distributed multi-agent games.
Both the REQ and the REP formal behaviours have to meet their { local | distributed-mode }-expected sort of behaviour -- REQ asks first, REP answers then -- so as to keep the contracted promise. The point is that this behaviour is distributed and split between a pair of nodes, plus there are cases when network incidents may throw the distributed FSA into an unsalvageable mutual deadlock ( one may find more posts on this here on zeromq quite often ).
So your local-side REQ code imperatively .send()-s, yet has no guarantee that the REP side will ever .recv() it ( no one has any kind of warranty that a remote node exists at all ); similarly, one has to set oneself ready to anticipate and handle all cases where a remote side will never respond, so many "new" challenges appear from the nature of a distributed multi-agent ecosystem.
There are smart ways to handle this new breed of distributed chaos and uncertainty, best using .poll() and the non-blocking forms of the .send() and .recv() methods, as these let user code remain capable of handling all expected and unexpected events in due time and fashion.
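The .poll() approach can be sketched in a few lines of pyzmq; the inproc endpoint name and the 100 ms timeout below are illustrative choices, not prescribed values:

```python
import zmq

ctx = zmq.Context.instance()
rep = ctx.socket(zmq.REP)
rep.bind("inproc://poll-demo")
req = ctx.socket(zmq.REQ)
req.connect("inproc://poll-demo")

req.send(b"ping")

poller = zmq.Poller()
poller.register(rep, zmq.POLLIN)

# wait at most 100 ms instead of blocking forever on .recv()
events = dict(poller.poll(timeout=100))
if rep in events:
    request = rep.recv(zmq.NOBLOCK)   # guaranteed not to block here
    rep.send(b"pong")
else:
    request = None                    # timed out: the peer is silent or absent

reply = req.recv()
```

The else-branch is the important part: it is where user code stays in control and can retry, report, or re-route instead of hanging on a peer that may never answer.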
One may also operate rather many co-existent ZeroMQ connections, so as to prioritise and specialise each form of the multi-agent interactions in a distributed system design, even designing in fault-resilience and similar high-level robustness concepts. Here the asynchronous nature of each interaction avoids any need for coordination or synchronisation with a remote ( possibly not even yet present ) agent, which is principally an autonomous entity with its own domain of control, and thus principally asynchronous to whatever the local-side agent might "expect", with no "influence" available in any form other than an attempt to send "there" a message "telegram".
So yes, ZeroMQ is an asynchronous, brokerless signalling / messaging framework.
For ( almost ) synchronous communications, one may take steps and measures to trim down the ( principally distributed ) asynchronous control loops -- best update your post with an MCVE example and details about the particular goals you want to achieve.

Related

Asynchronous GRPC?

I am working on designing a new system that will take an array of hashes of car data and then use this data to call a separate API that returns a Boolean, after which I will return the car model and either true or false to the original caller.
The system needs to be callable from other applications, so I am looking into GRPC to solve the problem. My question revolves around how to implement this solution in GRPC and whether something like RabbitMQ would be better.
Would it make sense to make a bidirectional streaming GRPC solution where the client streams in the list of cars, and then on the server's end I spawn off, say, a delayed job for each request? And then when each delayed job finishes processing, I return that value to the original caller in a stream?
Is this an elegant solution or are there better ways to achieve my goal? Thanks.
The streaming system of gRPC is designed for asynchronous communication, so it should fit your use case neatly.
The general design philosophy in this case is to consider each individual message sent in the stream as independent. Basically, make sure your proto message contains all the information it needs to be parsed and processed by your application without needing any context from previous calls.
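The "each message is independent" principle can be shown with a plain-Python sketch of a bidirectional-stream handler (the dict fields `model` and `mileage` and the approval rule are hypothetical stand-ins for whatever your proto message actually carries):

```python
def handle_stream(requests):
    """Bidi-stream handler sketch: every request is processed on its own,
    using only the fields it carries itself -- no state from earlier calls."""
    for req in requests:
        # hypothetical per-message payload and decision rule
        yield {"model": req["model"], "approved": req["mileage"] < 100_000}

replies = list(handle_stream([
    {"model": "Corolla", "mileage": 42_000},
    {"model": "Civic",   "mileage": 180_000},
]))
```

Because no reply depends on a previous request, the server is free to process stream elements concurrently (e.g. one delayed job each) and emit results as they finish.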

bind zmq dealer to multiple repliers

I want to create a proxy server which routes incoming packets from REQ-type sockets to one of the REP sockets on one of the computers in a cluster. I have been reading the guide, and I think the proper structure is a combination of ROUTER and DEALER on the proxy server, where the ROUTER passes messages to the DEALER to be distributed. However, I cannot figure out how to create this connection scheme. Is this the correct architecture? If so, how do I bind a dealer to multiple addresses? The flow I envision is like this: REQ->ROUTER|DEALER->[REP, REP, ...], where only one REP socket would handle a single request.
NB: forget about packets -- think in terms of "Behaviour", that's the key
ZeroMQ is rather an abstract layer for certain communication-behavioural patterns, so while terms like socket sound similar to what one has read/used previously, the ZeroMQ world is by far different from many points of view.
This very formalism allows ZeroMQ Formal-Communication-Patterns to grow in scale, to get assembled into higher-order patterns ( for load-balancing, for fault-tolerance, for performance-scaling ). Mastering this style of thinking, you forget about packets, thread-sync issues, I/O polling, and focus on your higher-abstraction-based design -- on Behaviour -- rather than on underlying details. This makes your design both free from re-inventing the wheel and very powerful, as you re-use highly professional tools right for your problem-domain tasks.
DEALER->[REP,REP,...] Segment
That said, your DEALER node ( in fact a ZMQ socket-access-node, having the Behaviour called a "DEALER" to describe its queue/buffering style, its round-robin dispatcher, its send-out-&-expect-answer-in model ) may .bind() to multiple localhost address:port-s, and these "service points" may also operate over different TransportClass-es -- one working over tcp://, another over inproc://, if that makes sense for your design architecture -- ZeroMQ empowers you to use this transparently, abstracted away from all the "awful & dangerous" lower-level nitty-gritties.
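Binding one DEALER to several endpoints is just repeated .bind() calls on the same socket; a minimal pyzmq sketch (endpoint names are illustrative, and the wildcard tcp port merely avoids collisions in a demo):

```python
import zmq

ctx = zmq.Context.instance()

# one DEALER socket bound to several "service points" over different transports
dealer = ctx.socket(zmq.DEALER)
dealer.bind("tcp://127.0.0.1:*")   # OS-assigned port
dealer.bind("inproc://backend")

# a REP worker connects to one of those endpoints
rep = ctx.socket(zmq.REP)
rep.connect("inproc://backend")

# DEALER adds no envelope automatically, so prepend the empty
# delimiter frame that REP expects
dealer.send_multipart([b"", b"job-1"])
job = rep.recv()
rep.send(b"done")
reply = dealer.recv_multipart()    # comes back as [b"", b"done"]
```

With several REP peers connected, the DEALER round-robins outgoing messages among them, which is exactly the DEALER->[REP, REP, ...] segment in the question.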
ZeroMQ also allows you to reverse .connect() / .bind()
In principle, where helpful, one may reverse the .bind() and .connect() from the DEALER to a known target address of the respective REP entity.
You leave a couple details out that are important to determining the correct architecture.
When you say "from REQ type sockets to one of the REP sockets on one of the computers in a cluster", how do you determine which computer gets the message? Is it addressed to a specific computer? Does a computer announce its availability before it can receive a message? Does each message just get passed to the next one in line in a round-robin fashion? (if it's not the last one, you probably don't want a DEALER socket)
When you say "how do I bind a dealer to multiple addresses", it's not clear what you mean by "addresses"... Do you mean to say that the proxy has a unique IP address that it uses to communicate with each computer in the cluster? Or are you just wondering how to manage the connection to multiple different peers with the same socket? The former is a special case, the latter is simple.
I'm going to work with the following assumptions:
You want a worker computer from the cluster to announce its availability for work before it receives any work, and any computer in the cluster can handle any job. A faster worker, or a worker working on a smaller job, will not have to wait behind some slow worker to finish their job and get a new job first.
The proxy/broker uses a single ip interface to communicate with all workers.
If those are true, then what you want will be closer to this:
REQ->ROUTER|ROUTER->[REQ, REQ, ...]
A worker will create a request to the backend router socket to announce its availability, and await a reply with work. Once it is finished, it will create a new request with the finished work, which again announces its availability. The other half of the pattern you've already worked out.
This is the Simple Pirate Pattern from the ZMQ guide. It's a good place to start, but it's not very robust. This is in the Reliable Request-Reply Patterns section of the guide, and I suggest you read or reread that section carefully as it will guide you well. In particular, they keep refining this pattern into more and more reliable implementations and wind up with the Majordomo pattern, which is very robust and fault tolerant. You should see if you need all the features that provides or if you can scale it back a little. Either way, you should learn and understand what these patterns are doing and why before you make the choice to do something different.
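The worker half of the Simple Pirate pattern fits in a few lines: the worker's REQ first sends a READY announcement, the broker's "reply" is actually the work item, and each finished result doubles as the next availability announcement. A runnable single-process sketch (endpoint name and job payload are illustrative):

```python
import threading
import zmq

ctx = zmq.Context.instance()

results = []

def worker(url):
    sock = ctx.socket(zmq.REQ)
    sock.connect(url)
    sock.send(b"READY")              # announce availability first
    job = sock.recv()                # broker's "reply" is the work item
    results.append(job)
    sock.send(b"done:" + job)        # result doubles as the next READY

# broker backend: ROUTER, so it can address each worker by identity
backend = ctx.socket(zmq.ROUTER)
backend.bind("inproc://backend")

t = threading.Thread(target=worker, args=("inproc://backend",))
t.start()

ident, empty, ready = backend.recv_multipart()   # worker announces itself
backend.send_multipart([ident, b"", b"job-42"])  # hand that worker a job
ident, empty, reply = backend.recv_multipart()   # collect the result
t.join()
```

Because the broker only sends work to identities that have announced themselves, a fast worker never waits behind a slow one, which is the property assumed above.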

How do I create a memory bound message queue in Erlang?

I want the speed of asynchronous messages but still have some flow control. How can I accomplish this in Erlang?
There is no process memory limit right now -- it has been discussed on the mailing list etc., and you can look at those threads.
On the upside, when you use an OTP pattern implementation like gen_server, you have a lot of freedom in retrieving messages from the process queue and in measuring the length of the queue.
gen_server2, used in RabbitMQ, used to optimize that by moving messages to an internal data structure.
With that, you can discard any new incoming message when the internal queue is too long.
You can do it silently, or notify the sender that the message was rejected.
All of that is at a very low level.
RabbitMQ will provide this functionality at the AMQP level.
A common and quite good way of enforcing flow control is to turn well-selected messages into synchronous calls, which limits each client to one outstanding request on the server, effectively providing force feedback in an extremely simple way. The trick, of course, is picking which communications use synchronous calls :-)

{ ProcessName, NodeName } ! Message VS rpc:call/4 VS HTTP/1.1 across Erlang Nodes

I have a setup in which two nodes are going to be communicating a lot. On Node A, there are going to be thousands of processes, which are meant to access services on Node B. There is going to be a massive load of requests and responses across the two nodes. The two Nodes, will be running on two different servers, each on its own hardware server.
I have 3 Options: HTTP/1.1 , rpc:call/4 and Directly sending a message to a registered gen_server on Node B. Let me explain each option.
HTTP/1.1 Suppose that on Node A I have an HTTP client like ibrowse, and on Node B I have a web server like Yaws 1.95, the web server being able to handle unlimited connections, with the operating system settings tweaked to allow Yaws to handle all connections. Then I make my processes on Node A communicate using HTTP. In this case each method call would mean a single HTTP request and a reply. I believe there is overhead here, but we are evaluating options. The Erlang built-in mechanism called webtool may be built for this kind of purpose.
rpc:call/4 I could simply make direct rpc calls from Node A to Node B. I am not very sure how the underlying rpc mechanism works, but I think that when two Erlang nodes connect via net_adm:ping/1, the created connection is not closed; rather, all rpc calls use this pipe to transmit requests and pass responses. Please correct me on this one. Sending a Message from Node A to Node B I could make my processes on Node A just send a message to a registered process, or a group of processes, on Node B. This too seems a clean option.
Q1. Which of the above options would you recommend, and why, for an application in which two Erlang nodes are going to have enormous communications between them all the time? Imagine a messaging system in which two Erlang nodes are the routers :) Q2. Which of the above methods is cleaner, less problematic, and more fault tolerant (I mean here that the method should NOT have a single point of failure that could leave all processes on Node A blind)? Q3. The mechanism of your choice: how would you make it even more fault tolerant, or redundant? Assumptions: the nodes are always alive and will never go down, the network connection between the nodes will always be available and non-congested (dedicated to the two nodes only), and the operating system has allocated maximum resources to these two nodes. Thank you for your evaluations.
HTTP is definitely out. Just the round-trip overhead of creating a new connection is a problem.
As for Erlang connections and using Pids, you have the advantage that you can subscribe to node-down messages and handle the case where a node goes down. A single TCP connection should be able to give you very fast speeds; however, be aware that it works like a long pipe: messages are muxed and demuxed on a pipe, which can affect latency on the line. It also means that large messages will block small messages from getting through.
How much bandwidth are you aiming for, and at what latency? What are the 95th and 99th percentiles of answering messages? It is better to put up some rough numbers and then try to target these than to just want "as fast as possible". Set your success criteria first.
Q1: HTTP will add additional overhead and will give you nothing, in my opinion. HTTP would be useful if you were designing a REST API. Directly sending messages and rpc:call look about the same as far as overhead is concerned.
Q2: Sending messages is much clearer. It's the way Erlang is designed. With RPC calls you must always track which call is executed where and under which circumstances, which can be a huge issue if the two servers have state. Also, RPC calls are synchronous.
Q3: I would use UBF if I could afford minor overhead; otherwise I would directly send messages between the Erlang nodes. If bandwidth is an issue, other trickery would be needed as well, like encoding the messages in some way and then using a compression algorithm to reduce the size of the message; alternatively, I might ditch Erlang message passing altogether and use UDP sockets.
It is not obvious that ! is the best way to go. Definitely, it is the easiest and the code will be the most elegant.
In terms of scalability, take into consideration that to use rpc/! you have to maintain an Erlang cluster. I found it painful with just 10-20 nodes, even in a private cloud. I would never recommend bigger deployments on e.g. EC2, where IO/latency/network are not deterministic.
I recommend structuring the project in a way that will let you swap out the communication engine in the future. Also, HTTP is pretty heavy, but there are options:
socket-socket (tcp/udp/sctp)
amqp (many benefits connected to load balancing)
zeromq (even nicer than amqp)
Betting on !/rpc and an OTP cluster is risky. You will fight with full-mesh overhead, master-election algorithms, and quorum/partition detection.

Given TCP, is the State Design Pattern of little use when IO is non-blocking?

In my TCP application, the State design pattern seemed useful as long as IO was blocking.
My SwingWorker's doInBackground() could loop through read, write, and accept states in the TCP connection by reference to the one object. See the example on wikipedia's talk page: http://en.wikipedia.org/wiki/Talk%3AState_pattern .
However, when I refactored the server to non-blocking IO, it no longer seemed useful. Select() returned a group of channels ready for IO, and these were dealt with by reference to SelectionKey states in a series of if-statements.
Can anyone confirm from experience or understanding whether State Design Pattern still has utility when IO is non-blocking?
I ask because I am unsure if I have grasped State design pattern and TCP's relationship correctly.
Still very useful; you just have a state machine per connection. select(2) (or poll(2), or epoll(7)) just gives you a way of waiting on multiple channels and dispatching events to those state machines.
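A minimal sketch of that shape using Python's stdlib selectors module (the READING/WRITING states and the newline-delimited echo protocol are illustrative assumptions): the selector only reports readiness, and each connection's own state machine decides what that event means.

```python
import selectors
import socket

class EchoState:
    """Per-connection state machine: READ until newline, then WRITE back."""
    def __init__(self, conn):
        self.conn = conn
        self.state = "READING"
        self.buf = b""

    def handle(self, mask):
        # the event loop dispatches readiness; the state decides the action
        if self.state == "READING" and mask & selectors.EVENT_READ:
            self.buf += self.conn.recv(1024)
            if self.buf.endswith(b"\n"):
                self.state = "WRITING"
        elif self.state == "WRITING" and mask & selectors.EVENT_WRITE:
            self.conn.sendall(self.buf.upper())
            self.state = "DONE"

sel = selectors.DefaultSelector()
server, client = socket.socketpair()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ | selectors.EVENT_WRITE,
             EchoState(server))          # state machine rides along as key.data

client.sendall(b"hello\n")
done = False
while not done:
    for key, mask in sel.select(timeout=1):
        machine = key.data               # look up this channel's own machine
        machine.handle(mask)
        done = machine.state == "DONE"

reply = client.recv(1024)
```

With many connections, each registration carries its own EchoState, so the if-statements over SelectionKey sets collapse into one dispatch line per event.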

Resources