We have a client-server interaction via ZMQ and have gotten stuck in an architectural argument about which pattern best fits our needs. I hope someone wise and experienced can help resolve it.
It's not essential, but I should mention that ZMQ is not used directly; it's a C++/Qt binding over the library itself, so low-level customizations are possible but undesirable (they would significantly increase implementation effort).
Current architecture
We needed a reliable, convenient and robust API broker, which has been implemented via REQ <-> REP: statuses, error codes, encryption, etc. Encryption is handled via a separate SSL authorization channel that supplies the API with session keys. I mention this to emphasize that, since SSL is not provided at the ZMQ socket level (it looked too complex), "session keys" exist (a symmetric encryption key per client), which constrains the choice of pattern somewhat.
So there are requests (client) and responses (server), and it works. But we have recently run into the need for a notification mechanism for clients. Say the current broker API provides several types of data: X, Y, Z (lists of something). The client can fetch any of them, but it has to be notified when X, Y or Z changes, so that it knows new requests should be made.
The problem
Obviously, clients should receive either the data updates themselves or notifications that such updates exist. This could be a plain PUB-SUB problem, but it seems almost impossible to make that solution encrypted, or at least authorization-aware (leaving aside some really hacky ways to do it).
After some discussion, two opinions emerged, describing two different workarounds:
Still use PUB-SUB, but send only the notification type to subscribers, e.g. "hey, new X is available". Subscribing clients would then perform the already-implemented API requests (REQ-REP), session keys and all. Advantages: easy and it works. Disadvantages: complicates client logic.
Rewrite the API to use pairs of ZMQ_PAIR sockets. This results in client-server behavior similar to plain sockets, but notifications can be "pushed back" from the server. Advantages: simple scheme. Disadvantages: rewriting, and the broker wouldn't differ much from a plain-socket solution.
Question
What would you advise? One of the described solutions, or perhaps something better? Is there possibly an X-Y problem here? Is there a common way of solving problems like this?
Thanks in advance for any bright ideas.
ZMQ_PAIR sockets are mainly used for communication between threads, so I do not think they are a good solution for a client/server setup, if they are even usable there at all.
You could use ROUTER/DEALER instead of REQ/REP, as they allow patterns other than strict request/reply. I think newer versions of ZeroMQ provide SERVER/CLIENT as a better alternative, but I have not used them myself.
However, I would prefer a solution with a separate PUB/SUB channel (sketched after the list below), because:
You don't have to change the existing REQ/REP protocol.
Only clients that are interested in notifications need to connect to the PUB socket and process notifications; other clients can keep using just the existing REQ/REP protocol.
PUB/SUB can automatically send a notification to multiple clients.
PUB/SUB allows subscribing to specific notifications only.
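As a concrete illustration of the separate PUB/SUB channel, here is a minimal sketch using Python/pyzmq (the same structure applies to a C++/Qt binding). The ports and the encrypt_request / decrypt_response / session_key helpers are hypothetical placeholders for the existing encrypted REQ/REP layer; the PUB socket carries only the bare topic name, so nothing sensitive travels unencrypted.

```python
import zmq

ctx = zmq.Context.instance()

# --- server side, next to the existing REP socket ---
pub = ctx.socket(zmq.PUB)
pub.bind("tcp://*:5557")                    # hypothetical notification port

def notify_changed(kind: str) -> None:
    # Publish only the topic ("X", "Y" or "Z"); no payload, nothing secret.
    pub.send_string(kind)

# --- client side, next to the existing REQ socket ---
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://server:5557")
sub.setsockopt_string(zmq.SUBSCRIBE, "X")   # subscribe only to the topics of interest
sub.setsockopt_string(zmq.SUBSCRIBE, "Y")

req = ctx.socket(zmq.REQ)
req.connect("tcp://server:5556")            # hypothetical existing API port

while True:
    kind = sub.recv_string()                              # "hey, new X is available"
    req.send(encrypt_request(kind, session_key))          # existing encrypted API call
    data = decrypt_response(req.recv(), session_key)      # fetch the actual update
```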
Related
I have a scenario where I need to deliver real-time firehoses of events (at most 30-50/sec) for dashboard and config-screen type contexts.
While I'll be using WebSockets for some of the scenarios that require bidirectional I/O, I'm having a bit of a bikeshed/architecture-astronaut/analysis paralysis lockup about whether to use Server-Sent Events or Fetch with readable streams for the read-only endpoints I want to develop.
I have no particular vested interest in picking one approach over the other, and the backends aren't using any frameworks or libraries that are opinionated about one approach or the other, so I figure I might as well put my hesitancy to good use and ask:
Are there any intrinsic benefits to picking SSE over streaming Fetch?
The only fairly minor caveat I'm aware of with Fetch is that if I'm manually building the HTTP response (say from a C daemon exposing some status info) then I have to manage response chunking myself. That's quite straightforward.
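(A rough sketch of what I mean, in Python rather than C: a chunk is just its length in hex, CRLF, the bytes, CRLF, and a zero-length chunk terminates the response.)

```python
def chunk(data: bytes) -> bytes:
    # One HTTP/1.1 chunk: <hex length>\r\n<data>\r\n
    return b"%x\r\n%s\r\n" % (len(data), data)

# Stream two pieces of a response, then terminate the chunked body.
body = chunk(b'{"status": "ok"}') + chunk(b'{"load": 0.42}') + b"0\r\n\r\n"
```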
So far I've discovered one unintuitive gotcha of exactly the sort I was trying to dig out of the woodwork:
When using HTTP/1.1 (aka plain HTTP), Chrome makes a special allowance of up to 255 WebSocket connections per domain, completely independent of its maximum of 15 normal (Fetch)/SSE connections.
Read all about it: https://chromium.googlesource.com/chromium/src/+/e5a38eddbdf45d7563a00d019debd11b803af1bb/net/socket/client_socket_pool_manager.cc#52
This is of course irrelevant when using HTTP/2, where you typically get 100 parallel streams to work with (which, IIUC, are shared by all types of connections: Fetch, SSE, WebSockets, etc.).
I find it remarkable that almost every SO question about connection limits doesn't talk about the 255-connection WebSockets limit! It's been around for 5-6 years!! Use the source, people! :D
I do have to say that this situation is very annoying though. It's reasonably straightforward to "bit-bang" (for want of a better term) HTTP and SSE using printf from C (accept connection, ignore all input, immediately dprintf HTTP header, proceed to send updates), while WebSockets requires handshake processing, and SHA1, and MD5, and input XORing. For tinkering, prototyping and throwaway code (that'll probably stick around for a while...), the simplicity of SSE is hard to beat. You don't even need to use chunked-encoding with it.
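To show how little "bit-banging" SSE actually needs, here is a rough Python equivalent of the accept / ignore-input / dprintf approach described above (the port and the payload are made up, and a real daemon would obviously add error handling):

```python
import socket
import time

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 8080))                 # made-up port
srv.listen(1)

conn, _ = srv.accept()
conn.recv(4096)                             # ignore the request entirely
conn.sendall(
    b"HTTP/1.1 200 OK\r\n"
    b"Content-Type: text/event-stream\r\n"
    b"Cache-Control: no-cache\r\n"
    b"\r\n"
)
while True:                                 # push an update every second
    conn.sendall(b"data: %d\n\n" % int(time.time()))
    time.sleep(1)
```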
This may seem like a silly question, but I am really confused about ZeroMQ's terminology regarding synchronous sockets such as REQ and REP.
As I understand it, synchronous communication occurs when a client sends a message and then blocks until the response arrives. If ZeroMQ implemented synchronous communication, then a single .send() method would be enough for a synchronous socket.
I think ZeroMQ's "synchronous socket" terminology refers only to the inability to send further messages until the response to the last message arrives, while the "sender" can still continue its processing (doing other work) asynchronously.
Is this true?
In that case, is there any straightforward way to implement synchronous communication using ZeroMQ?
EDIT: Synchronous communication makes sense when I want to invoke a method in a remote process (like RPC). If I want to execute a series of commands in a remote process and each command needs the result of the previous one to do its job then asynchronous communication is not the best option.
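For instance, in pyzmq terms (any binding behaves the same), this rough sketch is what I mean; the endpoint and do_other_work() are placeholders:

```python
import zmq

ctx = zmq.Context.instance()
req = ctx.socket(zmq.REQ)
req.connect("tcp://localhost:5555")   # placeholder endpoint

req.send(b"first request")            # returns as soon as the message is queued
do_other_work()                       # the caller is free to keep processing here

try:
    req.send(b"second request")       # the REQ state machine forbids this before a recv()
except zmq.ZMQError as err:
    print("refused:", err)            # EFSM: operation cannot be accomplished in current state

reply = req.recv()                    # this is the call that actually blocks
```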
You can very nearly implement a synchronous framework using just ZMQ: you can set the high-water mark to 1. Unfortunately, that's not quite it; what you want is an outgoing queue length of 0. Even more unfortunately, setting the high-water mark to 0 is interpreted by ZMQ as infinity...
So the only option is to implement a synchronous transfer protocol on top of ZMQ. That's not very difficult to do. The conversation between the two ends will be something like "can I send?", "yes, you can send now", "ok, here it is", "ok, I have received it" (both ends return to their callers), or at least the programmatic version of that. This sets up what is called an execution rendezvous: both ends know that they have both reached a certain point of execution.
Technically speaking, what you're doing is taking ZeroMQ (Actor Model) and turning it into something more like Communicating Sequential Processes.
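A hedged sketch of that rendezvous, layered over a plain REQ/REP pair (the endpoint and the message tags are invented for illustration; each function runs at its own end):

```python
import zmq

ctx = zmq.Context.instance()

def rendezvous_send(payload: bytes) -> None:
    req = ctx.socket(zmq.REQ)
    req.connect("tcp://localhost:6000")      # invented endpoint
    req.send(b"CAN_I_SEND")                  # "can I send?"
    assert req.recv() == b"GO_AHEAD"         # "yes, you can send now"
    req.send(payload)                        # "ok, here it is"
    assert req.recv() == b"GOT_IT"           # "ok, I have received it"
    # Only now does this end return to its caller: both sides reached the rendezvous.

def rendezvous_recv() -> bytes:
    rep = ctx.socket(zmq.REP)
    rep.bind("tcp://*:6000")
    assert rep.recv() == b"CAN_I_SEND"
    rep.send(b"GO_AHEAD")
    payload = rep.recv()
    rep.send(b"GOT_IT")
    return payload                           # the receiving end returns to its caller too
```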
RPC
Having said all that, from your edit I think you might like to consider Cap'n Proto. This is a C++ serialisation technology that has a neat RPC trick. If the return from one RPC call is the input to another, you can chain those all together somehow in advance (see here).
Let's start with a first step: forget everything you know about sockets.
ZeroMQ is more a way of thinking about distributed systems (multi-agent-like) and about how to design software using such a smart signalling/messaging framework.
This is the core aim of ZeroMQ: to let designers keep thinking in the application domain while all the low-level dirty work is handled without the designer having to care much about it.
If one has just recently started with ZeroMQ, one may enjoy a short read about the ZeroMQ global view first, before getting into details.
Having read and understood the concept of the ZeroMQ hierarchy, it is much simpler to move on to details:
Given that a local Context() instance is a data-pumping engine, and with the REQ/REP Scalable Formal Communication Archetype pattern in mind, the story is now actually about a network of distributed Finite-State Automata.
A local process, operating just one side of the distributed REQ/REP communication archetype, has zero power to make the remote process receive (or not) the message that the local process handed over, in good faith, to the ZeroMQ delivery services for the intended recipient(s). Even less can the local process influence whether the remote process intends to respond at all, so welcome to the realm of distributed multi-agent games.
Both REQ and REP have to meet their { local | distributed-mode } expected formal behaviour: REQ asks first, REP answers afterwards, so as to keep the contracted promise. The point is that this behaviour is distributed and split across a pair of nodes, and there are cases where network incidents can throw the distributed FSA into an unsalvageable mutual deadlock (one can find many posts about this under the zeromq tag).
So your local-side REQ code imperatively .send()-s and is under no obligation to sit idle until the REP side .recv( zmq.NOBLOCK )-s or not (no one has any kind of warranty that a remote node exists at all; similarly, one has to be ready to anticipate and handle all the cases where the remote side never responds, so many "new" challenges arise from the nature of a distributed multi-agent ecosystem).
There are smart ways to handle this new breed of distributed chaos and uncertainty, best by using .poll() and the non-blocking forms of both the .send() and .recv() methods, as these let user code remain capable of handling all expected and unexpected events in due time and fashion.
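As a minimal illustration (pyzmq syntax, with an invented endpoint and timeout), a .poll()-based REQ loop that survives a silent or absent peer could look roughly like this:

```python
import zmq

ctx = zmq.Context.instance()
req = ctx.socket(zmq.REQ)
req.setsockopt(zmq.LINGER, 0)             # do not hang on close if the peer is gone
req.connect("tcp://localhost:5555")       # invented endpoint

poller = zmq.Poller()
poller.register(req, zmq.POLLIN)

req.send(b"request", zmq.NOBLOCK)         # non-blocking send
events = dict(poller.poll(timeout=2500))  # wait at most 2.5 s for a reply

if req in events:
    reply = req.recv(zmq.NOBLOCK)         # the reply arrived in time
else:
    # No answer: the remote agent may be slow, down or unreachable.
    # Recover by closing and re-creating the REQ socket before any retry.
    req.close()
```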
One may also operate many co-existing ZeroMQ connections so as to prioritise and specialise each form of multi-agent interaction in a distributed system design, even to design in fault resilience and similar high-level robustness concepts. The asynchronous nature of each interaction removes the need for any sort of coordination or synchronisation with a remote (possibly not yet present) agent, which is in principle an autonomous entity with its own domain of control, so again in principle asynchronous to whatever the local-side agent might "expect", with no means of "influence" other than attempting to send a message "telegram" there.
So yes, ZeroMQ is an asynchronous, brokerless signalling/messaging framework.
For (almost) synchronous communication, one may take steps and measures to trim down the (inherently distributed) asynchronous control loops; it would be best to update your post with an MCVE and details about the particular goals you want to achieve.
I want to set up a server for broadcasting the same message to multiple users at the same time. I went through the official GCM documentation, which says that "XMPP" does not support "multicasting" (sending the same message to more than one user), while HTTP can be used for that.
If that is the case, why are there so many articles on the XMPP implementation and none on HTTP?
It makes me think that XMPP might be used as well.
Please suggest which one to use. If HTTP is the answer, share some links that explain the implementation.
The GCM XMPP interface does not support putting a list of recipients on a single push, but you can still send several pushes in parallel (on the multiple XMPP connections you may have).
For sending push notifications, what is more efficient typically depends on your usage pattern:
If you send a lot of notifications to many users, XMPP may be better, as you can have several parallel streams.
If you typically send the same notification to many users, then HTTP may be more efficient as a single notification can reach 1000 recipients at once.
As suggested, if your usage is a mix of both patterns, you can use both and select the more efficient approach dynamically.
However, it may not be worth the effort, as you really need to send a lot of notifications to see a difference. Given that you mention sending notifications to multiple users, my personal suggestion would be to use the simpler HTTP approach and try GCM's XMPP connector if you feel it becomes a bottleneck for some part of your usage.
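To illustrate the HTTP option, here is a rough Python sketch of a single downstream GCM HTTP request that fans out to up to 1000 devices in one call (the API key and registration IDs are placeholders):

```python
import json
import urllib.request

API_KEY = "YOUR_SERVER_API_KEY"                       # placeholder
GCM_URL = "https://gcm-http.googleapis.com/gcm/send"  # GCM HTTP endpoint

def broadcast(registration_ids, payload):
    # A single HTTP request may target up to 1000 registration IDs at once.
    body = json.dumps({
        "registration_ids": registration_ids,
        "data": payload,
    }).encode("utf-8")
    req = urllib.request.Request(
        GCM_URL,
        data=body,
        headers={
            "Authorization": "key=" + API_KEY,
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())    # per-recipient success/failure report

# broadcast(["reg-id-1", "reg-id-2"], {"message": "same message for everyone"})
```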
Part of this question is I'm not even sure what exactly I'll need to ask, so I'll start with the situation and work out from there.
One project I'm working on involves the use of COMET via the aspComet library. The use case is something of a collaborative slideshow. One person runs the bulk of it, with one or more participants able to perform certain actions. Low latency between when an action is performed and when it shows up on screen is important.
Previously, it was just running on one server. Now we want to scale it out a bit, more for reliability than for performance reasons. So we have some boxes out in Rackspace's cloud and all that fun stuff.
I knew from the start of this that I was going to need to make some changes to the way the COMET stuff works since different people in the same "show" might be on different servers, and I have no way of knowing what "show" they belong to until after they have already arrived on the site.
I initially tackled this using the WCF Mesh provider, which wasn't well documented to start with. Now I'm running into issues where messages dispatched to it sometimes get lost or delayed (I'm not 100% sure what is going on there), which screws up the long poll for COMET and breaks things in rather strange ways (clicking a button may trigger an event, or it may hang for 10 seconds {the long-poll duration} and not actually do anything).
More research leads me to believe one of the .NET service bus providers may do what I need. However, I can't find examples that cover my requirements:
No single point of failure (outside of a database)
No hardcoding of peers.
Near-realtime (no polling, event based would be best)
My ideal solution would be that when a server comes up, it lets the other servers know of its existence (even if it's just a row in a table somewhere), and they can start exchanging broadcast messages, with each server being both a publisher and a subscriber. This is roughly what I had with the WCF Mesh provider, but I'm not overly confident in that code.
Can anyone point me in the right direction with this? Even the proper terms to look for in the docs for service bus providers would be good at this point. Or are service buses not what I want? At this point I would settle for setting up a Jabber server on each web server and using that, if it could fit within my constraints.
I can't speak a ton to NServiceBus, but I expect the answers will be similar.
Single point of failure: MSMQ can use multicasting, which means each endpoint will broadcast its existence and no DB table is needed. RabbitMQ uses an exchange-to-queue binding process, which means that as long as the Rabbit instance or cluster is up, messages still exist. RabbitMQ can be clustered; MSMQ cannot. *Note: you might have issues with multicasting on Rackspace; I have no idea how they handle it. If so, you'll have to fall back on the runtime services for MSMQ (not RabbitMQ), and that would create a single point of failure, because everyone then has a single point to coordinate control messages through.
Hardcoding of peers: discussed a bit above; MSMQ's multicast handles it. With Rabbit it can also be done: just bind queues to the exchange you want to listen to (see the sketch after these points). MassTransit takes care of this for you.
Near-realtime: both use messaging, which is near real time. There's no polling in your message-consumer code.
I think a service bus is a reasonable solution for what you're trying to do. More details would likely be needed, but the general messaging approach is correct. There are other, more lightweight messaging libraries if you decide you just want something thin on top of RabbitMQ and to configure Rabbit to handle most of the work.
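If it helps to see the exchange-to-queue binding idea from the second point above outside of MassTransit, here is a rough sketch using the Python pika client against a hypothetical "show-events" fanout exchange; MassTransit's publish/subscribe does the equivalent wiring for you on .NET:

```python
import pika

# Each web server runs this: it binds its own anonymous queue to a shared
# fanout exchange, so every server receives every broadcast message.
conn = pika.BlockingConnection(pika.ConnectionParameters("rabbit-host"))
channel = conn.channel()
channel.exchange_declare(exchange="show-events", exchange_type="fanout")

queue = channel.queue_declare(queue="", exclusive=True).method.queue
channel.queue_bind(exchange="show-events", queue=queue)

def on_message(ch, method, properties, body):
    print("received broadcast:", body)    # e.g. push this out to local COMET clients

channel.basic_consume(queue=queue, on_message_callback=on_message, auto_ack=True)

# Any server can publish to the exchange; no peer addresses are hard-coded anywhere.
channel.basic_publish(exchange="show-events", routing_key="", body=b"slide changed")

channel.start_consuming()
```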
To get started with MassTransit, we have documentation up at http://readthedocs.org/projects/masstransit/ and a mailing list at http://groups.google.com/group/masstransit-discuss. Join the mailing list if you have future questions and someone will try to help you out.
I am sending small XML messages (about 1-2 KB each) across the internet from a Windows application to an ASP.NET web service.
99% of the time this works fine, but sometimes a message takes an inordinate amount of time to arrive: 25-30 seconds instead of the usual 4-5 seconds. This delay also causes messages to arrive out of sequence.
Is there any way I can solve this issue so that all the messages arrive quickly and in sequence, or is that not possible to guarantee when using a web service in this manner?
If it's not possible to resolve, could I please get recommendations for a low-latency messaging framework that can deliver messages in order over the internet?
Thanks.
Is there any way I can solve this issue so that all the messages arrive quickly and in sequence, or is that not possible to guarantee when using a web service in this manner?
Using just web services, this is not possible. You will always run into situations where something occasionally takes much longer than it "should". This is the nature of network programming, and you have to work around it.
I would also recommend using XMPP for something like this. Have a look at xmpp.org for info on the standard and jabber-net for a set of client libraries for .Net.
Well, this is a little off target, but have you looked into the XMPP (Jabber) protocol?
It's the messaging system that GTalk uses, and it's quite simple to use. The only downside is that you will need a stateful service to receive and process the messages.
I also agree with #Mat's comment. It was the first solution that came to mind, but then I remembered that I have used XMPP in the past to deliver fast, small, reliable messages between servers.
http://xmpp.org/about-xmpp/
If you search Google you will easily find .NET libraries that support this protocol.
And there are plenty of free Jabber servers out there.
One way to ensure your messages are sent in sequence and processed together is to make one call to the web service with all interdependent messages as a single batch.
Traditionally, when you make a call to a web service, you do not expect other calls to that web service to occur in a specific order. It sounds like there is an implicit sequence in which the data needs to arrive at the destination application, which makes me think you need to group your messages and send them together to ensure that order.
No matter how fast the messaging framework is, you cannot prevent a race condition that could deliver messages out of order, unless you send a single message that contains your data in the correct order.
If you are sending messages in a sequence across the internet, you never know how long a message will take to travel from one point to another. One possible solution is to include in each message its position in the sequence and, at each endpoint, implement logic to order the messages before processing them. If you receive a message out of sequence, you can wait for the missing message or ask the other endpoint to resend it (a minimal sketch of such reordering follows).
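A hedged sketch of that reordering logic on the receiving side (the class and field names are invented):

```python
import heapq

class Resequencer:
    """Buffers out-of-order messages and releases them in sequence order."""

    def __init__(self):
        self.next_expected = 0
        self.pending = []            # min-heap of (sequence_number, message)

    def receive(self, seq, message):
        heapq.heappush(self.pending, (seq, message))
        ready = []
        # Release every message whose turn has come; keep the rest buffered.
        while self.pending and self.pending[0][0] == self.next_expected:
            ready.append(heapq.heappop(self.pending)[1])
            self.next_expected += 1
        return ready                 # process these now, in order

# r = Resequencer()
# r.receive(1, "b")   -> []           (still waiting for message 0)
# r.receive(0, "a")   -> ["a", "b"]   (both released, in order)
```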