Is it possible to use ZeroMQ as an intermediary where different programs can send packets to an ip:port combination (or, alternatively, a network interface) and have ZeroMQ forward any such packets over a specific outbound connection?
So basically I don't want to send data generated programmatically by me, but rather expose an endpoint that other applications can communicate with seamlessly, as they always have, with ZeroMQ acting as a middleman that routes traffic accordingly.
Q : "... have ZeroMQ forward any such packets in a specific outbound connection?"
Given there is a built-in way to avoid using any of the common ZeroMQ high-level Scalable Formal Communication Pattern archetypes from the arsenal of { PUB/SUB | XPUB/XSUB | REQ/REP | XREQ/XREP | ROUTER/DEALER | PUSH/PULL | RADIO/DISH | CLIENT/SERVER | ... }, there is a chance to use the "raw" mode, where a socket file descriptor can indeed be handled by the ZeroMQ { .poll() | .send() | .recv() } instrumentation.
For your intended use case to succeed, it will be necessary to carefully follow the published API specification, as ZMQ_STREAM sockets have some built-in behaviours that need to be reflected in your MiTM-proxy processing (stripping off the prepended identity frame added upon arrival, thread-safety, etc.).
Anyway, while the core ZMQ-RFC documentation is stable, the API documentation reflects recent changes as the core library keeps evolving, so keep a close eye on what you implement (based on the current version) and watch for further version evolutions and/or discontinued features.
It might also be interesting to harness the zmq_socket_monitor() features, as you will be relying on the connection-oriented tcp:// transport class in this scenario.
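A minimal sketch of the "raw" mode in Python with pyzmq, assuming an echo handler for illustration (a real MiTM proxy would instead send_multipart() the payload out on a second STREAM or tcp socket). The key behaviour it demonstrates is the one mentioned above: every event arrives as an [identity, payload] frame pair, and the identity must be prepended again when replying.

```python
import zmq

def raw_tcp_step(stream_sock, handle_payload):
    """Process one event on a ZMQ_STREAM ("raw") socket.

    ZMQ_STREAM delivers every event as a two-frame [identity, payload]
    message; an empty payload marks a peer connect/disconnect, which is
    skipped here. For real payloads, handle_payload is called and its
    result is sent back to the same peer by re-prepending the identity.
    Returns the received payload, or None for connection events.
    """
    ident, payload = stream_sock.recv_multipart()
    if not payload:                        # connect / disconnect notification
        return None
    reply = handle_payload(payload)
    stream_sock.send_multipart([ident, reply])   # route back by identity
    return payload
```

Note that ordinary TCP clients can talk to this socket directly, with no ZMTP framing involved, which is exactly what makes ZMQ_STREAM usable as a drop-in middleman.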
Related
We've tried to evaluate gRPC with FlatBuffers on an embedded Linux system; the resulting executable is ~6 MB for a very basic example with protobuf. We are looking to strip off as much as possible to move to platforms with even fewer resources. All we need is a direct channel over a "secure" serial transport: USB CDC, direct UDP/TCP, or similar.
Is there a way to achieve this with standard gRPC configuration?
Is a custom channel required for this setup? Implementing a custom channel seems rather complicated (very high cyclomatic complexity in the included channels, even the in-memory one).
Is there any other guidance or examples/implementations for a simple custom channel implementation?
I have to connect an old but critical piece of software to RabbitMQ. The software doesn't support AMQP, but it can make HTTP requests.
Does RabbitMQ support plain HTTP? Or should I use a "proxy" or "app" that actively transforms the HTTP requests to AMQP 1.0 and pushes them to the RabbitMQ server?
https://www.rabbitmq.com/management.html
The management plugin supports a simple HTTP API to send and receive messages. This is primarily intended for diagnostic purposes but can be used for low volume messaging without reliable delivery.
As mentioned, it's designed for very low loads, but it may be usable. If you need higher loads, then by all means cast around for a library that does the job and create a proxy. Most languages will have something. I've personally created a lightweight API using Lumen and https://github.com/bschmitt/laravel-amqp to tie a few disparate services together in the past, and it seems to work very well.
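For completeness, here is a sketch of publishing through the management plugin's HTTP API with only the Python standard library. The host, port 15672, and guest/guest credentials are RabbitMQ's out-of-the-box defaults; adjust them for your setup. Publishing to the default exchange (named amq.default in the API's URLs) routes the message straight to the queue named by routing_key.

```python
import base64
import json
import urllib.request

def build_publish_request(queue, body, host="localhost", port=15672, vhost="%2F"):
    """Build the URL and JSON body for the management API's publish endpoint.

    vhost is URL-encoded ("%2F" is the default vhost "/"). Publishing to
    the default exchange delivers directly to the queue in routing_key.
    """
    url = f"http://{host}:{port}/api/exchanges/{vhost}/amq.default/publish"
    payload = {
        "properties": {},
        "routing_key": queue,
        "payload": body,
        "payload_encoding": "string",
    }
    return url, payload

def publish(queue, body, user="guest", password="guest", **kw):
    """POST one message; the API answers e.g. {"routed": true} on success."""
    url, payload = build_publish_request(queue, body, **kw)
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Basic {token}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

As the docs warn, this is fine for low-volume, best-effort messaging; for anything heavier, front RabbitMQ with a thin proxy speaking real AMQP.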
It is possible, but not really recommended, depending on load. You have three options really, two of which are WebSocket-based and one that seems like what you're looking for. I'd suggest starting with the RabbitMQ docs.
I have a system where embedded devices communicate with each other. Each device may not communicate with every other device in the network. I want to define message types for my system. The messages might be sent over TCP/IP, UDP, or another protocol. There are a couple of fields in each message, such as from, to, and the data itself. Are there any well-known approaches or guides for defining the length of such fields, which fields to include, etc.? I am not sure whether the question is too broad.
Example:
| from (1B) | to (1B) | data (nB) |
Note that B stands for byte.
There are no fixed ways or predefined standards. There are protocols: either use a predefined protocol or design your own custom protocol, as you mentioned in the last part of your question. The structure could be application-specific or driven by channel limitations. Moreover, the question should be a bit more specific or detailed for a better suggestion. Have a look at the protocol definitions of a few well-known protocols like USB, MQTT, and HTTP to get a better idea.
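The | from (1B) | to (1B) | data (nB) | layout from the question can be sketched with Python's struct module. One assumption added here: a 2-byte length field, which is needed so a receiver can delimit messages on a stream transport like TCP (with UDP's one-datagram-per-message semantics it could be dropped).

```python
import struct

# from (1B) | to (1B) | length (2B, big-endian network byte order)
HEADER = struct.Struct("!BBH")

def encode_frame(src, dst, data):
    """Serialize one | from | to | len | data | message."""
    if not (0 <= src <= 255 and 0 <= dst <= 255):
        raise ValueError("addresses must fit in one byte")
    if len(data) > 0xFFFF:
        raise ValueError("payload too long for a 2-byte length field")
    return HEADER.pack(src, dst, len(data)) + data

def decode_frame(buf):
    """Inverse of encode_frame; returns (src, dst, data)."""
    src, dst, length = HEADER.unpack_from(buf)
    return src, dst, bytes(buf[HEADER.size:HEADER.size + length])
```

Fixing the byte order explicitly (the "!" prefix) is worth doing early; it is exactly the kind of detail that differs between the protocols mentioned above.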
So, we've got a client-server interaction via ZMQ and have gotten stuck in an architectural argument about the proper pattern to fit our needs. I hope that someone wise and experienced may help resolve it.
It's not essential, but I should mention that ZMQ is not being used directly; it's rather a Qt binding in C++ over the library itself, so low-level customizations are possible but undesirable (they would increase implementation effort significantly).
Current architecture
There was a need for a reliable, convenient & robust API broker, which has been implemented via REQ <-> REP: statuses, error codes, encryption, etc. Encryption has been implemented via a separate SSL authorization channel providing the API with session keys; it's mentioned here to emphasize that, since SSL has not been provided at ZMQ's socket level (it looked too complex), "session keys" exist (a symmetric encryption key per client), which limits the pattern solutions somewhat.
So, there exist requests (client) + responses (server), and it works. But we've recently run into the need for some notification mechanism for clients. Let's say the current broker API provides some types of data: X, Y, Z (lists of something). The client can get any of them, but it has to be notified when any changes in X, Y or Z occur, in order to know that new requests need to be made.
The problem
Obviously, clients should receive either data updates or notifications that such updates exist. It could be a mere PUB-SUB problem, but it seems almost impossible to make this solution encrypted, or at least authorization-aware (not to mention the really "crutchy" ways of doing it).
After some discussion two opinions appeared, describing two different workarounds:
Still use PUB-SUB, but only send the notification type to the subscribers, like "hey, there's a new X present". Clients-subscribers would then perform the already implemented API requests (REQ-REP) with session keys and all. Advantages: easy and working. Disadvantages: complicates client logic.
Just rewrite the API to use pairs of ZMQ_PAIR sockets. This results in client-server behavior similar to plain sockets, but notifications can be "sent back" from the server. Advantages: simple scheme. Disadvantages: rewriting; also, the broker wouldn't differ much from a simple socket solution.
Question
What would you advise? Either of the described solutions, or something better, maybe? Is there possibly an X-Y problem here? Maybe something is considered a common way of solving problems like this?
Thanks in advance for any bright ideas.
ZMQ_PAIR sockets are mainly used for communication between threads, so I do not think they are a good solution for a client/server setup, if usable at all.
You could use ROUTER/DEALER instead of REQ/REP, as they allow patterns other than strict request/reply. I think newer versions of ZeroMQ provide SERVER/CLIENT as a better alternative, but I have not used them myself.
However, I would prefer a solution with a separate PUB/SUB channel because:
You don't have to change the existing REQ/REP protocol.
It allows only clients that are interested in notifications to connect to the PUB socket and process notifications. Other clients can use just the existing REQ/REP protocol.
PUB/SUB can automatically send notifications to multiple clients.
PUB/SUB allows subscribing to specific notifications only.
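The separate-channel idea can be sketched as below, assuming illustrative socket helpers (names and the inproc address are placeholders). The important property is that the PUB side broadcasts only bare topic names such as "X", so nothing sensitive travels unencrypted; clients re-fetch the actual data over the existing session-key-protected REQ/REP channel.

```python
import zmq

def start_notifier(ctx, addr):
    """Server side: a PUB socket that broadcasts bare topic names
    ("X", "Y", "Z") whenever the corresponding dataset changes."""
    pub = ctx.socket(zmq.PUB)
    pub.bind(addr)
    return pub

def start_listener(ctx, addr, topics):
    """Client side: subscribe only to the datasets this client cares
    about; other notifications are filtered out by ZeroMQ itself."""
    sub = ctx.socket(zmq.SUB)
    sub.connect(addr)
    for t in topics:
        sub.setsockopt_string(zmq.SUBSCRIBE, t)
    return sub
```

On receiving a topic, the client simply issues the already-implemented encrypted REQ for that dataset, which is why this variant needs no changes to the existing protocol.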
A little background.
Very big monolithic Django application. All components use the same database. We need to separate services so we can independently upgrade some parts of the system without affecting the rest.
We use RabbitMQ as a broker to Celery.
Right now we have two options:
HTTP Services using a REST interface.
JSONRPC over AMQP to an event loop service.
My team is leaning towards HTTP because that's what they are familiar with, but I think the advantages of using RPC over AMQP far outweigh that.
AMQP gives us the ability to easily add load balancing and high availability, with guaranteed message delivery.
With HTTP, on the other hand, we have to create client HTTP wrappers to work with the REST interfaces, and we have to put in a load balancer and set up that infrastructure in order to have HA, etc.
With AMQP I can just spawn another instance of the service, it will connect to the same queue as the other instances and bam, HA and load balancing.
Am I missing something with my thoughts on AMQP?
First, some basics:
REST and RPC are architectural patterns; AMQP is a wire-level protocol and HTTP an application protocol, both running on top of TCP/IP
AMQP is a special-purpose protocol whereas HTTP is a general-purpose one; thus, HTTP has damn high overhead compared to AMQP
AMQP is asynchronous by nature where HTTP is synchronous
both REST and RPC use data serialization; the format is up to you and depends on your infrastructure. If you are using Python everywhere, you can use Python's native serialization, pickle, which should be faster than JSON or other formats (but never unpickle data from untrusted sources)
both HTTP+REST and AMQP+RPC can run in heterogeneous and/or distributed environments
So if you are choosing between HTTP+REST and AMQP+RPC, the answer is really a matter of infrastructure complexity and resource usage. Without any specific requirements both solutions will work fine, but I would rather build some abstraction to be able to switch between them transparently.
You said that your team is familiar with HTTP but not with AMQP. If development time is an important factor, you have your answer.
If you want to build an HA infrastructure with minimal complexity, I guess the AMQP protocol is what you want.
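The serialization point is easy to check directly. A quick comparison of the two formats on plain data (with the caveat repeated in code: unpickling attacker-controlled bytes can execute arbitrary code, so pickle belongs only between trusted endpoints):

```python
import json
import pickle

payload = {"ids": list(range(1000)), "op": "sync"}

# pickle: Python-native binary serialization.
# WARNING: only ever unpickle data from trusted peers.
p = pickle.dumps(payload, protocol=pickle.HIGHEST_PROTOCOL)

# JSON: text-based and language-neutral, at some size/speed cost.
j = json.dumps(payload).encode()

# Both round-trip losslessly for plain dict/list/int/str data ...
assert pickle.loads(p) == json.loads(j.decode()) == payload

# ... but pickle also handles types JSON cannot, e.g. tuples and bytes.
assert pickle.loads(pickle.dumps((b"\x00", 1))) == (b"\x00", 1)
```

If services in other languages may ever join the bus, JSON (or another neutral format) is the safer default despite the overhead.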
I have experience with both of them, and the advantages of RESTful services are:
they map well onto a web interface
people are familiar with them
easy to debug (due to the general-purpose nature of HTTP)
easy to provide an API to third-party services.
Advantages of AMQP-based solution:
damn fast
flexible
cost-effective (in resources usage meaning)
Note that you can provide a RESTful API to third-party services on top of your AMQP-based API, since REST is not a protocol but rather a paradigm; you just have to keep this in mind while building your AMQP RPC API. I have done it this way to provide an API to external third-party services, and to provide access to the API from those parts of the infrastructure which run on an old codebase or where it is not possible to add AMQP support.
If I understand correctly, your question is about how to better organize communication between different parts of your software, not how to provide an API to end users.
If you have a high-load project, RabbitMQ is a damn good piece of software, and you can easily add any number of workers running on different machines. It also has mirroring and clustering out of the box. One more thing: RabbitMQ is built on top of Erlang/OTP, which is a highly reliable, stable platform ... (bla-bla-bla); that is good not only for marketing but for engineers too. I've had an issue with RabbitMQ only once, when nginx logs took all the disk space on the same partition where RabbitMQ ran.
UPD (May 2018):
Saurabh Bhoomkar posted a link to the MQ vs. HTTP article written by Arnold Shoon on June 7th, 2012; here's a copy of it:
I was going through my old files and came across my notes on MQ and thought I’d share some reasons to use MQ vs. HTTP:
If your consumer processes at a fixed rate (i.e. can’t handle floods to the HTTP server [bursts]) then using MQ provides the flexibility for the service to buffer the other requests vs. bogging it down.
Time independent processing and messaging exchange patterns — if the thread is performing a fire-and-forget, then MQ is better suited for that pattern vs. HTTP.
Long-lived processes are better suited for MQ, as you can send a request and have a separate thread listening for responses (note that WS-Addressing allows HTTP to process in this manner but requires both endpoints to support that capability).
Loose coupling where one process can continue to do work even if the other process is not available vs. HTTP having to retry.
Request prioritization where more important messages can jump to the front of the queue.
XA transactions – MQ is fully XA compliant – HTTP is not.
Fault tolerance – MQ messages survive server or network failures – HTTP does not.
MQ provides for 'assured' delivery of messages once and only once; HTTP does not.
MQ provides the ability to do message segmentation and message grouping for large messages; HTTP does not have that ability, as it treats each transaction separately.
MQ provides a pub/sub interface where-as HTTP is point-to-point.
UPD (Dec 2018):
As noted by @Kevin in the comments below, it's questionable that RabbitMQ scales better than RESTful services. My original answer was based on simply adding more workers, which is just one part of scaling; as long as a single AMQP broker's capacity is not exceeded, that holds true, but beyond that point more advanced techniques are required, such as Highly Available (Mirrored) Queues, which leave both HTTP- and AMQP-based services with some non-trivial complexity to scale at the infrastructure level.
After careful thought I also removed the claim that maintaining an AMQP broker (RabbitMQ) is simpler than any HTTP server: the original answer was written in June 2013 and a lot has changed since then, but the main change is that I have gained more insight into both approaches, so the best I can say now is "your mileage may vary".
Also note that comparing HTTP and AMQP is apples to oranges to some extent, so please do not interpret this answer as the ultimate guidance to base your decision on, but rather take it as one source, or as a reference for further research to find out which exact solution matches your particular case.
The irony of the solution the OP had to accept is that AMQP and other MQ solutions are often used to insulate callers from the inherent unreliability of HTTP-only services: to provide some level of timeout & retry logic and message persistence, so the caller doesn't have to implement its own HTTP insulation code. A very thin HTTP gateway or adapter layer over a reliable AMQP core, with the option to go straight to AMQP using a more reliable client protocol like JSONRPC, would often be the best solution for this scenario.
Your thoughts on AMQP are spot on!
Furthermore, since you are transitioning from a monolithic to a more distributed architecture, then adopting AMQP for communication between the services is more ideal for your use case. Here is why…
Communication via a REST interface, and by extension HTTP, is synchronous in nature; this synchronous nature makes HTTP a not-so-great option as the communication pattern in a distributed architecture like the one you describe. Why?
Imagine you have two services, service A and service B, in your Django application that communicate via REST API calls. These calls usually play out this way: service A makes an HTTP request to service B, waits idly for the response, and only proceeds to the next task after getting a response from service B. In essence, service A is blocked until it receives a response from service B.
This is problematic because one of the goals of microservices is to build small autonomous services that remain available even if one or more other services are down: no single point of failure. The fact that service A connects directly to service B and, in fact, waits for a response introduces a level of coupling that detracts from the intended autonomy of each service.
AMQP, on the other hand, is asynchronous in nature; this makes it great for use in your scenario and others like it.
If you go down the AMQP route, instead of service A making requests to service B directly, you can introduce an AMQP-based message queue between the two services. Service A adds requests to the queue; service B then picks up each request and processes it at its own pace.
This approach decouples the two services and, by extension, makes them autonomous. This is true because:
If service B fails unexpectedly, service A will keep accepting requests and adding them to the queue as though nothing happened. The requests will still be in the queue for service B to process when it's back online.
If service A experiences a spike in traffic, service B won't even notice, because it only picks up requests from the message queue at its own pace.
This approach also has the added benefit of being easy to scale: you can add more queues or create copies of service B to process more requests.
Lastly, service A does not have to wait for a response from service B, so end users also don't have to wait long; this leads to improved performance and, by extension, a better user experience.
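The decoupling described above can be illustrated with an in-process stand-in for the broker. This is only a sketch of the pattern: in production the queue would live in RabbitMQ, not in Python's queue.Queue, and service B would be a separate process.

```python
import queue
import threading

mq = queue.Queue()   # stand-in for the AMQP queue between A and B

def service_a(requests):
    """Fire-and-forget: A enqueues work and never blocks waiting for B."""
    for r in requests:
        mq.put(r)

def service_b(results, expected):
    """B drains the queue at its own pace, even if A sends a burst."""
    for _ in range(expected):
        results.append(mq.get().upper())   # "processing" = uppercasing
        mq.task_done()

results = []
worker = threading.Thread(target=service_b, args=(results, 2))
worker.start()
service_a(["ping", "pong"])   # returns immediately; B may still be working
mq.join()                     # only this demo waits; service A itself never does
assert results == ["PING", "PONG"]
```

If B were stopped, A's puts would still succeed and the messages would simply wait in the queue, which is the autonomy property discussed above.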
In case you are considering moving from HTTP to AMQP in your distributed architecture and are not sure how to go about it, you can check out this seven-part beginner's guide on message queues and microservices. It shows you how to use a message queue in a distributed architecture by walking through a demo project.