Is Firebase Cloud Messaging considered a message broker?

I have a task to implement a message broker at choice in a distributed system. Is Firebase Cloud Messaging considered one?

No. At the very least they are not the same thing, although both are related to exchanging messages.
Firebase Cloud Messaging (FCM) is a messaging service commonly (if not always) used for Push Notifications:
A push notification is a message that is "pushed" from a backend server or application to a user interface, e.g. (but not limited to) mobile and desktop applications. It is more user-experience specific, which distinguishes it from push technology in general, where requests are pushed between components such as in server-to-server communication. A common scenario is the client application popping up a message in front of the user, along with an alert sound. The notification can also be coupled with images and hypertext links in some cases. Interacting with the push notification usually brings the client application to the front.
The service can be described as middleware that handles the sending/delivery of messages between the app server (usually the sender) and the client (the receiver). But for them to communicate properly, both the sender and the receiver must be configured for the message itself (i.e. they are the ones that have to adjust to the message).
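For illustration, here is a minimal sketch of that app-server side using the firebase-admin Python SDK; the service-account path and the device registration token are placeholders:

```python
# Minimal sketch: the app server handing a push notification to FCM,
# which then delivers it to the device. Uses the firebase-admin SDK.
import firebase_admin
from firebase_admin import credentials, messaging

# Placeholder path to a service-account key file.
cred = credentials.Certificate("service-account.json")
firebase_admin.initialize_app(cred)

message = messaging.Message(
    notification=messaging.Notification(
        title="Order update",
        body="Your order has shipped",
    ),
    token="DEVICE_REGISTRATION_TOKEN",  # placeholder device token
)

# FCM takes over delivery from here; the server only hands the message over.
response = messaging.send(message)
print("Message ID:", response)
```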
While a Message Broker is described as:
In computer programming, a message broker is an intermediary program module that translates a message from the formal messaging protocol of the sender to the formal messaging protocol of the receiver. Message brokers are elements in telecommunication or computer networks where software applications communicate by exchanging formally-defined messages. Message brokers are a building block of Message oriented middleware.
From that description, a message broker can also be considered middleware, but its task is focused on transforming/translating/adjusting the message so that it is smoothly received by the receiver.
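For contrast, here is a minimal sketch of publishing through a classic broker, RabbitMQ via the pika library; the host and queue name are arbitrary choices:

```python
# Minimal sketch: publishing a message to a RabbitMQ broker with pika.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# The broker owns the queue; sender and receiver only agree on its name.
channel.queue_declare(queue="notifications", durable=True)

channel.basic_publish(
    exchange="",                 # default exchange routes by queue name
    routing_key="notifications",
    body=b'{"event": "user.created", "id": 42}',
    properties=pika.BasicProperties(delivery_mode=2),  # persist the message
)
connection.close()
```

There is also a list of message broker software on the Wikipedia page, including: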
Apache ActiveMQ
Apache Kafka
Apache Qpid
Celery
Cloverleaf (E-Novation Lifeline)
Comverse Message Broker (Comverse Technology)
Enduro/X Transactional Message Queue (TMQ)
Financial Fusion Message Broker (Sybase - acquired by SAP in 2010)
JBoss A-MQ (aka. Fuse Message Broker - enterprise ActiveMQ - acquired by RedHat in 2012)
Gearman
HornetQ (Red Hat) (donated to Apache ActiveMQ community)
IBM Integration Bus
IBM Message Queues
JBoss Messaging (JBoss - moved to HornetQ and now it's in bug-fix mode)
JORAM
Azure Service Bus (Microsoft)
BizTalk Server (Microsoft)
NATS (MIT Open Source License, written in Go)
Open Message Queue
Oracle Message Broker (Oracle Corporation)
QDB (Apache License 2.0, supports message replay by timestamp)
RabbitMQ (Mozilla Public License, written in Erlang)
Redis An open source, in-memory data structure store, used as a database, cache and message broker.
SAP PI (SAP AG)
Solace Systems Message Router
Spread Toolkit
Tarantool, a NoSQL database, with a set of stored procedures for message queues
WSO2 Message Broker


HTTP Server-Push: Service to Service, without Browser

I am developing a cloud-based back-end HTTP service that will be exposed for integration with some on-prem systems. Client systems are custom-made by external vendors; they are back-end systems with their own databases. These systems are deployed at our clients' companies; we don't have access to them and don't control them. We provide vendors with our API specification and they implement the client code.
The data format my service exchanges with clients is XML-based and follows a certain standard. Vendors implement their client systems in different programming languages, and new vendors will appear over time. I want as many clients as possible to be able to work with my service.
Most of my service API is REST-like: it receives HTTP requests, processes them, and sends back HTTP responses.
Additionally, my service accumulates some data state changes and needs to regularly push this data to client systems. Because of the below limitations, this use-case does not seem to fit the traditional client-server HTTP request-response model.
Due to the nature of the business, the client systems cannot afford to have their own HTTP API endpoints open and so my service can't establish an outbound HTTP connection to them for delivering data state notifications. I.e. use of WebHooks is not an option.
At the same time my service stakeholders need recorded acknowledgment that data state notifications were accepted by the client system, therefore fire-and-forget systems like Amazon SNS don't seem to apply.
I was considering a few approaches to this problem, but I'm not sure whether I'm missing some simple options or some technologies that already address the problem. Hence this question.
Question text updated: the options were moved to my own answer.
Related questions and resources
REST API with active push notifications from server to client
Is ReST over websockets possible?
Can we use Web-Sockets for Communication between Microservices?
What is difference between grpc and websocket? Which one is more suitable for bidirectional streaming connection?
https://www.smashingmagazine.com/2018/02/sse-websockets-data-flow-http2/
I eventually found answers to my question, myself and with some help from my team. For people like me who come here with the question "how do I arrange notification delivery from my service to its clients", here's an overview of the available options.
WebHooks
This is when the client opens an endpoint itself. The service calls the client's endpoint whenever it has a notification to deliver. This way the client also acts as a service, so the client and the service swap roles during notification delivery.
With WebHooks the client must be able to open an endpoint at a well-known address. This is complicated if the client's software works behind NAT or a firewall, or if the client is a browser or a mobile application.
The service must be prepared for the client's WebHook endpoints to not always be online and not always be healthy.
Another issue is flow control: the service should take special measures not to overwhelm the client with a high volume of connections, requests and/or data. A sketch of the client side follows.
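For illustration, a minimal sketch of the client side of a WebHook; Flask and the /webhook path are arbitrary choices:

```python
# Minimal sketch: the client exposes an HTTP endpoint that the service
# calls whenever it has a notification to deliver. Flask for brevity.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def webhook():
    notification = request.get_json(force=True)
    # Process the notification; a 2xx response is the acknowledgment the
    # service records. A non-2xx response or timeout should trigger a retry.
    print("Received:", notification)
    return jsonify({"status": "accepted"}), 200

if __name__ == "__main__":
    app.run(port=8080)
```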
Polling
In this case, unlike with WebHooks, the client is still the client and the service is still the service. The service offers an endpoint where the client can continuously request new notifications. The advantage of this option is that it does not reverse the connection and request-response direction, so it works well with HTTP-based services.
The caveat is that the polling API needs reasonably rich semantics to be reliable when loss of notifications is not acceptable. Good examples are Google Pub/Sub pull and Amazon SQS.
Here are a few considerations (a sketch of such a polling loop follows the list):
Receiving and deleting a notification should be separate operations. Otherwise, if the service deletes a notification just before handing it to the client and the client fails to process it, the notification is lost forever. When deletion is separate from receiving, the client is forced to delete explicitly, which normally happens after successful processing.
While the client has received a notification but not yet deleted it, it may be undesirable to let the same notification be processed by some other actor (perhaps a concurrent process of the same client). Therefore the notification must be hidden from receiving after it is first received.
If the client fails to delete a notification in reasonable time because of an error, network loss or a process crash, the service has to make the notification visible for receiving again. This retry mechanism allows the notification to ultimately be processed.
If the service has no notifications to deliver, it should block the client's call for some time instead of returning an empty response immediately. Otherwise, if the client polls in a loop and responses come back immediately, the loop iterations are short and clients make excessive requests to the service, increasing network traffic, parsing load and request counts. A nice-to-have feature is for the service to unblock and respond to the client as soon as a notification appears for delivery. This is sometimes called "long polling".
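Here is a minimal sketch of such a polling loop, using Amazon SQS (named above) via boto3; the queue URL is a placeholder and process() is a stand-in handler:

```python
# Minimal sketch: long polling with separate receive and delete steps,
# using Amazon SQS via boto3.
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.region.amazonaws.com/123456789012/notifications"  # placeholder

def process(body: str) -> None:
    # Stand-in for real notification handling.
    print("Processing:", body)

while True:
    # WaitTimeSeconds enables long polling: block up to 20 s instead of
    # returning an empty response immediately. VisibilityTimeout hides a
    # received message from other consumers until deleted or timed out.
    resp = sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=1,
        WaitTimeSeconds=20,
        VisibilityTimeout=30,
    )
    for msg in resp.get("Messages", []):
        process(msg["Body"])
        # Delete only after successful processing; receive and delete are
        # separate operations, so a crash before this line means a retry.
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```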
HTTP Server-sent Events
With HTTP Server-sent Events the client opens an HTTP connection and sends a request to the service; the service can then send multiple events (notifications) instead of a single response. The connection is long-lived and the service can send events as soon as they are ready.
The downside is that the communication is one-way: the client has no way to inform the service whether it successfully processed an event. Because this feedback is absent, it may be difficult for the service to control the rate of events to avoid overwhelming the client.
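A minimal sketch of the service side of Server-sent Events, again using Flask for illustration; the payload and the 5-second wait are stand-ins:

```python
# Minimal sketch: an SSE endpoint streaming events over one long-lived
# HTTP response. Note there is no channel for the client to ack events.
import json
import time
from flask import Flask, Response

app = Flask(__name__)

def event_stream():
    n = 0
    while True:
        n += 1
        # SSE wire format: "data: <payload>" followed by a blank line.
        yield f"data: {json.dumps({'notification': n})}\n\n"
        time.sleep(5)  # stand-in for waiting on real notifications

@app.route("/events")
def events():
    return Response(event_stream(), mimetype="text/event-stream")
```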
WebSockets
WebSockets were created to enable arbitrary two-way communication, so this is a viable option for the service to send notifications to the client. The client can also send processing confirmations back to the service.
WebSockets have been around for a while and should be supported by many frameworks and languages. A WebSocket connection begins as an HTTP 1.1 connection, so WebSockets over HTTPS should be supported by many load balancers and reverse proxies.
WebSockets are often used with browsers and mobile clients and more rarely in service-to-service communication.
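A minimal sketch of a client that confirms each notification over the same connection, using the Python websockets library; the URI and message shape are assumptions:

```python
# Minimal sketch: two-way WebSocket communication, receiving
# notifications and sending acknowledgments back on the same socket.
import asyncio
import json
import websockets

async def consume():
    # Placeholder URI; the service is assumed to send JSON objects
    # carrying an "id" field.
    async with websockets.connect("wss://service.example.com/notifications") as ws:
        async for raw in ws:
            event = json.loads(raw)
            # ... process the event here ...
            # Send the processing confirmation back to the service.
            await ws.send(json.dumps({"ack": event["id"]}))

asyncio.run(consume())
```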
gRPC
gRPC is similar to WebSockets in that it enables arbitrary two-way communication. The advantage of gRPC is that it is centered around protocol and message format definition files. These files are used for code generation, which is a great help to client and service developers.
gRPC is used for service-to-service communication, and it is also supported for browser clients with grpc-web.
gRPC is supported in multiple popular programming languages and platforms, yet the support is narrower than for HTTP.
gRPC works on top of HTTP/2 which might cause difficulties with reverse proxies and load balancers around things like TLS termination.
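A minimal sketch of such a bidirectional stream in Python; notifications_pb2 / notifications_pb2_grpc and the NotificationService.Exchange method are hypothetical stand-ins for code generated from a .proto file, while the grpc module calls are the real API:

```python
# Minimal sketch: gRPC bidirectional streaming with acknowledgments.
import queue
import grpc
import notifications_pb2        # hypothetical generated messages
import notifications_pb2_grpc   # hypothetical generated stub

acks = queue.Queue()

def ack_stream():
    # gRPC pulls acknowledgments from this iterator in the background.
    while True:
        yield acks.get()

channel = grpc.insecure_channel("service.example.com:50051")  # placeholder
stub = notifications_pb2_grpc.NotificationServiceStub(channel)

# Requests (acks) and responses (notifications) flow independently over
# a single HTTP/2 connection.
for note in stub.Exchange(ack_stream()):
    # ... process the notification, then confirm it ...
    acks.put(notifications_pb2.Ack(id=note.id))
```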
Message queue (PubSub)
Finally, the service and the client can use a message queue as the delivery mechanism for notifications. The service puts notifications on the queue and the client receives them from the queue. A queue can be provided by one of many systems like RabbitMQ, Kafka, Celery, Google PubSub, Amazon SQS, etc. There is a wide choice of queuing systems with different properties, and choosing one is a challenge of its own. The queue can also be emulated, for example with a database.
It has to be decided between the service and the client who owns the queue, i.e. who pays for it. Either way, the queuing system and the queue must be available whenever the service needs to push notifications to it; otherwise notifications are lost (unless the service buffers them internally, in yet another queue).
Queues are typically used for service-to-service communication but some technologies also allow Browsers as clients.
It is worth noting that an "implicit" internal queue might be used on the service side in the other options listed above. One reason is to prevent loss of notifications when there is no client available to receive them. There are many other good reasons, like letting clients handle notifications at their own pace, maximizing processing throughput, and handling spiky traffic with fixed capacity.
In this option the queue is used "explicitly" as delivery mechanism, i.e. the service does not put any other mechanism (HTTP, gRPC or WebSocket endpoint) in front of the queue and lets the client receive notifications from the queue directly.
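A minimal sketch of a client consuming directly from the queue with manual acknowledgments, RabbitMQ via pika again; names are arbitrary:

```python
# Minimal sketch: consuming notifications straight from the queue and
# acking only after successful processing.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="notifications", durable=True)

def on_message(ch, method, properties, body):
    # ... process the notification here ...
    # Ack only after successful processing; if the consumer dies first,
    # the broker redelivers the message.
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="notifications", on_message_callback=on_message)
channel.start_consuming()
```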
Message passing is popular in organizing microservice communications.
Common considerations
In all options it has to be decided whether the loss of notifications is tolerable for the service, the client and the business. Some simpler technical choices are possible if it is OK to lose notifications due to processing errors, unavailability, etc.
It is valuable to have monitoring of client processing errors on the service side. This way service owners know which clients are more broken without having to ask them.
If a queue is used (implicitly or explicitly), it is valuable to monitor its length and the age of the oldest notification. This lets service owners judge how stale the data in the client may be.
If delivery is organized so that a notification is deleted only after successful processing by the client, the same notification can get stuck in an infinite receive loop when the client fails to process it. Such a notification is sometimes called a "poison message". Poison messages should be removed by the service or the queuing system to prevent clients from being stuck in an infinite loop. A common practice is to move poison messages to a special place, sometimes called a "dead letter queue", for later human intervention.
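A minimal sketch of configuring a dead-letter destination in RabbitMQ via pika; queue and exchange names are arbitrary:

```python
# Minimal sketch: a dead-letter exchange for poison messages. Messages
# rejected with requeue=False (or expired) are re-published by the
# broker to the "dlx" exchange for later human inspection.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

channel.exchange_declare(exchange="dlx", exchange_type="fanout")
channel.queue_declare(queue="dead-letters")
channel.queue_bind(queue="dead-letters", exchange="dlx")

# The main queue points at the dead-letter exchange via its arguments.
channel.queue_declare(
    queue="notifications.dlq-demo",
    durable=True,
    arguments={"x-dead-letter-exchange": "dlx"},
)
```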
One alternative to WebSockets for the problem of server→client notifications with acks from the client seems to be gRPC.
It supports two-way communication between server and client in its bidirectional streaming mode.
It works on top of HTTP/2. In our case, functioning over HTTP ports is essential.
There are client and server code generators for multiple popular languages and platforms. A nice thing is that I can share the protocol definition file with vendors and be sure that my service and their clients talk the same language.
Drawbacks:
Not as many languages and platforms are supported compared to HTTP. Alternative C from the question would be more accessible if based on HTTP 1.1. WebSockets have also been around longer, and I would expect broader adoption than gRPC.
Not all gRPC implementations seem to currently support XML as a data format, according to the FAQ. In order to transport XML, my service and its clients will have to transfer the XML message as a byte array inside a gRPC protobuf message.
With gRPC, TLS termination cannot be done on a general-purpose HTTP 1.1 load balancer. An application-layer, HTTP/2-aware reverse proxy (load balancer) such as Traefik is required.
There are approaches like this and this to allow HTTP 1.1 compatible protocols, but they have their own restrictions, like a limited number of available clients or required client customizations.

In IBM Event Streams SaaS on Cloud, where can I look for failing client connection attempts?

I am trying to connect a Kafka client (it happens to be a DataPower appliance running V10) to my IBM Event Streams SaaS instance in IBM Cloud. But the Kafka client keeps throwing an error:
Broker transport failure Initialization failed
Where can I go, in IBM Event Streams SaaS, to determine whether I can see my Kafka client trying to make a connection? And, ideally, see some useful error messages like "pwd is wrong"!

Event Timeout At Kaa-Client

How long does the kaa-client store an event on the device that it has to send to the other clients, in case it is not able to deliver it successfully after several attempts during a kaa-node server outage?
Is there a way I can set a timeout in the kaa-client for how long it retries to send an event after failed attempts?
The behaviour of the Kaa SDK on the client side, in case of an Operations server outage or inability to communicate, depends on the failover model used on the client side. The default failover mechanism depends on the Kaa SDK type and platform.
See Kaa Documentation for more information.

Is it possible to broadcast messages in a production PWA using FCM for Web without having a dedicated XMPP server?

This is an architectural question. I haven't implemented FCM yet, but as far as I understand, in a real-world scenario someone needs to deploy an XMPP server which provisions the inventory of registered device tokens.
In my use case I'd like to just broadcast short messages about important update information, like "XY presenter's session at 15:00 got cancelled" and I'm not interested in the device tokens. My application is a Progressive Web App, so I would use FCM for Web.
The demos I saw so far showed a client receiving the device token; then that specific device token was picked up from the debug environment and used to send the demo message to the client, thus bypassing the need for a deployed stand-alone XMPP server, but just for demo purposes.
I want to avoid the use of an XMPP server; I'm not interested in dealing with the device tokens at all, if possible. Firebase's FCM/GCM servers have them anyway. My plan is to pick a single topic name for that channel (the only topic my app would use at this point) and push messages to the devices that listen to that topic. Is this a viable plan? I haven't found any mention of this whatsoever. Firebase knows all the tokens internally, and it would make the architecture simpler if I don't have to deploy a server.
I don't know how the decommissioning/expiration of the device tokens would happen on Firebase's side, but that's another issue I'd have to deal with if I run my own XMPP server and provision tokens.
To send messages to a device (so-called downstream messages), you need to specify the server key. As its name implies, this key should only be present on a server or in some other trusted environment. So to send messages to devices you will need to run code in a trusted environment.
The server doesn't have to speak the XMPP protocol however. You can also just use HTTP to call the FCM servers. But a server will be needed, simply because sending downstream messages can only be done from a server.
For a simple example of sending device-to-device messages with this approach, see my blog post Sending notifications between Android devices with Firebase Database and Cloud Messaging. It's about Android, but the same approach of using the Firebase Database as a message queue will work across all platforms.
The tricky bit to map will be (as you already mentioned) the fact that topics are not available in FCM for the web yet. Last time I tested, you could call a server-side endpoint to subscribe to a topic, as described in this answer: GCM: How do you subscribe a device to a topic?.
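A minimal sketch of that server-side piece with the firebase-admin Python SDK; the credentials path, device token and topic name are placeholders:

```python
# Minimal sketch: subscribing a device token to a topic and then
# broadcasting to the topic, so the server never stores tokens itself.
import firebase_admin
from firebase_admin import credentials, messaging

firebase_admin.initialize_app(
    credentials.Certificate("service-account.json")  # placeholder path
)

# A token still passes through the server once, just to be subscribed;
# afterwards the server addresses the topic, not individual tokens.
messaging.subscribe_to_topic(["DEVICE_TOKEN"], "session-updates")

messaging.send(messaging.Message(
    topic="session-updates",
    notification=messaging.Notification(
        title="Schedule change",
        body="XY presenter's session at 15:00 got cancelled",
    ),
))
```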

Is it possible to improve this zmq architecture?

Intro:
In the below architecture, there are three key components.
Users - machines where the user application runs.
Applications - processes running inside the remote server.
Gateway/Broker - required for isolation between user devices and server applications.
Message flow between the user devices and the server applications should happen as follows:
A user shall transmit a message to the remote server, which will be used by one or more server applications.
An application shall broadcast/publish a message to all connected users.
An application shall send a message to a particular user device (unicast).
In addition, one or more users will connect to or disconnect from the server arbitrarily, and one or more applications will be spawned or terminated arbitrarily.
For the above problem statement, I have designed the below zmq architecture.
The Gateway/Broker handles the arbitrary comings and goings of users and applications and also provides the required isolation. It publishes user messages to all applications. It also aggregates all messages that need to be sent to users from applications, via a SUB socket.
The application sends a two-part message: the first part is the user identity and the second part is the actual message. The Gateway/Broker routes the message to a user based on that identity. A special identity will be reserved for broadcasts; if the gateway receives the broadcast identity, it publishes the message to all users via the PUB socket.
The user connects to both the ROUTER and PUB sockets of the gateway. Data from both sockets is received fair-queued. When sending, a message goes only to the gateway's ROUTER socket, not the PUB socket.
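A minimal sketch of the gateway's event loop with pyzmq, under the design described above; the ports, the assumption that users connect with DEALER sockets, and the broadcast identity are all arbitrary choices:

```python
# Minimal sketch of the Gateway/Broker: ROUTER for user traffic, PUB for
# broadcasts to users, SUB to aggregate application output, and a second
# PUB to fan user messages out to all applications.
import zmq

BROADCAST_ID = b"*"  # special identity marking a broadcast

ctx = zmq.Context()
users = ctx.socket(zmq.ROUTER)   # users connect here (assumed DEALER peers)
users.bind("tcp://*:5555")
fanout = ctx.socket(zmq.PUB)     # broadcasts to all connected users
fanout.bind("tcp://*:5556")
apps_in = ctx.socket(zmq.SUB)    # aggregates messages from applications
apps_in.bind("tcp://*:5557")
apps_in.setsockopt(zmq.SUBSCRIBE, b"")
apps_out = ctx.socket(zmq.PUB)   # publishes user messages to applications
apps_out.bind("tcp://*:5558")

poller = zmq.Poller()
poller.register(users, zmq.POLLIN)
poller.register(apps_in, zmq.POLLIN)

while True:
    for sock, _ in poller.poll():
        if sock is users:
            identity, payload = users.recv_multipart()
            apps_out.send_multipart([identity, payload])   # fan out to apps
        else:
            identity, payload = apps_in.recv_multipart()
            if identity == BROADCAST_ID:
                fanout.send(payload)                       # broadcast to users
            else:
                users.send_multipart([identity, payload])  # unicast to one user
```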
Questions:
Q1: Is there any flaw with above architecture?
Q2: Is it possible to improve it more?
Metrics assumed for Q2:
The users and applications are dynamic in nature; they connect and disconnect on their own, and the design should withstand that.
A user reports its status periodically to the server; the design should keep latency under 333 ms (the WAN connectivity between user and server provides a latency much lower than 333 ms).
Lossless transmission between server and users (ACKing at the backend, retransmission if lost).
You can try Malamute, which gives you what you need and more, like credit-based flow control, keep-alives and tracking.
Malamute is a small broker based on ZeroMQ and part of the ZeroMQ community. You can run Malamute as a component inside your application; you don't need a dedicated service or daemon for it.
If you are using C or C++, it is a no-brainer as it integrates naturally. It also has bindings for many more languages.
https://github.com/zeromq/malamute
