Do MQTT brokers support persistent subscriptions?

I am working on my first IoT POC; the device will typically generate sensor data once per hour or day. I planned an architecture like this:
- 1 shared topic for sensor data input (device-to-backend direction)
- Each device initially subscribes to its own specific topic, i.e. /device/{id}/notification
Now, after the sensor data is submitted to the shared topic, I plan to put the device into deep sleep (the device can only be woken up by a WiFi packet or a timer); in this state the TCP connection to the broker is lost.
Question: after the device wakes up and the TCP connection to the MQTT broker is re-established, will the device receive all messages that the server generated during the out-of-service period, or will those messages be unavailable?

When a client connects to the broker, the CleanSession flag determines whether the broker queues up missed messages of QoS 1 or QoS 2 for it (storing QoS 0 messages is implementation-dependent).
The MQTT 3.1.1 Standard Section 3.1.2.4 specifies that:
If CleanSession is set to 0, the Server MUST resume communications with the Client based on state from the current Session (as identified by the Client identifier). If there is no Session associated with the Client identifier the Server MUST create a new Session. The Client and Server MUST store the Session after the Client and Server are disconnected [MQTT-3.1.2-4]. After the disconnection of a Session that had CleanSession set to 0, the Server MUST store further QoS 1 and QoS 2 messages that match any subscriptions that the client had at the time of disconnection as part of the Session state [MQTT-3.1.2-5]. It MAY also store QoS 0 messages that meet the same criteria
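In client terms this means connecting with CleanSession set to 0 and subscribing at QoS 1 or 2. A minimal sketch with the Eclipse Paho Java client (the broker URL, client id and topic are placeholders for the setup in the question):

```java
import org.eclipse.paho.client.mqttv3.*;

public class DeviceClient {
    public static void main(String[] args) throws MqttException {
        MqttClient client = new MqttClient("tcp://broker.example.com:1883", "device-42");
        client.setCallback(new MqttCallback() {
            public void connectionLost(Throwable cause) { /* reconnect logic */ }
            public void messageArrived(String topic, MqttMessage msg) {
                System.out.println(topic + ": " + new String(msg.getPayload()));
            }
            public void deliveryComplete(IMqttDeliveryToken token) { }
        });

        MqttConnectOptions opts = new MqttConnectOptions();
        opts.setCleanSession(false); // CleanSession=0: the broker keeps the session
                                     // and queues QoS 1/2 messages while we sleep
        client.connect(opts);
        client.subscribe("/device/42/notification", 1); // QoS 1, so misses are queued
    }
}
```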
The problem with a persistent session is that it may queue up a large number of messages, so upon re-connection the client is bombarded with everything it missed. This may be desirable if you need to know the full sequence of readings, or highly undesirable if the client is running on a low-power, battery-fed embedded device.
To address this, MQTT provides another feature: the RETAIN flag on published messages.
The MQTT 3.1.1 Standard Section 3.3.1.3 specifies that:
If the RETAIN flag is set to 1, in a PUBLISH Packet sent by a Client to a Server, the Server MUST store the Application Message and its QoS, so that it can be delivered to future subscribers whose subscriptions match its topic name [MQTT-3.3.1-5]. When a new subscription is established, the last retained message, if any, on each matching topic name MUST be sent to the subscriber [MQTT-3.3.1-6]. If the Server receives a QoS 0 message with the RETAIN flag set to 1 it MUST discard any message previously retained for that topic. It SHOULD store the new QoS 0 message as the new retained message for that topic, but MAY choose to discard it at any time - if this happens there will be no retained message for that topic
This ensures that upon re-connection the client receives only the latest message on a given topic.
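For illustration, publishing with the RETAIN flag might look like this, reusing the connected `client` from the sketch above (the topic and payload are placeholders):

```java
// The broker keeps only this last retained message for the topic, so a device
// waking from deep sleep immediately receives the latest state on subscribe.
MqttMessage msg = new MqttMessage("latest-config".getBytes());
msg.setQos(1);
msg.setRetained(true);
client.publish("/device/42/notification", msg);
```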

Very quickly I found the answer myself: Persistent Session is the answer. I was searching for "persistent subscription" and initially had no luck...
Here, finally, is a great article about my case:
http://www.hivemq.com/blog/mqtt-essentials-part-7-persistent-session-queuing-messages
So yes, what I called a persistent subscription is called a persistent session, and yes, it is possible.

TCP PSH flag is not set for packets for which it should be

As far as I understand from "Is it possible to handle TCP flags with TCP socket?" and what I've read so far, a server application does not handle and cannot access TCP flags at all.
And from what I've read in RFCs the PSH flag tells the receiving host's kernel to forward data from receive buffer to the application.
I've found this interesting read, https://flylib.com/books/en/3.223.1.209/1/, and it mentions: "Today, however, most APIs don't provide a way for the application to tell its TCP to set the PUSH flag. Indeed, many implementors feel the need for the PUSH flag is outdated, and a good TCP implementation can determine when to set the flag by itself."
"Most Berkeley-derived implementations automatically set the PUSH flag if the data in the segment being sent empties the send buffer. This means we normally see the PUSH flag set for each application write, because data is usually sent when it's written."
If my understanding is correct and the TCP stack decides by itself, based on various conditions, when to set the PSH flag, then what can I do if the TCP stack doesn't set the PSH flag when it should?
I have a server application written in Java and a client written in C; there are 1000 clients, each on a separate host, and they all connect to the server. As a keep-alive mechanism, every 60 seconds the server sends each client a request for some info. The response is always smaller than the MTU (1500 bytes), so the response frames should always have the PSH flag set.
At some point a client sent 50 replies to a single request, all of them with the PSH flag not set. The buffer probably filled up before the client had even sent the same reply for the 3rd or 4th time, and the receiving app threw an exception because it read more data from the host's receive buffer than it was expecting.
My question is: what can I do in such a situation if I cannot communicate with the TCP stack at all?
P.S. I know the client should not send more than one reply, but in normal operation all the replies have the PSH flag set, and in this particular situation they didn't, which is not the application's fault.
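For what it's worth, since an application can neither see nor set the PSH flag through the sockets API, receivers usually impose their own message framing on the byte stream rather than relying on segment boundaries. A minimal sketch of length-prefixed framing on the Java side (the 4-byte length prefix is an assumed wire format, not part of the protocol described in the question):

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.net.Socket;

public class FramedReader {
    // Reads exactly one application message, however TCP segmented it.
    public static byte[] readMessage(Socket socket) throws IOException {
        DataInputStream in = new DataInputStream(socket.getInputStream());
        int length = in.readInt();      // the sender writes a 4-byte length first
        byte[] payload = new byte[length];
        in.readFully(payload);          // blocks until all `length` bytes arrive
        return payload;
    }
}
```

With framing like this, a duplicated or coalesced reply shows up as extra complete messages rather than as an over-full read.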

What message persistence guarantees does NATS Streaming provide in cluster and FT modes?

I'm looking for a streaming server with a message persistence guarantee, i.e. where messages published by producers are guaranteed to be durably stored before the server acknowledges the publish to the producer.
My use case requires that we reduce the possibility of losing any produced messages. Producers are able to replay messages if required, but they need to be sure that an ACKed message is durably persisted and will be delivered by the streaming server to the consumers.
NATS Streaming server seems to do something along those lines, but the docs for clustering and fault tolerance don't make it very clear what persistence guarantee is provided in each case. The doc on producer integration confirms that the server actively ACKs published messages, either synchronously or via a callback, but it does not make clear whether the ACK means the message has been durably stored by that point.
The doc on store configuration, specifically the SQL options, briefly mentions that an ACK from the server implies a durable storage guarantee, but it's still not clear how exactly that applies to Clustering and Fault Tolerance and the different persistence backends (files or SQL).
NATS Streaming will have persisted the message before sending the publisher ACK back. The store implementations (filestore/SQL) may use some caching, but regardless, the writes are sync'ed (unless that is disabled) before the ACK is sent back.
However, in cluster mode the filestore sync'ing is disabled, because we rely on the fact that the data is replicated to each node of the cluster, so you would need multiple simultaneous failures to lose the message. (Note that there is an option for the file store implementation to perform an auto-sync at regular intervals: see the auto_sync option.)
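In client terms the guarantee is tied to the publish ACK. A minimal sketch with the java-nats-streaming client (cluster id, client id and subject are placeholders):

```java
import io.nats.streaming.NatsStreaming;
import io.nats.streaming.StreamingConnection;

public class Producer {
    public static void main(String[] args) throws Exception {
        StreamingConnection sc = NatsStreaming.connect("test-cluster", "producer-1");
        // A synchronous publish returns only after the server ACK, i.e. after
        // the message has been persisted (subject to the sync settings above).
        sc.publish("orders", "order-123".getBytes());
        sc.close();
    }
}
```

There is also an asynchronous publish overload taking an AckHandler callback, which receives an error if the server does not ACK within the timeout.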

How do we monitor transactions in transit in the MQ, on both the sending and receiving side, in Corda?

We understand that there is port tear-down during transactions and that different ports may be used when sending messages over to the counterparties. When a node goes down, the messages are still sent, but they are queued in the MQ. Is there a recommended way to monitor these transactions/messages?
Unfortunately, you can't currently monitor these messages.
This is because Artemis does not store its queued messages in a human-readable/queryable format. Instead, the queued messages are stored in the form of a high-performance journal that contains a lot of information required in case the message queue's state needs to be restored after a hard bounce.
I approached this by finding the docs here: https://docs.corda.net/node-administration.html#monitoring-your-node
which illustrate Corda flow metrics visualized using hawtio.
I just needed to download and start up hawtio, connect it to a net.corda.node.Corda process (or the specified node PID), and by going to the JMX tab we could see the messages in the queue.
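Under the covers that is plain JMX, so the same queue MBeans can also be read programmatically. A hypothetical sketch (the JMX service URL/port and the Artemis MBean domain are assumptions about the node's setup, not a documented Corda API):

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class QueueInspector {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint; use whatever JMX address the node exposes.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:7005/jmxrmi");
        try (JMXConnector jmxc = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection conn = jmxc.getMBeanServerConnection();
            // List Artemis MBeans, which include per-queue metrics.
            for (ObjectName name : conn.queryNames(
                    new ObjectName("org.apache.activemq.artemis:*"), null)) {
                System.out.println(name);
            }
        }
    }
}
```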

AMQP, RabbitMQ: how does the Push API work?

I'm trying to gain a deep understanding of how the Push API communication between the client and the RabbitMQ server works.
As far as I know - but correct me if I'm wrong - the client opens a TCP connection to the broker (RabbitMQ) and keeps this connection alive until the client decides to close it. During this connection the client can receive messages immediately.
My question is: during this connection, does the client poll the broker to ask it for messages, or, when the broker routes a message to the queue the client is subscribed to, does it just take that connection and push the data to the client?
First case: the client polls the broker for messages.
Second case: the client doesn't need to poll; the broker just pushes the data.
Or something else?
There are two ways to receive messages:
- The client registers a consumer callback (basicConsume) on the channel; the broker then "pushes" messages to the consumer as they arrive.
- The client sends the broker a basicGet and receives one message (if one is present).
The first case is the most common.
Since you tagged the question with spring-amqp I assume you are interested in Spring. For the first case, Spring AMQP has a listener container (and the @RabbitListener annotation); for the second case, one of the RabbitTemplate receive operations can be used.
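For example, a minimal Spring AMQP sketch of both cases (the queue name and surrounding bean wiring are placeholders):

```java
import org.springframework.amqp.core.Message;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.stereotype.Component;

@Component
public class OrderConsumer {

    // Case 1 (push): the listener container issues basicConsume, and the
    // broker delivers each message to this method as it arrives - no polling.
    @RabbitListener(queues = "orders")
    public void onMessage(String body) {
        System.out.println("pushed: " + body);
    }

    // Case 2 (pull): an explicit basicGet; returns at most one message,
    // or null if the queue is currently empty.
    public String pollOne(RabbitTemplate template) {
        Message m = template.receive("orders");
        return (m == null) ? null : new String(m.getBody());
    }
}
```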
I suggest you look at the tutorials to get a basic understanding. They cover several languages, including pure Java and Spring AMQP.
You can also look at the Spring AMQP Reference Manual.

Reliable WCF service with MSMQ + order processing web application: one-way call delivery

I am trying to implement a reliable WCF service with MSMQ based on this architecture (http://www.devx.com/enterprise/Article/39015).
A message may be lost if the queue is not available (even a cluster doesn't provide zero downtime).
Take a look at this simple order processing workflow:
1. A user enters credit card details and makes a payment.
2. The application receives a success result from the payment gateway.
3. The application sends a message as a "fire and forget"/"one way" call to a backend service via the WCF MSMQ binding.
4. The user is redirected to the "success" page.
5. The message is stored in a REMOTE transactional queue (Windows cluster).
6. The backend service dequeues and processes the message, completes the complex order processing workflow and, as a result, sends an email confirmation to the user.
Everything works as expected.
What I cannot understand is how we can guarantee that all "one way" calls will be delivered to the queue.
Duplex communication is not an option, because the user should be redirected to the result web page ASAP.
Imagine the case where a user receives the "success" page saying "... Your payment was made, your order has started processing, and you will receive email notifications later ..." but the message itself is lost.
How can durability be implemented for step 3?
One possible solution that I can see is:
3a. Create a database record with the transaction details, marked as uncompleted, just to have some record of the transaction. This record can be used as a starting point for processing the lost message in case the message never makes it into the queue.
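For illustration, a minimal sketch of that idea (all names are hypothetical placeholders; in reality the log would be a database table and the send would be the WCF one-way call):

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;

public class OrderSubmitter {
    enum Status { UNCOMPLETED, COMPLETED }

    // Stands in for a durable database table of transaction records.
    private final Map<UUID, Status> transactionLog = new ConcurrentHashMap<>();

    public void submit(UUID orderId, Runnable oneWaySend) {
        transactionLog.put(orderId, Status.UNCOMPLETED); // 3a: record first
        oneWaySend.run();                                // step 3: fire-and-forget enqueue
    }

    // Called once the backend has finished the order workflow (step 6).
    public void markCompleted(UUID orderId) {
        transactionLog.put(orderId, Status.COMPLETED);
    }

    // A periodic job re-submits orders still UNCOMPLETED after a grace period,
    // covering the case where the queued message was lost.
    public void reconcile(Consumer<UUID> resend) {
        transactionLog.forEach((id, status) -> {
            if (status == Status.UNCOMPLETED) resend.accept(id);
        });
    }
}
```

Note that re-submission means the backend may occasionally see the same order twice, so the processing needs to be idempotent.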
I read this post:
The main thing to understand about transactional MSMQ is that there are three distinct transactions involved in a transactional send to a remote queue:
1. The sender writes the message to a local queue.
2. The queue manager on the sender's machine transmits the message across the wire to the queue manager on the recipient machine.
3. The receiver service processes the queue message and then removes the message from the queue.
But it doesn't solve the described issue - as far as I know, WCF netMsmqBinding doesn't use a local queue to send messages to a remote one.
But it doesn't solve the described issue - as far as I know, WCF netMsmqBinding doesn't use a local queue to send messages to a remote one.
Actually this is not correct. MSMQ always sends to a remote queue via a local queue, regardless of whether you are using WCF or not.
If you send a message to a remote queue and then look in Message Queuing in Server Management, you will see under Outbound queues that a queue has been created with the address of the remote queue. This is a temporary queue which is automatically created for you. If the remote queue is for some reason unavailable, the message sits in the local queue until the remote queue becomes available, and then it is transmitted.
So durability is provided by the three distinct transactions:
1. transactionally write the message locally
2. transactionally transmit the message
3. transactionally receive and process the message
There are instances where you may drop messages - for example, if your message processing happens outside the scope of the dequeue transaction - and instances where it is not possible to know whether the processing was successful (e.g. a back-end web service call times out), and of course you could have a badly formed message which will never be processed successfully; but in all of these cases it should be possible to design around the failure mode.
If you're using public queues in a clustered environment then I think there may be more scope for failure, as clustering MSMQ introduces complexity (I have not really used it, so I don't know), so try to avoid that if possible.
