Where is R3 Corda Flow AMQP durable queue storage?

Given a payload that needs to be transmitted to another node during a Corda flow via send or sendAndReceive:
Does it get persisted anywhere, in any form (durable or transient, encrypted or otherwise), during transmission over AMQP/TLS to the peer node?
We ask because of regulations related to GDPR/HIPAA/PDPA in terms of data storage.
What technical approaches can we employ or consider regarding Corda's flow AMQP/Artemis/checkpointing when transmitting a sensitive payload, if we want to avoid storing sensitive data in durable state?

Related

What message persistence guarantee NATS streaming provides in cluster and FT modes?

I'm looking for a streaming server with a message persistence guarantee, i.e. where messages published by producers are guaranteed to be durably stored before the server acknowledges publishing to the producer.
My use case requires that we reduce the possibility of losing any produced messages. Producers are able to replay messages if required, but they need to be sure that an ACKed message is durably persisted and will be delivered by the streaming server to the consumers.
NATS Streaming server seems to do something along those lines, but the docs for clustering and fault tolerance don't make it very clear what persistence guarantee is provided in each case. The doc on producer integration confirms that the server will actively ACK published messages, either synchronously or via callback, but it does not make it clear whether the ACK means that the message has been durably stored at that point.
The doc on store configuration, specifically the SQL options, briefly mentions that the ACK from the server means a durable storage guarantee, but it is still not clear how exactly that applies in the cases of Clustering and Fault Tolerance and with different persistence backends (files or SQL).
NATS Streaming will have persisted the message before sending the publisher ACK back. The store implementations (filestore/SQL) may use some caching, but regardless, the writes are synced (unless disabled) before the ACK is sent back.
However, in cluster mode, filestore syncing is disabled because we rely on the fact that the data is replicated to each node of the cluster, so you would need multiple simultaneous failures to lose the message. (Note that there is an option for the file store implementation to perform an auto-sync at a regular interval: see auto_sync here.)
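To make the ACK-after-persistence behaviour concrete, here is a minimal publisher sketch using the Python NATS Streaming client (stan.py); the cluster id, client id and subject are illustrative, and the key point is that the awaited publish only returns once the server has ACKed the message:

```python
import asyncio
from nats.aio.client import Client as NATS
from stan.aio.client import Client as STAN

async def run():
    nc = NATS()
    await nc.connect("nats://127.0.0.1:4222")

    sc = STAN()
    # "test-cluster" and "publisher-1" are illustrative names.
    await sc.connect("test-cluster", "publisher-1", nats=nc)

    # The await completes only after the server ACKs the message,
    # which per the answer above implies it has been durably stored
    # (or replicated, in cluster mode).
    await sc.publish("readings", b"sensor-payload")

    await sc.close()
    await nc.close()

asyncio.run(run())
```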

HTTP Server-Push: Service to Service, without Browser

I am developing a cloud-based back-end HTTP service that will be exposed for integration with some on-prem systems. Client systems are custom-made by external vendors; they are back-end systems with their own databases. These systems are deployed at companies of our clients; we don't have access to them and don't control them. We provide vendors with our API specifications and they implement the client code.
The data format which my service exchanges with clients is based on XML and follows a certain standard. Vendors implement their client systems in different programming languages, and new vendors will appear over time. I want as many clients as possible to be able to work with my service.
Most of my service API is REST-like: it receives HTTP requests, processes them, and sends back HTTP responses.
Additionally, my service accumulates some data state changes and needs to regularly push this data to the client systems. Because of the limitations below, this use case does not seem to fit the traditional client-server HTTP request-response model.
Due to the nature of the business, the client systems cannot afford to have their own HTTP API endpoints open, and so my service can't establish an outbound HTTP connection to them for delivering data state notifications. I.e., the use of WebHooks is not an option.
At the same time, my service stakeholders need recorded acknowledgment that data state notifications were accepted by the client system; therefore fire-and-forget systems like Amazon SNS don't seem to apply.
I was considering a few approaches to this problem, but I'm not sure if I'm missing some simple options or some technologies that already address the problem. Hence this question.
The question text was updated: options were moved to my own answer.
Related questions and resources
REST API with active push notifications from server to client
Is ReST over websockets possible?
Can we use Web-Sockets for Communication between Microservices?
What is difference between grpc and websocket? Which one is more suitable for bidirectional streaming connection?
https://www.smashingmagazine.com/2018/02/sse-websockets-data-flow-http2/
I eventually found answers to my question, myself and with some help from my team. For people like me who come here with the question "how do I arrange notification delivery from my service to its clients", here's an overview of the available options.
WebHooks
This is when the client opens an endpoint itself. The service calls the client's endpoints whenever it has a notification to deliver. This way the client also acts as a service, and so the client and the service swap roles during notification delivery.
With WebHooks, the client must be able to open the endpoint at a well-known address. This is complicated if the client's software works behind NAT or a firewall, or if the client is a browser or a mobile application.
The service needs to be prepared for the client's WebHook endpoints not always being online and not always being healthy.
Another issue is flow control: special measures should be taken in the service not to overwhelm the client with a high volume of connections, requests and/or data.
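For illustration, here is a minimal sketch of the endpoint a client would expose for WebHooks, using only the Python standard library; the port and the 2xx-as-acknowledgment convention are assumptions for the example:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class WebHookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the notification the service POSTs to us.
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        print("notification received:", body)
        # A 2xx response tells the service that delivery succeeded.
        self.send_response(200)
        self.end_headers()

# Port 8080 is illustrative; behind NAT or a firewall this address may not be
# reachable from the service, which is exactly the limitation described above.
HTTPServer(("", 8080), WebHookHandler).serve_forever()
```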
Polling
In this case, unlike WebHooks, the client is still the client and the service is still the service. The service offers an endpoint where the client can continuously request new notifications. The advantage of this option is that it does not change the connection direction or the request-response direction, and so it works well with HTTP-based services.
The caveat is that the polling API needs reasonably rich semantics to be reliable when loss of notifications is not acceptable. Good examples are Google Pub/Sub pull and Amazon SQS.
Here are a few considerations (a sketch based on Amazon SQS follows the list):
Receiving and deleting a notification should be separate operations. Otherwise, if the service deletes the notification just before giving it to the client and the client fails to process it, the notification is lost forever. When the deletion operation is separate from receiving, the client is forced to delete explicitly, which normally happens after successful processing.
While the client has received a notification but not yet deleted it, it may be undesirable to let the same notification be processed by some other actor (perhaps a concurrent process of the same client). Therefore the notification should be hidden from further receives once it has been received.
If the client fails to delete the notification in reasonable time because of an error, network loss or a process crash, the service has to make the notification visible for receiving again. This is the retry mechanism that allows the notification to ultimately be processed.
If the service has no notifications to deliver, it should block the client's call for some time rather than delivering an empty response immediately. Otherwise, when the client polls in a loop and responses come back immediately, the loop iterations are short and clients make excessive requests to the service, increasing network traffic, parsing load and request counts. A nice-to-have feature is for the service to unblock and respond to the client as soon as some notification appears for delivery. This is sometimes called "long polling".
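As a concrete sketch of these semantics, here is how the receive/delete split, the visibility timeout and long polling look with Amazon SQS via boto3 (the queue URL is an illustrative placeholder):

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/notifications"

while True:
    # Long polling: the call blocks for up to 20s until a message is available.
    resp = sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=1,
        WaitTimeSeconds=20,    # long polling
        VisibilityTimeout=60,  # hide the message from other consumers while we process it
    )
    for msg in resp.get("Messages", []):
        print("processing:", msg["Body"])
        # Delete only after successful processing; otherwise the message
        # becomes visible again after the visibility timeout (the retry).
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```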
HTTP Server-sent Events
With HTTP Server-sent Events, the client opens an HTTP connection and sends a request to the service; the service can then send multiple events (notifications) instead of a single response. The connection is long-lived, and the service can send events as soon as they are ready.
The downside is that the communication is one-way: the client has no way to inform the service that it successfully processed an event. Because this feedback is absent, it may be difficult for the service to control the rate of events so as not to overwhelm the client.
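A minimal client-side sketch of consuming such an event stream in Python with requests (the endpoint URL is hypothetical, and only the "data:" lines of the SSE wire format are handled):

```python
import requests

# Stream events from a hypothetical SSE endpoint and print each "data:" payload.
with requests.get("https://service.example.com/notifications", stream=True) as resp:
    for line in resp.iter_lines(decode_unicode=True):
        if line and line.startswith("data:"):
            print("notification:", line[len("data:"):].strip())
```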
WebSockets
WebSockets were created to enable arbitrary two-way communication, so this is a viable option for the service to send notifications to the client. The client can also send processing confirmations back to the service.
WebSockets have been around for a while and should be supported by many frameworks and languages. A WebSocket connection begins as an HTTP 1.1 connection, and so WebSockets over HTTPS should be supported by many load balancers and reverse proxies.
WebSockets are often used with browsers and mobile clients, and more rarely in service-to-service communication.
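A sketch of the client side with the Python websockets library, acknowledging each notification back to the service (the URL, the JSON message shape and the ack convention are assumptions for the example):

```python
import asyncio
import json
import websockets

def process(event):
    # Placeholder for the client's actual handling of a notification.
    print("processing:", event)

async def consume():
    async with websockets.connect("wss://service.example.com/notifications") as ws:
        async for raw in ws:
            event = json.loads(raw)
            process(event)
            # Explicit application-level ack, so the service gets its
            # recorded acknowledgment of successful processing.
            await ws.send(json.dumps({"ack": event["id"]}))

asyncio.run(consume())
```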
gRPC
gRPC is similar to WebSockets in that it enables arbitrary two-way communication. The advantage of gRPC is that it is centered around protocol and message format definition files. These files are used for code generation, which is essential for client and service developers.
gRPC is used for service-to-service communication, plus it is supported for browser clients with grpc-web.
gRPC is supported in multiple popular programming languages and platforms, yet the support is narrower than for HTTP.
gRPC works on top of HTTP/2 which might cause difficulties with reverse proxies and load balancers around things like TLS termination.
Message queue (PubSub)
Finally, the service and the client can use a message queue as the delivery mechanism for notifications. The service puts notifications on the queue and the client receives them from it. A queue can be provided by one of many systems such as RabbitMQ, Kafka, Celery, Google Pub/Sub, Amazon SQS, etc. There is a wide choice of queuing systems with different properties, and choosing one is a challenge of its own. The queue can also be emulated, for example by using a database.
It has to be decided between the service and the client who owns the queue, i.e. who pays for it. Either way, the queuing system and the queue should be available whenever the service needs to push notifications to it; otherwise notifications will be lost (unless the service buffers them internally, in yet another queue).
Queues are typically used for service-to-service communication, but some technologies also allow browsers as clients.
It is worth noting that an "implicit" internal queue may be used on the service side in the other options listed above. One reason is to prevent the loss of notifications when there is no client available to receive them. There are many other good reasons, like letting clients handle notifications at their own pace, maximizing processing throughput, and handling spiky traffic with fixed capacity.
In this option the queue is used "explicitly" as the delivery mechanism: the service does not put any other mechanism (an HTTP, gRPC or WebSocket endpoint) in front of the queue, and lets the client receive notifications from the queue directly.
Message passing is popular in organizing microservice communications.
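A sketch of the client side with RabbitMQ and the Python pika library, where the explicit ack happens only after successful processing (the broker address and queue name are illustrative):

```python
import pika

# Connect to an illustrative broker and declare a durable notifications queue.
connection = pika.BlockingConnection(pika.ConnectionParameters("broker.example.com"))
channel = connection.channel()
channel.queue_declare(queue="notifications", durable=True)

def on_message(ch, method, properties, body):
    print("processing:", body)
    # Ack only after successful processing; unacked messages are redelivered.
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="notifications", on_message_callback=on_message)
channel.start_consuming()
```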
Common considerations
In all options it has to be decided whether the loss of notifications is tolerable for the service, the client and the business. Simpler technical choices are possible if it is acceptable to lose notifications due to processing errors, unavailability, etc.
It is valuable to monitor client processing errors from the service side. This way service owners know which clients are broken without having to ask them.
If a queue is used (implicitly or explicitly), it is valuable to monitor the length of the queue and the age of the oldest notifications. This lets service owners judge how stale the client's data may be.
If notification delivery is organized so that a notification is deleted only after successful processing by the client, the same notification can get stuck in an infinite receive loop when the client keeps failing to process it. Such a notification is sometimes called a "poison message". Poison messages should be removed by the service or the queuing system to prevent clients from being stuck in an infinite loop. A common practice is to move poison messages to a special place, sometimes called a "dead letter queue", for later human intervention.
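For instance, with RabbitMQ a dead letter queue can be wired up declaratively; in this pika sketch (all names are illustrative), rejecting a message without requeueing routes it to the dead-letter exchange:

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Exchange and queue that will hold poison messages for human inspection.
channel.exchange_declare(exchange="dlx", exchange_type="fanout")
channel.queue_declare(queue="dead-letters")
channel.queue_bind(queue="dead-letters", exchange="dlx")

# Main queue: messages rejected with requeue=False are sent to the DLX.
channel.queue_declare(
    queue="notifications",
    durable=True,
    arguments={"x-dead-letter-exchange": "dlx"},
)

# Later, inside a consumer callback, a poison message would be dead-lettered with:
# ch.basic_nack(delivery_tag=method.delivery_tag, requeue=False)
```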
One alternative to WebSockets for the problem of server→client notifications with acks from the client seems to be gRPC.
It supports two-way communication between server and client via its bidirectional streaming mode (a sketch follows below, after the drawbacks).
It works on top of HTTP/2. In our case, functioning over HTTP ports is essential.
There are client and server generators for multiple popular languages and platforms. A nice thing is that I can share the protocol definition file with vendors and be sure that my service and their clients talk the same language.
Drawbacks:
Not as many languages and platforms are supported compared to HTTP. Alternative C from the question will be more accessible if based on HTTP 1.1. WebSockets have also been around longer, and I would expect broader adoption than for gRPC.
Not all gRPC implementations currently seem to support the XML format for data, according to the FAQ. To transport XML, my service and its clients will have to transfer XML messages as byte arrays inside gRPC protobuf messages.
With gRPC, TLS termination cannot be done on a general-purpose HTTP 1.1 load balancer. An application-layer, HTTP/2-aware reverse proxy (load balancer) such as Traefik is required.
There are approaches like this and this to allow HTTP 1.1 compatible protocols, but they have their own restrictions, such as a limited number of available clients or necessary client customizations.
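As a rough illustration of the bidirectional-streaming pattern, here is a Python server sketch. The notifications_pb2_grpc module, the Notifier service, and the pending_notifications/mark_delivered helpers are all hypothetical: in reality they would come from code generated out of the shared .proto file and from the service's own storage layer.

```python
from concurrent import futures
import grpc

# Hypothetical module generated by grpc_tools.protoc from a proto containing:
#   rpc Subscribe(stream Ack) returns (stream Notification);
import notifications_pb2_grpc

class Notifier(notifications_pb2_grpc.NotifierServicer):
    def Subscribe(self, request_iterator, context):
        # Push each pending notification, then wait for the client's ack
        # before recording it as delivered (hypothetical helpers).
        for notification in pending_notifications():
            yield notification
            ack = next(request_iterator)
            mark_delivered(ack.notification_id)

server = grpc.server(futures.ThreadPoolExecutor(max_workers=4))
notifications_pb2_grpc.add_NotifierServicer_to_server(Notifier(), server)
server.add_insecure_port("[::]:50051")
server.start()
server.wait_for_termination()
```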

When does AkkaHttp backpressure kick in?

Does it kick in when the HTTP response entity is not consumed, when the client's TCP buffer becomes full, or when the rate at which the client drains its TCP buffer is lower than the rate at which the server pushes data?
I am looking for a way to achieve the following:
Let's assume that there is a backpressure-able source of data on the server, such as an Apache Kafka topic.
If I consume this source from a remote location, it may be possible that the rate at which that remote location can consume is lower; this is solved if a Kafka client/consumer is used.
However let's assume that the client is a browser and that exposing direct Kafka protocol / connectivity is not a possibility.
Further, let's assume that it is possible to get all the value even while jumping over some messages.
For instance, in the case of compacted topics, getting only the latest value for each key is enough for a client; there is no need to go through intermediate values.
This would be equivalent to Flowable.onBackpressureLatest() or AkkaStreams.aggregateOnBackpressure or onBackpressureAggregate.
Would there be a way to expose the topic over HTTP REST (e.g. Server-Sent Events / chunked transfer encoding) or over WebSockets that would achieve this effect of skipping over intermediate values for each key?
Please advise, thanks
Akka HTTP supports backpressure based on the TCP protocol very well, and you can read about using it in combination with streaming here.
Kafka consumption and exposure via HTTP with backpressure can easily be achieved with a combination of akka-http, akka-stream and alpakka-kafka.
Kafka consumers need to do polling, and Alpakka handles backpressure by reducing the polling requests.
I don't see the necessity of skipping over messages when backpressure is fully supported. Kafka will keep track of the offset consumed by a consumer group (the one you pick for your service or HTTP connection), and this guarantees eventual consumption of all messages. Of course, if messages are produced into a topic much faster than they are consumed, the consumer will never catch up. Let me know if this is your case.
As a final note, you may check out the Confluent REST Proxy API, which allows you to read Kafka messages in a RESTful manner.
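For completeness, the "keep only the latest value per key" behaviour the question asks about (the per-key analogue of onBackpressureLatest) can be sketched transport-independently in Python; the class and method names are illustrative:

```python
import asyncio

class LatestPerKey:
    """Conflating buffer: keeps only the newest value per key under backpressure."""

    def __init__(self):
        self._latest = {}              # key -> most recent value
        self._ready = asyncio.Event()  # signals the consumer that data is available

    def offer(self, key, value):
        # Fast producer path: overwrite, dropping intermediate values for the key.
        self._latest[key] = value
        self._ready.set()

    async def take(self):
        # Slow consumer path: wait until at least one key has a pending value.
        while not self._latest:
            self._ready.clear()
            await self._ready.wait()
        return self._latest.popitem()  # (key, latest value)
```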

How do we monitor transactions in transit in the MQ on both the sending and receiving side in Corda?

We understand that there are port tear-downs during transactions and that different ports may be used when sending messages over to the counterparties. When a node goes down, the messages are still sent, but they are queued in the MQ. Is there a recommended way to monitor these transactions/messages?
Unfortunately, you can't currently monitor these messages.
This is because Artemis does not store its queued messages in a human-readable/queryable format. Instead, the queued messages are stored in the form of a high-performance journal that contains a lot of information that is required in case the message queue's state needs to be restored from a hard bounce.
I approached this by following the documentation here: https://docs.corda.net/node-administration.html#monitoring-your-node
which illustrates Corda flow metrics visualized using hawtio.
I just needed to download and start hawtio, connect it to the net.corda.node.Corda process (any, or the specified node PID), and by going to the JMX tab I could see the messages in the queue.

Do MQTT brokers support persistent subscriptions?

I am working on my first IoT POC; the device will usually generate sensor data once per hour/day. I planned an architecture like this:
- 1 shared topic for sensor data input (device-to-backend direction)
- Each device will initially subscribe to its own specific topic, i.e. /device/{id}/notification
Now, after the sensor data is submitted to the shared topic, I plan to put the device into deep sleep (the device can only be woken up by a Wi-Fi packet or a timer); in this state the TCP connection to the broker is lost.
Question: After the device wakes up again and the TCP connection to the MQTT broker is re-established, will the device receive all messages generated by the server during the out-of-service period, or will these messages not be available?
When a client connects to the broker, the CleanSession flag enables the broker to queue up missed messages of QoS 1 or QoS 2 (storing QoS 0 messages is implementation-dependent).
The MQTT 3.1.1 Standard Section 3.1.2.4 specifies that:
If CleanSession is set to 0, the Server MUST resume communications with the Client based on state from the current Session (as identified by the Client identifier). If there is no Session associated with the Client identifier the Server MUST create a new Session. The Client and Server MUST store the Session after the Client and Server are disconnected [MQTT-3.1.2-4]. After the disconnection of a Session that had CleanSession set to 0, the Server MUST store further QoS 1 and QoS 2 messages that match any subscriptions that the client had at the time of disconnection as part of the Session state [MQTT-3.1.2-5]. It MAY also store QoS 0 messages that meet the same criteria
The problem with a persistent session is that it may queue up a large number of messages, so upon re-connection the client is bombarded with missed messages. This may be desirable if you need to know the full sequence of readings, or highly undesirable if the client runs on a low-power, battery-fed embedded device.
To address this, MQTT provides another feature: the RETAIN flag in PUBLISH messages.
The MQTT 3.1.1 Standard Section 3.3.1.3 specifies that:
If the RETAIN flag is set to 1, in a PUBLISH Packet sent by a Client to a Server, the Server MUST store the Application Message and its QoS, so that it can be delivered to future subscribers whose subscriptions match its topic name [MQTT-3.3.1-5]. When a new subscription is established, the last retained message, if any, on each matching topic name MUST be sent to the subscriber [MQTT-3.3.1-6]. If the Server receives a QoS 0 message with the RETAIN flag set to 1 it MUST discard any message previously retained for that topic. It SHOULD store the new QoS 0 message as the new retained message for that topic, but MAY choose to discard it at any time - if this happens there will be no retained message for that topic
This ensures that upon re-connection the client receives only the latest message on a given topic.
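Here is a minimal sketch of both mechanisms with the Python paho-mqtt client (1.x API); the broker address, client id and topics are illustrative:

```python
import paho.mqtt.client as mqtt

# Persistent session: clean_session=False plus a stable client id makes the
# broker queue missed QoS 1/2 messages for us while the device is asleep.
client = mqtt.Client(client_id="device-42", clean_session=False)
client.on_message = lambda c, userdata, msg: print(msg.topic, msg.payload)
client.connect("broker.example.com", 1883)

# QoS 1 so the subscription is covered by the stored session state.
client.subscribe("device/42/notification", qos=1)

# Retained message: the broker keeps only the latest payload on the topic
# and hands it to any future subscriber immediately upon subscribing.
client.publish("device/42/state", b"awake", qos=1, retain=True)

client.loop_forever()
```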
Very quickly I found the answer myself: Persistent Session is the answer. I was initially searching for "persistent subscription" and wasn't successful...
Here, finally, is a great article about my case:
http://www.hivemq.com/blog/mqtt-essentials-part-7-persistent-session-queuing-messages
So yes, "persistent subscriptions" are called persistent sessions, and yes, it is possible.
