Pushing files over QUIC? - grpc

I would like to create a mechanism to allow clients to subscribe to the contents of a blob-storage bucket.
Traditionally (HTTP/1.1) this would involve polling for new items, then issuing a GET request for each item.
Mechanisms such as gRPC allow responses to be streamed; however, that particular mechanism relies on each message being loaded into memory, limiting incoming messages to a few MB.
Does a "push" mechanism exist that would allow a server to transmit larger payloads to clients without requiring the client to request that specific payload?

Related

UI Design pattern / technology for monitoring when an async process finishes

My frontend kicks off an async process with an http POST.
By whatever means (Kafka, threading, a database insert with another system monitoring the table), some process completes after an unknown amount of time and finishes in some quantifiable way (you can make an HTTP call and determine whether it's done or not).
Are there any design patterns/technologies for notifying the frontend without it having to make repeated requests to some service?
You can take a look at WebSockets, a bi-directional data channel that is generally used for real-time web applications.
The way you would use it is straightforward: when you make the HTTP POST request and the async process starts on the backend, you also open a WebSocket connection with the front-end for that particular request. When the async process finishes, the backend notifies the front-end through the WebSocket.
You can even use the same WebSocket connection to transport data for multiple requests (initiated by the same user), which is a kind of multiplexing technique.
If you need the overall system to be scalable, you should think about having a cluster of VMs that manage the WebSocket connections (fully separated from the backend of your application).
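As a server-side sketch of this pattern in Go, using the gorilla/websocket package; the /watch route, the job query parameter and the done() completion signal are assumptions:

```go
package main

import (
	"net/http"

	"github.com/gorilla/websocket"
)

var upgrader = websocket.Upgrader{} // default options

// done is a stand-in for your real completion signal; the returned
// channel should be closed when the given job finishes (assumption).
func done(jobID string) <-chan struct{} {
	return make(chan struct{})
}

// The front-end opens ws://.../watch?job=<id> right after the POST.
func watchHandler(w http.ResponseWriter, r *http.Request) {
	conn, err := upgrader.Upgrade(w, r, nil) // switch from HTTP to WebSocket
	if err != nil {
		return
	}
	defer conn.Close()

	jobID := r.URL.Query().Get("job")
	select {
	case <-done(jobID): // async process finished
		conn.WriteMessage(websocket.TextMessage,
			[]byte(`{"job":"`+jobID+`","status":"done"}`))
	case <-r.Context().Done(): // client went away first
	}
}
```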

Sending multiple 5mb binary files over WS vs Http

Does sending large files over a websocket "block" websocket for other messages while the large files are being sent?
Does sending the files via independent Http requests while the other messages continue to be sent over WS have any distinct advantage "in keeping the WS unblocked"?
Assume 1 network card.
In the case of WebSocket over HTTP/1.1, yes, the upload of a large file (in the form of a large WebSocket message) blocks the WebSocket connection.
In the case of WebSocket over HTTP/2 (if supported by both the client and the server), one HTTP/2 stream will upload the large file, and another HTTP/2 stream is used to carry WebSocket messages. In this case, the problem becomes the HTTP/2 flow control window, which may be exhausted by the large upload stream, leaving the WebSocket message stream stalled (so that messages are queued and delayed). Unfortunately, the details of this queueing/delay depend on the client and server implementations, so you have to try.
Typical implementations do a good job of interleaving streams, so the possible stalls are rarely a problem.
For WebSocket over HTTP/1.1, if you open multiple WebSocket connections, you may be able to send files and messages in parallel, using N WebSocket connections for the files, and 1 WebSocket connection for the messages, for example.
Some non-browser clients allow you to open multiple HTTP/2 connections to the same domain, so again you will have the chance to send files and messages in parallel. However, to my knowledge, browsers do not allow more than 1 HTTP/2 connection to the same domain, so the parallelism is there, but constrained by the HTTP/2 flow control window.
Not sure what you mean by "keeping the WS unblocked", but HTTP/1.1 works in the same way as WebSocket as far as its usage of connections is concerned.
If you are in a browser environment, browsers allow 6-8 HTTP connections to the same domain, and typically unlimited (or at least many more) WebSocket connections.
So if you want to send, say, 10 large files, 6-8 of them will be uploaded via HTTP, but the rest will be queued waiting for one of the HTTP connections to finish its previous upload.
Meanwhile, you can use the WebSocket connection to send messages.
In case of HTTP/2, browsers only open 1 connection, so you may use HTTP/2 streams for the uploads and a WebSocket over HTTP/2 stream for the messages, but they will all share the same HTTP/2 flow control window, potentially stalling each other.
All in all, WebSocket has not been designed for large uploads.
I would not be surprised if you hit WebSocket message size limits, as servers cannot allow clients to upload messages of arbitrary size (that would blow up the server memory). The same is true for clients; browsers typically have small limits on the size of WebSocket messages they receive, independently of whether HTTP/1.1 or HTTP/2 is used.
If you really need to upload large files, I think a solution where you upload via HTTP (which allows larger sizes, for example when using multipart/form-data) and keep small messaging via WebSocket is optimal.
The use of HTTP/2 may hit the HTTP/2 flow control window limit, but you have the 6-8 connection limit in HTTP/1.1 too, so again you have to try and see whether you hit any limit, and if you do, which limit it is in which case.
Using HTTP for uploads also makes it less likely that you hit WebSocket message size limits, which are not known in advance and possibly differ from client to client (browser to browser); you don't want to implement your own splitting and merging of large uploads over WebSocket to respect those limits.
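To illustrate the recommended split, here is a minimal Go client sketch that streams one file as multipart/form-data over a plain HTTP request, leaving the WebSocket connection free for messaging; the upload URL and the form field name are assumptions:

```go
package main

import (
	"io"
	"mime/multipart"
	"net/http"
	"os"
)

// uploadFile streams a local file as multipart/form-data without
// buffering it all in memory, so file size is not a concern.
func uploadFile(url, path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	pr, pw := io.Pipe()
	mw := multipart.NewWriter(pw)
	go func() {
		part, err := mw.CreateFormFile("file", path) // "file" is an assumed field name
		if err != nil {
			pw.CloseWithError(err)
			return
		}
		if _, err := io.Copy(part, f); err != nil { // stream the file body
			pw.CloseWithError(err)
			return
		}
		pw.CloseWithError(mw.Close()) // write the closing boundary
	}()

	resp, err := http.Post(url, mw.FormDataContentType(), pr)
	if err != nil {
		return err
	}
	return resp.Body.Close()
}
```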

How to download image via gRPC Gateway?

I want to serve files (images) via gRPC Gateway from a gRPC server. Since protocol buffer messages have structure, I don't see how I could make the gateway send the content of the bytes field of the response message instead of the entire JSON-encoded message. Is there a native solution for this, or does one simply have to write a dedicated HTTP muxer to handle these requests?
Rather than sending arbitrarily-sized files (probably as bytes), it would probably be better to include a URL to the file in the message and then host/serve the file over HTTP (e.g. from S3, Google Cloud Storage etc. from which you could generate signed URLs to limit access).
I think the max message size is 2GB (source?) and the recommendation (Large Data Sets) is to consider alternative techniques once message sizes exceed a few MBs.
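A sketch of the suggested indirection in Go: the RPC returns a short-lived URL instead of raw bytes, and the client downloads over plain HTTP. The proto shape, the generated pb types and the s.signURL helper are all assumptions (imports assumed: context, time):

```go
// Assumed response message:
//   message GetImageResponse {
//     string url = 1;        // signed, time-limited download URL
//     int64 size_bytes = 2;  // optional metadata
//   }
func (s *server) GetImage(ctx context.Context, req *pb.GetImageRequest) (*pb.GetImageResponse, error) {
	// signURL stands in for your storage SDK's presigning call
	// (e.g. S3 presign or GCS signed URLs); it is an assumption here.
	url, size, err := s.signURL(ctx, req.GetName(), 15*time.Minute)
	if err != nil {
		return nil, err
	}
	return &pb.GetImageResponse{Url: url, SizeBytes: size}, nil
}
```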

HTTP Server-Push: Service to Service, without Browser

I am developing a cloud-based back-end HTTP service that will be exposed for integration with some on-prem systems. Client systems are custom-made by external vendors, they are back-end systems with their own databases. These systems are deployed in companies of our clients, we don't have access to them and don't control them. We are providing vendors our API specifications and they implement client code.
The data format my service exchanges with clients is based on XML and follows a certain standard. Vendors implement their client systems in different programming languages, and new vendors will appear over time. I want as many clients as possible to be able to work with my service.
Most of my service API is REST-like: it receives HTTP requests, processes them, and sends back HTTP responses.
Additionally, my service accumulates some data state changes and needs to regularly push this data to client systems. Because of the limitations below, this use case does not seem to fit the traditional client-server HTTP request-response model.
Due to the nature of the business, the client systems cannot afford to have their own HTTP API endpoints open and so my service can't establish an outbound HTTP connection to them for delivering data state notifications. I.e. use of WebHooks is not an option.
At the same time, my service stakeholders need a recorded acknowledgment that data state notifications were accepted by the client system, so fire-and-forget systems like Amazon SNS don't seem to apply.
I was considering a few approaches to this problem, but I'm not sure if I'm missing some simple options or some technologies that already address it. Hence this question.
The question text updated: options moved to my own answer.
Related questions and resources
REST API with active push notifications from server to client
Is ReST over websockets possible?
Can we use Web-Sockets for Communication between Microservices?
What is difference between grpc and websocket? Which one is more suitable for bidirectional streaming connection?
https://www.smashingmagazine.com/2018/02/sse-websockets-data-flow-http2/
I eventually found answers to my question myself, with some help from my team. For people like me who come here asking "how do I arrange notification delivery from my service to its clients", here's an overview of the available options.
WebHooks
This is when the client opens an endpoint itself. The service calls the client's endpoint whenever it has a notification to deliver. This way the client also acts as a service, and so the client and the service swap roles during notification delivery.
With WebHooks, the client must be able to open the endpoint at a well-known address. This is complicated if the client's software works behind NAT or a firewall, or if the client is a browser or a mobile application.
The service needs to be prepared for client WebHook endpoints that may not always be online and may not always be healthy.
Another issue is flow control: the service must take special measures not to overwhelm the client with a high volume of connections, requests and/or data.
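As an illustration, a single WebHook delivery in Go might look like the sketch below; the endpoint URL, payload and content type come from your own registration data and are assumptions here:

```go
package main

import (
	"bytes"
	"context"
	"fmt"
	"net/http"
	"time"
)

// deliver POSTs one notification to a client's WebHook endpoint.
func deliver(ctx context.Context, client *http.Client, url string, body []byte) error {
	ctx, cancel := context.WithTimeout(ctx, 10*time.Second) // don't hang on dead endpoints
	defer cancel()
	req, err := http.NewRequestWithContext(ctx, http.MethodPost, url, bytes.NewReader(body))
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", "application/xml")
	resp, err := client.Do(req)
	if err != nil {
		return err // endpoint offline or unreachable: schedule a retry
	}
	defer resp.Body.Close()
	if resp.StatusCode/100 != 2 {
		return fmt.Errorf("webhook returned %s", resp.Status) // unhealthy endpoint
	}
	return nil // recorded as acknowledged
}
```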
Polling
In this case the client is still the client and the service is still the service, unlike with WebHooks. The service offers an endpoint where the client can continuously request new notifications. The advantage of this option is that it does not change the connection direction or the request-response direction, so it works well with HTTP-based services.
The caveat is that the polling API should have reasonably rich semantics to be reliable if loss of notifications is not acceptable. Good examples are Google Pub/Sub pull and Amazon SQS.
Here are a few considerations:
Receiving and deleting a notification should be separate operations. Otherwise, if the service deletes the notification just before giving it to the client and the client fails to process it, the notification is lost forever. When the deletion operation is separate from receiving, the client is forced to delete explicitly, which normally happens after successful processing.
While the client has received the notification but not yet deleted it, it might be undesirable to let the same notification be processed by some other actor (perhaps a concurrent process of the same client). Therefore the notification must be hidden from further receives after it is first received.
If the client fails to delete the notification in reasonable time because of an error, network loss or a process crash, the service has to make the notification visible for receiving again. This is the retry mechanism that allows the notification to ultimately be processed.
If the service has no notifications to deliver, it should block the client's call for some time rather than delivering an empty response immediately. Otherwise, if the client polls in a loop and the response comes back immediately, each loop iteration is short and clients make excessive requests to the service, increasing network traffic, parsing load and request counts. A nice-to-have feature is for the service to unblock and respond to the client as soon as a notification appears for delivery. This is sometimes called "long polling".
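A minimal long-polling endpoint sketch in Go; the Notification shape and the pending channel feeding it are assumptions, and the separate receive/delete semantics described above are only hinted at in comments:

```go
package main

import (
	"encoding/json"
	"net/http"
	"time"
)

// Notification and Service internals are assumptions for this sketch.
type Notification struct {
	ID   string `json:"id"`
	Body string `json:"body"`
}

type Service struct {
	pending chan Notification // fed by the notification producer
}

// handlePoll blocks for up to 30s waiting for a notification ("long polling").
// It returns the notification without deleting it; the client acknowledges
// by calling a separate delete endpoint after successful processing.
func (s *Service) handlePoll(w http.ResponseWriter, r *http.Request) {
	select {
	case n := <-s.pending:
		// A real implementation would also start a visibility timeout here,
		// re-queueing n if no delete arrives in time (the retry mechanism).
		json.NewEncoder(w).Encode(n)
	case <-time.After(30 * time.Second): // long-poll timeout expired
		w.WriteHeader(http.StatusNoContent) // nothing to deliver right now
	case <-r.Context().Done():
		// client disconnected; the notification stays in the queue
	}
}
```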
HTTP Server-sent Events
With HTTP Server-sent Events, the client opens an HTTP connection and sends a request to the service; the service can then send multiple events (notifications) instead of a single response. The connection is long-lived and the service can send events as soon as they are ready.
The downside is that the communication is one-way: the client has no way to inform the service whether it successfully processed an event. Because this feedback is absent, it may be difficult for the service to control the rate of events to avoid overwhelming the client.
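A minimal SSE endpoint sketch in Go; the events channel feeding it is an assumption:

```go
package main

import (
	"fmt"
	"net/http"
)

var events = make(chan string) // fed by the notification producer (assumed)

// handleEvents keeps the response open and writes one SSE frame per event.
func handleEvents(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "text/event-stream")
	w.Header().Set("Cache-Control", "no-cache")
	flusher, ok := w.(http.Flusher)
	if !ok {
		http.Error(w, "streaming unsupported", http.StatusInternalServerError)
		return
	}
	for {
		select {
		case ev := <-events:
			fmt.Fprintf(w, "data: %s\n\n", ev) // SSE wire format
			flusher.Flush()                    // push to the client immediately
		case <-r.Context().Done():
			return // client disconnected
		}
	}
}
```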
WebSockets
WebSockets were created to enable arbitrary two-way communication, so this is a viable option for the service to send notifications to the client. The client can also send processing confirmations back to the service.
WebSockets have been around for a while and should be supported by many frameworks and languages. A WebSocket connection begins as an HTTP/1.1 connection, so WebSockets over HTTPS should be supported by many load balancers and reverse proxies.
WebSockets are often used with browser and mobile clients and more rarely in service-to-service communication.
gRPC
gRPC is similar to WebSockets in that it enables arbitrary two-way communication. The advantage of gRPC is that it is centered around protocol and message format definition files, which are used to generate code for both client and service developers.
gRPC is used for service-to-service communication, and it is also supported for browser clients with grpc-web.
gRPC is supported on multiple popular programming languages and platforms, yet the support is narrower than for HTTP.
gRPC works on top of HTTP/2 which might cause difficulties with reverse proxies and load balancers around things like TLS termination.
Message queue (PubSub)
Finally, the service and the client can use a message queue as the delivery mechanism for notifications. The service puts notifications on the queue and the client receives them from it. A queue can be provided by one of many systems such as RabbitMQ, Kafka, Celery, Google Pub/Sub, Amazon SQS, etc. There's a wide choice of queuing systems with different properties, and choosing one is a challenge of its own. The queue can also be emulated, for example by using a database.
It has to be decided between the service and the client who owns the queue, i.e. who pays for it. Either way, the queuing system and the queue should be available whenever the service needs to push notifications to it; otherwise notifications are lost (unless the service buffers them internally, in yet another queue).
Queues are typically used for service-to-service communication, but some technologies also allow browsers as clients.
It is worth noting that an "implicit" internal queue might be used on the service side in the other options listed above. One reason is to prevent loss of notifications when there's no client available to receive them. There are many other good reasons, such as letting clients handle notifications at their own pace, maximizing processing throughput, and handling spiky traffic with fixed capacity.
In this option the queue is used "explicitly" as the delivery mechanism: the service does not put any other mechanism (an HTTP, gRPC or WebSocket endpoint) in front of the queue and lets the client receive notifications from the queue directly.
Message passing is popular in organizing microservice communications.
Common considerations
In all options it has to be decided whether the loss of notifications is tolerable for the service, the client and the business. Some simpler technical choices are possible if it is OK to lose notifications due to processing errors, unavailability, etc.
It is valuable to have monitoring of client processing errors on the service side. This way service owners know which clients are more broken without having to ask them.
If a queue is used (implicitly or explicitly), it is valuable to monitor its length and the age of the oldest notification. This lets service owners judge how stale the data in the client may be.
If delivery is organized so that a notification is deleted only after successful processing by the client, the same notification can get stuck in an infinite receive loop when the client consistently fails to process it. Such a notification is sometimes called a "poison message". Poison messages should be removed by the service or the queuing system to prevent clients from being stuck in an infinite loop. A common practice is to move poison messages to a special place, sometimes called a "dead letter queue", for later human intervention.
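A sketch of attempt counting with a dead-letter queue; the Queue type, its channels and the threshold of 5 are all assumptions:

```go
// Notification carries a delivery-attempt counter for retry accounting.
type Notification struct {
	ID       string
	Attempts int
}

type Queue struct {
	pending    chan Notification // visible for receiving
	deadLetter chan Notification // parked for human intervention
}

// redeliver makes a notification visible again after a failed or
// timed-out processing attempt, dead-lettering it after 5 failures.
func (q *Queue) redeliver(n Notification) {
	n.Attempts++
	if n.Attempts > 5 {
		q.deadLetter <- n // poison message: stop the retry loop
		return
	}
	q.pending <- n
}
```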
One alternative to WebSockets for the problem of server→client notifications with acks from the client seems to be gRPC.
It supports bidirectional communication between server and client in bidirectional streaming mode.
It works on top of HTTP/2. In our case, functioning over HTTP ports is essential.
There are client and server generators for multiple popular languages and platforms. A nice thing is that I can share protocol definition file with vendors and can be sure my service and their clients will talk the same language.
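As an illustration, a bidirectional-streaming handler in Go might look like the sketch below; the proto (rpc Notify(stream Ack) returns (stream Notification)), the generated pb stubs and the service internals (s.outgoing, s.markDelivered) are all assumptions:

```go
// The server pushes notifications on the stream and reads acks coming
// back on the same stream, recording each acknowledgment.
func (s *server) Notify(stream pb.Notifier_NotifyServer) error {
	done := make(chan struct{})
	go func() { // reader goroutine: acks sent back by the client
		defer close(done)
		for {
			ack, err := stream.Recv()
			if err != nil {
				return // client closed the stream or errored
			}
			s.markDelivered(ack.GetNotificationId()) // recorded acknowledgment
		}
	}()
	for {
		select {
		case n := <-s.outgoing: // notifications waiting for delivery (assumed)
			if err := stream.Send(&pb.Notification{Id: n.ID, Xml: n.Payload}); err != nil {
				return err
			}
		case <-done:
			return nil // client is gone; stop sending
		case <-stream.Context().Done():
			return stream.Context().Err()
		}
	}
}
```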
Drawbacks:
Not as many languages and platforms are supported compared to HTTP. Alternative C from the question would be more accessible if based on HTTP/1.1. WebSockets have also been around longer, and I would expect broader adoption than gRPC.
Not all gRPC implementations currently seem to support XML as a data format, according to the FAQ. To transport XML, my service and its clients will have to transfer XML messages as byte arrays inside gRPC protobuf messages.
With gRPC, TLS termination cannot be done on a general-purpose HTTP/1.1 load balancer. An application-layer, HTTP/2-aware reverse proxy (load balancer) such as Traefik is required.
There are approaches like this and this to allow HTTP 1.1-compatible protocols, but they have their own restrictions, like a limited number of available clients or necessary client customizations.

Why was HTTP designed to be a pull protocol?

I was watching many presentations about HTML5 WebSockets, where the server can initiate communication with the client and push data without a request from the client.
We don't need polling, etc.
And I am curious: why was HTTP designed as a "pull" protocol and not a full-duplex protocol in the first place? What were the reasons behind that kind of decision?
Because when HTTP was first designed, it was meant to be used to retrieve documents from a server, and the easiest way to do that is for the client to ask the server for a document and get it delivered as the response (or an error in case it does not exist). With a push protocol, the server would need to keep client connections around for potentially a long time, creating more resource-management problems. Remember, we are talking about the early 1990s here.
HTTP was designed for simply retrieving hypertext documents from a server. There were no reasons to push anything to the client when the pages were just pure, static HTML without scripting capabilities.
Since there was no need at the time to push things to the client, the protocol was kept simple.
HTTP is mainly a pull protocol: someone loads information on a Web server and users use HTTP to pull the information from the server at their convenience. In particular, the TCP connection is initiated by the machine that wants to receive the file.
