How are Server-Sent Events supposed to be used in an "update list" scenario? - server-sent-events

What I want to achieve is the following:
There is a list of things and the server wants to notify the client about changes to that list.
Which of the following setups is recommended, and why?
1. I send an SSE request from the client to the server and the server sends updates of the list as events. The server doesn't close the connection; it is closed when the client leaves the page the list is on (a server-side sketch of this setup follows below).
2. I send an SSE request from the client to the server. The server sends an event when the list is updated, then closes the connection. The client then (after processing the event) sends a new request to the server and waits for another event.
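For reference, here is a minimal Go sketch of the first setup: one long-lived SSE connection per client, with the server writing an event on every list change until the client navigates away. The listUpdates feed and the "list-update" event name are hypothetical stand-ins, not anything prescribed by the question.

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // listUpdates stands in for whatever actually mutates the list;
    // here it just emits a fake update every few seconds.
    func listUpdates() <-chan string {
        ch := make(chan string)
        go func() {
            for i := 0; ; i++ {
                time.Sleep(5 * time.Second)
                ch <- fmt.Sprintf(`{"version": %d}`, i)
            }
        }()
        return ch
    }

    func listEvents(w http.ResponseWriter, r *http.Request) {
        flusher, ok := w.(http.Flusher)
        if !ok {
            http.Error(w, "streaming unsupported", http.StatusInternalServerError)
            return
        }
        w.Header().Set("Content-Type", "text/event-stream")
        w.Header().Set("Cache-Control", "no-cache")

        updates := listUpdates()
        for {
            select {
            case u := <-updates:
                // One SSE event per change; the connection stays open.
                fmt.Fprintf(w, "event: list-update\ndata: %s\n\n", u)
                flusher.Flush()
            case <-r.Context().Done():
                // The browser closed the EventSource (user left the page).
                return
            }
        }
    }

    func main() {
        http.HandleFunc("/list/events", listEvents)
        http.ListenAndServe(":8080", nil)
    }

On the browser side this pairs with a single new EventSource("/list/events") that is closed when the user leaves the page, which is what makes the first setup simpler than reopening a connection after every event.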

Related

Process Golang HTTP requests when the client closes the connection or cancels the request

Problem Statement:
Whenever the customer closes the browser or the client closes the connection, the golang mux HTTP handler cancels the request with a context cancelled error.
Expectation:
On certain routes, if the customer closes the browser or drops off, we want to keep processing the request rather than cancel it.
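One way to get that behaviour in Go is to detach the slow work from the request context, so a client disconnect no longer cancels it. A minimal sketch, assuming Go 1.21+ for context.WithoutCancel; the handler and processOrder names are hypothetical:

    package main

    import (
        "context"
        "log"
        "net/http"
        "time"
    )

    // processOrder stands in for the real processing; it keeps running
    // even if the customer has already closed the browser.
    func processOrder(ctx context.Context, id string) {
        time.Sleep(5 * time.Second) // simulate slow work
        log.Printf("processed %s (ctx err: %v)", id, ctx.Err())
    }

    func handler(w http.ResponseWriter, r *http.Request) {
        // r.Context() is cancelled when the client goes away. Derive a
        // context that keeps its values but ignores that cancellation.
        ctx := context.WithoutCancel(r.Context())

        go processOrder(ctx, r.URL.Query().Get("id"))

        w.WriteHeader(http.StatusAccepted)
    }

    func main() {
        http.HandleFunc("/orders", handler)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }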

How to queue API requests with RabbitMQ

I would like to queue requests made by a mobile application that uses an API to send some data to the server.
The scenario for now is like this:
The mobile app sends a request with some data.
I need to get the data, validate it (a few DB queries) and save it to a few tables in the DB.
I need to return an OK response to the mobile app, or a bad request with an error list in case validation has failed.
Now, if I get 1,000 requests like this in 3 seconds, my server will collapse.
I would like to use RabbitMQ to queue those requests. But what should I do with the response? I cannot send OK after RabbitMQ has received the message because I don't know whether validation will pass. So should the mobile app wait until the RabbitMQ message has been properly consumed?
This could be a solution to your problem:
The client sends a request
The server queues the request and generates a unique identifier that belongs to the queued request, then sends a response containing the generated identifier with a 202 (Accepted) status code, meaning the request has been queued on the server but there is no result yet (see the sketch after the tips below).
The client subscribes to the generated identifier on a message broker.
After a queued request finishes on the server, the server publishes a response to the message broker under that request's identifier.
The client receives the published response on the identifier it subscribed to.
Tips:
I use EMQTT for the message broker. Another option would be the RabbitMQ MQTT plugin.
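To make the 202 step concrete, here is a rough Go sketch of the submit endpoint. The enqueue function is a hypothetical stand-in for publishing to RabbitMQ, and the "responses/<id>" topic name is an assumption, not part of the answer above.

    package main

    import (
        "encoding/json"
        "io"
        "log"
        "net/http"

        "github.com/google/uuid"
    )

    // enqueue stands in for publishing the payload to RabbitMQ together with
    // the correlation id; a real implementation would use an AMQP client.
    func enqueue(id string, payload []byte) error { return nil }

    func submit(w http.ResponseWriter, r *http.Request) {
        payload, err := io.ReadAll(r.Body)
        if err != nil {
            http.Error(w, err.Error(), http.StatusBadRequest)
            return
        }

        id := uuid.NewString() // identifier the client will subscribe to

        if err := enqueue(id, payload); err != nil {
            http.Error(w, "queueing failed", http.StatusInternalServerError)
            return
        }

        // 202 Accepted: queued, but not validated yet. The worker later
        // publishes the validation result to e.g. the "responses/<id>"
        // MQTT topic, which the mobile app is subscribed to.
        w.Header().Set("Content-Type", "application/json")
        w.WriteHeader(http.StatusAccepted)
        json.NewEncoder(w).Encode(map[string]string{"id": id})
    }

    func main() {
        http.HandleFunc("/data", submit)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }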

Why and how SSE (Server-Sent Events) are unidirectional

https://developer.mozilla.org/en-US/docs/Web/API/EventSource
The EventSource interface is web content's interface to server-sent events. An EventSource instance opens a persistent connection to an HTTP server, which sends events in text/event-stream format. The connection remains open until closed by calling EventSource.close().
From what I understand, server-sent events require a persistent HTTP connection (Connection: keep-alive), so the connection is kept alive similarly to WebSockets.
If the connection is persistent, why are server-sent events unidirectional? WebSocket connections are persistent as well.
In that case, what happens if I send a request to my HTTP service while a persistent connection is already open due to EventSource? Will it re-use the HTTP connection opened by EventSource or open a new one?
If it re-uses the connection opened by EventSource, how is it considered unidirectional?
This might be trivial, but I had to ask because it is not clear: nothing mentions what happens to subsequent HTTP requests when there is an existing connection opened by EventSource.
For example, it seems possible to me to implement a centralized chat app using SSE:
User 1 sends a message to User 2 (by sending it to the HTTP server). The server sends an event to User 2 with the new message; User 2 sends another request to the HTTP server with a message for User 1; the server sends an event to User 1.
How is that not considered bi-directional?
Related:
What's the behavioral difference between HTTP Stay-Alive and Websockets?
SSE is unidirectional because when you open an SSE connection, only the server can send data to the client (browser, etc.). The client cannot send any data. SSE is a bit older than WebSockets, which may explain the difference in unidirectional versus bi-directional support between these two technologies.
In your use case, if you open an SSE connection (which is an HTTP connection), only the server will be able to send data. If you wish to send a request to your HTTP service, you will need to open a new "classical" HTTP connection. You will see your browser opening two HTTP connections: one for the SSE connection and one for the classical, short-lived HTTP request.
You can implement a chat with SSE. You can have an SSE connection (hence HTTP) to let the user receive messages from the server, and you can use POST HTTP requests to let the user send his/her messages (a server-side sketch of this follows below).
Note that most browsers can open around 6 HTTP/1.x connections to the same host. So, if you use 1 SSE connection, only 5 HTTP/1.x connections potentially remain. This is only true with HTTP/1.x. With HTTP/2, connections to the same host are multiplexed, so in theory you can send as many HTTP requests, or open as many SSE connections, at the same time as you wish, bypassing the limit of 6 connections.
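A minimal Go sketch of that chat pattern, with one SSE endpoint for receiving and one POST endpoint for sending. The in-memory per-user channels and the endpoint names are assumptions for illustration and only work on a single server.

    package main

    import (
        "fmt"
        "net/http"
        "sync"
    )

    var (
        mu    sync.Mutex
        users = map[string]chan string{} // per-user message channels (in-memory)
    )

    func channelFor(user string) chan string {
        mu.Lock()
        defer mu.Unlock()
        ch, ok := users[user]
        if !ok {
            ch = make(chan string, 16) // small buffer; a full buffer blocks the sender
            users[user] = ch
        }
        return ch
    }

    // GET /events?user=alice — the SSE (receive-only) half of the chat.
    func events(w http.ResponseWriter, r *http.Request) {
        flusher, ok := w.(http.Flusher)
        if !ok {
            http.Error(w, "streaming unsupported", http.StatusInternalServerError)
            return
        }
        w.Header().Set("Content-Type", "text/event-stream")
        w.Header().Set("Cache-Control", "no-cache")

        ch := channelFor(r.URL.Query().Get("user"))
        for {
            select {
            case msg := <-ch:
                fmt.Fprintf(w, "data: %s\n\n", msg)
                flusher.Flush()
            case <-r.Context().Done(): // client closed the EventSource
                return
            }
        }
    }

    // POST /send?to=bob&msg=hi — the "classical" HTTP half used to send.
    func send(w http.ResponseWriter, r *http.Request) {
        channelFor(r.URL.Query().Get("to")) <- r.URL.Query().Get("msg")
        w.WriteHeader(http.StatusNoContent)
    }

    func main() {
        http.HandleFunc("/events", events)
        http.HandleFunc("/send", send)
        http.ListenAndServe(":8080", nil)
    }

Each user keeps one SSE connection open for receiving, and every outgoing message is a separate short-lived POST, which is exactly why the SSE connection itself stays unidirectional.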
You can have a look at this article (https://streamdata.io/blog/push-sse-vs-websockets/) and this video (https://www.youtube.com/watch?v=NDDp7BiSad4) to get an insight about this technology and whether it could fit your needs. They summarize pros & cons of both SSE and WebSockets.

Does the communication channel between client and server break if we introduce a load balancer in SignalR?

I am new to SignalR and I have a question about SignalR communication when we introduce a load balancer.
Let's assume we want to execute a void method on the server side which receives some data as a parameter from the client. The server takes that data and processes it further. Let's say after processing for a while, it identifies that it has to send a notification back to the client.
Case 1 (between client and server): The client calls the void method on the server side (Hub), passing some data. The connection gets disconnected. The server processes the client data further. When it identifies that it has to push the notification back to the client, it recreates the connection and pushes the data back to the client.
Case 2 (between client and server with a load balancer in between): How does the above scenario (Case 1) work here? When the server sends the push notification back to the load balancer after processing the client data, how does it know which client it has to send the notification back to?
You should read the scaleout docs. Short version: messages get sent to all servers, so if the client reconnects (it's not the server that establishes the connection!) before the connection times out, it will get the message.
Quote from the docs:
The cursor mechanism works even if a client is routed to a different server on reconnect. The backplane is aware of all the servers, and it doesn't matter which server a client connects to.

Doubt about browser-server interaction

Suppose I click on a link to website A on a page and, just before the current page gets replaced, I click on a different link to a different website, say B.
What happens to the request that was sent to website A? Does the web server of site A still reply, and does the browser just reject the HTTP reply?
There is no specific HTTP provision for canceling a request. I would expect this to happen at the socket level.
I would expect the associated TCP socket to be closed immediately upon canceling the request. Since HTTP uses only one socket, the server will see the close after the request. If the close is processed before the data is generated, the generated data won't be sent down to the client. Otherwise the data is sent to the client and ignored, since the socket is closed. There may be wasted work, but a special HTTP message to "cancel" would have the same effect.
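In Go terms, for example, that socket-level close surfaces as a cancelled request context, which the handler can use to stop the wasted work. A small sketch with the slow response generation simulated:

    package main

    import (
        "fmt"
        "log"
        "net/http"
        "time"
    )

    func slow(w http.ResponseWriter, r *http.Request) {
        select {
        case <-time.After(10 * time.Second): // simulate generating the reply
            fmt.Fprintln(w, "done") // client still connected: reply is sent
        case <-r.Context().Done():
            // The browser navigated away and the socket was closed before
            // the reply was generated; nothing is sent and work stops here.
            log.Println("request canceled:", r.Context().Err())
        }
    }

    func main() {
        http.HandleFunc("/slow", slow)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }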
