Socket.IO and HTTP concurrency

In our app, we're using socket.io to receive realtime notifications.
Before connecting to the socket, we first fetch all of our previous notifications with an HTTP request, and then connect the socket.
The problem is: what happens if a notification is created between the end of the HTTP response and the moment the socket connects?
In fact, even if we connect the socket before fetching our notifications, without any way to correlate the HTTP response with the socket events we can't tell whether that response already contains a notification we also received over the socket. For example, with an HTTP request for the notification count:
--> connect to the socket
--> HTTP GET for the notification count
<-- new notification emitted on the socket
<-- HTTP answer with the notification count
How do you know whether the HTTP answer already includes the new notification?
Is there a name for this "paradox"?
Or maybe we're approaching this the wrong way and should use a different pattern?

The only way I know of to solve a race condition like this is to have some identifying tag in each notification. The most common one is a tag number that increases with each notification or a time/date stamp (but you may want to make sure you don't get dups with the same time/date stamp).
Then, you can connect with socket.io and it will send you a message with the current last tag number. You can then do an http request or a socket.io message (either one will work) that asks for all messages from the last tag number and earlier (a type of query). Then, you know that you will get all the messages reliably.
So, the sequence of events would be this:
1. Connect on socket.io.
2. Server sends you, over socket.io, the last timestamp used.
3. You send a message to the server asking for all messages from that timestamp and earlier.
4. Server sends you all messages from that timestamp and earlier.
5. Server sends the client any new messages as they are created.
It's possible to combine steps 2-4 so the server automatically sends you all earlier messages as soon as you connect, but you have to make sure this doesn't happen on an auto-reconnect when the existing web page already has all those messages. If you wanted to implement that auto-send behavior, you could use a query parameter on the socket.io connect that tells the server you want it to send you messages that arrived before your connection.
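A minimal client-side sketch of that flow with the current socket.io-client API might look like this; the event names ("lastTag", "fetchBacklog", "backlog", "notification") and the payload shapes are assumptions for illustration, not an existing protocol:

import { io } from "socket.io-client";

interface Notification {
  tag: number;   // monotonically increasing id assigned by the server
  body: string;
}

const seen = new Set<number>();
const socket = io("https://example.com"); // hypothetical server

// Step 2: the server reports the last tag it has used so far.
socket.on("lastTag", (lastTag: number) => {
  // Step 3: ask for everything up to and including that tag.
  socket.emit("fetchBacklog", { upTo: lastTag });
});

// Step 4: the backlog arrives as one batch.
socket.on("backlog", (notifications: Notification[]) => notifications.forEach(show));

// Step 5: anything newer is pushed live.
socket.on("notification", (n: Notification) => show(n));

function show(n: Notification): void {
  if (seen.has(n.tag)) return; // the tag makes de-duplication trivial
  seen.add(n.tag);
  console.log(`#${n.tag}: ${n.body}`);
}

Because every notification carries a tag, the client can safely receive the same notification from both the backlog and the live feed and drop the duplicate.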

Related

HTTP/2 Push promise behavior

I am working on writing a resilient client for HTTP/2.
I am wondering what the client's behavior should be if the server sent a PUSH_PROMISE and then failed to send the push response associated with that PUSH_PROMISE.
I went through the HTTP/2 spec's section on push responses, but it does not state what we should do in such scenarios.
Should we send the original request again if the push response is not received? If the original request was sent successfully, sending it again may cause issues, won't it?
Or should we ignore the PUSH_PROMISE and continue? In that case, say the server promised to send a file and did not send it, what will happen?
Is there a defined way to resolve this ?
The client is certainly free to request the same resource again. Consider, for example, that the server has no way to know if the client is making a simultaneous request for the same resource when the server sends the PUSH_PROMISE.
Client                      Server
------                      ------
HEADERS[sid:1, GET /]
                            HEADERS[sid:1, /], DATA[sid:1], PUSH_PROMISE[sid:2]
HEADERS[sid:3, GET /css]    HEADERS[sid:2, /css], DATA[sid:2]
                            HEADERS[sid:3, /css], DATA[sid:3]
The standard way for the client to then cancel the push would be to reset the promised stream via a RST_STREAM.
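For illustration, here is a minimal sketch with Node's http2 client showing how to consume a pushed stream or refuse it with RST_STREAM; the URL and the alreadyCached() helper are assumptions:

import * as http2 from "node:http2";

const client = http2.connect("https://example.com"); // hypothetical server

// Emitted once for every PUSH_PROMISE the server sends on this session.
client.on("stream", (pushedStream, requestHeaders) => {
  const path = String(requestHeaders[http2.constants.HTTP2_HEADER_PATH]);

  if (alreadyCached(path)) {
    // Refuse the push: close() sends RST_STREAM with the CANCEL error code.
    pushedStream.close(http2.constants.NGHTTP2_CANCEL);
    return;
  }

  pushedStream.on("push", (responseHeaders) => { /* headers of the pushed response */ });
  pushedStream.on("data", (chunk) => { /* body of the pushed response */ });

  // If the promised response never arrives, the client remains free to request
  // the same resource itself, as noted above.
  pushedStream.on("error", () => client.request({ ":path": path }).end());
});

function alreadyCached(path: string): boolean { return false; } // illustration only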
PUSH_PROMISE - All server push streams are initiated via PUSH_PROMISE frames, which signal the server's intent to push the described resources to the client and need to be delivered ahead of the response data that requests the pushed resources. The simplest strategy to satisfy this requirement is to send all PUSH_PROMISE frames, which contain just the HTTP headers of the promised resource, ahead of the parent's response.
PUSH_PROMISE is how HTTP/2 server push is applied: the server attaches a PUSH_PROMISE frame to the response part of a normal, browser-initiated stream. A Response object in the context of a request on an HTTP/2 connection can be used for server push; for example, in an application's Page_Load method you can call Response.PushPromise to push all the relevant scripts, styles, and images without the client having to request each one explicitly.

How to receive TCP packets without using terminator or fixed length message

I am using Spring Integration 2.0.3 with TCP. The application acts as the TCP client: it makes a connection to a third-party tool over TCP, sends a message, waits for the reply, and when that is received (still acting as the client) closes the connection. Now the issue is that the third-party tool can neither add a terminator nor produce fixed-length messages.
As per my understanding, there are three ways to frame a message so the receiver knows where it ends:
1) Always send fixed-size messages
2) Send the message size with each message
3) Use a marker system to separate messages
But I cannot use any of the approaches above. How can my application receive the response message in this scenario? Is it possible?
Is your program supposed to close the connection once you have received the message? Or is the other program supposed to close the connection once it has sent the message to you?
If the latter then it's no problem since you just read until the connection is closed.
If the former, and you can't alter the application protocol and it doesn't already specify these things (is there a specification anywhere?), then wait with a timeout. If you haven't received anything within X seconds consider the full message received and close the connection.
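For illustration, here is a minimal sketch of both options using Node's net module rather than Spring Integration; the host, port, and the 5-second idle timeout are assumptions:

import * as net from "node:net";

const chunks: Buffer[] = [];
let done = false;

const socket = net.connect({ host: "thirdparty.example.com", port: 9000 });

socket.on("connect", () => socket.write("the request message"));

socket.on("data", (chunk) => {
  chunks.push(chunk);
  socket.setTimeout(5000); // (re)arm the idle timer on every chunk received
});

// Case 1: the other side closes the connection once it has sent its reply,
// so "connection closed" means "message complete".
socket.on("end", finish);

// Case 2: we have to close, so treat an idle period as "message complete".
socket.on("timeout", finish);

function finish(): void {
  if (done) return;
  done = true;
  socket.end();
  console.log("reply:", Buffer.concat(chunks).toString("utf8"));
}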

Will the server raise an exception if its HTTP response can't get to the client?

We have an application which creates an order and sends it to the server via an HTTP POST.
1. Client sends the order as an HTTP request
2. Server processes it
3. Server sends the response
4. Server does some further operation on this order
5. The client receives the response and processes it
I've been asked what happens if, in step 3, the response doesn't reach the client and gets lost on the way. The client will then try to re-send the same order, which introduces a duplicate-order problem. How do we tackle this?
I came up with the idea that the client generates a unique ID and sends it to the server, so when the client sends the order a second time, the server knows it's a duplicate and simply returns the previous response.
But then I remembered that HTTP is built upon TCP, which has a three-way handshake for the connection. Which means:
From the client's perspective, if the client doesn't receive any response from the server, the connection is maintained until it times out, and then an exception is thrown so the client knows.
My questions are:
From the server's perspective, after it sends the response, how can it determine that the response has reached the client?
There should be a three-way handshaking connection termination at the transport layer to ensure that the connection is only closed after the client has received the messages, right? So if the message gets lost on the way, the server should raise an exception, am I right?
If this is the case, could the problem simply be solved by ensuring the server only does step 4 if there was no exception in step 3? Is there any other solution if my whole idea above is wrong?
Thanks
The whole idea is wrong. You need to look up idempotence. Basically every transaction needs to be idempotent, which means that applying it twice or more has no more effect than applying it once. This is generally implemented via unique transaction sequence numbers which are recorded at the server when the transaction has been completed.
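For illustration, here is a minimal sketch of that pattern: the client generates a unique ID per order and the server records completed IDs, replaying the stored result on a retry. Express, the /orders route, and the in-memory Map are assumptions:

import express from "express";
import { randomUUID } from "node:crypto";

const app = express();
app.use(express.json());

// client-generated order id -> the response we already produced for it
const completed = new Map<string, { orderId: string; status: string }>();

app.post("/orders", (req, res) => {
  const { clientOrderId } = req.body as { clientOrderId: string };

  const previous = completed.get(clientOrderId);
  if (previous) {
    // Duplicate submission (e.g. the first response was lost in transit):
    // replay the original result instead of creating a second order.
    res.json(previous);
    return;
  }

  const result = { orderId: randomUUID(), status: "created" };
  completed.set(clientOrderId, result); // record it before any further processing
  res.json(result);
});

app.listen(3000);

In a real system the Map would be durable storage, but the idea is the same: applying the request twice has no more effect than applying it once.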

Does SignalR provide message integrity mechanisms which ensure that no messages are lost during client reconnect

Abstract
Hi, I was wondering whether it is possible to lose a message with SignalR. Suppose a client disconnects but eventually reconnects in a short amount of time, for example 3 seconds. Will the client get all of the messages that were sent to it while it was disconnected?
For example, let's consider the LongPolling transport. As far as I'm aware, long polling is a simple HTTP request that is issued in advance by the client in order to wait for a server event.
As soon as a server event occurs, the data is published on that HTTP request, which closes it. After that, the client issues a new HTTP request and the whole loop repeats.
The problem
Suppose two events happened on the server, first A then B (nearly instantly). The client gets message A, which closes the HTTP connection. Now, to get message B, the client has to issue a second HTTP request.
Question
If event B happened while the client was disconnected from the server and trying to reconnect, will the client get message B automatically, or do I have to invent some mechanism to ensure message integrity?
The question applies not only to long-polling but to general situation with client reconnection.
P.S.
I'm using SignalR Hubs on the server side.
EDIT:
I've found out that the order of messages is not guaranteed, but I was not able to make SignalR lose messages.
The answer to this question lies in the EnqueueOperation method here...
https://github.com/SignalR/SignalR/blob/master/src/Microsoft.AspNet.SignalR.Core/Transports/TransportDisconnectBase.cs
protected virtual internal Task EnqueueOperation(Func<object, Task> writeAsync, object state)
{
    if (!IsAlive)
    {
        return TaskAsyncHelper.Empty;
    }

    // Only enqueue new writes if the connection is alive
    Task writeTask = WriteQueue.Enqueue(writeAsync, state);
    _lastWriteTask = writeTask;

    return writeTask;
}
When the server sends a message to a client it calls this method. In your example above, the server would enqueue 2 messages to be sent, then the client would reconnect after receiving the first, then the second message would be sent.
If the server queues and sends the first message and the client reconnects, there is a small window in which the second message could be enqueued while the connection is not alive, and the message would be dropped on the server end. Then, after the reconnect, the client wouldn't get the second message.
Hope this helps

How to determine if the client received the message using SignalR

If I send a message using SignalR, is it possible that the client does not receive the message? How can you verify whether any errors appeared in the communication? I am thinking of sending a message back to the server after the server notification was sent, but is there any better way?
Yes, it's possible that the client doesn't receive the message. SignalR keeps messages in memory for 30 seconds (by default, you can tweak that or use a persistent message bus), so if the client isn't connected for whatever reason and this timeout passes the client will miss the message. Note that if he reconnects within this period he receives all messages he hasn't got yet, including those that were sent when he was disconnected.
I don't know if SignalR provides a way of telling you when a broadcast failed, so it might be safer to just send an acknowledgement back to the server.
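For illustration, here is a minimal sketch of that acknowledgement round trip using the modern @microsoft/signalr TypeScript client (the question predates this client); the hub URL and the "Notify"/"Acknowledge" method names are assumptions:

import * as signalR from "@microsoft/signalr";

const connection = new signalR.HubConnectionBuilder()
  .withUrl("https://example.com/notifications") // hypothetical hub
  .withAutomaticReconnect()
  .build();

connection.on("Notify", async (messageId: string, payload: string) => {
  handle(payload);
  // Tell the server this client really received the message; the server can
  // re-send anything that is never acknowledged within some deadline.
  await connection.invoke("Acknowledge", messageId);
});

connection.start().catch((err) => console.error("connect failed", err));

function handle(payload: string): void {
  console.log("received:", payload);
}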
As long as the client is connected, it will get the messages. You can subscribe to connection state changes in client side code. In server side code you can implement IConnected and IDisconnect interfaces to handle the Connect, Disconnect, and Reconnect events.

Resources