In synchronous communication, the server sends a message to a client and waits for a response. The program is blocked until the response message is returned to the server.
In asynchronous communication, the server sends a message to a client and also expects a response, but while waiting the server program is not blocked and can continue to execute. When the response comes back, the server receives it and can send the next message.
My question is: does synchronous communication mean one server can only communicate with one client, whereas asynchronous communication allows one server to communicate with multiple clients?
I have read a few articles and tried to understand, but I am still confused about synchronous and asynchronous communication. I hope someone can help; thanks in advance.
I understand an HTTP request will result in a response with a code and optional body.
If we call the originator of the request the 'client' and the recipient of the request the 'server', then the sequence is:
1. Client sends request
2. Server receives request
3. Server sends response
4. Client receives response
Is it possible for the server to complete step 3 but for step 4 not to happen (due to a dropped connection, an application error, etc.)?
In other words: is it possible for the Server to 'believe' the client should have received the response, but the client for some reason has not?
The network is inherently unreliable. You can only know for sure that a message arrived if the other party has acknowledged it; you can never know for sure that it did not arrive.
Worse, with HTTP the only acknowledgement of the request is the response, and there is no acknowledgement of the response. That means:
The client knows the server has processed the request if it got the response. If it does not, it does not know whether the request was processed.
The server never knows whether the client got the answer.
The TCP stack does normally acknowledge the response when closing the socket, but that information is not propagated to the application layer, and it would not be useful there anyway: the stack can acknowledge receipt and the application might still fail to process the message because it crashes (or power fails, or similar). From the application's perspective it does not matter whether the failure happened in the TCP stack or above it; either way the message was not processed.
The easiest way to handle this is to use idempotent operations: if the server gets the same request again, it has no additional side effects and the response is the same. That way, if the client times out waiting for the response, it simply sends the request again and will eventually (unless the connection is torn down never to be restored) get a response, and the request will be completed.
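As a rough illustration of that retry pattern, here is a minimal C# sketch. The URL and the five-second timeout are placeholders, and it assumes the operation behind the request really is idempotent.

using System;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

class IdempotentRetryClient
{
    // Resend the same idempotent request until a response arrives. Because the
    // operation is idempotent, the server processing it more than once is harmless.
    static async Task<string> GetWithRetryAsync(HttpClient client, string url)
    {
        while (true)
        {
            try
            {
                using var cts = new CancellationTokenSource(TimeSpan.FromSeconds(5));
                HttpResponseMessage response = await client.GetAsync(url, cts.Token);
                return await response.Content.ReadAsStringAsync();
            }
            catch (OperationCanceledException)
            {
                // Timed out: we do not know whether the server processed the
                // request, so we simply send it again.
            }
            catch (HttpRequestException)
            {
                // The connection failed or was dropped; retry as well.
            }
        }
    }
}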
If idempotence is not an option, you need to record the executed requests and eliminate duplicates on the server, because no network protocol can do that for you: it can eliminate many duplicates (as TCP does), but not all.
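And a minimal sketch of that server-side de-duplication in C#, assuming the client attaches a unique request id to every request (the id scheme and the names here are made up for illustration):

using System;
using System.Collections.Concurrent;

class DuplicateFilter
{
    // Remembers which request ids have already been executed.
    private readonly ConcurrentDictionary<string, bool> _seen =
        new ConcurrentDictionary<string, bool>();

    // Runs the operation only the first time a given request id is seen;
    // a replay of the same request is ignored.
    public bool ExecuteOnce(string requestId, Action operation)
    {
        if (!_seen.TryAdd(requestId, true))
        {
            return false; // duplicate: this request was already executed
        }
        operation();
        return true;
    }
}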
There is a specific section on exactly this point in the HTTP specification, RFC 7230 §6.6 "Tear-down":
(...)
If a server performs an immediate close of a TCP connection, there is a significant risk that the client will not be able to read the last HTTP response.
(...)
To avoid the TCP reset problem, servers typically close a connection in stages. First, the server performs a half-close by closing only the write side of the read/write connection. The server then continues to read from the connection until it receives a corresponding close by the client, or until the server is reasonably certain that its own TCP stack has received the client's acknowledgement of the packet(s) containing the server's last response. Finally, the server fully closes the connection.
So yes, this "response sent" step is quite a complex affair.
Check, for example, the Lingering Close section in the Apache 2.4 documentation, or the FIN_WAIT/FIN_WAIT_2 pages for Apache 2.0.
So a good HTTP server should keep the socket open long enough to be reasonably certain that everything is OK on the client side. But if you really need to acknowledge something in a web application, you should use a callback (an image callback, an Ajax callback) asserting that the response was fully loaded in the client browser, i.e. another HTTP request. That means the exchange is not atomic, or at least not transactional in the way you might expect from a relational database: you have to add another request from the client, which you may never get (for example because the server crashed before receiving the acknowledgement), and so on.
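For reference, the staged close quoted from the RFC looks roughly like this at the socket level. This is only a minimal C# sketch, not how Apache or any particular server implements it; the buffer size and read timeout are arbitrary placeholders.

using System;
using System.Net.Sockets;

static class GracefulClose
{
    // Close a connection in stages: stop sending, drain whatever the client
    // still sends, and only then fully close the socket.
    public static void Close(Socket socket)
    {
        // 1. Half-close: shut down only the write side of the connection.
        socket.Shutdown(SocketShutdown.Send);

        // 2. Keep reading until the client closes its side (Receive returns 0)
        //    or the read times out.
        socket.ReceiveTimeout = 2000; // milliseconds, arbitrary
        var buffer = new byte[1024];
        try
        {
            while (socket.Receive(buffer) > 0)
            {
                // Discard any remaining data from the client.
            }
        }
        catch (SocketException)
        {
            // Timed out or the connection was reset; stop waiting.
        }

        // 3. Fully close the connection.
        socket.Close();
    }
}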
I am using HTTP long polling for pushing server events to a client.
On the client side, I send a long polling request to the server and block there, waiting for an event from the server.
On the server side, we use the CometD framework (I work on the client side and do not really know much about the server side).
The problem is that after some time the connection is broken and the client cannot detect this, so it blocks there forever. We are trying to implement some kind of heartbeat message that is sent every N minutes to keep the connection alive, but this does not seem to work.
My question is: does HTTP long polling support heartbeat messages? As far as I understand, HTTP long polling only allows the server to send one event, and the connection is closed immediately thereafter. The client must reconnect and send a new request in order to receive the next event. Is it possible for the server to send heartbeat messages every N minutes while still keeping the connection open until a real server event happens?
If you use the CometD framework, then it takes care of notifying the application (both on client and on server) about when the connection is broken, and it does send heartbeat messages.
What you call "HTTP long polling" is just a normal HTTP request, so in itself does not support heartbeat messages.
You can use HTTP long polling requests to implement heartbeat messages, and this is what CometD does for you under the covers.
In CometD, the response to an HTTP long poll request may deliver multiple messages, and the connection will not be closed afterwards. The client will send another HTTP long poll request without needing to reconnect, possibly reusing the previous connection.
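To illustrate the general shape of such a long-poll loop (this is not CometD's actual wire protocol or API, just a bare-bones C# sketch with a made-up URL):

using System;
using System.Net.Http;
using System.Threading.Tasks;

class LongPollClient
{
    static async Task PollLoopAsync()
    {
        using var client = new HttpClient
        {
            // Allow the request to stay pending until the server has something to say.
            Timeout = TimeSpan.FromMinutes(2)
        };

        while (true)
        {
            try
            {
                // The request hangs until the server delivers one or more messages
                // (or a heartbeat); then we immediately issue the next poll.
                string body = await client.GetStringAsync("http://example.org/events");
                Console.WriteLine("Received: " + body);
            }
            catch (Exception ex) when (ex is HttpRequestException || ex is TaskCanceledException)
            {
                // Connection broken or timed out: back off briefly, then re-poll.
                await Task.Delay(TimeSpan.FromSeconds(1));
            }
        }
    }
}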
CometD offers your application a higher-level API that is independent of the transport, so you can switch to WebSocket, which is far more efficient, without changing a single line in your application.
You need to use the CometD libraries on both the client (JavaScript and Java) and the server, and everything will just work.
Abstract
Hi, I was pondering whether it is possible to lose a message with SignalR. Suppose a client disconnects but eventually reconnects within a short amount of time, for example 3 seconds. Will the client get all of the messages that were sent to it while it was disconnected?
For example, let's consider the LongPolling transport. As far as I'm aware, long polling is a simple HTTP request that is issued in advance by the client in order to wait for a server event.
As soon as a server event occurs, the data is published on that HTTP request, which causes the issued HTTP request to be closed. After that, the client issues a new HTTP request and the whole loop repeats.
The problem
Suppose two events happen on the server, first A then B (nearly instantly). The client gets message A, which closes the HTTP connection. Now, to get message B, the client has to issue a second HTTP request.
Question
If event B happened while the client was disconnected from the server and was trying to reconnect, will the client get message B automatically, or do I have to invent some sort of mechanism to ensure message integrity?
The question applies not only to long polling but to the general situation of client reconnection.
P.S.
I'm using SignalR Hubs on the server side.
EDIT:
I've found out that the order of messages is not guaranteed, but I was not able to make SignalR lose messages.
The answer to this question lies in the EnqueueOperation method here...
https://github.com/SignalR/SignalR/blob/master/src/Microsoft.AspNet.SignalR.Core/Transports/TransportDisconnectBase.cs
protected virtual internal Task EnqueueOperation(Func<object, Task> writeAsync, object state)
{
    if (!IsAlive)
    {
        return TaskAsyncHelper.Empty;
    }

    // Only enqueue new writes if the connection is alive
    Task writeTask = WriteQueue.Enqueue(writeAsync, state);
    _lastWriteTask = writeTask;

    return writeTask;
}
When the server sends a message to a client it calls this method. In your example above, the server would enqueue 2 messages to be sent, then the client would reconnect after receiving the first, then the second message would be sent.
If the server queues and sends the first message and the client then reconnects, there is a small window in which the second message could be enqueued while the connection is not alive; in that case the message is dropped at the server end, and after the reconnect the client would not get it.
Hope this helps
I understand the concept of Synchronous and Asynchronous in the context of threading in a program, but I'm not sure what that means in communication.
More specifically, I'm confused about what it means to have an asynchronous communication between a server and a client...
In synchronous communication, and please correct me if I'm wrong, one side sends a message, then waits to receive a response, and when the response has arrived, it again sends a message and so on...
What happens in asynchronous mode?
I always imagine a two-way pipe with no rules or protocols about whose turn it is to transmit: both sides just shoot bytes into the pipe whenever they feel like it, and on each side the reading and writing happen in two different threads. Is that the case?
That is, again, just a wild guess; if anyone has an explanation I'd love to read it.
You are right about synchronous communication. Asynchronous communication works like this:
The client sends a message to the server and optionally specifies what to do upon receiving a response. In the meantime the client can go on doing other things; when the server sends the response, the client knows what to do with it and handles it. This is typically done through a "callback" function.
Think of it as sending and receiving email: you can send an email, but because you do not know how long it will take before the addressee replies, you go on with your daily life. The addressee receives your email and sends you a response; upon receiving it, you decide on the next step.
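A rough C# sketch of the same difference, using a plain HTTP call; the URL is a placeholder and error handling is omitted:

using System;
using System.Net.Http;
using System.Threading.Tasks;

class SyncVsAsync
{
    static void Main()
    {
        var client = new HttpClient();

        // Synchronous: the caller blocks until the response has arrived.
        string reply = client.GetStringAsync("http://example.org/ping").GetAwaiter().GetResult();
        Console.WriteLine("Sync reply: " + reply);

        // Asynchronous: register a callback and keep doing other work;
        // the callback runs whenever the response eventually arrives.
        Task pending = client.GetStringAsync("http://example.org/ping")
            .ContinueWith(t => Console.WriteLine("Async reply: " + t.Result));

        Console.WriteLine("Doing other work while waiting...");
        pending.Wait(); // only so the demo does not exit before the callback runs
    }
}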
I hope this explanation helps you conceptualize synchronous and asynchronous communication between client and server.
If I send a message using SignalR, is it possible that the client does not receive the message? How can you verify whether any errors appeared in the communication? I am thinking of sending a message back to the server after the server notification was sent, but is there a better way?
Yes, it's possible that the client doesn't receive the message. SignalR keeps messages in memory for 30 seconds (by default; you can tweak that or use a persistent message bus), so if the client isn't connected for whatever reason and this timeout passes, the client will miss the message. Note that if it reconnects within this period it receives all the messages it hasn't got yet, including those that were sent while it was disconnected.
I don't know if SignalR provides a way of telling you when a broadcast failed, so it might be safer to just send an acknowledgement back to the server.
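For example, with the SignalR .NET client (Microsoft.AspNet.SignalR.Client) an explicit acknowledgement could look roughly like this. The hub name, the "notify" event, and the Acknowledge method are hypothetical and would have to exist on your server:

using System.Threading.Tasks;
using Microsoft.AspNet.SignalR.Client;

class AckExample
{
    static async Task RunAsync()
    {
        var connection = new HubConnection("http://example.org/");
        IHubProxy hub = connection.CreateHubProxy("NotificationHub"); // hypothetical hub name

        // When a notification arrives, immediately report it back to the server.
        hub.On<string>("notify", messageId =>
        {
            hub.Invoke("Acknowledge", messageId); // hypothetical server-side hub method
        });

        await connection.Start();
    }
}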
As long as the client is connected, it will get the messages. You can subscribe to connection state changes in client-side code. In server-side code you can implement the IConnected and IDisconnect interfaces to handle the Connect, Disconnect, and Reconnect events.
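On the client side, with the .NET client, watching state changes is a small amount of code; a minimal sketch (the URL and hub name are placeholders):

using System;
using Microsoft.AspNet.SignalR.Client;

class StateWatcher
{
    static void Watch()
    {
        var connection = new HubConnection("http://example.org/");
        connection.CreateHubProxy("NotificationHub"); // hypothetical hub name

        // Fires on transitions such as Connected -> Reconnecting -> Connected.
        connection.StateChanged += change =>
            Console.WriteLine(change.OldState + " -> " + change.NewState);

        connection.Start().Wait();
    }
}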