Synchronous and asynchronous data transmission between client and server

I understand the concept of Synchronous and Asynchronous in the context of threading in a program, but I'm not sure what that means in communication.
More specifically, I'm confused about what it means to have an asynchronous communication between a server and a client...
In synchronous communication, and please correct me if I'm wrong, one side sends a message, then waits to receive a response, and when the response has arrived, it again sends a message and so on...
What happens in asynchronous mode?
I keep imagining a two-way pipe where there are no rules or protocols about whose turn it is to transmit information; both sides just shoot bytes into the pipe whenever they feel like it, and on each side the reading and writing to the pipe happen in two different threads. Is that the case?
That is, again, just a wild guess; if anyone has an explanation I'd love to read it.

You are right about synchronous communication. Asynchronous communication works like this:
The client sends a message to the server and optionally specifies what to do upon receiving a response. In the meantime the client can go on doing other things; when the server's response arrives, the client knows what to do with it and handles it. This is typically done through a "callback" function.
Try to imagine this as sending and receiving email: you can send an email, but because you do not know how long it will take before the addressee sends you a reply, you go on with your daily life. The addressee receives your email and sends you a response back. Upon receiving that reply, you decide the next step.
I hope this explanation helps you conceptualize asynchronous communication between client and server.
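As a minimal sketch of the callback pattern in Python, assuming the round trip to the server is simulated by a stand-in function (all names here are made up for illustration, not tied to any particular networking library):

```python
from concurrent.futures import ThreadPoolExecutor
import time

def send_request(message):
    """Stand-in for the round trip to the server: takes a while, then replies."""
    time.sleep(2)                              # pretend network + processing time
    return f"response to: {message}"

def on_response(future):
    """The callback: runs whenever the response eventually arrives."""
    print("handling", future.result())

executor = ThreadPoolExecutor(max_workers=1)

# Send the request and register what to do with the response...
future = executor.submit(send_request, "hello server")
future.add_done_callback(on_response)

# ...then go on doing other stuff while the reply is still in flight.
print("doing other work while waiting")
executor.shutdown(wait=True)                   # wait for completion before exiting
```

The point is the ordering: the client registers the callback, carries on with other work, and the callback fires whenever the response eventually shows up.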

Related

Determine when an HTTP(S) POST has reached the receiver without waiting for the full response

I want to invoke an HTTP POST with a request body and wait until it has reached the receiver, but NOT wait for any full response if the receiving server is slow to send the response.
Is this possible at all to do reliably? It's been years since I studied the internals of TCP/IP so I don't really remember the entire state machine here.
I guess that if I simply impose a timeout of, say, 1 second and then close the socket, there's no guarantee that the request has reached the remote server. Is there any signalling at all when the receiving server has received the entire request, but before it starts sending its response?
In practical terms, I want to call a webhook URL without having to wait for a potentially slow server implementation of that webhook - I want to make the webhook request "fire and forget" and simply ignore the responses (even if they are intermediate errors in gateways etc. and the request actually didn't reach its final destination) - but I'm hesitant to simply set a low timeout (and if I did, how low would be "sufficient", etc.)?
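For what it's worth, TCP's acknowledgement of the request body is not exposed to the application, so a common compromise is to send the request with a short timeout and ignore whatever happens to the response. A rough sketch of that compromise using Python's standard library (the URL, header and timeout value are placeholders, and this still gives no delivery guarantee):

```python
import urllib.request

def fire_and_forget_webhook(url: str, body: bytes) -> None:
    """Send a POST and deliberately ignore slow or failed responses."""
    req = urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    try:
        # The body is written as part of the exchange; the short timeout
        # mainly limits how long we block waiting for the (ignored) response.
        urllib.request.urlopen(req, timeout=1.0).close()
    except OSError:
        # Covers timeouts, connection errors and HTTP error statuses alike;
        # with fire-and-forget semantics we do not care which one occurred.
        pass
```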

Bluetooth LE Characteristic Write Response

I have an embedded device running BT5 with a GATT server set up. On the server I have set up a service with various characteristics to allow a client (PC or mobile device) to adjust various parameters of the device by writing to the characteristics.
I would like the device to send a response back from the application level for each write. It's not clear to me what the recommended way to do this would be.
I thought about having the client read or subscribe to a general status characteristic, but I want to make sure I am not missing an easier way to do this. I looked at the BT write-with-response command, but it seems the acknowledgement for that may happen lower than the application level.
You should be able to use the Write Response as the "application level response". I have not seen any Bluetooth stack where this response is sent at a lower level before the application has processed the request. The reason is probably that the application can even send an Application Error code instead of a Write Response, so it would be stupid to move the Write Response handling to a lower level. Even in Android (if you set up a GATT server) you send the Write Response from the application.
The situation is different with Indications, though, where the Bluetooth stack sometimes sends the Confirmation at a lower level than the application, before it even informs the application that an Indication has arrived, which I find a bit strange and makes Indications kind of pointless compared to Notifications.
I solved this using a notification characteristic. The client first subscribes to notifications on that characteristic (via its CCCD), and then every command sent to the host/device is acknowledged by the host firing the notification. To better synchronize command and response, you could add an incremental command id to every command, and have that command id be part of the notification data sent back to the client.
However, I implemented this because I needed a response after the device had processed the command, with the results sent back to the client. If all you want to know is whether or not the host has received the command, a characteristic with write-with-response is the way to go.
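For illustration, here is a library-agnostic sketch of how such a notification payload might be framed with a command id and a status byte; the exact layout is an assumption for this example, not anything mandated by GATT:

```python
import struct

# Hypothetical payload layout for the status notification described above:
# a 2-byte little-endian command id followed by a 1-byte status code.
ACK_FORMAT = "<HB"

def pack_ack(command_id: int, status: int) -> bytes:
    """Built by the device firmware and sent as the notification value."""
    return struct.pack(ACK_FORMAT, command_id, status)

def unpack_ack(payload: bytes) -> tuple:
    """Parsed by the client when the notification arrives."""
    return struct.unpack(ACK_FORMAT, payload)

# The client tags each write with an incrementing id, then matches the
# notification that comes back against the writes it is still waiting for.
pending = {42: "set_gain"}
cmd_id, status = unpack_ack(pack_ack(42, 0))
if cmd_id in pending and status == 0:
    print(f"command {pending.pop(cmd_id)!r} acknowledged by the device")
```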
"I looked at the BT write with response command, but it seems the acknowledgement for that may happen lower than the application."
Indeed, the Write-With-Response handler is almost always implemented in the BLE stack, not at the application level. However, I don't see why this would be a problem; you should get error reports from your BLE stack in some form when a Write-With-Response fails. If it's a blocking call it might even return a success value.

Can an HTTP request fail halfway?

I am talking about only one case here.
The client sent a request to the server -> the server received it and returned a response -> unfortunately, the response was dropped.
I have only one question about this.
Is this case even possible? If it is possible, what should the response code be, or will the client simply see it as a read timeout?
As I want to sync status between client/server and want 100% accuracy no matter how poor the network is, the answer to this question can greatly affect the client's 'retry on failure' strategy.
Any comment is appreciated.
Yes, the situation you have described is possible and occurs regularly. It is called "packet loss". Since the packet is lost, the response never reaches the client, so no response code could possibly be received. Web browsers will display this as "Error connecting to server" or similar.
HTTP requests and responses are generally carried inside TCP segments. If a segment carrying the HTTP response is not acknowledged within the expected time window, it is retransmitted. A segment will only be retransmitted a certain number of times before a timeout occurs and the connection is considered broken or dead. (The number of retransmission attempts before TCP gives up can be configured on both the client and server sides.)
Is this case even possible?
Yes. It's easy to see why if you picture a physical cable between the client and the server. If I send a request down the cable to the server, and then, before the server has a chance to respond, unplug the cable, the server will receive the request, but the client will never "hear" the response.
If it's possible then what should the response code be, or will client simply see it as read timeout?
It will be a timeout. If we go back to our physical cable example, the client is sitting waiting for a response that will never come. Hopefully, it will eventually give up.
Exactly how this is surfaced depends on the tool or library you're using, however: it might give you a specific error code for "timeout" or "network error"; it might wrap it up as some internal 5xx status code; it might raise an exception inside your code; etc.
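As a hedged illustration of how that ambiguity tends to surface in client code, here is a small Python sketch that separates "the request almost certainly never arrived" from "the outcome is unknown" (the URL and timeout are placeholders):

```python
import socket
import urllib.request
import urllib.error

def call_server(url: str, timeout: float = 5.0):
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.read()                 # a complete response arrived
    except socket.timeout:
        # Read timed out: the server may or may not have processed the
        # request, so blindly retrying could repeat a non-idempotent action.
        return None
    except urllib.error.URLError:
        # Connection refused, DNS failure, etc.: the request most likely never
        # reached the server (some stacks wrap timeouts in URLError too, so
        # it is safest to treat the outcome as unknown either way).
        return None
```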

Is an HTTP request 'atomic'?

I understand an HTTP request will result in a response with a code and optional body.
If we call the originator of the request the 'client' and the recipient of the request the 'server', then the sequence is:
1. Client sends request
2. Server receives request
3. Server sends response
4. Client receives response
Is it possible for the server to complete step 3 but for step 4 not to happen (due to a dropped connection, an application error, etc.)?
In other words: is it possible for the server to 'believe' the client should have received the response, but the client for some reason has not?
The network is inherently unreliable. You can only know for sure that a message arrived if the other party has acknowledged it; you can never know for sure that it did not arrive.
Worse, with HTTP the only acknowledgement of the request is the response, and there is no acknowledgement of the response at all. That means:
The client knows the server has processed the request if it got the response. If it does not, it does not know whether the request was processed.
The server never knows whether the client got the answer.
The TCP stack does normally acknowledge the response when closing the socket, but that information is not propagated to the application layer, and it would not be useful there anyway: the stack can acknowledge receipt and the application can still fail to process the message because it crashes (or the power fails, or something similar). From the application's perspective it does not matter whether the failure was in the TCP stack or above it; either way the message was not processed.
The easiest way to handle this is to use idempotent operations: if the server gets the same request again, it has no additional side effects and the response is the same. That way the client, if it times out waiting for the response, simply sends the request again and will eventually (unless the connection is broken and never restored) get a response, and the request will be completed.
If the operation cannot be made idempotent, you need to record the executed requests and eliminate the duplicates on the server, because no network protocol can do that for you. It can eliminate many duplicates (as TCP does), but not all.
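A rough sketch of both ideas, assuming a client-generated request id travels with every attempt (the transport callable, the id scheme and the in-memory store are all made up for illustration):

```python
import uuid

# --- client side: retry an idempotent request until some response arrives ---
def send_with_retry(transport, payload, max_attempts=5):
    request_id = str(uuid.uuid4())             # the same id is reused on every retry
    for _ in range(max_attempts):
        try:
            return transport(request_id, payload)   # e.g. an HTTP POST
        except TimeoutError:
            continue                           # outcome unknown: just try again
    raise RuntimeError("gave up waiting for a response")

# --- server side: execute each request id at most once ---
processed = {}                                 # request id -> stored response

def handle_request(request_id, payload):
    if request_id in processed:                # duplicate: replay the earlier answer
        return processed[request_id]
    result = f"done: {payload}"                # the actual side-effecting work
    processed[request_id] = result
    return result

# Replaying the same request id returns the stored response instead of
# performing the side effect a second time:
assert handle_request("abc", "charge card") == handle_request("abc", "charge card")
```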
There is a specific section on this point in the HTTP spec, RFC 7230, section 6.6 (Tear-down):
(...)
If a server performs an immediate close of a TCP connection, there is a significant risk that the client will not be able to read the last HTTP response.
(...)
To avoid the TCP reset problem, servers typically close a connection in stages. First, the server performs a half-close by closing only the write side of the read/write connection. The server then continues to read from the connection until it receives a corresponding close by the client, or until the server is reasonably certain that its own TCP stack has received the client's acknowledgement of the packet(s) containing the server's last response. Finally, the server fully closes the connection.
So yes, this "response sent" step is quite complex.
Check, for example, the "Lingering close" section in the Apache 2.4 documentation, or the complex FIN_WAIT/FIN_WAIT2 pages for Apache 2.0.
So, a good HTTP server should keep the socket open long enough to be reasonably certain that the response made it to the client side. But if you really need to acknowledge something in a web application, you should use a callback (an image callback, an AJAX callback) asserting that the response was fully loaded in the client browser (that is, another HTTP request). That means it's not atomic, as you said, or at least not transactional in the way you might expect from a relational database: you need another request from the client, which you may never get (for example because the server crashed before receiving the acknowledgement), and so on.
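To make the staged close concrete, here is a minimal sketch of the server-side sequence the RFC excerpt describes, using a raw Python socket (a real HTTP server does this for you, and error handling is reduced to the bare minimum):

```python
import socket

def staged_close(conn: socket.socket) -> None:
    """Close a server-side connection in stages, as the RFC excerpt describes."""
    conn.shutdown(socket.SHUT_WR)      # half-close: we will send no more data
    conn.settimeout(5.0)
    try:
        # Keep draining the connection until the client closes its side
        # (recv returns b"") or we give up; this avoids resetting the
        # connection while the last response may still be in flight.
        while conn.recv(4096):
            pass
    except OSError:
        pass                           # timeout or reset: nothing more to do
    finally:
        conn.close()                   # now fully close the connection
```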

How does Rebus HTTP Gateway acknowledge message delivery

I would like to ask how the Rebus HTTP Gateway acknowledges message delivery, i.e. when the OutboundService sends a message, how does it know whether it can commit or roll back the transaction?
The intuitive answer would be that the HTTP response acknowledges it; however, looking at the code
https://github.com/rebus-org/Rebus/blob/5fef6b400feaf569e0d6517ad9ee3f6da2f31820/src/Rebus.HttpGateway/Outbound/OutboundService.cs#L139
it seems no action is taken after reading the response.
Thanks in advance :)
It does a very simple "acknowledgement", in the sense that if no error occurs, the message is assumed to have been delivered safely to the destination queue.
This means that the ubiquitous at-least-once delivery guarantee holds across gateways as well, although the risk of receiving the same message twice is of course greatly increased.
If it's important for you to process each message only once, you need to make your receiver idempotent - but that's generally the rule when you're doing messaging without distributed transactions, so it's no different from scenarios where there's no HTTP gateway involved.
