Since I had a bad introduction that confused people, I have edited the question and removed it.
Now, here is the business case I have concerns about. C# pseudocode:
Array.ForEach(files, filename =>
{
    try
    {
        // One synchronous call per file; the next call starts only after
        // this one has returned.
        WcfServiceClient wcfClient = new WcfServiceClient();
        wcfClient.SomeMethodWhichPostsFile(filename);
        wcfClient.Close();
    }
    catch (Exception ex)
    {
        LogException(ex);
    }
});
I am confused by the existence of wsHttpBinding, which is reliable, while basicHttpBinding is not. I know that wsHttpBinding with reliable sessions guarantees delivery and ordering, that content is encrypted, and so on. But in the case described by the pseudocode, in my opinion I get all of this even with basicHttpBinding over HTTPS: TCP gives me reliability and ordering guarantees, and HTTPS gives me encryption.
(1. is removed) Am I right about the above? Or, to rephrase: is there an example showing that basicHttpBinding, under the specified conditions, cannot provide the same features as wsHttpBinding with reliable sessions?
My business case requires the server to accept WCF calls in the order they are issued. If I send them synchronously from the client in a foreach loop (as shown in the pseudocode), I assume the order at the server is guaranteed regardless of whether they are sent within one TCP connection or not, since I wait for the response before sending the next request. Even a load balancer cannot reorder messages here, since there is no parallelism; messages are sent one by one, synchronously.
I assume reordering could happen only if I sent messages without waiting for the response, in a fire-and-forget manner, over different TCP connections.
So, am I right here? :)
There are different meanings of the term reliability, and the interpretation also depends on the context. Your interpretation of reliability in your question is "the message is delivered". The interpretation of reliability in the source you cite is instead "the message is delivered exactly once". Your confusion comes from taking the statement "HTTP is not reliable", which was meant for one interpretation of reliability, and applying it to your different interpretation.
HTTP cannot guarantee that the message is delivered only once; it can at most guarantee that the message is delivered at least once. The underlying TCP connection can break while the client is sending the request or receiving the response. In this case the client might ignore the problem or retry, which might result in no message delivered (if errors while sending the request are ignored), but also in the same message being delivered multiple times (if the client retries after the connection breaks during the response). By retrying until a response is received successfully, one can guarantee that the message is received at least once, which is your interpretation of reliability, but not the one from the statement you cite.
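To make that concrete, here is a minimal Go sketch (the function name and content type are invented for illustration) of a naive retry loop. It achieves at-least-once delivery, but any retry after a failure while reading the response may deliver the same message twice:

package retry

import (
    "bytes"
    "net/http"
)

// postWithRetry keeps re-sending the payload until a response is seen.
func postWithRetry(url string, payload []byte, attempts int) error {
    var err error
    for i := 0; i < attempts; i++ {
        var resp *http.Response
        resp, err = http.Post(url, "application/octet-stream", bytes.NewReader(payload))
        if err == nil {
            resp.Body.Close()
            return nil // a response arrived, so the server got the request at least once
        }
        // The failure may have happened while reading the response; the
        // server may already have processed the request, so this retry
        // can duplicate the message.
    }
    return err
}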
I am asking this question because of a very weird, puzzling experience, which I describe below.
I am instrumenting an HTTP API server to observe its behavior in the presence of latency between the server and the clients. The setup consists of a single server and a dozen clients connected over a 10 Gbps Ethernet fabric. I measured the time it took to serve certain API requests in 5 scenarios. In each scenario, I set the latency between the server and the clients to one of the following values using the tc-netem(8) utility: no latency (I call this the baseline), 25 ms, 50 ms, 250 ms or 400 ms.
Because I am using histogram buckets to quantify the service time, I observed that all the requests were processed in less than 50 ms in every scenario, which clearly doesn't make sense: in the 400 ms case, for example, it should take at least around 400 ms (as I am measuring the duration from the moment the request hits the server to the moment the HTTP Write() function returns). Note that the response objects are between 1 KB and 10 KB in size.
Initially, I suspected that the http.ResponseWriter's Write() function was asynchronous and returned immediately, before the data is received by the client. So I decided to test this hypothesis by writing a toy HTTP server that serves the content of a file generated with dd(1) and /dev/urandom, so that I could vary the response size. Here is the server:
package main

import (
    "io/ioutil"
    "log"
    "net/http"
    "time"
)

var response []byte

func httpHandler(w http.ResponseWriter, r *http.Request) {
    switch r.Method {
    case "GET":
        now := time.Now()
        w.Write(response)
        elapsed := time.Since(now)
        mcs := float64(elapsed / time.Microsecond)
        s := elapsed.Seconds()
        log.Printf("Elapsed time in mcs: %v, sec: %v", mcs, s)
    }
}

func main() {
    response, _ = ioutil.ReadFile("BigFile")
    http.HandleFunc("/hd", httpHandler)
    http.ListenAndServe(":8089", nil)
}
Then I start the server like this:
dd if=/dev/urandom of=BigFile bs=$VARIABLE_SIZE count=1 && ./server
From the client side, I issue: time curl -X GET $SERVER_IP:8089/hd --output /dev/null
I tried many values of $VARIABLE_SIZE in the range [1 KB, 500 MB], using an emulated latency of 400 ms between the server and each one of the clients. To make a long story short, I noticed that the Write() method blocks until the data is sent when the response size is big enough to be visually noticeable (on the order of tens of megabytes). However, when the response size is small, the servicing time reported by the server makes no sense compared to the value reported by the client. For a 10 KB file, the client reports 1.6 seconds while the server reports 67 microseconds; even I, as a human, noticed a delay on the order of a second, as reported by the client.
To go a little further, I tried to find the response size at which the server starts reporting sensible times. After many trials using binary search, I discovered that the server always reports a few microseconds [20 us, 600 us] for responses up to 86501 bytes in size, and reports expected (acceptable) times for responses of 86502 bytes or more (usually half of the time reported by the client). For example, for an 86501-byte response, the client reported 4 seconds while the server reported 365 microseconds. For 86502 bytes, the client reported 4 s and the server reported 1.6 s. I repeated this experiment many times using different servers; the behavior is always the same. The number 86502 looks like magic!!
This experiment explains my initial weird observations, because all the API responses were less than 10 KB in size. However, it opens the door to a serious question: what on earth is happening, and how can this behavior be explained?
I've tried to search for answers but didn't find anything. The only thing I can think of is that it may be related to Linux socket buffer sizes and whether Go makes the system call in a non-blocking fashion. However, AFAIK, the TCP packets transporting the HTTP response should all be acknowledged by the receiver (the client) before the sender (the server) can return! Breaking this assumption (as appears to be the case here) could lead to disasters! Can someone please provide an explanation for this weird behavior?
Technical details:
Go version: 1.12
OS: Debian Buster
Arch: x86_64
I'd speculate that the question is, in fact, stated in a wrong way: you seem to be guessing about how HTTP works instead of looking at the whole stack.
The first thing to consider is that HTTP (1.0 and 1.1, which has been the standard version for a long time) does not specify any means for either party to acknowledge data reception.
There is an implicit acknowledgement of the fact that the server received the client's request: the server is expected to respond, and when it responds, the client can be reasonably sure the server actually received the request.
There is no such thing working in the other direction, though: the server does not expect the client to somehow "report back", at the HTTP level, that it managed to read the whole response.
The second thing to consider is that HTTP is carried over TCP connections (or TLS, which is not really different, as it uses TCP as well).
An oft-forgotten fact about TCP is that it has no message framing; that is, TCP performs bi-directional transfer of opaque byte streams.
TCP only guarantees total ordering of bytes in these streams; it does not in any way preserve the occasional "batching" that may naturally result from the way you work with TCP via a typical programming interface: by calling some sort of "write this set of bytes" function.
Another thing often forgotten about TCP is that, while it does use acknowledgements to track which part of the outgoing stream was actually received by the receiver, this is a protocol detail which is not exposed at the programming-interface level (at least not in any common implementation of TCP I'm aware of).
These properties mean that if one wants to use TCP for message-oriented data exchange, one needs to implement both message boundaries (so-called "framing") and acknowledgement of the reception of individual messages in the protocol above TCP.
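As an illustration, here is a minimal, hypothetical Go sketch of length-prefixed framing above a raw TCP stream (all names invented; an acknowledgement would then just be another frame flowing in the opposite direction):

package framing

import (
    "encoding/binary"
    "io"
)

// writeFrame sends one message as a 4-byte big-endian length prefix
// followed by the payload bytes.
func writeFrame(w io.Writer, msg []byte) error {
    var hdr [4]byte
    binary.BigEndian.PutUint32(hdr[:], uint32(len(msg)))
    if _, err := w.Write(hdr[:]); err != nil {
        return err
    }
    _, err := w.Write(msg)
    return err
}

// readFrame reads one length-prefixed message back off the stream.
func readFrame(r io.Reader) ([]byte, error) {
    var hdr [4]byte
    if _, err := io.ReadFull(r, hdr[:]); err != nil {
        return nil, err
    }
    msg := make([]byte, binary.BigEndian.Uint32(hdr[:]))
    if _, err := io.ReadFull(r, msg); err != nil {
        return nil, err
    }
    return msg, nil
}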
HTTP itself is a protocol above TCP: while it implements framing, it does not implement explicit acknowledgement besides the server responding to the client, as described above.
Now consider that most if not all TCP implementations employ buffering in various parts of the stack. At least, the data which is submitted by the program gets buffered, and the data which is read from the incoming TCP stream gets buffered, too.
Finally, consider that most commonly used TCP implementations allow sending data into an active TCP connection through a call that accepts a chunk of bytes of arbitrary length.
Considering the buffering described above, such a call typically blocks until all the submitted data gets copied to the sending buffer.
If there's no room in the buffer, the call blocks until the TCP stack manages to stream some amount of data from that buffer into the connection, freeing room to accept more data from the caller.
What does all of the above mean for net/http.ResponseWriter.Write interacting with a typical contemporary TCP/IP stack?
A call to Write would eventually try to submit the specified data to the TCP/IP stack.
The stack would try to copy that data over into the sending buffer of the corresponding TCP connection, blocking until all the data has been copied.
After that, you have essentially lost any control over what happens to that data: it may eventually be successfully delivered to the receiver, it may fail completely, or some part of it may succeed and the rest may not.
What this means for you is that when net/http.ResponseWriter.Write blocks, it blocks on the sending buffer of the TCP socket underlying the HTTP connection you're operating on.
Note, though, that if the TCP/IP stack detects an irreparable problem with the connection underlying your HTTP request/response exchange, such as a segment with the RST flag coming from the remote peer (meaning the connection has been unexpectedly torn down), this problem will bubble up through Go's HTTP stack as well, and Write will return a non-nil error.
In this case, you will know that the client was likely not able to receive the complete response.
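If you want to observe this buffering yourself, here is a small self-contained Go experiment (all names invented, not from the original question) that writes into a TCP connection whose peer never reads. Small writes return immediately because they only fill the kernel's send buffer; Write only blocks once that buffer is full:

package main

import (
    "log"
    "net"
    "time"
)

func main() {
    // A listener that accepts one connection and never reads from it, so
    // the receive window and the sender's buffer eventually fill up.
    ln, err := net.Listen("tcp", "127.0.0.1:0")
    if err != nil {
        log.Fatal(err)
    }
    go func() {
        conn, _ := ln.Accept()
        defer conn.Close()
        select {} // hold the connection open, never read
    }()

    conn, err := net.Dial("tcp", ln.Addr().String())
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    chunk := make([]byte, 16*1024)
    total := 0
    for {
        // Give up once Write stops returning promptly.
        conn.SetWriteDeadline(time.Now().Add(2 * time.Second))
        n, err := conn.Write(chunk)
        total += n
        if err != nil {
            // A timeout here means Write finally blocked: everything
            // "written" so far only ever reached local buffers, not
            // necessarily the peer.
            log.Printf("Write blocked after buffering ~%d bytes: %v", total, err)
            return
        }
    }
}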
How is using asynchronous HTTP Requests different from using Messages when it comes to sending data in ZeroMQ?
An HTTP request is simply a use of the Hypertext Transfer Protocol between two machines, a client and a server, over IP. It can be used for moving data in either direction, and there are no particular restrictions on what that data can be. An asynchronous request is simply one where the requester doesn't wait for the reply after making the request; it uses some mechanism to rendezvous with the reply later, whenever that happens to come in.
Sending a message through ZeroMQ can be somewhat similar, specifically in the REQ/REP (request/reply) pattern. As with an HTTP request, the requester sends some sort of message and the replier replies in some way, strictly in this pattern.
ZeroMQ uses its own protocol, ZMTP, to move messages around. Again, there is nothing really limiting what data can be in a message. ZeroMQ is inherently asynchronous: it implements the Actor programming model (though I notice that some implementations in some languages have eroded ZeroMQ's simplicity in that respect, fitting into the language's own way of being asynchronous rather than using a poll function provided by ZeroMQ).
However, ZeroMQ builds many more data-distribution patterns than REQ/REP on top of ZMTP, such as PUB/SUB and DEALER/ROUTER, which HTTP simply has no equivalent of. A further difference is that ZeroMQ can use IP, interprocess communication, or in-memory transports, which makes it highly suited both for in-application use and for inter-machine distributed applications. I guess a web server could be contacted over IPC too, but I've never heard of anyone bothering to do that. HTTP is expected to be used over specific ports (e.g. port 80), whereas ZeroMQ is used on whatever ports the developer wants (obeying the normal port-allocation rules if they want a quiet life).
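For a taste of the REQ/REP pattern mentioned above, here is a hedged Go sketch assuming the github.com/pebbe/zmq4 binding is available (it requires libzmq to be installed); the endpoint and payloads are invented for illustration:

package main

import (
    "log"

    zmq "github.com/pebbe/zmq4"
)

func main() {
    // Replier: binds, then strictly alternates receive/send.
    go func() {
        rep, _ := zmq.NewSocket(zmq.REP)
        defer rep.Close()
        rep.Bind("tcp://127.0.0.1:5555")
        msg, _ := rep.Recv(0)
        rep.Send("got: "+msg, 0)
    }()

    // Requester: connects, then strictly alternates send/receive.
    req, _ := zmq.NewSocket(zmq.REQ)
    defer req.Close()
    req.Connect("tcp://127.0.0.1:5555")
    req.Send("hello", 0)
    reply, _ := req.Recv(0)
    log.Println(reply)
}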
I have two machines, A and B.
A sends an HTTP request to B and asks for some document.
B responds and sends the requested document with a 200 OK message, but machine A complains that the document was not received because of a network failure.
Does HTTP code 200 also work as an acknowledgment that the document has been received?
Does the HTTP 200 code also work as an acknowledgment that the document has been received?
No. Not at all.
It is not even a guarantee that the document was completely transmitted.
The response code is in the first line of the response stream. The server could fail, or be disconnected from the client anywhere between sending the first line and the last byte of the response. The server may not even know this has happened.
In fact, there is no way that the server can know if the client received a complete (or partial) HTTP response. There is no provision for an acknowledgment in the HTTP protocol.
Now, you could implement an application protocol on top of HTTP in which the client is required to send a second HTTP request to the server to say "yes, I got the document". But this would involve some "application logic" implemented in the user's browser, e.g. in Javascript.
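A hypothetical Go sketch of such an application-level acknowledgement (the /documents and /ack endpoints are invented for illustration); the client only acknowledges after it has read the entire body successfully:

package ackclient

import (
    "fmt"
    "io/ioutil"
    "net/http"
)

// fetchWithAck downloads a document and, only after the whole body has
// been read successfully, sends a second request acknowledging receipt.
func fetchWithAck(base, docID string) ([]byte, error) {
    resp, err := http.Get(base + "/documents/" + docID)
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()

    body, err := ioutil.ReadAll(resp.Body)
    if err != nil {
        return nil, err // incomplete body: do NOT acknowledge
    }

    // The acknowledgement is plain application logic on top of HTTP.
    if _, err := http.Post(base+"/ack/"+docID, "text/plain", nil); err != nil {
        return nil, fmt.Errorf("document received but ack failed: %v", err)
    }
    return body, nil
}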
Absolutely not.
HTTP 200 is generated by the server, and only means that it understood the request and thinks it is able to fulfill it (e.g. the file is actually there).
All sorts of errors may occur during the transmission of the full response document (network connection breaking, packet loss, etc) which will not show up in the HTTP response, but need to be detected separately.
A pretty good guide to the HTTP protocol is found here: http://blog.catchpoint.com/2010/09/17/anatomyhttp/
You should make a distinction between the HTTP protocol and the underlying stream-transport protocol, which should be reliable for HTTP's purposes. The stream-transport protocol ACKnowledges all data transmission, including the response, so that both ends of the exchange can affirm that the data was transmitted correctly. If the transport stream fails, you will get a 'network failure' or similar error, and when this happens the HTTP protocol cannot continue; the data is no longer reliable or even complete.
What a 200 OK message means, at the HTTP level, is that the server has the document you're after and is about to transmit it to you. Normally you will also get a Content-Length header, so you can ascertain if/when the body is complete, as an additional check on top of the stream protocol. From the HTTP protocol's perspective, a response receives no acknowledgement, so once a response has been sent there is no verification.
However, as the stream transport is reliable, the act of sending the response will either succeed or result in an error. This does verify that the document was received by the network target (though, as noted by TripeHound, in the case of a non-direct connection, e.g. through a proxy, this is not a guarantee of delivery to the final target).
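As a sketch of the client-side check mentioned above (the URL is a placeholder), one can compare the number of body bytes actually read against the Content-Length header to detect a truncated response:

package main

import (
    "io"
    "io/ioutil"
    "log"
    "net/http"
)

func main() {
    resp, err := http.Get("http://example.com/document") // placeholder URL
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()

    // Count how many body bytes actually arrive.
    n, err := io.Copy(ioutil.Discard, resp.Body)
    if err != nil {
        log.Fatalf("body aborted after %d bytes: %v", n, err)
    }
    if resp.ContentLength >= 0 && n != resp.ContentLength {
        log.Fatalf("truncated: got %d of %d bytes", n, resp.ContentLength)
    }
    log.Printf("complete: %d bytes received", n)
}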
It's very simple to see that the 200 OK response code can't be a guarantee of anything about the response document. It's sent before the document is transmitted, so only a violation of causality could make it depend on successful reception of the document. It only serves as an indicator that the request was received properly and that the server believes it can fulfill it. If the request requires extra processing (e.g. running a script), rather than just returning a static document, the response code should generally be sent after this processing has completed, so it's normally an indicator that this was successful (but there are situations where this is not feasible, such as requests with persistent connections and push notifications: the script could fail later).
On a more general level, it's never possible to provide an absolute guarantee that all messages have been received in any protocol, due to the Two Generals Problem. No acknowledgement system can get around this, because at some point there has to be a last acknowledgement; there's no way to know if this is received successfully, because that would require another acknowledgement, contradicting the premise that it was the last one.
HTTP is designed with an awareness of the possibility of various sorts of "middleboxes" - proxies operating with or without the knowledge of the client.
If there is a proxy involved, then even knowing that the server transmitted all the data and received a normal connection close would not tell you anything about whether the document was received by the machine that generated the HTTP request.
A sends a request to B. There may be all kinds of obstacles in the way that prevent the request from reaching B. In the case of HTTPS, the request may reach B but be rejected, in which case it counts as if it hadn't reached B. In all these cases, B will not send any status at all.
Once the request reaches B, and there are no bugs crashing B, no hardware failures, etc., B will examine the request and determine what to do and what status to report. If A requested a file that is there and that A is allowed to access, B will start sending a "status 200" together with the file data.
Again, all kinds of things can go wrong. A may receive nothing, or the "status 200" with no data or with incomplete data, etc. (By "receive" I mean that data arrives on the Ethernet cable, or through WiFi.)
Usually the user of A will use some library that handles the ugly bits. With a decent library, the user can expect either to get some error, or to get a status complete with the corresponding data. If a status 200 arrives at A with only half the data, the user will (depending on the design of the library) receive an error, not a status, and definitely not a status 200.
Or you may have a library that reports the status 200 and tells you "here's the first 2,000 bytes", "here's the next 2,000 bytes" and so on, and at some point when things go wrong, you might be told "sorry, there was an error, the data is incomplete".
But in general, the case where the user gets a status 200 and no data will not happen.
I would like to ask how the Rebus HTTP Gateway acknowledges message delivery: when the OutboundService sends a message, how does it know whether it can commit or roll back the transaction?
The intuitive answer would be that the HTTP response acknowledges it; however, looking at the code
https://github.com/rebus-org/Rebus/blob/5fef6b400feaf569e0d6517ad9ee3f6da2f31820/src/Rebus.HttpGateway/Outbound/OutboundService.cs#L139
it seems no action is taken after reading the response.
Thanks in advance :)
It does a very simple "acknowledge": if no error occurs, the message is assumed to have been delivered safely to the destination queue.
This means that the ubiquitous at-least-once delivery guarantee holds across gateways as well, although the risk of receiving the same message twice is of course greatly increased.
If it's important for you to process each message only once, you need to make your receiver idempotent. That's generally the rule when you're doing messaging without distributed transactions, so it's no different from scenarios where no HTTP gateway is involved.
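A minimal Go sketch of such an idempotent receiver (all names invented; the in-memory map stands in for what would need to be durable deduplication storage in a real system):

package main

import (
    "log"
    "sync"
)

// Handler drops messages whose ID it has already seen.
type Handler struct {
    mu   sync.Mutex
    seen map[string]bool
}

func NewHandler() *Handler {
    return &Handler{seen: make(map[string]bool)}
}

// Handle runs process at most once per message ID, even if the same
// message is delivered multiple times.
func (h *Handler) Handle(messageID string, process func()) {
    h.mu.Lock()
    duplicate := h.seen[messageID]
    h.seen[messageID] = true
    h.mu.Unlock()
    if duplicate {
        log.Printf("dropping duplicate delivery of %s", messageID)
        return
    }
    process()
}

func main() {
    h := NewHandler()
    work := func() { log.Println("processing message 42") }
    h.Handle("msg-42", work) // processed
    h.Handle("msg-42", work) // duplicate: dropped
}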
RFC 2616 section 8.1.2.2 states:
A client that supports persistent connections MAY "pipeline" its requests (i.e., send multiple requests without waiting for each response). A server MUST send its responses to those requests in the same order that the requests were received.
Serial responses often do more harm than good, since they require the server to do more processing and negate the performance benefits gained by pipelining.
For example, if an HTTP client requests the files 1.jpg, 2.jpg, 3.jpg, 4.jpg, and 5.jpg, it doesn't matter if 3.jpg is returned before 1.jpg, or if 4.jpg is returned before 3.jpg. The client simply wants the responses as soon as they are available, in any order.
How can an HTTP client gain the benefits of pipelining and at the same time not pay for the disadvantages of response queueing?
A client can't circumvent head-of-line queueing, as it's part of RFC 2616. The only benefit of pipelining (in my opinion) shows up in extremely specific and narrow cases. Consider:
R1cost = Request A processing cost.
R2cost = Request B processing cost.
TCPcost = Cost of negotiating new TCP connection.
Using pipelining would, therefore, be viable in specific cases where:
R1cost ≤ R2cost ≤ TCPcost
How often is a request more expensive than the previous request, yet less expensive than negotiating a new TCP connection? Not often. I would add that WebSockets are (by far) a more interesting and appropriate solution (as far as parallel back-end processing is concerned).
It can't (in HTTP/1.1). It might be in a future version of HTTP.
There is no default mechanism in the HTTP headers to identify which response matches which request. A response is known to belong to a specific request only because of the order in which it is received. If you requested 1.jpg, 2.jpg, 3.jpg, 4.jpg, and 5.jpg and the responses were sent in an arbitrary order, you wouldn't know which one is which.
(You could implement your own markers in client and server headers, but you'd certainly not be compliant with the protocol, and most implementations would not know how to deal with that. You would also have to do some processing to map responses to requests, which may negate the anticipated benefits of this parallel implementation.)
The main benefits you get from the existing HTTP pipeline mechanism are:
Possible reduced communication latency. This may matter depending on your connection.
For requests that require longer server-side computation, the server could start this computation in the background upon reception of the request, while it is sending a previous response, so as to be able to start sending the second result earlier. (This is also a form of latency reduction, in terms of response preparation.)
Some of these benefits can also be obtained by more modern web-browser techniques, where multiple requests are sent separately and parts of the page are updated progressively as responses arrive (via AJAX).
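For comparison, here is a minimal Go sketch (the host is a placeholder) of what clients typically do instead of pipelining: issue the requests concurrently over separate connections and consume each response as soon as it arrives, in any order:

package main

import (
    "log"
    "net/http"
    "sync"
)

func main() {
    files := []string{"1.jpg", "2.jpg", "3.jpg", "4.jpg", "5.jpg"}
    var wg sync.WaitGroup
    for _, f := range files {
        wg.Add(1)
        go func(name string) {
            defer wg.Done()
            // Each request gets its own connection; responses are handled
            // in whatever order they arrive.
            resp, err := http.Get("http://example.com/" + name) // placeholder host
            if err != nil {
                log.Printf("%s: %v", name, err)
                return
            }
            resp.Body.Close()
            log.Printf("%s: %s", name, resp.Status)
        }(f)
    }
    wg.Wait()
}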