I'm wondering whether anyone can provide a convincing explanation of whether HTTP 1.1 is half duplex or full duplex in the context of pipelining. As far as I understand, multiple requests can be sent over the same persistent connection before the client gets the response. So does that mean that the server can respond to a previous request while the client sends a new request?
HTTP is a request-response protocol. The client sends a request. The server waits until the complete request is received, then sends a response. The client and server cannot send simultaneously.
A full-duplex channel implies that client and server can send data simultaneously. Phone lines are an example of full duplex. To achieve full duplex on the web, WebSockets are the recommended standard. Once a WebSocket connection is established, both parties can exchange messages simultaneously. WebSockets work on top of TCP and, apart from the initial HTTP handshake that upgrades the connection, do not use the HTTP protocol.
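To make the contrast concrete, here is a minimal sketch of simultaneous sending and receiving over a WebSocket, using the third-party Python `websockets` library; the URL is a placeholder, and this is an illustration rather than a production client.

```python
# Sketch: full-duplex messaging over a WebSocket (assumes `pip install websockets`;
# ws://example.com/chat is a placeholder endpoint).
import asyncio
import websockets

async def main():
    async with websockets.connect("ws://example.com/chat") as ws:

        async def sender():
            # Sends messages without waiting for anything to come back.
            for i in range(3):
                await ws.send(f"message {i}")
                await asyncio.sleep(1)

        async def receiver():
            # Receives messages concurrently; runs until the server closes.
            async for msg in ws:
                print("got:", msg)

        # Both directions are active at once, unlike HTTP's request-response cycle.
        await asyncio.gather(sender(), receiver())

asyncio.run(main())
```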
Let's have a look at the standard, in this case RFC-2616. There we find in paragraph 8.1.1, Persistent connections:
- HTTP requests and responses can be pipelined on a connection.
Pipelining allows a client to make multiple requests without
waiting for each response, allowing a single TCP connection to
be used much more efficiently, with much lower elapsed time.
and a bit later in the document:
8.1.2.2 Pipelining
A client that supports persistent connections MAY "pipeline" its
requests (i.e., send multiple requests without waiting for each
response). A server MUST send its responses to those requests in the
same order that the requests were received.
As in both cases it's clearly stated that the client can send requests without waiting for a response, I think it's safe to state that HTTP 1.1 supports full-duplex.
EDIT: in RFC-7230, part of the RFC set that replaces RFC-2616, this statement becomes:
A client that supports persistent connections MAY "pipeline" its
requests (i.e., send multiple requests without waiting for each
response). A server MAY process a sequence of pipelined requests in
parallel if they all have safe methods (Section 4.2.1 of [RFC7231]),
but it MUST send the corresponding responses in the same order that
the requests were received.
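To illustrate what the RFC wording permits, here is a minimal sketch of pipelining over a raw socket: two requests are written before any response is read, and the responses must come back in the same order. The host is a placeholder, and since many servers and intermediaries handle pipelining poorly, treat this as an illustration only.

```python
# Sketch: HTTP/1.1 pipelining on one persistent connection (example.com is a placeholder).
import socket

host = "example.com"
request = f"GET / HTTP/1.1\r\nHost: {host}\r\n\r\n".encode()

with socket.create_connection((host, 80)) as sock:
    sock.settimeout(5)
    # Send both requests back-to-back, without waiting for the first response.
    sock.sendall(request + request)
    chunks = []
    try:
        while True:
            chunk = sock.recv(4096)
            if not chunk:          # server closed the connection
                break
            chunks.append(chunk)
    except socket.timeout:
        pass                       # stop reading once the stream goes quiet

data = b"".join(chunks)
# Two status lines should appear, in the same order the requests were sent.
print(data.count(b"HTTP/1.1"))
```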
Most implementations do allow full-duplex HTTP (for 2xx responses).
A formal discussion can be found at
https://datatracker.ietf.org/doc/html/draft-zhu-http-fullduplex
HTTP runs over TCP, but that doesn't mean every application protocol on top of TCP is full duplex.
HTTP uses a request-response paradigm, not a full-duplex streaming paradigm. Let me repeat it: HTTP is a request-response protocol! This means that the client sends a request, and only when the complete request has been sent does the server send the response. This is the case even if so-called keep-alive is used, i.e. multiple requests are sent over the same TCP connection. Because this behaviour is fundamental to the protocol, most implementations make certain (valid) assumptions which make it difficult to create a full-duplex connection.
If you want full duplex, go for WebSockets, which are designed for an entirely different purpose.
Related
SSEs are advertised as a unidirectional communication tool for use from server to client. I have a requirement to broadcast data to all clients, so I was wondering how SSEs behave at a low level. I cannot seem to find any low-level information about SSEs online.
Primarily, I would like to know whether, after sending the data, the server waits for a response from the client to confirm it has received the data before finishing the "send". That would mean that doing a broadcast using a for loop would be quite dangerous and slow, in which case WebSockets might be the better option.
Perhaps the implementation depends entirely on the language and framework? Is it not standardized?
Broadcast usually uses UDP, which does not wait for a response. The question "Broadcasting ip:port by socket server" says:
UDP Packet: First four bytes as a magic number, next four bytes an IPv4 address (and you might want to add other things like a server name).
The magic number is just in case there is a collision with another application using the same port. Check both the length of the packet and the magic number.
Server would broadcast the packet at something like 30 second time intervals. (Alternatively you could have the server send a response only when a client sends a request via broadcast.)
So the client app would have to send a request back to the server app.
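A minimal sketch of that scheme, assuming a placeholder magic number, port, and server address (the packet layout follows the quoted description: four magic bytes, then a packed IPv4 address):

```python
# Sketch: periodic UDP broadcast of <magic><IPv4>; all values are placeholders.
import socket
import struct
import time

MAGIC = 0x1BADB002           # placeholder magic number, checked by clients
PORT = 54545                 # placeholder port
SERVER_IP = "192.168.1.10"   # placeholder server address to advertise

packet = struct.pack("!I4s", MAGIC, socket.inet_aton(SERVER_IP))

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)

while True:
    # Fire-and-forget: UDP gives no indication of whether anyone received it.
    sock.sendto(packet, ("255.255.255.255", PORT))
    time.sleep(30)             # the 30-second interval mentioned above
```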
Different protocols get different responses according to the underlying technology; e.g., HTTP uses responses extensively.
SSE and WebSockets both run over TCP, so there could be a wait before the socket can be used to send further data.
However, each client has a dedicated socket. So server-side you would be using threads or async code (depending on the server-side language and its conventions). Looping through all the sockets to send a message to each client would therefore be fine and quick.
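As a rough illustration of that loop, here is a minimal asyncio sketch, assuming the server has collected a StreamWriter per connected SSE client; writing just hands the bytes to the OS, with no application-level acknowledgement to wait for:

```python
# Sketch: broadcasting one SSE event to every connected client.
import asyncio

clients: set[asyncio.StreamWriter] = set()  # filled in by the HTTP handler (not shown)

async def broadcast(data: str) -> None:
    event = f"data: {data}\n\n".encode()    # SSE wire format
    for writer in list(clients):
        try:
            writer.write(event)             # non-blocking: buffers the bytes
            await writer.drain()            # yields only if the send buffer is full
        except ConnectionError:
            clients.discard(writer)         # forget clients that went away
```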
Say I make a web request (www.amazon.com) to the Amazon web server through my browser. The browser makes the connection to the Internet through an Internet service provider.
The request reaches the Amazon server, which processes it and sends back the response. Two questions here:
Does the Amazon server make a new connection with the Internet to send the response back, or does the incoming request (initiated by me) wait on the socket until Amazon produces the response?
Once my browser receives the response, how does it map the response (sent from Amazon) back to the particular request? I believe there must be some unique identifier, like a requestId, present in the response that the browser uses to map it to the request. Is that correct?
Does the Amazon server make a new connection with the Internet to send the response back, or does the incoming request (initiated by me) wait on the socket until Amazon produces the response?
It uses the same connection. Most of the time it's not even possible to connect back to a web browser due to firewall restrictions or Network Address Translation (NAT).
Once my browser receives the response, how does it map the response (sent from Amazon) back to the particular request? I believe there must be some unique identifier, like a requestId, present in the response that the browser uses to map it to the request. Is that correct?
It receives the response on the same socket, so the socket is the identifier. If HTTP/2 multiplexing is used, then each multiplexed stream has a stream identifier, which is used to map the response back to the request.
The client opens a TCP connection to the server, sends an HTTP request, and the server sends the response using the same connection. So the browser knows from the connection that the response belongs to a specific request. This applies to basic HTTP 1.
This has to be distinguished from the programming model of an AJAX web application, which is asynchronous rather than synchronous. The application does not actively wait for a response; instead, it is triggered later, when the response arrives. The connection handling described above is what happens "under the hood".
Back to the connection handling: there are optimizations of HTTP that make things more complicated. HTTP 1.1 has a feature called "keep alive", and HTTP 2 goes further in this direction. The idea is to send more data over a single TCP connection, because establishing a TCP connection is expensive (three-way handshake, slow start). So multiple requests and responses are sent over a single TCP connection. Your question arises again in the case of this optimization: if, e.g., there is a sequence of requests A, B and a sequence of corresponding responses B, A within a single HTTP connection, how does the browser know which request a response belongs to? HTTP 2 introduces the concept of streams (RFC 7540, section 5):
A single HTTP/2 connection can contain multiple concurrently open
streams, with either endpoint interleaving frames from multiple
streams.
The order in which frames are sent on a stream is significant.
Streams are identified by an integer.
So, the stream identifier and the order within a stream can be used by the browser to find out the request a response belongs to.
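As an illustration, here is a minimal sketch of pulling the stream identifier out of an HTTP/2 frame header, following the 9-octet layout in RFC 7540, section 4.1 (24-bit length, 8-bit type, 8-bit flags, then a reserved bit and a 31-bit stream identifier):

```python
# Sketch: parsing an HTTP/2 frame header to recover the stream identifier.
import struct

def parse_frame_header(header: bytes):
    assert len(header) == 9
    length = int.from_bytes(header[0:3], "big")      # 24-bit payload length
    frame_type, flags = header[3], header[4]
    stream_id = struct.unpack("!I", header[5:9])[0] & 0x7FFFFFFF  # drop the reserved bit
    return length, frame_type, flags, stream_id

# Example: a HEADERS frame (type 0x1), 20-byte payload, on stream 3.
length, ftype, flags, sid = parse_frame_header(b"\x00\x00\x14\x01\x04\x00\x00\x00\x03")
print(sid)  # 3 -> the browser matches this frame to the request it sent on stream 3
```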
HTTP 2 introduces another interesting feature called "push". The server can proactively send resources to the client that the client has not even requested. So resources such as images can already be sent when the HTML is requested, avoiding another communication round trip.
HTTP uses the Transmission Control Protocol (TCP). This is how it happens:
Does the Amazon server make a new connection with the Internet to send the response back, or does the incoming request (initiated by me) wait on the socket until Amazon produces the response?
No. Most browsers use HTTP 1.1, so the connection between client and server is established only once and kept open until closed (a persistent connection).
Once my browser receives the response, how does it map the response (sent from Amazon) back to the particular request? I believe there must be some unique identifier, like a requestId, present in the response that the browser uses to map it to the request. Is that correct?
There is a protocol (HTTP) governing how the messages are exchanged. HTTP dictates that responses must arrive in the order they were requested. So it goes like:
Request;Response;Request;Response;Request;Response;...
There is also a specific format for an HTTP request message (from your browser, the HTTP client) and an HTTP response message (from the Amazon HTTP server). There are response status codes that let the browser know whether its request succeeded, and otherwise indicate the error.
A few sample codes: 200 (OK), 301 (Moved Permanently), 404 (Not Found), 500 (Internal Server Error).
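A minimal sketch of that in-order exchange, using Python's standard http.client on a single persistent connection (the host and paths are placeholders); note that the first response must be read in full before the next request goes out:

```python
# Sketch: two sequential requests on one keep-alive HTTP/1.1 connection.
import http.client

conn = http.client.HTTPConnection("example.com")

conn.request("GET", "/")             # Request...
resp1 = conn.getresponse()
resp1.read()                         # ...Response (must be consumed first)
print(resp1.status, resp1.reason)    # e.g. 200 OK

conn.request("GET", "/index.html")   # next Request on the same connection
resp2 = conn.getresponse()
print(resp2.status, resp2.reason)

conn.close()
```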
Is an HTTP endpoint supposed to respond to requests from a particular client in the order that they are received?
What if it doesn't make sense to, as in the case of requests handled by a cluster behind a proxy, or requests handled with NIO where one request finishes faster than another?
Is there a standard way of associating a unique id with each HTTP request to associate with the response? How is this handled in clients like Apache HttpComponents HttpClient or curl?
The question comes down to the following case:
Suppose I am downloading a file from a server and the request has not finished. Is the client capable of completing other requests on the same keep-alive connection?
Whenever a TCP connection is opened, the connection is recognized by the source and destination ports and IP addresses. So if I connect to www.google.com on destination port 80 (default for HTTP), I need a free source port which the OS will generate.
The reply of the web server is then sent to the source port (and IP). This is also how NAT works, remembering which source port belongs to which internal IP address (and vice versa for incoming connections).
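A quick sketch of that, showing the OS-chosen ephemeral source port (the host is just a reachable example):

```python
# Sketch: inspecting the source port the OS picked for an outgoing connection.
import socket

s = socket.create_connection(("www.google.com", 80))
print(s.getsockname())  # (local IP, ephemeral source port chosen by the OS)
print(s.getpeername())  # (remote IP, 80); replies are addressed back to the pair above
s.close()
```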
As for your edit: no, a single HTTP connection can execute only one command (GET/POST/etc.) at a time. If you send another command while you are retrieving data from a previously issued command, the results may vary per client and server implementation. I guess that Apache, for example, will transmit the result of the second request after the data of the first request has been sent.
I won't re-write CodeCaster's answer because it is very well worded.
In response to your edit: no, it is not. A single persistent HTTP connection can only be used for one request at a time, or it would get very confusing. Because HTTP does not define any form of request/response tracking mechanism, it simply would not be possible.
It should be noted that there are other protocols which use a similar message format (conforming to RFC 822) that do allow for this (using mechanisms such as SIP's CSeq header), and it would be possible to implement something similar in a custom HTTP app, but HTTP does not define any standard mechanism for doing so, and therefore nothing can be done that could be assumed to work everywhere. It would also present a problem with the response to the second message: do you wait for the first response to finish before sending the second, or try to pause the first response while you send the second? How would you communicate this in a way that guarantees messages won't become corrupted?
Note also that SIP (usually) operates over UDP, which does not guarantee packet ordering, making the CSeq system more of a necessity.
If you want to send a request to a server while another transaction is still in progress, you will need to create a new connection to the server, and hence a new TCP stream.
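A minimal sketch of that workaround, using only the standard library (the URLs are placeholders); each worker gets its own connection, so the transfers overlap:

```python
# Sketch: overlapping requests by opening one connection per in-flight request.
import urllib.request
from concurrent.futures import ThreadPoolExecutor

urls = ["http://example.com/", "http://example.org/"]  # placeholder URLs

def fetch(url: str) -> int:
    with urllib.request.urlopen(url) as resp:  # separate TCP connection per call
        return resp.status

with ThreadPoolExecutor(max_workers=2) as pool:
    for status in pool.map(fetch, urls):
        print(status)
```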
Facebook did some research into this while they were building their CDN, and they concluded that you can efficiently have 2 or 3 open HTTP streams at any one time, but any more than that reduces overall transfer time because of the extra packet overhead cost. I would link to the blog entry if I could find the link...
I need a way to detect a missing response to a long running HTTP POST request. This problem arises when the network infrastructure (firewalls, proxies, unplugged cables, etc.) drops the response packets. The server may detect this failure, but the client cannot send additional bytes after the POST to probe the state of the TCP connection. The failure may be limited to a single TCP connection. For example I may be able to subsequently open a new TCP connection to the server.
I'm looking for a solution that still uses HTTP POST and does not change the duration of the server side processing.
Some solutions that I can think of are:
Provide a side-channel interface to retrieve request & response history. If the history lists the response as having been sent (presumably resulting in a TCP error) but I have not yet received it within a reasonable time, I can generate a local error.
Use an X header to request that the server deliver "spurious" 100 Continue provisional responses at a regular interval. If I fail to see an expected 100 Continue or a non-provisional response, I can generate a local error.
Is there a state of the art solution for this problem?
It sounds to me like you are using SOAP for something that would be much better done using a stateful connection or a server-side push technology.
Recently in an interview I was asked how I would approach an online chat client application. I went through the standard "polling" solution but was cut off because the interviewer was looking for the "HTTP 1.1 keep-alive" method. Having used HTTP for quite a while and remembering that the whole point was to be "stateless", this never occurred to me (also, not to mention that the keep-alive is not consistently implemented).
My question is, is it possible for a web server to broadcast and/or send information to a client when the "keep-alive" header has been set?
With HTTP 1.1, keep-alive is the default behavior. (In HTTP 1.0, the default behavior was to close the connection.) The server must send the "Connection: close" header to terminate the connection with the first response. So there is still a TCP socket available to push data through, but just pushing data from the server would violate the HTTP protocol in a major way. Even using keep-alive, the client would still have to poll the server.
It is important to distinguish between HTTP Keepalive and TCP Keepalive. HTTP keepalive prevents the connection from being closed by the server or client. TCP keepalive is used when the connection might be idle for an extended period of time and might be dropped by a NAT proxy or firewall. TCP keepalive is activated on a per-socket basis by setsockopt() calls.
When doing a 'long poll' to eliminate the need to re-poll, TCP keepalive might be needed.
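A minimal sketch of enabling it on a socket; note the TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT options shown here are Linux-specific, and the host is a placeholder:

```python
# Sketch: per-socket TCP keepalive via setsockopt() (Linux option names).
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)    # enable keepalive
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)  # idle seconds before first probe
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10) # seconds between probes
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)    # failed probes before the socket errors
sock.connect(("example.com", 80))
```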
Keep-alive simply holds a TCP socket open, so each time you poll, you save the overhead of the TCP setup/teardown packets--but you still have to poll.
However, "long polling" is a strategy for the web server to broadcast notifications to the client. Essentially, the client issues a GET request, but instead of immediately responding, the web server waits until they have a notification to send, at which point they respond to the GET request. This eliminates any need for packets to go across the wire for polling purposes, and keeps the connection stateless, which as you correctly mention is one of the purposes of the protocol.
You might read more about Comet servers. That sounds basically like the approach that the interviewer was asking about. Their effectiveness is disputed by some, but it has been used in several similar situations.
For example, I believe Gmail uses Comet technologies for some things (but don't quote me on that).
Another example that seems relevant is BOSH, which is a protocol for transmitting chat information using HTTP and XMPP. But I don't believe that using keep-alive is involved in that.