Is there a specific (or agreed-upon) HTTP response message (or some other action, short of simply disconnecting) that tells the client the server does not accept pipelined HTTP requests?
I'm looking for something that will make the client stop pipelining its requests and send each request separately.
If so, what is it? Thank you!
I'm a bit late on this one :-)
For reference, the clean way of rejecting a pipelined connection is to add a Connection: close header on the first and only response.
An HTTP client that receives a Connection: close on the first response of a pipeline MUST replay all the remaining requests, and will almost certainly choose to stop pipelining.
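Here is a rough sketch of what that looks like on the wire, using a bare Python socket purely for illustration (the address, port, body and the deliberately naive request parsing are all placeholders, not a real server):

    import socket

    # Toy server: answer only the first request of the pipeline, announce
    # Connection: close, then close. A conforming client must resend the
    # remaining pipelined requests on a new connection.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", 8080))
    srv.listen(1)

    conn, _ = srv.accept()
    conn.recv(65536)               # naively read the first request (and any pipelined ones)
    conn.sendall(
        b"HTTP/1.1 200 OK\r\n"
        b"Content-Length: 2\r\n"
        b"Connection: close\r\n"   # tells the client: no keep-alive, no pipelining
        b"\r\n"
        b"ok"
    )
    conn.close()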
I think you should execute some command on your server for this.
See the answer linked here, and look at the comments on it as well.
I understand an HTTP request will result in a response with a code and optional body.
If we call the originator of the request the 'client' and the recipient of the request the 'server', then the sequence is:
1. Client sends request
2. Server receives request
3. Server sends response
4. Client receives response
Is it possible for the server to complete step 3 while step 4 never happens (due to a dropped connection, an application error, etc.)?
In other words: is it possible for the server to 'believe' the client has received the response when, for some reason, the client has not?
The network is inherently unreliable. You can only know for sure that a message arrived if the other party acknowledges it, but you can never know for sure that it did not arrive.
Worse, with HTTP the only acknowledgement of the request is the response, and there is no acknowledgement of the response at all. That means:
The client knows the server has processed the request if it got the response. If it does not, it does not know whether the request was processed.
The server never knows whether the client got the answer.
The TCP stack does normally acknowledge the response when closing the socket, but that information is not propagated to the application layer, and it would not be useful there anyway: the stack can acknowledge receipt and the application might still fail to process the message, because it crashes (or power fails, or whatever). From the perspective of the application it does not matter whether the failure happened in the TCP stack or above it; either way the message was not processed.
The easiest way to handle this is to use idempotent operations: if the server gets the same request again, it has no side effects and the response is the same. That way the client, if it times out waiting for the response, simply sends the request again and will eventually (unless the connection is torn down never to be restored) get a response, and the request will be completed.
If all else fails, you need to record the executed requests and eliminate the duplicates on the server, because no network protocol can do that for you. It can eliminate many duplicates (as TCP does), but not all.
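As a sketch of the client side of that idea (the endpoint and the Idempotency-Key header are my own assumptions here; the operation must really be idempotent, or the server must deduplicate on such a key):

    import time
    import urllib.error
    import urllib.request
    import uuid

    req = urllib.request.Request(
        "http://example.com/orders/42",            # hypothetical endpoint
        data=b'{"state": "paid"}',
        method="PUT",
        headers={"Idempotency-Key": str(uuid.uuid4())},
    )

    while True:
        try:
            with urllib.request.urlopen(req, timeout=5) as resp:
                print(resp.status)                  # a response arrived
            break
        except urllib.error.HTTPError as err:
            print(err.code)                         # an error response, but it did arrive
            break
        except OSError:
            time.sleep(1)                           # timeout or connection error: retry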
There is a specific section on that point in the HTTP RFC 7230, section 6.6 Tear-down:
(...)
If a server performs an immediate close of a TCP connection, there is
a significant risk that the client will not be able to read the last
HTTP response.
(...)
To avoid the TCP reset problem, servers typically close a connection
in stages. First, the server performs a half-close by closing only
the write side of the read/write connection. The server then
continues to read from the connection until it receives a
corresponding close by the client, or until the server is reasonably
certain that its own TCP stack has received the client's
acknowledgement of the packet(s) containing the server's last
response. Finally, the server fully closes the connection.
So yes, this 'response sent' step is quite complex.
Check, for example, the Lingering close section in this Apache 2.4 document, or the complex FIN_WAIT/FIN_WAIT2 pages for Apache 2.0.
So a good HTTP server should keep the socket open long enough to be reasonably certain that everything is OK on the client side. But if you really need to acknowledge something in a web application, you should use a callback (an image callback, an Ajax callback) asserting that the response was fully loaded in the client browser, which means another HTTP request. So it is not atomic, as you said, or at least not transactional in the way you could expect from a relational database. You need to add another request from the client, which maybe you will never get (because the server crashed before receiving the acknowledgement), and so on.
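If you are writing a server on raw sockets, the staged close quoted above could look roughly like this (a minimal sketch, assuming conn is a connected TCP socket on which the last response has already been written):

    import socket

    def lingering_close(conn: socket.socket, timeout: float = 2.0) -> None:
        conn.shutdown(socket.SHUT_WR)        # half-close: no more data from us
        conn.settimeout(timeout)
        try:
            while conn.recv(4096):           # drain until the client closes (recv returns b"")
                pass
        except OSError:
            pass                             # timeout or reset: give up on being graceful
        finally:
            conn.close()                     # full close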
I want to send the same HTTP request repeatedly until I get the right response. The server is slow: sending the request is quick and receiving the response is quick, but waiting for the server to handle the request is slow. So sending a request and then simply waiting for it to fail is not acceptable.
I think of the following workflow:
1) Send the request.
2) After sending the data, immediately start a new request that sends the same request again.
Repeat steps 1-2. The responses should be handled asynchronously, and as soon as the right response is detected, stop sending requests.
How can I achieve this workflow, or any other workflow that solves my problem? Any language or tool that is fast would be considered, such as C/C++.
This will simply cause the server to respond slower and slower; your first request will be the first to receive any response, and all the others will be wasted CPU time and bandwidth. If you did that to my servers, you'd get your IP banned automatically.
What you need to consider is:
- Why do you need the response this fast?
- Can you cache the response so that re-requesting it is no longer needed?
- Perhaps having a caching proxy between your client(s) and the server will cover your needs? (Also consider prefetching.)
I have a grid that needs to be auto-updated every minute. I want to update the grid asynchronously so that the web page does not send any request to the server. Only the server will know when to send, let's say, new JSON data to the client. Is this possible? Can I send data to the client without pinging the server?
Thanks.
No. You'd have to use some kind of open socket, which is a very low-level form of pinging anyway. The standard is to simply have a frequent but very short JSON request to check for new data.
Edit: There is WebSocket, but it appears that the implementation on the server side is more involved, and you'd be crippling your audience reach. Just do frequent, short JSON requests.
No, you have to send an HTTP request to get a response. The delay between the request and the response can be as long as you want, however (so please don't aggressively poll for updates):
http://en.wikipedia.org/wiki/Push_technology#Long_polling
You simply make a request, wait for it to complete (when something happens), start another request immediately and then process the response.
This way, the server always has a request ready which it can respond to in order to "push" to the browser (or one will shortly be made).
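A minimal long-polling loop on the client could look like this (the /events endpoint is a hypothetical one that blocks on the server until there is something to report, then returns JSON):

    import json
    import time
    import urllib.request

    def handle(event):
        print(event)                     # placeholder for real processing of the pushed data

    while True:
        try:
            with urllib.request.urlopen("http://example.com/events", timeout=60) as resp:
                handle(json.load(resp))  # the server "pushed" something to us
        except OSError:
            time.sleep(1)                # server-side timeout or network hiccup: back off briefly
        # loop immediately so the server always has an open request to answer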
How is a socket used by an HTTP client properly closed after transmitting the request? Or does it have to remain open (bidirectionally) until the complete response has been received? If so, how is the end of the request body determined by the server?
According to http://www.w3.org/Protocols/rfc2616/rfc2616-sec4.html#sec4.4, closing the socket is not an option for a request. That doesn't sound logical to me: why should a half-closed TCP connection be a problem for the server if the client doesn't try to transmit anything after closing its half of the socket? The client can still receive data, after all.
It seems to me that shutting down the write part of a socket would be a very practical way of letting the server know that the request has been finished. http://docs.python.org/howto/sockets.html#disconnecting even specifically mentions that use case.
If that's really the wrong way to do it, what's the alternative? Do I really always have to send a Content-Length header or use chunked transfer encoding to enable the server to properly find the end of a request? How does that work for requests with an unknown body length?
Transfer-Encoding: chunked is specifically designed to allow sending data with an unknown body length, for both requests and responses. The end of the data is determined by receiving a chunk whose payload size is 0. If you do not send a chunked request, then you must send a Content-Length instead.
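For illustration, a chunked request on a bare socket could look like this (host, path and payload are placeholders): each chunk is "<hex size>\r\n<payload>\r\n", and a zero-length chunk marks the end of the body.

    import socket

    sock = socket.create_connection(("example.com", 80))
    sock.sendall(
        b"POST /upload HTTP/1.1\r\n"
        b"Host: example.com\r\n"
        b"Transfer-Encoding: chunked\r\n"
        b"\r\n"
    )
    for chunk in (b"first part", b"second part"):       # body of unknown total length
        sock.sendall(b"%x\r\n%s\r\n" % (len(chunk), chunk))
    sock.sendall(b"0\r\n\r\n")                          # terminating zero-length chunk
    response = sock.recv(65536)
    sock.close()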
Are you talking about this?:
Closing the connection cannot be used to indicate the end of a request body, since that would leave no possibility for the server to send back a response.
I think that text is talking about a full close; you can do a half (write-side) close instead. I'm not sure that's an HTTP-compliant way of doing it, but I would think most servers will accept it.
Regarding your second question, simply use chunked encoding:
All HTTP/1.1 applications that receive entities MUST accept the "chunked" transfer-coding (section 3.6), thus allowing this mechanism to be used for messages when the message length cannot be determined in advance.
I have a web application in which, after making an HTTP request to the server, the client quits (or the network connection is broken) before the response has been completely received by the client.
In this scenario the server side of the application needs to do some cleanup work. Is there a way built into the HTTP protocol to detect this condition? How does the server know whether the client is still waiting for the response or has quit?
Thanks
Vijay Kumar
No, there is nothing built into the protocol to do this (after all, you can't tell whether the response has been received by the client itself yet, or just by a downstream proxy).
Just have your client make a second request to acknowledge that it has received and stored the original response. If you don't see a timely acknowledgement, run the cleanup.
However, make sure that you understand the implications of the Two Generals' Problem.
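A sketch of that pattern on the server side (the handler names, the /ack route and the timeout value are all my own assumptions, not part of any particular framework):

    import threading
    import uuid

    ACK_TIMEOUT = 30.0
    pending = {}                       # response id -> cleanup timer

    def handle_request():
        response_id = str(uuid.uuid4())
        timer = threading.Timer(ACK_TIMEOUT, cleanup, args=[response_id])
        pending[response_id] = timer
        timer.start()
        return {"id": response_id, "data": "..."}   # sent back to the client

    def handle_ack(response_id):       # called when the client hits the /ack/<id> route
        timer = pending.pop(response_id, None)
        if timer:
            timer.cancel()             # client confirmed receipt: no cleanup needed

    def cleanup(response_id):
        pending.pop(response_id, None)
        # undo / clean up whatever the original request created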
You might have a network problem... Usually, when you send an HTTP request to the server, you first send the headers and then the content of the POST (if it is a POST method). Likewise, the server responds with the headers and the document body. The first line in the headers is the status. Usually, status 200 is the success status; if you get that, then there should be no problem getting the rest of the document. Check this for details on the HTTP response status codes: http://www.w3.org/Protocols/rfc2616/rfc2616-sec6.html
Later edit:
Sorry, I misread your question. Basically, you don't get a trigger for when the user disconnects. If you use OOP, you could use the destructor of a class to clean up whatever it is you need to clean.