Is replying to the client before receiving the complete request allowed for an HTTP 1.0 server? - http

I couldn't find an RFC that answers this question. Perhaps you can point me in the right direction.
I'm implementing a stripped-down HTTP server whose only function is to accept big multipart-encoded uploads.
In certain cases, such as when the file is too big or the client is not authorized to upload, I want the server to reply with an error and close the connection immediately.
It looks like the Chrome browser doesn't like this, because it thinks the server returned HTTP code zero:
Could not get any response
This seems to be like an error connecting to http://my_ubuntu:8080/api/upload. The response status was 0.
Check out the W3C XMLHttpRequest Level 2 spec for more details about when this happens.
Hence the question:
Is replying to the client before receiving the complete request allowed for an HTTP server?
Update: just tested it with an iOS 6 client. Same thing: it thinks the server abruptly closed the connection :(

This is a great question, and apparently quite an ambiguous area. You will probably enjoy reading this article on the "Million Dollar Bug": http://jacquesmattheij.com/the-several-million-dollar-bug
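The short practical takeaway from that story: if you close (or reset) the connection while the client is still uploading, the client's TCP stack may receive an RST and discard your response before the browser ever reads it, which is exactly what a "status 0" report looks like. A minimal sketch of the usual workaround, namely send the error, then drain the rest of the upload before closing (the port, status code, and timeout here are made up for illustration):

import socket

def serve_once(host="0.0.0.0", port=8080):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(1)
    conn, _ = srv.accept()
    try:
        conn.recv(65536)  # read enough to see the headers and decide to reject
        conn.sendall(b"HTTP/1.1 413 Request Entity Too Large\r\n"
                     b"Connection: close\r\n"
                     b"Content-Length: 0\r\n"
                     b"\r\n")
        conn.shutdown(socket.SHUT_WR)   # half-close: we are done writing
        conn.settimeout(10)             # don't drain a hostile client forever
        while conn.recv(65536):         # discard the rest of the upload
            pass
    except socket.timeout:
        pass
    finally:
        conn.close()
        srv.close()

The half-close plus drain is what gives the browser a chance to actually read the error before the connection goes away; an immediate close() with unread data still in the receive buffer is what typically triggers the RST.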

I think this is a certificate trust issue. Try manually trusting the site, and subsequent requests should work.

Related

Tomcat occasionally returns a response without HTTP headers

I’m investigating a problem where Tomcat (7.0.90, 7.0.92) very occasionally returns a response with no HTTP headers.
According to packets captured by Wireshark, after Tomcat receives a request it returns only a response body: neither a status line nor HTTP response headers.
This makes a downstream Nginx instance produce the error “upstream sent no valid HTTP/1.0 header while reading response header from upstream”, return a 502 error to the client, and close the corresponding HTTP connection between Nginx and Tomcat.
What can be a cause of this behavior? Is there any possibility that makes Tomcat behave this way? Or could something be stripping the HTTP headers under some condition? Or did Wireshark fail to capture the frames which contain the HTTP headers? Any advice on narrowing down where the problem lies is also greatly appreciated.
This is a screenshot of Wireshark's "Follow HTTP Stream" showing the problematic response:
EDIT:
This is a screenshot of the "TCP Stream" of the relevant part (response only). It seems that the chunks in the second-to-last response look fine:
EDIT2:
I forwarded this question to the Tomcat users mailing list and got some suggestions for further investigation from the developers:
http://tomcat.10.x6.nabble.com/Tomcat-occasionally-returns-a-response-without-HTTP-headers-td5080623.html
But I haven’t found a proper solution yet. I’m still looking for insights to tackle this problem.
The issues you experience stem from pipelining multiple requests over a single connection with the upstream, as explained in yesterday's answer here by Eugène Adell.
Whether this is a bug in nginx, tomcat, your application, or the interaction of any combination of the above, would probably be a discussion for another forum, but for now, let's consider what would be the best solution:
Can you post your nginx configuration? Specifically, if you're using keepalive and a non-default value of proxy_http_version within nginx? – cnst 1 hour ago
@cnst I'm using proxy_http_version 1.1 and keepalive 100 – Kohei Nozaki 1 hour ago
As per an earlier answer to an unrelated question here on SO that shares the same configuration parameters as above, you might want to reconsider the reasons behind your use of the keepalive functionality between the front-end load-balancer (e.g., nginx) and the backend application server (e.g., Tomcat).
As per a keepalive explanation on ServerFault in the context of nginx, the keepalive functionality in the upstream context of nginx wasn't even supported until fairly recently in nginx's development history. Why? Because there are very few valid scenarios for using keepalive when it's basically faster to establish a new connection than to wait for an existing one to become available:
When the latency between the client and the server is on the order of 50ms+, keepalive makes it possible to reuse the existing TCP and SSL/TLS connections, resulting in a very significant speedup, because no extra roundtrips are required to get the connection ready for servicing the HTTP requests.
This is why you should never disable keepalive between the client and nginx (controlled through http://nginx.org/r/keepalive_timeout in http, server and location contexts).
But when the latency between the front-end proxy server and the backend application server is on the order of 1ms (0.001s), using keepalive is a recipe for chasing Heisenbugs without reaping any benefits: the extra 1ms spent establishing a new connection may well be less than the 100ms spent waiting for an existing connection to become available. (This is a gross oversimplification of connection handling, but it shows just how insignificant any possible benefit of keepalive between the front-end load-balancer and the application server would be, provided both of them live in the same region.)
This is why using http://nginx.org/r/keepalive in the upstream context is rarely a good idea, unless you really do need it, and have specifically verified that it produces the results you desire, given the points as above.
(And, just to make it clear, these points are irrespective of the actual software you're using. So even if you weren't experiencing these problems with your combination of nginx and Tomcat, I'd still recommend not using keepalive between the load-balancer and the application server, even if you decide to switch away from either or both of nginx and Tomcat.)
My suggestion?
The problem wouldn't be reproducible with the default values of http://nginx.org/r/proxy_http_version and http://nginx.org/r/keepalive.
If your backend is within 5ms of the front-end, you almost certainly aren't getting any benefit from modifying these directives in the first place. So unless chasing Heisenbugs is your path, you might as well keep these specific settings at their most sensible defaults.
We see that you are reusing an established connection to send the POST request and that, as you said, the response comes without the status-line and the headers.
after Tomcat receives a request it just returns only a response body.
Not exactly. The body starts with 5d, which is probably a chunk size; this means that the latest "full" response (with status line and headers) received over this connection contained a "Transfer-Encoding: chunked" header. For some reason, your server still believes the previous response isn't finished by the time it starts sending this new response to your last request.
A missing last-chunk seems confirmed, as the screenshot doesn't show a last-chunk (value = 0) ending the previous response. Note that the last response does end with a last-chunk (the last byte shown is 0).
What causes this? The previous response isn't technically considered fully answered. It could be a bug in Tomcat, in your web service library, or in your own code. Maybe you're even sending your request too early, before the previous one was completely answered.
Are some bytes missing if you compare the chunk sizes against what is actually sent to the client? Are all buffers flushed? Beware of the line endings (CRLF vs. LF only), too.
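To make the framing concrete, here is a sketch (not taken from the capture) of what a correctly terminated chunked body looks like; the crucial part is the "0\r\n\r\n" last-chunk that appears to be missing from the previous response:

# Each chunk is "<size in hex>\r\n<data>\r\n"; the body MUST end with
# the last-chunk "0\r\n" followed by an empty line.
def encode_chunked(parts):
    out = bytearray()
    for data in parts:
        out += b"%x\r\n" % len(data)  # chunk-size in hex, e.g. "5d"
        out += data + b"\r\n"         # chunk-data
    out += b"0\r\n\r\n"               # last-chunk + terminating CRLF
    return bytes(out)

# A body that starts with "5d" but never reaches "0\r\n\r\n" is, from the
# parser's point of view, still in progress -- matching the capture above.
body = encode_chunked([b"x" * 0x5d, b"hello"])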
One last cause I can think of: if your response contains some kind of user input taken from the request, you could be facing HTTP response splitting.
Possible solutions.
It is worth trying to disable chunked encoding at the library level; for example, with Axis2 check the HTTP Transport settings.
When reusing a connection, check your client code to make sure that you aren't sending a request before you have read all of the previous response (to avoid overlapping); see the sketch below.
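A sketch of what "reading all of the previous response" means for a chunked body (assuming sock_file is a buffered file-like wrapper around the socket, e.g. sock.makefile("rb"), and that no trailers are in play):

def read_chunked_body(sock_file):
    # Consume chunks until the "0" last-chunk, so the connection is
    # genuinely free before the next request is written to it.
    while True:
        size = int(sock_file.readline().strip(), 16)  # e.g. b"5d\r\n" -> 0x5d
        if size == 0:
            sock_file.readline()  # final CRLF (would be trailers, if any)
            return
        sock_file.read(size)      # chunk-data
        sock_file.readline()      # CRLF after chunk-data

Any decent HTTP library does this for you; the point is only that skipping it and writing the next request early produces exactly the kind of overlap described above.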
Further reading
RFC 2616 3.6.1 Chunked Transfer Coding
It turned out that the "sjsxp" library which JAX-WS RI v2.1.3 uses was making Tomcat behave this way. I tried a different version of JAX-WS RI (v2.1.7), which no longer uses the "sjsxp" library, and it solved the issue.
A very similar issue posted on Metro mailing list: http://metro.1045641.n5.nabble.com/JAX-WS-RI-2-1-5-returning-malformed-response-tp1063518.html

When a browser says that an http request is aborted what has actually happened?

On some occasions an HTTP request appears to be aborted by the browser. Using Firebug or similar, in the status column where it might normally say, for example, 200 OK, it says "aborted" (in red). When this occurs in Internet Explorer, the user may see an IE-generated message: "Internet Explorer cannot display this page".
What has happened here?
I don't think it is a timeout issue, as this occurs within quite a short time frame, and I believe I can get a successful response (e.g. a 200) when the response takes longer.
And it isn't to do with the server; the request is aborted by the browser. It isn't that we got a server error back (e.g. a 500).
Also, the same request (to the same URL with the same method) usually works. So it isn't something to do with, say, SSL being misconfigured.
I am assuming that this is something to do with internet connectivity. But I don't know enough about networking / the internet to know what that really means.
So. The specific question is; what cases could cause this error?
This can happen when the browser is using an outdated SSL/TLS version and requests a resource that requires a secure connection.
The server, your browser or any machine (or operating system) in between can drop the underlying TCP connection for any reason (timeouts, digging machines, intrusion detection).
You won't get a server error from those situations, because the server either didn't receive your request, it did but it took too long to process, or the server sent its (proper) response but it wasn't fully transmitted.
This can happen when a POST is fired during a GET (for example, during the download of an image), or when an image tag has no src.

HTTP 400 - Hard to understand error code with minimal description

All,
My requirement is fairly simple. I have to perform a simple HTTP POST to an IP:port combination. I used simple socket programming to do that, and I have been successful in sending my request across to them and also getting a response back from them. The only problem is that the response is always an HTTP 400: Bad Request, followed by my HTTP POST message. I am not sure if the problem is with the client or the server. My only guess is that there might be a problem with the data I am sending. This is what my POST looks like:
POST /<Server Tag> HTTP/5.1
Content-Length: xxx
--Content--
and the response from the server looks something like this
HTTP/1.1 400 Bad Request
Content-Length: xxx
--Same content that I sent them--
I was not sure if I could put the server's IP in here, so I kept to using placeholders. I am pretty sure the problem would not be there, since I do get some response back from the server and am confident about the connection. Can someone help me?
PS: Some pointers about my POST:
1) HTTP 5.1 was requested by the server, and I am not sure that is correct.
2) I have played around with the number of line endings after the Content-Length. I have tried one and two, and am not sure whether that makes a difference. On Wireshark, though, I do see a difference: with a single line ending the protocol is shown as TCP, but with two it changes to HTTP. The response is always received over HTTP. Some explanation of the difference would also help.
Thanks
Edit: the other thing that confuses me is that the response says HTTP/1.1 and not the 5.1 that I had sent. I have also tried changing my POST to 1.1, with no success.
Edit 2: based on suggestions from fvu and others, I used WebClient to upload my request. I still got back a 400. The header that was generated by the WebClient looks like this:
POST <server tag> HTTP/1.1
Host: <IP:PORT>
Content-Length: 484
Expect: 100-continue
Connection: Keep-Alive
The issue I see with this might be that the server was not expecting all the details in the header. The server has requested only the Content-Length from us. Would that be a problem?
Thanks
You can use a debugging proxy to view a client request and a server response to figure out what your client socket program needs to do.
But first you need to create a simple web page that a browser can display, lets you do a POST from the browser to the web server, and gets a simple response back from the server.
HTTP/5.1 is not a real protocol version; it is either wrong or misused by the programmer of the server application.
You should get a valid example from the server API to check your protocol implementation against first.
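For reference, a well-formed POST built over a raw socket could look like the sketch below (the host, port, and path are placeholders, not your server's real values). Two details worth noting: the request line must carry a real protocol version, and exactly one blank line (CRLF CRLF) terminates the header block, which is most likely why Wireshark only labels the stream as HTTP once that second line ending is present.

import socket

HOST, PORT = "192.0.2.1", 8080  # placeholder address (TEST-NET)
body = b"key=value"
request = (
    b"POST /api/endpoint HTTP/1.1\r\n"  # a real version, not HTTP/5.1
    b"Host: %s:%d\r\n"
    b"Content-Length: %d\r\n"
    b"\r\n" % (HOST.encode("ascii"), PORT, len(body))  # blank line ends headers
) + body

with socket.create_connection((HOST, PORT)) as s:
    s.sendall(request)
    print(s.recv(65536).decode("latin-1"))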

HTTP: error during reply after 200 OK status code

As an HTTP 1.1 server, I reply to a GET request with 200 OK status code, then start sending the data to the client.
During this send, an error occurs, and I cannot finish.
I cannot send a new status code as a final status code has already been sent.
How should I behave to let the client know an error occurred and I cannot continue with this HTTP request?
I can think of only one solution: close the socket, but it's not perfect: it breaks the keep-alive feature, and no clear explanation of the error is given to the client.
The HTTP standard seems to suppose that the server already knows exactly what to reply before it starts replying.
But this is not always the case.
Examples:
I return a very large file (several GB) from disk, and I get an IO error at some point during the reading of the file.
Same example with a large DB dump.
I cannot construct my whole response in memory then send it.
The HTTP 1.1 standard helps for such usage with the chunked transfer encoding: I don't even need to know the final size before starting to send the reply.
So these usages are not excluded by HTTP 1.1.
I finally found a possible solution for this:
HTTP 1.1 Trailer headers.
In chunked encoded bodies, HTTP 1.1 allows the sender to add data after the last (empty) chunk, in the form of a block of headers.
The specification hints at some use cases, like computing an md5 of the body on the fly and sending it after the body so the client can check its integrity.
I think it could be used for error reporting, even if I haven't found anything about this kind of usage.
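A sketch of how that error-reporting idea could look (the X-Stream-Error field name is invented purely for illustration; no standard assigns it any meaning):

# Generator yielding the raw bytes of a chunked response whose trailer
# reports whether the body was produced completely.
def chunked_response_with_trailer(chunks):
    yield (b"HTTP/1.1 200 OK\r\n"
           b"Transfer-Encoding: chunked\r\n"
           b"Trailer: X-Stream-Error\r\n"
           b"\r\n")                    # announce the trailer field up front
    error = b"none"
    try:
        for data in chunks:
            yield b"%x\r\n%s\r\n" % (len(data), data)
    except IOError as exc:             # e.g. disk read failed mid-stream
        error = str(exc).encode("ascii", "replace")
    # last-chunk, then the trailer block instead of the usual bare CRLF
    yield b"0\r\nX-Stream-Error: %s\r\n\r\n" % error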
The issues I see with this are:
this requires using chunked encoding (but it's not much of an issue)
trailers support is probably very low:
server-side (it could be bypassed by manually creating the chunked encoding, but since it's applied after the content-encoding (gzip), it would require a lot of reimplementation)
client-side (bugs fixed only in 2010 in curl for example)
and on proxies (which could then lose the trailers if not properly implemented)
I pushed a similar question to get answered, and there you can find that there is no real solution:
How to tell there's something wrong with the server during response that started as 200 OK. Fail gracefully

HTTP server detecting a broken network connection from a HTTP client

I have a web application in which, after the client makes an HTTP request to the server, the client quits (or the network connection is broken) before the response has been completely received.
In this scenario the server side of the application needs to do some cleanup work. Is there a way built into the HTTP protocol to detect this condition? How does the server know whether the client is still waiting for the response or has quit?
Thanks
Vijay Kumar
No, there is nothing built in to the protocol to do this (after all, you can't tell whether the response has been received by the client itself yet, or just a downstream proxy).
Just have your client make a second request to acknowledge that it has received and stored the original response. If you don't see a timely acknowledgement, run the cleanup.
However, make sure that you understand the implications of the Two Generals' Problem.
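A minimal sketch of that acknowledgement pattern (the 30-second timeout and the /ack endpoint are invented for illustration):

import threading, uuid

pending = {}  # response_id -> Timer that will run the cleanup

def cleanup(response_id):
    pending.pop(response_id, None)
    print("no acknowledgement for %s, cleaning up" % response_id)

def send_response_with_cleanup(payload):
    # Schedule the cleanup when the response goes out; include the id in
    # the response so the client can acknowledge it later.
    response_id = str(uuid.uuid4())
    timer = threading.Timer(30.0, cleanup, args=(response_id,))
    pending[response_id] = timer
    timer.start()
    return response_id, payload

def handle_ack(response_id):
    # Handler for the client's follow-up request (e.g. POST /ack).
    timer = pending.pop(response_id, None)
    if timer:
        timer.cancel()  # client confirmed receipt; keep the data

As the Two Generals' Problem implies, the timeout only ever means "probably not received", so the cleanup itself should be safe to run even if the client did in fact get the response.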
You might have a network problem... Usually, when you send an HTTP request to the server, you first send the headers and then the content of the POST (if it is a POST method). Likewise, the server responds with the headers and the document body. The first line in the header is the status. Usually, status 200 is the success status; if you get that, then there should be no problem getting the rest of the document. Check this for details on HTTP response status codes: http://www.w3.org/Protocols/rfc2616/rfc2616-sec6.html
Later edit:
Sorry, misread your question. Basically, you don't have a trigger for when the user disconnects. If you use OOP, you could use the destructor of a class to clean up whatever it is you need to clean up.
