I'm working on a high-traffic ASP.NET website, and about once every 30 minutes we get a POST request that triggers an HttpException: Request timed out. While debugging, we found that ASP.NET receives the request, but not the request body. Here are the headers:
Connection: Keep-Alive
Content-Length: 49476
Content-Type: application/x-www-form-urlencoded
Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
Accept-Encoding: gzip,deflate
Accept-Language: en-US,en
Host: abc.123.com
Referer: https://abc.123.com/page.aspx
And the request body is empty. We are behind a load balancer, if that helps with anything.
My question is, how would I debug and fix an issue like this? According to the accepted answer in this post:
Diagnosing "Request timed out" HttpExceptions
It looks like what may be happening is that the request is being split into two TCP segments, one for the headers and one for the body. Since we're behind a load balancer with a shared virtual IP, it may be entirely possible that one TCP segment is being sent to one server and the other to another. Is this a plausible cause? Or could something else be causing it?
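One way to test that theory is to capture the raw bytes that actually reach a single server behind the balancer, independently of ASP.NET. Here is a minimal sketch in Node.js (the port is a placeholder): if the body really never arrives at that box, only the header bytes will ever show up in the log.

// Minimal raw-TCP capture server: logs every chunk the kernel delivers,
// with timestamps, so header and body arrival can be compared.
const net = require('net');

const server = net.createServer((socket) => {
  const peer = socket.remoteAddress + ':' + socket.remotePort;
  console.log(new Date().toISOString() + ' connection from ' + peer);

  socket.on('data', (chunk) => {
    // Each 'data' event is one delivery from the kernel; if the body is
    // lost upstream, you will see only the header bytes here.
    console.log(new Date().toISOString() + ' ' + peer + ' ' + chunk.length + ' bytes');
    console.log(chunk.toString('latin1'));
  });

  socket.on('end', () => console.log(new Date().toISOString() + ' ' + peer + ' closed'));
});

server.listen(8080, () => console.log('capture server listening on :8080'));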
I'm trying to write a .NET web API that will receive HTTP requests from some devices and handle the data they send. I know the exact format of the data and the IP/port it is sent to. The problem is that the API does not even seem to respond to the requests: the controller method that handles the POST is never called.
I have tested the API with Postman, using the correct data format and host information, and it works as intended. To make sure the device is at least attempting a connection, I listened on the port with a Node.js TCP server. There is data being sent, and this is the header block that precedes it:
POST / HTTP/1.0
Host: xxx
Connection: keep-alive
User-Agent: xxx
Content-Type: application/json
Transfer-Encoding: chunked
Transfer-Content: chunked
I can't post the body data, but it is in JSON format as expected (but separated into chunks).
Since requests are being made and data is being sent, but the API doesn't acknowledge any of it despite working when tested with Postman, I'm wondering if there is an issue with the headers. While researching them I read that HTTP/1.0 doesn't support chunked transfer encoding. Could it be that the devices are making malformed requests? Or are the headers fine and the problem lies elsewhere?
Thank you for your help.
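For what it's worth, chunked bodies are only defined for HTTP/1.1, so an HTTP/1.0 request line combined with Transfer-Encoding: chunked is indeed malformed, and many HTTP stacks will quietly refuse it before any controller runs. A hedged repro sketch with a raw socket (host, port and body are placeholders) that replays both variants against the API:

const net = require('net');

function replay(requestLine) {
  const message =
    requestLine + '\r\n' +
    'Host: localhost\r\n' +
    'Connection: keep-alive\r\n' +
    'Content-Type: application/json\r\n' +
    'Transfer-Encoding: chunked\r\n' +
    '\r\n' +
    'e\r\n{"hello":"hi"}\r\n' + // one chunk: size in hex (0xe = 14 bytes), CRLF, data, CRLF
    '0\r\n\r\n';                // terminating zero-length chunk

  const socket = net.connect(5000, 'localhost', () => socket.write(message));
  socket.on('data', (d) => {
    console.log(requestLine, '->', d.toString().split('\r\n')[0]);
    socket.end();
  });
  socket.on('error', (e) => console.log(requestLine, '->', e.message));
}

replay('POST / HTTP/1.0'); // what the devices send
replay('POST / HTTP/1.1'); // chunked is only defined for HTTP/1.1

If the 1.1 variant reaches the controller and the 1.0 variant does not, the devices' requests are the problem.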
From http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html:
HTTP/1.1 proxies MUST parse the Connection header field before a message is forwarded and, for each connection-token in this field, remove any header field(s) from the message with the same name as the connection-token.
Could somebody please give an example of a common scenario that the paragraph above refers to?
Does it have anything to do with the Connection: close header?
A good example, in HTTP/1.1, is Upgrade, to indicate that a client wishes to move from HTTP/1.1 to another protocol:
GET http://www.example.com/hello.txt HTTP/1.1
Connection: upgrade
Upgrade: HTTP/2.0, SHTTP/1.3, IRC/6.9, RTA/x11
If this request passed through a proxy, the Upgrade header should not be forwarded to any upstream servers, as it only makes sense for this particular connection.
The Keep-Alive header could also appear here in HTTP/1.0, but it is obsolete in HTTP/1.1.
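A sketch of the rule in practice (the helper name and the hard-coded header set are mine, not from the RFC): a proxy takes each token listed in Connection as the name of a hop-by-hop header and strips it, together with the well-known hop-by-hop set, before forwarding.

const HOP_BY_HOP = new Set([
  'connection', 'keep-alive', 'proxy-authenticate', 'proxy-authorization',
  'te', 'trailer', 'transfer-encoding', 'upgrade',
]);

function stripHopByHop(headers) {
  const out = { ...headers };
  // Every token in Connection names another header that only applies to
  // this hop, e.g. "Connection: upgrade" -> also drop Upgrade.
  const tokens = (headers['connection'] || '')
    .split(',')
    .map((t) => t.trim().toLowerCase())
    .filter(Boolean);

  for (const name of [...tokens, ...HOP_BY_HOP]) delete out[name];
  return out;
}

// The Upgrade header from the request above is not forwarded upstream:
console.log(stripHopByHop({
  host: 'www.example.com',
  connection: 'upgrade',
  upgrade: 'HTTP/2.0, SHTTP/1.3, IRC/6.9, RTA/x11',
}));
// -> { host: 'www.example.com' }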
Let's say a client makes a request like the following (pulled from iOS):
GET /test.mp4 HTTP/1.1
Host: example.com:80
Range: bytes=0-1
X-Playback-Session-Id: 3DFA3BE3-CB22-4EC5-808F-B59A735DCECE
Accept-Encoding: identity
Accept: */*
Accept-Language: en-us
Connection: keep-alive
User-Agent: AppleCoreMedia/1.0.0.11B554a (iPad; U; CPU OS 7_0_4 like Mac OS X; en_us)
There are other such requests out there; I believe Chrome might test the waters by asking for a blank Range.
How can the server respond to such a request so that it does not need to honor the Range header, but rather treats it as a standard HTTP delivery, while the client still plays the file?
Sending a regular header response and the data as though the client had not asked for a Range does not seem to work.
EDIT: Conversely, if the client does not request a Range, is it okay to respond with HTTP 206, with the full file size in Content-Length and also a Content-Range header (which the client will ignore)?
If the server does not support the Range header, it sends a normal 200 reply with the entire file. If it does support the Range header, it sends a 206 or 416 reply, depending on whether the requested range can be satisfied. This is covered in RFC 2616 Section 14.35.
It is not OK to respond with 206 if the client did not request a Range.
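A hedged Node.js sketch of both behaviours described above (file name, port and MIME type are placeholders; suffix ranges like bytes=-500 fall through to the 200 path for brevity):

const http = require('http');
const fs = require('fs');

const FILE = 'test.mp4';     // placeholder
const SUPPORT_RANGES = true; // flip to false to ignore Range and always send 200

http.createServer((req, res) => {
  const size = fs.statSync(FILE).size;
  const match = SUPPORT_RANGES && /^bytes=(\d+)-(\d*)$/.exec(req.headers.range || '');

  if (!match) {
    // No Range sent, or we are ignoring it: a plain 200 with the whole file.
    res.writeHead(200, { 'Content-Type': 'video/mp4', 'Content-Length': size });
    fs.createReadStream(FILE).pipe(res);
    return;
  }

  const start = parseInt(match[1], 10);
  const end = match[2] ? Math.min(parseInt(match[2], 10), size - 1) : size - 1;

  if (start >= size) {
    res.writeHead(416, { 'Content-Range': 'bytes */' + size }); // unsatisfiable
    return res.end();
  }

  res.writeHead(206, {
    'Content-Type': 'video/mp4',
    'Content-Range': 'bytes ' + start + '-' + end + '/' + size,
    'Content-Length': end - start + 1,
  });
  fs.createReadStream(FILE, { start, end }).pipe(res);
}).listen(3000);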
Try responding with HTTP/1.0 - it doesn't support range requests at all.
Maybe the client will then treat the reply more gracefully.
Hello StackOverflow community!
I started learning Node.js recently and decided to implement a reverse HTTP proxy as an exercise. There were a couple of rough spots, which I managed to get through on my own, but now I'm a bit stuck and need your help. I managed to handle redirects and relative URLs, and it was while implementing relative URL support that I ran into the problem I'm going to describe.
You can find my code at http://pastebin.com/vZfEfk8r. It's not very big, but it still wouldn't fit nicely on this page.
So, to the problems (there are two of them). I'm using http.request to forward the client's request to the target server, then waiting for the response and sending it back to the client. It works fine for some requests, but not for others. This is the first problem: on the website I'm using to test the proxy (http://ixbt.com, a cool Russian tech site) I can always get the main page /index.html, but when the browser starts to fetch the other files referenced from that page (CSS, images, etc.), most of those requests end with a ParseError ({"bytesParsed":0}).
While debugging with Wireshark, I noticed that some of the requests (if not all) fail with this error when the following HTTP exchange occurs between the proxy and the target server:
Request:
GET articles/pics2/201206/coolermaster-computex2012_70x70.jpg HTTP/1.1
Host: www.ixbt.com
Connection: keep-alive
Response:
<html>
<head><title>400 Bad Request</title></head>
<body bgcolor="white">
<center><h1>400 Bad Request</h1></center>
<hr><center>nginx</center>
</body>
</html>
It looks like the server sends no status line and no headers at all. So the question is: could this be the reason for the failure (the ParseError)?
Another thing that puzzles me is that when I fetch the same file as a standalone request, I have no problems. Just look:
Request:
GET /articles/pics2/201206/coolermaster-computex2012_70x70.jpg HTTP/1.1
Host: www.ixbt.com
Connection: keep-alive
Response:
HTTP/1.1 200 OK
Server: nginx
Date: Mon, 25 Jun 2012 17:09:51 GMT
Content-Type: image/jpeg
Content-Length: 3046
Last-Modified: Fri, 22 Jun 2012 00:06:27 GMT
Connection: keep-alive
Expires: Wed, 25 Jul 2012 17:09:51 GMT
Cache-Control: max-age=2592000
Accept-Ranges: bytes
... and here goes the body ...
So at the end of the day there may be some mistake in how I make the proxied requests. Maybe it's because I actually make lots of them when the main page loads, since it references many images, etc.?
I hope I was clear enough, but please ask for details if I missed something. The full source code is available (again, at http://pastebin.com/vZfEfk8r), so if somebody could try it, that would be great. :)
Much thanks in advance!
P.S. As I said, I'm just learning, so if you see some bad practices in my code (even unrelated to the question), it would be nice to know about them.
UPDATE: As was mentioned in a comment, I wasn't forwarding the original request's headers, which in theory could lead to problems with subsequent requests. I changed that, but unfortunately the behavior remained the same. Here's an example of a new request and response:
Request:
GET css/main_fixed.css HTTP/1.1
Host: www.ixbt.com
connection: keep-alive
cache-control: no-cache
pragma: no-cache
user-agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.56 Safari/536.5
accept: text/css,*/*;q=0.1
accept-encoding: gzip,deflate,sdch
accept-language: ru-RU,ru;q=0.8,en-US;q=0.6,en;q=0.4
accept-charset: windows-1251,utf-8;q=0.7,*;q=0.3
referer: http://www.ixbt.com/
Response:
<html>
<head><title>400 Bad Request</title></head>
<body bgcolor="white">
<center><h1>400 Bad Request</h1></center>
<hr><center>nginx</center>
</body>
</html>
I had to craft the referer header by hand, since the browser sends it with the reverse proxy's URL. Still, the behavior is the same, as you can see. Any other ideas?
Thanks to the valuable comments, I was able to find the answer to this problem. It had nothing to do with Node or the target web servers; it was just a coding error.
The answer is that the path component of the URL was wrong for relative URLs. It's already visible in the logs in the question body; I'll repeat them here to reiterate:
Wrong request:
GET articles/pics2/201206/coolermaster-computex2012_70x70.jpg HTTP/1.1
Right request:
GET /articles/pics2/201206/coolermaster-computex2012_70x70.jpg HTTP/1.1
See the difference? The leading slash. It turns out I was dropping it on requests for relative URLs, due to my own awkward handling of the client's URL. With a quick-and-dirty fix it's working now, well enough until I implement proper URL handling.
Much thanks for comments, they were insightful!
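For reference, a sketch of the class of fix (the function and variable names are illustrative, not from the pastebin code): normalize the upstream path so it always carries its leading slash before calling http.request.

const http = require('http');
const url = require('url');

function forward(clientReq, clientRes, targetHost) {
  const parsed = url.parse(clientReq.url);
  let path = parsed.path || '/';
  if (path[0] !== '/') path = '/' + path; // the missing leading slash

  const upstream = http.request(
    { host: targetHost, path: path, method: clientReq.method,
      headers: { ...clientReq.headers, host: targetHost } },
    (upstreamRes) => {
      clientRes.writeHead(upstreamRes.statusCode, upstreamRes.headers);
      upstreamRes.pipe(clientRes);
    }
  );
  upstream.on('error', () => clientRes.end());
  clientReq.pipe(upstream);
}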
If the above solutions do not work, try removing the Content-Length header. A Content-Length mismatch causes body parsers to throw this kind of error.
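For a proxy, that usually means: if the body is rewritten at all, either delete Content-Length and let Node fall back to chunked encoding, or recompute it from the actual bytes. A tiny sketch (hypothetical helper):

// Keep Content-Length honest after a body rewrite.
function withFixedContentLength(headers, body) {
  const out = { ...headers };
  delete out['content-length'];                    // drop the stale value
  out['content-length'] = Buffer.byteLength(body); // recompute from the real bytes
  return out;
}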
Solved: pasting the bytes here made me realise that I was missing empty lines between chunks...
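For anyone hitting the same thing, this is the framing in question (encodeChunked is an illustrative name): each chunk is the size in hex, CRLF, the data, CRLF, and the message ends with a zero-length chunk. Dropping the CRLF after the data is exactly the missing empty line.

function encodeChunked(parts) {
  return (
    parts
      .map((p) => Buffer.byteLength(p).toString(16) + '\r\n' + p + '\r\n')
      .join('') + '0\r\n\r\n' // terminating zero-length chunk, then a blank line
  );
}

console.log(JSON.stringify(encodeChunked(['Hello ', 'World'])));
// -> "6\r\nHello \r\n5\r\nWorld\r\n0\r\n\r\n"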
Does an HTTP/1.1 request need to specify a Connection: keep-alive header, or is it always keep-alive by default?
This guide made me think it would be: that when my HTTP server gets a 1.1 request, the connection is keep-alive unless it explicitly receives a Connection: close header.
I ask because the differing client behaviour of ab and httperf is driving me mad enough to question my sanity on this one...
Here's what httperf --hog --port 42042 --print-reply body sends:
GET / HTTP/1.1
User-Agent: httperf/0.9.0
Host: localhost
And here's my server's response:
HTTP/1.1 200 OK
Connection: keep-alive
Transfer-Encoding: chunked
Content-Length: 18
12
Hello World 1
0
httperf promptly prints out the response, but then just sits there, with neither side closing the connection and httperf never exiting.
Where's my bug?
From RFC 2616, section 8.1.2:
A significant difference between HTTP/1.1 and earlier versions of HTTP is that persistent connections are the default behavior of any HTTP connection. That is, unless otherwise indicated, the client SHOULD assume that the server will maintain a persistent connection, even after error responses from the server.
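So httperf is arguably behaving correctly: the reply is complete, but the connection stays open, so it waits. Note also that the response above carries both Content-Length and Transfer-Encoding: chunked; RFC 2616 section 4.4 forbids combining the two, so the framing is worth fixing as well. A hand-rolled sketch of a well-formed reply (port taken from the question; naively assumes one request per 'data' event):

const net = require('net');

net.createServer((socket) => {
  socket.on('data', () => {
    socket.write(
      'HTTP/1.1 200 OK\r\n' +
      'Transfer-Encoding: chunked\r\n' + // chunked OR Content-Length, never both
      '\r\n' +
      'd\r\nHello World 1\r\n' +         // 0xd = 13 bytes, then CRLF
      '0\r\n\r\n'                        // terminating chunk
    );
    // HTTP/1.1 defaults to keep-alive, so the socket stays open and httperf
    // keeps waiting. To close per request instead, send "Connection: close"
    // in the headers above and call socket.end() here.
  });
}).listen(42042);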