HTTP header 400 Bad Request response

I'm trying to practice writing correct HTTP headers to understand
the syntax. Here I'm trying to PUT some text to httpbin.org/put, and I expect the response body to echo it back.
PUT /HTTP/1.1
Host: httpbin.org
Accept-Language: en-us
Connection: Keep-Alive
Content-type: text/plain
Content-Length: 12
Hello jerome
However, I'm getting the following 400 Bad Request response:
HTTP/1.1 400 Bad Request
Server: nginx
Date: Tue, 01 Mar 2016 12:34:02 GMT
Content-Type: text/html
Content-Length: 166
Connection: close
Response body:
<html>
<head><title>400 Bad Request</title></head>
<body bgcolor="white">
<center><h1>400 Bad Request</h1></center>
<hr><center>nginx</center>
</body>
</html>
What syntax errors have I made?
NOTE: newlines are \r\n not \n in the request.

Apparently the correct syntax goes like this for PUT:
PUT /put HTTP/1.1\r\n
Content-Length: 11\r\n
Content-Type: text/plain\r\n
Host: httpbin.org\r\n\r\n
hello lala\n
I realize I didn't say much about how I connected to httpbin.org: it was via raw sockets in C, so the connection was already established before sending the headers and message.

You're missing the destination URL after the PUT verb; the first line must be:
PUT http://httpbin.org/ HTTP/1.1
This will probably still fail; you need one of their handler URLs so they know what to reply with:
PUT http://httpbin.org/put HTTP/1.1

The general form of the first line, or Request Line, in an HTTP request is as follows:
<method> <path component of URL, or absolute URL> HTTP/<Version>\r\n
Where, for your example, the method is PUT. Including an absolute URL (so, starting with http:// or https://) is only necessary when connecting to a proxy, because the proxy will then attempt to retrieve that URL, rather than attempt to serve a local resource (as found by the path component).
As presented, the only change you should have needed to make was ensuring there was a space between the / and HTTP/1.1. Otherwise, the path would be "/HTTP/1.1"... which would be a 404, if it weren't already a badly formed request. With /HTTP/1.1 interpreted as a path, the server parsing your request line never finds the protocol specifier (the HTTP/1.1 bit) before the terminating \r\n... and that's one example of how 400 response codes are born.
Hope that helped. Consult the HTTP/1.1 RFC (RFC 2616), section 5.1, for more information and the official definitions.
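If it helps to see it end to end, here is a minimal sketch of sending the corrected request over a plain TCP socket. It is Java rather than C, but the bytes on the wire are the same; the class name is made up:

import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class RawPut {
    public static void main(String[] args) throws Exception {
        String body = "hello lala\n";
        String request =
            "PUT /put HTTP/1.1\r\n" +       // note the space before HTTP/1.1
            "Host: httpbin.org\r\n" +
            "Content-Type: text/plain\r\n" +
            "Content-Length: " + body.getBytes(StandardCharsets.UTF_8).length + "\r\n" +
            "Connection: close\r\n" +
            "\r\n" +                        // blank line ends the header block
            body;
        try (Socket socket = new Socket("httpbin.org", 80)) {
            OutputStream out = socket.getOutputStream();
            out.write(request.getBytes(StandardCharsets.UTF_8));
            out.flush();
            InputStream in = socket.getInputStream();
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) {
                System.out.write(buf, 0, n);   // dump the raw response
            }
            System.out.flush();
        }
    }
}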

Related

Should non-2xx status code responses include CORS-specific headers

Should non-2xx status code responses still include CORS-specific headers such as Access-Control-Allow-Origin, Access-Control-Allow-Methods, and Access-Control-Max-Age? Does that even make any sense for clients?
For example:
➜ api git:(master) ✗ curl -i http://127.0.0.1:9000/dfas
HTTP/1.1 404 Not Found
Connection: close
Server: Node.js v6.3.1
Cache-Control: no-cache, no-store
Access-Control-Max-Age: 300
Access-Control-Allow-Origin: *
Content-Type: application/json
Content-Length: 60
Date: Thu, 11 Aug 2016 22:58:33 GMT
{"code":"ResourceNotFound","message":"/dfas does not exist"}
Yes, it makes sense to have the server send CORS headers even with non-2xx responses. The reason is that without the CORS headers in the response, non-2xx response codes aren't exposed to frontend code (through Fetch or XHR). The response codes may show up in the devtools console, but without the CORS headers the only thing the frontend code can determine programmatically is that an error occurred, not which response code it was.
So if you want frontend code to be able to do useful error handling based on the response code, the server should send CORS headers even in non-2xx responses.
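To make that concrete, here is a minimal sketch of a server that attaches CORS headers to its 404 responses, using the JDK's built-in com.sun.net.httpserver; the port and JSON body are illustrative, not taken from the question:

import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class CorsOnErrors {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(9000), 0);
        server.createContext("/", exchange -> {
            byte[] body = "{\"code\":\"ResourceNotFound\"}"
                    .getBytes(StandardCharsets.UTF_8);
            // CORS headers go on the error response too, so frontend
            // code can actually read the 404 status code.
            exchange.getResponseHeaders().add("Access-Control-Allow-Origin", "*");
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(404, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
    }
}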

TooLongFrameException: MIME HTTP header size calculation

When receiving a response with a Netty client, I run into a TooLongFrameException. After taking a tcpdump, I found that the response is a large multipart MIME response with about 200 parts (each with some short headers), but the actual HTTP headers for the response are quite small:
> Host: foobar.com:20804
> Accept: */*
>
< HTTP/1.1 207 Multi-Status
< Date: Tue, 04 Aug 2015 19:44:09 GMT
< Vary: Accept
< Content-Type: multipart/mixed; boundary="63602357878446117"
< Content-Length: 33023
I couldn't find anything in the documentation about this, but are MIME part headers counted in the HTTP header size calculation, and does Netty parse them as such?
The exception I get is as follows:
io.netty.handler.codec.TooLongFrameException: HTTP header is larger than 8192 bytes.
at io.netty.handler.codec.http.HttpObjectDecoder$HeaderParser.newException(HttpObjectDecoder.java:787)
at io.netty.handler.codec.http.HttpObjectDecoder$HeaderParser.process(HttpObjectDecoder.java:779)
at io.netty.buffer.AbstractByteBuf.forEachByteAsc0(AbstractByteBuf.java:1022)
at io.netty.buffer.AbstractByteBuf.forEachByte(AbstractByteBuf.java:1000)
at io.netty.handler.codec.http.HttpObjectDecoder$HeaderParser.parse(HttpObjectDecoder.java:751)
at io.netty.handler.codec.http.HttpObjectDecoder.readHeaders(HttpObjectDecoder.java:545)
at io.netty.handler.codec.http.HttpObjectDecoder.decode(HttpObjectDecoder.java:221)
at io.netty.handler.codec.http.HttpClientCodec$Decoder.decode(HttpClientCodec.java:136)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:315)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:229)
at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:147)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1044)
at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:934)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:315)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:229)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:847)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
An HTTP header block terminates with two CRLFs (such as between Accept and HTTP/1.1 in your example), and a response header must start with a status line (HTTP/1.1 ...).
Therefore I see two issues with your example:
Your header does not start correctly: HTTP/1.1 207 Multi-Status should be the first line, followed by the other header fields.
There is probably something wrong in your response such that there is no double CRLF between the headers and the body, leading the decoder to treat the body as part of the headers; hence the exception...
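The right fix is to repair the response framing, but if a peer's headers really are that large, Netty lets you raise the 8192-byte limit when constructing the codec. A sketch, assuming a typical client pipeline setup:

import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.http.HttpClientCodec;

public class ClientInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        // Constructor args: maxInitialLineLength, maxHeaderSize, maxChunkSize.
        // The defaults are 4096, 8192, 8192; only maxHeaderSize is raised here.
        ch.pipeline().addLast(new HttpClientCodec(4096, 32768, 8192));
    }
}

Note that raising the limit only masks the problem if the real cause is a missing blank line between the headers and the body.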

Getting 404 error if requesting a page through proxy, but 200 if connecting directly

I am developing an HTTP proxy in Java. I resend all the data from client to server without touching it, but for some URLs (for example this one) the server returns a 404 error if I connect through my proxy.
The requested URL is behind Varnish caching, so that might be the root of the problem. I cannot reconfigure it; it is not mine.
If I request that URL directly with a browser, the server returns 200 and the image is shown correctly.
I am stuck because I do not even know what to read or how to phrase a search query.
Thanks a lot.
Fix the Host: header of the re-issued request. The request going out from the proxy either has no Host header or it is broken (or only X-Host exists). Also note that the proxy application performs its own DNS lookup, which might yield a different IP address than the one your local computer resolved (where you issued the original request).
This works:
> curl -s -D - -o /dev/null http://212.25.95.152/w/w-200/1902047-41.jpg -H "Host: msc.wcdn.co.il"
HTTP/1.1 200 OK
Content-Type: image/jpeg
Cache-Control: max-age = 315360000
magicmarker: 1
Content-Length: 27922
Accept-Ranges: bytes
Date: Sun, 05 Jul 2015 00:52:08 GMT
X-Varnish: 2508753650 2474246958
Age: 67952
Via: 1.1 varnish
Connection: keep-alive
X-Cache: HIT
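For comparison, here is the same probe as that curl call done from Java over a raw socket: it connects to the IP address but supplies the origin Host header explicitly, and prints only the response status line and headers. A sketch; the class name is made up:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class HostHeaderCheck {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("212.25.95.152", 80)) {
            String request = "GET /w/w-200/1902047-41.jpg HTTP/1.1\r\n"
                    + "Host: msc.wcdn.co.il\r\n"   // the header Varnish routes on
                    + "Connection: close\r\n"
                    + "\r\n";
            OutputStream out = socket.getOutputStream();
            out.write(request.getBytes(StandardCharsets.US_ASCII));
            out.flush();
            BufferedReader in = new BufferedReader(new InputStreamReader(
                    socket.getInputStream(), StandardCharsets.US_ASCII));
            String line;
            // Stop at the blank line so the JPEG body is not printed.
            while ((line = in.readLine()) != null && !line.isEmpty()) {
                System.out.println(line);
            }
        }
    }
}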

Unable to test HTTP PUT-based file upload via Squid Proxy

I can upload a file to my Apache web server using Curl just fine:
echo "[$(date)] file contents." | curl -T - http://WEB-SERVER/upload/sample.put
However, if I put a Squid proxy server in between, then I am not able to:
echo "[$(date)] file contents." | curl -x http://SQUID-PROXY:3128 -T - http://WEB-SERVER/upload/sample.put
Curl reports the following error:
Note: This error response was in HTML format, but I've removed the tags for ease of reading.
ERROR: The requested URL could not be retrieved
ERROR
The requested URL could not be retrieved
While trying to retrieve the URL:
http://WEB-SERVER/upload/sample.put
The following error was encountered:
Unsupported Request Method and Protocol
Squid does not support all request methods for all access protocols.
For example, you can not POST a Gopher request.
Your cache administrator is root.
My squid.conf doesn't seem to have any ACL/rule that would disallow the request based on the src or dst IP address, the protocol, or the HTTP method... and I can do an HTTP POST just fine between the same client and web server, with the same proxy sitting in between.
For the failing HTTP PUT, to see the request and response traffic that was actually occurring, I placed a netcat process between curl and Squid, and this is what I saw:
Request:
PUT http://WEB-SERVER/upload/sample.put HTTP/1.1
User-Agent: curl/7.15.5 (i686-redhat-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5
Host: WEB-SERVER
Pragma: no-cache
Accept: */*
Proxy-Connection: Keep-Alive
Transfer-Encoding: chunked
Expect: 100-continue
Response:
HTTP/1.0 501 Not Implemented
Server: squid/2.6.STABLE21
Date: Sun, 13 May 2012 02:11:39 GMT
Content-Type: text/html
Content-Length: 1078
Expires: Sun, 13 May 2012 02:11:39 GMT
X-Squid-Error: ERR_UNSUP_REQ 0
X-Cache: MISS from SQUID-PROXY-FQDN
X-Cache-Lookup: NONE from SQUID-PROXY-FQDN:3128
Via: 1.0 SQUID-PROXY-FQDN:3128 (squid/2.6.STABLE21)
Proxy-Connection: close
<SNIPPED the HTML error response already shown earlier above>
Note: I have anonymized the IP addresses and server names throughout for readability reasons.
Thanks to Amos Jeffries for answering this on the squid-users forum. The issue is basically that Squid before version 3.1 does not implement HTTP/1.1 and thus rejects the chunked transfer encoding.
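So the workaround, short of upgrading Squid to 3.1 or later, is to make the client send a Content-Length header instead of chunked Transfer-Encoding; curl does that automatically when it uploads a regular file rather than stdin. A hedged Java sketch of the same idea, with the anonymized host names from above as placeholders:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.Proxy;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class PutViaProxy {
    public static void main(String[] args) throws Exception {
        byte[] body = "file contents.\n".getBytes(StandardCharsets.UTF_8);
        Proxy proxy = new Proxy(Proxy.Type.HTTP,
                new InetSocketAddress("SQUID-PROXY", 3128));
        URL url = new URL("http://WEB-SERVER/upload/sample.put");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection(proxy);
        conn.setRequestMethod("PUT");
        conn.setDoOutput(true);
        // A known length makes the client send Content-Length instead of
        // chunked Transfer-Encoding, which Squid 2.6 cannot handle.
        conn.setFixedLengthStreamingMode(body.length);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body);
        }
        System.out.println(conn.getResponseCode());
    }
}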

Compojure: getting the body from a POST request from which the Content-Type header was missing

Given this snippet:
(ns example.handler ; namespace name assumed; requires added for completeness
  (:require [compojure.core :refer [defroutes POST]]
            [clojure.java.io :refer [copy]])
  (:import [java.io ByteArrayOutputStream]))

(defroutes main-routes
  (POST "/input/:controller" request
    (let [buff (ByteArrayOutputStream.)]
      (copy (request :body) buff)
      ;; --- snip
The value of buff will be a non-empty byte array iff the Content-Type header is present in the request. Its value can be nonsensical; the header just has to be there.
However, I need to dump the body (hm... that came out wrong) if the request came without a content type, so that the client can track down the offending upload. (The uploading software is not under my control and its maintainers won't provide anything extra in the headers.)
Thank you for any ideas on how to solve or work around this!
EDIT:
Here are the headers I get from the client:
{
"content-length" "159",
"accept" "*/*",
"host" (snip),
"user-agent" (snip)
}
Plus, I discovered that Ring, using an instance of Java's ServletRequest, fills in the content type with the standard default, application/x-www-form-urlencoded. I'm now guessing that HTTPParser, which supplies the body through HTTPParser#Input, can't parse it correctly.
I face the same issue. It's definitely one of the middleware layers failing to parse the body and transforming :body. The main issue is that the Content-Type suggests the body should be parsable.
Using ngrep, I found out how curl confuses the middleware. The following, while intuitive (or rather sexy) on the command line, sends a wrong Content-Type, which confuses the middleware:
curl -nd "Unknown error" http://localhost:3000/event/error
T 127.0.0.1:44440 -> 127.0.0.1:3000 [AP]
POST /event/error HTTP/1.1.
Authorization: Basic SzM5Mjg6ODc2NXJkZmdoam5idmNkOQ==.
User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3.
Host: localhost:3000.
Accept: */*.
Content-Length: 13.
Content-Type: application/x-www-form-urlencoded.
.
Unknown error
The following, however, forces the Content-Type to be treated as opaque, and the middleware will not interfere with :body.
curl -nd "Unknown error" -H "Content-Type: application/data" http://localhost:3000/event/error
T 127.0.0.1:44441 -> 127.0.0.1:3000 [AP]
POST /event/error HTTP/1.1.
Authorization: Basic SzM5Mjg6ODc2NXJkZmdoam5idmNkOQ==.
User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3.
Host: localhost:3000.
Accept: */*.
Content-Type: application/data.
Content-Length: 13.
.
Unknown error
I'm considering replacing the middleware with a more liberal one, because even though the request is wrong, I'd still like to decide what to do with the body myself. It's a really weird choice to zero out the request body when the request doesn't make sense. I actually think a more correct behavior would be to pass it to an error handler, which by default would return a 400 Bad Request or 406 Not Acceptable.
Any thoughts on that? In my case I might propose a patch to Compojure.
According to http://mmcgrana.github.com/ring/ring.middleware.content-type-api.html, the default content type is application/octet-stream. Unless you actively support that content type, can't you just check whether the content type matches that one, and then dump whatever you need based on that?
