When receiving a response with a Netty client, I run into a TooLongFrameException. After taking a tcpdump, I found that the response is a large multipart MIME response with about 200 parts (each with some short headers), but the actual HTTP headers for the response are quite small:
> Host: foobar.com:20804
> Accept: */*
>
< HTTP/1.1 207 Multi-Status
< Date: Tue, 04 Aug 2015 19:44:09 GMT
< Vary: Accept
< Content-Type: multipart/mixed; boundary="63602357878446117"
< Content-Length: 33023
I couldn't find anything in the documentation about this: are MIME part headers counted toward the HTTP header size, and does Netty parse them that way?
The exception I get is as follows:
io.netty.handler.codec.TooLongFrameException: HTTP header is larger than 8192 bytes.
at io.netty.handler.codec.http.HttpObjectDecoder$HeaderParser.newException(HttpObjectDecoder.java:787)
at io.netty.handler.codec.http.HttpObjectDecoder$HeaderParser.process(HttpObjectDecoder.java:779)
at io.netty.buffer.AbstractByteBuf.forEachByteAsc0(AbstractByteBuf.java:1022)
at io.netty.buffer.AbstractByteBuf.forEachByte(AbstractByteBuf.java:1000)
at io.netty.handler.codec.http.HttpObjectDecoder$HeaderParser.parse(HttpObjectDecoder.java:751)
at io.netty.handler.codec.http.HttpObjectDecoder.readHeaders(HttpObjectDecoder.java:545)
at io.netty.handler.codec.http.HttpObjectDecoder.decode(HttpObjectDecoder.java:221)
at io.netty.handler.codec.http.HttpClientCodec$Decoder.decode(HttpClientCodec.java:136)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:315)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:229)
at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:147)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1044)
at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:934)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:315)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:229)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:847)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
An HTTP header block terminates with two CR/LF pairs (such as between Accept and HTTP in your example), and it must start with a "start line" (HTTP/1.1 ...).
Therefore I see two issues with your example:
Your header does not start correctly: HTTP/1.1 should be the first line, followed by your Accept, Host and other header fields.
There is probably something wrong in your response such that there is no double CR/LF between the header and the body, so the body is decoded as if it were part of the header, hence the exception...
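To make the framing concrete, here is a minimal sketch in Go rather than Netty (the single-part body is made up for illustration): as long as exactly one blank line separates the status line and headers from the multipart body, a compliant parser ends the header block there, and the MIME part headers are never counted toward the HTTP header size.

package main

import (
    "bufio"
    "fmt"
    "net/http"
    "strings"
)

func main() {
    // A tiny stand-in for the multipart body of the 207 response.
    body := "--63602357878446117\r\n" +
        "Content-Type: text/plain\r\n" +
        "\r\n" +
        "part body\r\n" +
        "--63602357878446117--\r\n"

    raw := "HTTP/1.1 207 Multi-Status\r\n" +
        "Content-Type: multipart/mixed; boundary=\"63602357878446117\"\r\n" +
        fmt.Sprintf("Content-Length: %d\r\n", len(body)) +
        "\r\n" + // this blank line terminates the HTTP header block
        body

    resp, err := http.ReadResponse(bufio.NewReader(strings.NewReader(raw)), nil)
    if err != nil {
        panic(err)
    }
    // Only the two real header fields are parsed; the part headers stay in the body.
    fmt.Println(resp.Status, len(resp.Header)) // 207 Multi-Status 2
}

If the headers really are well-formed but just large, the decoder limit can also be raised (HttpClientCodec takes a maxHeaderSize argument in one of its constructors), but it's worth confirming the raw framing first.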
Related
I'm writing an HTTP server in Go that will receive requests from clients with an Expect: 100-continue header. However, it appears that the net/http server doesn't send an HTTP/1.1 100 Continue unless the client also sends a Transfer-Encoding: chunked header, which some clients (for example, ffmpeg with an icecast:// destination) do not.
Here's a minimal server that writes into a bytes.Buffer (I've reproduced the same behaviour with a more complicated server that, for example, uses io.Copy() to write into a file):
package main

import (
    "bytes"
    "io"
    "log"
    "net/http"
)

func main() {
    http.HandleFunc("/", func(writer http.ResponseWriter, r *http.Request) {
        log.Printf("Expect header: %v\n", r.Header.Get("Expect"))
        log.Printf("Transfer-Encoding header: %v\n", r.Header.Get("Transfer-Encoding"))
        buf := new(bytes.Buffer)
        defer func() {
            log.Printf("Buffer size: %d\n", buf.Len())
        }()
        defer r.Body.Close()
        log.Println("Writing.")
        io.Copy(buf, r.Body)
    })
    log.Fatal(http.ListenAndServe(":3948", nil))
}
And here's a transcript of two HTTP conversations (via telnet), where the server sends a 100 in one but not in the other:
PUT /telnetlol HTTP/1.1
Host: localhost
Expect: 100-continue
HTTP/1.1 200 OK
Date: Thu, 18 Mar 2021 10:59:09 GMT
Content-Length: 0
PUT /telnetlol HTTP/1.1
Host: localhost
Expect: 100-continue
Transfer-Encoding: chunked
HTTP/1.1 100 Continue
test
HTTP/1.1 200 OK
Date: Thu, 18 Mar 2021 10:59:35 GMT
Content-Length: 0
Connection: close
Is this a bug in Go, or am I misunderstanding the HTTP spec? The spec reads:
Upon receiving a request which includes an Expect request-header field with the "100-continue" expectation, an origin server MUST respond with 100 (Continue) status and continue to read from the input stream, or respond with a final status code. The origin server MUST NOT wait for the request body before sending the 100 (Continue) response.
Edit: Sending a non-zero Content-Length header in the initial request also makes the server reply with a 100 Continue. (Although, if I understand the spec correctly, it should still reply with a Continue regardless.)
The net/http server correctly handles the request:
PUT /telnetlol HTTP/1.1
Host: localhost
Expect: 100-continue
with this response:
HTTP/1.1 200 OK
Date: Thu, 18 Mar 2021 10:59:09 GMT
Content-Length: 0
The request does not have a message body per RFC 7230 3.3:
The presence of a message body in a request is signaled by a
Content-Length or Transfer-Encoding header field.
A server may omit sending the 100 response when there is no message body, per RFC 7231 5.1.1:
A server MAY omit sending a 100 (Continue) response if it has
already received some or all of the message body for the
corresponding request, or if the framing indicates that there is
no message body.
In addition, the client request is bad per RFC 7231 5.1.1:
A client MUST NOT generate a 100-continue expectation in a request that does not include a message body.
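A minimal sketch of the same exchange driven from Go instead of telnet (assuming the example server above is listening on :3948): declaring a Content-Length gives the request a message body, so net/http sends the interim 100 Continue before the body is transmitted.

package main

import (
    "bufio"
    "fmt"
    "net"
)

func main() {
    conn, err := net.Dial("tcp", "localhost:3948") // the server from the question
    if err != nil {
        panic(err)
    }
    defer conn.Close()

    body := "test"
    // Send the header block only; Content-Length signals that a body follows.
    fmt.Fprintf(conn, "PUT /telnetlol HTTP/1.1\r\n"+
        "Host: localhost\r\n"+
        "Expect: 100-continue\r\n"+
        "Content-Length: %d\r\n"+
        "\r\n", len(body))

    r := bufio.NewReader(conn)
    line, _ := r.ReadString('\n')
    fmt.Print(line)    // expect: HTTP/1.1 100 Continue
    r.ReadString('\n') // consume the blank line ending the interim response

    fmt.Fprint(conn, body) // now send the body; the final 200 OK follows
}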
I'm trying to test writing correct HTTP headers to understand the syntax. Here I'm trying to PUT some text into httpbin.org/put and I expect the response body content to be the same.
PUT /HTTP/1.1
Host: httpbin.org
Accept-Language: en-us
Connection: Keep-Alive
Content-type: text/plain
Content-Length: 12
Hello jerome
However I'm getting the following bad request 400 response:
HTTP/1.1 400 Bad Request
Server: nginx
Date: Tue, 01 Mar 2016 12:34:02 GMT
Content-Type: text/html
Content-Length: 166
Connection: close
Response:
<html>
<head><title>400 Bad Request</title></head>
<body bgcolor="white">
<center><h1>400 Bad Request</h1></center>
<hr><center>nginx</center>
</body>
</html>
What syntax errors have I made?
NOTE: newlines are \r\n not \n in the request.
Apparently the correct syntax goes like this for PUT:
PUT /put HTTP/1.1\r\n
Content-Length: 11\r\n
Content-Type: text/plain\r\n
Host: httpbin.org\r\n\r\n
hello lala\n
I realize I didn't say much about how I connected to httpbin.org; it was via sockets in C, so the connection was already established before sending the headers + message.
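For reference, here is a rough Go translation of that raw exchange (the original used C sockets, but only the bytes on the wire matter): note the space before HTTP/1.1 and the single blank line (\r\n\r\n) between the header block and the body.

package main

import (
    "bufio"
    "fmt"
    "net"
)

func main() {
    conn, err := net.Dial("tcp", "httpbin.org:80")
    if err != nil {
        panic(err)
    }
    defer conn.Close()

    body := "hello lala\n"
    fmt.Fprintf(conn,
        "PUT /put HTTP/1.1\r\n"+
            "Host: httpbin.org\r\n"+
            "Content-Type: text/plain\r\n"+
            "Content-Length: %d\r\n"+
            "Connection: close\r\n"+
            "\r\n%s", len(body), body)

    status, _ := bufio.NewReader(conn).ReadString('\n')
    fmt.Print(status) // expect: HTTP/1.1 200 OK
}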
You're missing the destination URL following the PUT verb; the first line must be:
PUT http://httpbin.org/ HTTP/1.1
This will probably also fail; you need one of their handler URLs so they know what to reply with:
PUT http://httpbin.org/put HTTP/1.1
The general form of the first line, or Request Line, in an HTTP request is as follows:
<method> <path component of URL, or absolute URL> HTTP/<Version>\r\n
Where, for your example, the method is PUT. Including an absolute URL (so, starting with http:// or https://) is only necessary when connecting to a proxy, because the proxy will then attempt to retrieve that URL, rather than attempt to serve a local resource (as found by the path component).
As presented, the only change you should have needed to make was ensuring there was a space between the / and HTTP/1.1. Otherwise, the path would be "/HTTP/1.1"... which would be a 404, if it weren't already a badly formed request. /HTTP/1.1 being interpreted as a path means the HTTP server that's parsing your request line doesn't find the protocol specifier (the HTTP/1.1 bit) before the terminating \r\n... and that's one example of how 400 response codes are born.
Hope that helped. Consult the HTTP 1.1 RFC (2616), section 5.1 for more information and the official definitions.
While writing my HTTP/1.1 server, I got stuck handling requests for multiple ranges.
Section 14.35.1 of RFC 2616 gives some examples but doesn't clarify the expected server behaviour.
For instance:
GET /some/resource HTTP/1.1
...
Range: bytes=200-400,100-300,500-600
...
Should I return this exact sequence of bytes?
Or should I merge all ranges, sending 100-400,500-600?
Or send everything in between, 100-600?
Worse, the Content-Range response header (Section 14.16) can only describe a single range, so I wonder how a server would respond to the example in Section 14.35.1, bytes=0-0,-1!
How should my server handle such requests?
I just had a look at how other servers that support the Range header field might respond and did a quick curl to example.com:
~# curl -s -D - -H "Range: bytes=100-200, 300-400" http://www.example.com
HTTP/1.1 206 Partial Content
Accept-Ranges: bytes
Content-Type: multipart/byteranges; boundary=3d6b6a416f9b5
Content-Length: 385
Server: ECS (fll/0761)
--3d6b6a416f9b5
Content-Type: text/html
Content-Range: bytes 100-200/1270
eta http-equiv="Content-type" content="text/html; charset=utf-8" />
<meta name="vieport" content
--3d6b6a416f9b5
Content-Type: text/html
Content-Range: bytes 300-400/1270
-color: #f0f0f2;
margin: 0;
padding: 0;
font-family: "Open Sans", "Helvetica
--3d6b6a416f9b5--
Apparently, what you're looking for is the Content-Type: multipart/byteranges; boundary=... response header. Googling exactly that turned up a W3C document with appendices to RFC 2616:
When an HTTP 206 (Partial Content) response message includes the content of multiple ranges (a response to a request for multiple non-overlapping ranges), these are transmitted as a multipart message-body. The media type for this purpose is called "multipart/byteranges".
The multipart/byteranges media type includes two or more parts, each with its own Content-Type and Content-Range fields. The required boundary parameter specifies the boundary string used to separate each body-part.
So there you go.
By the way, the server at example.com does not check for overlapping byte ranges and sends you exactly the ranges that you requested...
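If it helps, here is a rough sketch of how a server could assemble such a multipart/byteranges body with Go's mime/multipart package (the resource bytes and the two ranges are made up; ranges are served exactly as requested, without merging, like the example.com response above):

package main

import (
    "bytes"
    "fmt"
    "mime/multipart"
    "net/textproto"
)

func main() {
    full := []byte("0123456789abcdefghijklmnopqrstuvwxyz") // the whole resource
    ranges := [][2]int{{2, 5}, {10, 13}}                    // inclusive byte ranges

    var body bytes.Buffer
    w := multipart.NewWriter(&body)
    for _, rng := range ranges {
        h := make(textproto.MIMEHeader)
        h.Set("Content-Type", "text/plain")
        h.Set("Content-Range", fmt.Sprintf("bytes %d-%d/%d", rng[0], rng[1], len(full)))
        part, _ := w.CreatePart(h)
        part.Write(full[rng[0] : rng[1]+1])
    }
    w.Close()

    // These would go into the 206 response headers, followed by the body.
    fmt.Printf("Content-Type: multipart/byteranges; boundary=%s\r\n", w.Boundary())
    fmt.Printf("Content-Length: %d\r\n\r\n%s", body.Len(), body.Bytes())
}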
I am attempting to upload a file to Google Drive using the "upload" URL with a type of "multipart". I'm trying to do this without a library and using basic HTTP with a multipart POST. With a body like the following, I am constantly getting the error "Invalid multipart request with 0 mime parts."
The HTTP message looks valid to me. Is there something obvious that I'm missing or doing wrong?
Is there a protocol tester that can verify if my POST body is valid or not?
POST /upload/drive/v2/files?uploadType=multipart HTTP/1.1
Authentiction: Bearer {valid auth_token}
Content-Type: multipart/mixed; boundary="--314159265358979323846"
host: localhost:3004
content-length: 254
Connection: keep-alive
--314159265358979323846
Content-Type: application/json
{"title":"Now","mimeType":"text/plain"}
--314159265358979323846
Content-Type: text/plain
Content-Transfer-Encoding: 8bit
Mon Jun 17 2013 20:59:02 GMT-0400 (EDT)
--314159265358979323846--
(The segments look like they have double newlines. I think this is an artifact of the pasting; they are CRLF pairs in the code and appear as a single newline when testing. I guess this could theoretically be the problem, but I'd like proof.)
The boundary parameter on the Content-Type header should not include the leading double dashes. Use the following as your Content-Type:
Content-Type: multipart/mixed; boundary="314159265358979323846"
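As a sanity check, here is a minimal sketch that builds the same two-part body with Go's mime/multipart package: the boundary declared in the Content-Type header carries no leading dashes, while the writer itself prefixes each delimiter line in the body with "--".

package main

import (
    "bytes"
    "fmt"
    "mime/multipart"
    "net/textproto"
)

func main() {
    var body bytes.Buffer
    w := multipart.NewWriter(&body)
    w.SetBoundary("314159265358979323846") // no leading "--" here

    meta := make(textproto.MIMEHeader)
    meta.Set("Content-Type", "application/json")
    p, _ := w.CreatePart(meta)
    p.Write([]byte(`{"title":"Now","mimeType":"text/plain"}`))

    content := make(textproto.MIMEHeader)
    content.Set("Content-Type", "text/plain")
    p, _ = w.CreatePart(content)
    p.Write([]byte("Mon Jun 17 2013 20:59:02 GMT-0400 (EDT)"))

    w.Close()
    // The header names the bare boundary; the body lines start with "--<boundary>".
    fmt.Printf("Content-Type: multipart/mixed; boundary=%q\r\n", w.Boundary())
    fmt.Printf("Content-Length: %d\r\n\r\n%s", body.Len(), body.Bytes())
}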
I am developing a web application using Java and keep getting this error in Chrome on some particular pages:
net::ERR_RESPONSE_HEADERS_MULTIPLE_CONTENT_DISPOSITION
So, I checked WireShark for the corresponding TCP stream and this was the header of the response:
HTTP/1.0 200 OK
Date: Mon, 10 Sep 2012 08:48:49 GMT
Server: Apache-Coyote/1.1
Content-Disposition: attachment; filename=KBM 80 U (50/60Hz,220/230V)_72703400230.pdf
Content-Type: application/pdf
Content-Length: 564449
X-Cache: MISS from my-company-proxy.local
X-Cache-Lookup: MISS from my-company-proxy.local:8080
Via: 1.0 host-of-application.com, 1.1 my-company-proxy.local:8080 (squid/2.7.STABLE5)
Connection: keep-alive
Proxy-Connection: keep-alive
%PDF-1.4
[PDF data ...]
I only see one Content-Disposition header in there. Why does Chrome tell me there were several?
Because the filename parameter is unquoted, and contains a comma character (which is not allowed in unquoted values, and in this case indicates that multiple header values have been folded into a single one).
See http://greenbytes.de/tech/webdav/rfc2616.html#rfc.section.4.2.p.5 and http://greenbytes.de/tech/webdav/rfc6266.html
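For anyone generating this header themselves, a small sketch of the fix (in Go rather than the Java app from the question, since the quoting rule is the same): recent Go versions can build the value with mime.FormatMediaType, which quotes the filename because it contains characters, such as the comma, that are not allowed in an unquoted token.

package main

import (
    "fmt"
    "mime"
)

func main() {
    disposition := mime.FormatMediaType("attachment", map[string]string{
        "filename": "KBM 80 U (50/60Hz,220/230V)_72703400230.pdf",
    })
    fmt.Println("Content-Disposition:", disposition)
    // Content-Disposition: attachment; filename="KBM 80 U (50/60Hz,220/230V)_72703400230.pdf"
}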