Debugging Multiple Content-Disposition Headers - http

I am developing a web application using Java and keep getting this error in Chrome on some particular pages:
net::ERR_RESPONSE_HEADERS_MULTIPLE_CONTENT_DISPOSITION
So I checked the corresponding TCP stream in Wireshark, and these were the headers of the response:
HTTP/1.0 200 OK
Date: Mon, 10 Sep 2012 08:48:49 GMT
Server: Apache-Coyote/1.1
Content-Disposition: attachment; filename=KBM 80 U (50/60Hz,220/230V)_72703400230.pdf
Content-Type: application/pdf
Content-Length: 564449
X-Cache: MISS from my-company-proxy.local
X-Cache-Lookup: MISS from my-company-proxy.local:8080
Via: 1.0 host-of-application.com, 1.1 my-company-proxy.local:8080 (squid/2.7.STABLE5)
Connection: keep-alive
Proxy-Connection: keep-alive
%PDF-1.4
[PDF data ...]
I only see one Content-Disposition header in there. Why does Chrome tell me there were several?

Because the filename parameter is unquoted, and contains a comma character (which is not allowed in unquoted values, and in this case indicates that multiple header values have been folded into a single one).
See http://greenbytes.de/tech/webdav/rfc2616.html#rfc.section.4.2.p.5 and http://greenbytes.de/tech/webdav/rfc6266.html
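Quoting the parameter value (or generating filenames without reserved characters such as the comma) should satisfy Chrome, e.g.:
Content-Disposition: attachment; filename="KBM 80 U (50/60Hz,220/230V)_72703400230.pdf"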

Related

Play sends chunked data as one response

I want to get multiple response chunks over one connection, so I am using the Play framework sample code:
def index = Action {
  Ok.chunked(
    Enumerator("kiki", "foo", "bar").andThen(Enumerator.eof)
  )
}
The docs show the result should look like this:
HTTP/1.1 200 OK
Content-Type: text/plain; charset=utf-8
Transfer-Encoding: chunked
4
kiki
3
foo
3
bar
0
but what I actually get is:
HTTP/1.1 200 OK
Content-Type: text/plain; charset=utf-8
Request-Time: 1
Transfer-Encoding: chunked
Connection: close
Date: Wed, 01 Jul 2015 03:42:31 GMT
kikifoobar
UPDATE
I am using Paw and Chrome 43 to test this.
This seems to be a bug in Chrome. It was reported (and more or less confirmed) as an issue.
Note: this only seems to happen when you use "text/plain" as the content type. You might want to try another content type, or add the header X-Content-Type-Options: nosniff, as suggested on the issue tracker.
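If you go the extra-header route, a minimal tweak to the sample action might look like this (Play 2.x Scala API; assumes the usual play.api.mvc._ and play.api.libs.iteratee.Enumerator imports used by the snippet above):
def index = Action {
  Ok.chunked(
    Enumerator("kiki", "foo", "bar").andThen(Enumerator.eof)
  ).withHeaders("X-Content-Type-Options" -> "nosniff") // ask Chrome not to sniff/buffer, per the issue tracker
}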

Is the HTTP version (1.0 or 1.1) defined by the web server? How does HTTP version negotiation work?

I have a quick question. I've already read RFC 2616, section 14.23, about the Host header, but I still don't understand what, if anything, should be changed in httpd.conf or another web server configuration file. Please correct me if I'm wrong.
Look at the following two HTTP GET requests I made against an Apache server. The first one is an HTTP/1.0 GET, the other one an HTTP/1.1 GET. See the output:
HTTP/1.0 200 OK
Date: Thu, 24 Oct 2013 03:46:22 GMT
Server: Apache/1.3.41 (Unix) mod_gzip/1.3.26.1a PHP/5.2.9 mod_throttle/3.1.2 mod_psoft_traffic/0.2 mod_ssl/2.8.31 OpenSSL/0.9.8b
Vary: *
Last-Modified: Fri, 10 Aug 2012 20:22:30 GMT
ETag: "17c815b-3b-50256d86"
Accept-Ranges: bytes
Content-Length: 59
Connection: close
Content-Type: text/html
<html>
<body>
<center>webli7</center>
</body>
</html>
HTTP/1.1 400 Bad Request
Date: Thu, 24 Oct 2013 04:04:40 GMT
Server: Apache/1.3.41 (Unix) mod_gzip/1.3.26.1a PHP/5.2.9 mod_throttle/3.1.2 mod_psoft_traffic/0.2 mod_ssl/2.8.31 OpenSSL/0.9.8b
Connection: close
Transfer-Encoding: chunked
Content-Type: text/html; charset=iso-8859-1
16e
The HTTP protocol version is decided dynamically, not through configuration files. The client sends a request specifying the highest protocol version that it supports. The server must then respond with either the version requested by the client or any earlier version that it prefers.
Since Apache does support HTTP/1.1, it should match exactly the version provided by the client.
There is a flag that you may set in Apache's config to force HTTP/1.0 in certain situations, even though the browser requested HTTP/1.1. This is used to work around bugs in the HTTP/1.1 handling of some very old browsers. Today, you should not need to touch this flag.
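For reference only (you should not need it): in older stock Apache configurations that workaround was applied per user agent with BrowserMatch and the force-response-1.0 / downgrade-1.0 environment variables, roughly like this:
BrowserMatch "Mozilla/2" nokeepalive
BrowserMatch "MSIE 4\.0b2;" nokeepalive downgrade-1.0 force-response-1.0
BrowserMatch "Java/1\.0" force-response-1.0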
As for your error, I would suggest that you make sure your GET provides the Host: header. This header is required in HTTP/1.1 but optional in HTTP/1.0, and leaving it out would certainly result in a 400 error.
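For example, a minimal well-formed HTTP/1.1 request looks roughly like this (hostname invented; substitute the server you are testing):
GET / HTTP/1.1
Host: www.example.com
Connection: close

Sending the same request line without the Host header is exactly the kind of mistake that produces the 400 Bad Request shown above.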

Issues with valid multipart/mixed and Google Drive

I am attempting to upload a file to Google Drive using the "upload" URL with a type of "multipart". I'm trying to do this without a library and using basic HTTP with a multipart POST. With a body like the following, I am constantly getting the error "Invalid multipart request with 0 mime parts."
The HTTP message looks valid to me. Is there something obvious that I'm missing or doing wrong?
Is there a protocol tester that can verify if my POST body is valid or not?
POST /upload/drive/v2/files?uploadType=multipart HTTP/1.1
Authorization: Bearer {valid auth_token}
Content-Type: multipart/mixed; boundary="--314159265358979323846"
host: localhost:3004
content-length: 254
Connection: keep-alive
--314159265358979323846
Content-Type: application/json
{"title":"Now","mimeType":"text/plain"}
--314159265358979323846
Content-Type: text/plain
Content-Transfer-Encoding: 8bit
Mon Jun 17 2013 20:59:02 GMT-0400 (EDT)
--314159265358979323846--
(The segments look like they have double newlines. I think this is an artifact of the pasting; they are CRLF pairs in the code and appear as a newline when testing. I guess this could theoretically be the problem, but I'd like proof.)
The boundary parameter on the Content-Type header should not include the leading double dashes. Use the following as your Content-Type:
Content-Type: multipart/mixed; boundary="314159265358979323846"
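The delimiter lines in the body, on the other hand, do keep the leading "--" (and the closing one adds a trailing "--" as well), and each part's headers must be separated from its content by a blank line. With the corrected header, the body keeps roughly its current shape:
--314159265358979323846
Content-Type: application/json

{"title":"Now","mimeType":"text/plain"}

--314159265358979323846
Content-Type: text/plain
Content-Transfer-Encoding: 8bit

Mon Jun 17 2013 20:59:02 GMT-0400 (EDT)

--314159265358979323846--

(If you change anything, remember to recompute content-length as well.)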

Uploading a file to Google Docs Api getting error 504

I'm working on a Delphi API for Google Docs and I'm having a hard time getting the upload to work. I'm following Google's development guide here, and from what I understand it looks like the process should go like this:
1. Make a POST request to this URL: https://docs.google.com/feeds/upload/create-session/default/private/full/?access_token=my_access_token&v=3&convert=false with these headers: X-Upload-Content-Type and X-Upload-Content-Length
2. Get a 200 OK response with the next upload location stored in the Location header
3. Make a PUT request to that location with the Content-Type header set to whatever I had X-Upload-Content-Type set to in step 1, the Content-Range header set to something like bytes 0-524287/2097152 (the full sequence of ranges is spelled out after this list), and the first 512 KB of data in the body
4. Get a 308 Resume Incomplete response that has the next upload location in the Location header
5. Go back to step 3 until all bytes are uploaded, at which point I will receive a 201 Created response with the XML data describing the file I uploaded
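For the 2,097,152-byte file in this example, uploading in 512 KB (524,288-byte) chunks means the Content-Range headers run:
bytes 0-524287/2097152
bytes 524288-1048575/2097152
bytes 1048576-1572863/2097152
bytes 1572864-2097151/2097152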
Everything up to and including step 3 works fine. It is at step 4 that things start to go wrong.
The thing that confuses me the most is that the response in step 4 doesn't contain a Location header. I figured that meant I should just send the next request to the same URL, but that causes me to get a 504 error. I tried the entire process in Fiddler just to see whether it was the Delphi code, a lack of understanding on my part, or something that Google is doing.
Here are the requests and responses I sent and received using Fiddler:
POST https://docs.google.com/feeds/upload/create-session/default/private/full/?access_token=my_access_token&v=3&convert=false HTTP/1.1
Content-Type: application/x-www-form-urlencoded
X-Upload-Content-Type: application/octet-stream
X-Upload-Content-Length: 2097152
Content-Length: 0
Host: docs.google.com
HTTP/1.1 200 OK
Server: HTTP Upload Server Built on May 16 2012 12:03:24 (1337195004)
Location: https://docs.google.com/feeds/upload/create-session/default/private/full/?access_token=my_access_token&v=3&convert=false&upload_id=AEnB2Ur9-9VxMSI6kaFzbybY2qiyzK6kVoKzcZ6Yo02H8Ni4FlQFl_N06DdjZXzp3vSjOPH3CEb_4vDlKZp7VlC0hxpkypzlKg
Date: Tue, 22 May 2012 16:53:27 GMT
Pragma: no-cache
Expires: Fri, 01 Jan 1990 00:00:00 GMT
Cache-Control: no-cache, no-store, must-revalidate
Content-Length: 0
Content-Type: text/html
PUT https://docs.google.com/feeds/upload/create-session/default/private/full/?access_token=my_access_token&v=3&convert=false&upload_id=AEnB2Ur9-9VxMSI6kaFzbybY2qiyzK6kVoKzcZ6Yo02H8Ni4FlQFl_N06DdjZXzp3vSjOPH3CEb_4vDlKZp7VlC0hxpkypzlKg HTTP/1.1
Content-Type: application/octet-stream
Content-Length: 524288
Content-Range: bytes 0-524287/2097152
Host: docs.google.com
[first 512kb of data here]
HTTP/1.1 308 Resume Incomplete
Server: HTTP Upload Server Built on May 16 2012 12:03:24 (1337195004)
Range: bytes=0-524287
X-Range-MD5: bd9d4ee7afa24b7da0e685f05b5f1f44
Date: Tue, 22 May 2012 16:54:29 GMT
Pragma: no-cache
Expires: Fri, 01 Jan 1990 00:00:00 GMT
Cache-Control: no-cache, no-store, must-revalidate
Content-Length: 0
Content-Type: text/html
PUT https://docs.google.com/feeds/upload/create-session/default/private/full/?access_token=my_access_token&v=3&convert=false&upload_id=AEnB2Ur9-9VxMSI6kaFzbybY2qiyzK6kVoKzcZ6Yo02H8Ni4FlQFl_N06DdjZXzp3vSjOPH3CEb_4vDlKZp7VlC0hxpkypzlKg HTTP/1.1
Content-Type: application/octet-stream
Content-Length: 524288
Content-Range: bytes 524288-1048575/2097152
Host: docs.google.com
[next 512kb of data]
HTTP/1.1 504 Fiddler - Send Failure
Content-Type: text/html; charset=UTF-8
Connection: close
Timestamp: 10:54:14.056
The only thing I was able to establish for a fact is that it is not just the Delphi code that is wrong, and since I don't think it's Google, I'm going to have to conclude that I don't understand something about how this should work. What am I missing?
Edit
I was able to get the upload working. I'm not entirely sure what I did differently, but the documentation is a little misleading, at least to me. When you send a PUT request, you don't get a new location; you just continue to upload to the same one. Also, when you finish the upload, the 201 response doesn't contain the actual XML data; instead, it has a Location header that points to where you can grab the XML data from. Not a huge deal, but a little confusing.
It seems like the 504 error is returned by Fiddler itself; these two links should help:
https://urda.com/blog/2010/09/28/iis-services-504s-and-fiddler/
https://urda.com/blog/2010/09/30/follow-up-iis-services-504s-and-fiddler/
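For what it's worth, the loop described in the question's edit could be sketched roughly like this (Scala with plain HttpURLConnection; the method name, chunk size and error handling are illustrative assumptions, not Google's reference code):
import java.net.{HttpURLConnection, URL}

// sessionUrl is the upload URL returned by the initial POST (steps 1-2 in the question)
def uploadInChunks(sessionUrl: String, data: Array[Byte], chunkSize: Int = 512 * 1024): Option[String] = {
  var offset = 0
  while (offset < data.length) {
    val end = math.min(offset + chunkSize, data.length) - 1
    val conn = new URL(sessionUrl).openConnection().asInstanceOf[HttpURLConnection]
    conn.setInstanceFollowRedirects(false)   // keep the client from trying to treat 308 as a redirect
    conn.setDoOutput(true)
    conn.setRequestMethod("PUT")
    conn.setRequestProperty("Content-Type", "application/octet-stream")
    conn.setRequestProperty("Content-Range", s"bytes $offset-$end/${data.length}")
    val out = conn.getOutputStream
    out.write(data, offset, end - offset + 1)
    out.close()
    conn.getResponseCode match {
      case 308 => offset = end + 1                              // Resume Incomplete: PUT the next chunk to the same URL
      case 201 => return Some(conn.getHeaderField("Location"))  // Created: Location points at the entry XML
      case other => sys.error("unexpected status " + other)
    }
  }
  None
}

The important detail, matching the edit above, is that every chunk goes to the same session URL and only the final 201 carries a Location header.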

Why is Connection: keep-alive still being specified in http headers (isn't it deprecated)?

According to "HTTP: The Definitive Guide", using
Connection: keep-alive
to specify a persistent connection is deprecated in HTTP/1.1, since HTTP/1.1 specifies that connections are persistent by default and must be closed manually by sending
Connection: close
Thus, my simple assumption is that "Connection: keep-alive" shouldn't really be used anymore. However, it still seems alive and well. For example, keep-alive is being returned in the following query:
curl -I https://foursquare.com
HTTP/1.1 200 OK
Server: nginx/0.8.52
Date: Thu, 11 Aug 2011 21:15:45 GMT
Content-Type: text/html; charset=utf-8
Connection: keep-alive
Expires: Thu, 11 Aug 2011 21:15:45 UTC
Set-Cookie: XSESSIONID=w19~kqtn4bpqmfq51p8qolstpk6ti;Path=/;Secure;HttpOnly
Set-Cookie: LOCATION=49.25::-123.13330078125::Hockeytown::CA;Path=/;Secure
Set-Cookie: bbhive=OQ32XATE0OQAEVCY0IVSWUDPQ1A2GT
Content-Length: 38815
Cache-Control: no-cache, private, no-store
Pragma: no-cache
My question is: Why is Connection: keep-alive still being specified in HTTP headers?
A corollary question is: are there still clients, servers, proxies, etc. that only speak HTTP/1.0 and its variants, or are most such entities on HTTP/1.1 as of 2011?
Here are my working hypotheses:
1) HTTP/1.0 is no longer in use, because that was "many years" ago
2) Given (1), keep-alive shouldn't be used anymore, but survives purely for vestigial reasons (that is, certain technologies haven't bothered to remove it, or keep it around as voodoo code, etc.)
If (1) is incorrect and HTTP/1.0 is still in use, then it does seem plausible to keep using keep-alive, despite follow-up questions about HTTP/1.0-1.1 interop.
Thanks in advance for any insights shared!
HTTP/1.0 does not define the Connection header, but there are many different implementations of HTTP/1.0 and HTTP/1.1 in the wild, and many HTTP/1.0 clients and proxies support keep-alive as a de facto extension.
So Connection: keep-alive is still sent "just in case".
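For illustration (a hypothetical exchange with invented header values): an HTTP/1.0 client that wants a persistent connection has to opt in explicitly, and a server that agrees typically echoes the header back, often with a Keep-Alive header describing its limits:
GET / HTTP/1.0
Connection: keep-alive

HTTP/1.0 200 OK
Connection: keep-alive
Keep-Alive: timeout=5, max=100
Content-Length: 1234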
