Chrome MULTIPLE_CONTENT_LENGTH error - http

If I access my page directly, I get:
$ wget http://localhost:8010/ --save-headers -O -
--2010-10-29 18:30:24-- http://localhost:8010/
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:8010... connected.
HTTP request sent, awaiting response... 200 OK
Length: 950 [text/html]
Saving to: `STDOUT'
HTTP/1.1 200 OK
Server: gunicorn/0.11.1
Date: Fri, 29 Oct 2010 16:30:24 GMT
Connection: keep-alive
Vary: Accept-Language, Cookie, Accept-Encoding
Content-Length: 950
Content-Type: text/html; charset=utf-8
Content-Language: en-us
If I access it via the cache:
$ wget http://localhost:8000/ --save-headers -O -
--2010-10-29 18:30:31-- http://localhost:8000/
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:8000... connected.
HTTP request sent, awaiting response... 200 OK
Length: 950 [text/html]
Saving to: `STDOUT'
HTTP/1.1 200 OK
Server: gunicorn/0.11.1
Vary: Accept-Language, Cookie, Accept-Encoding
Content-Type: text/html; charset=utf-8
Content-Language: en-us
Content-Length: 950
Date: Fri, 29 Oct 2010 16:30:31 GMT
X-Varnish: 818233557
Age: 0
Via: 1.1 varnish
Connection: keep-alive
When I open the latter in Chromium (8.0.552.18 (0)), I get this error:
Error 346 (net::ERR_RESPONSE_HEADERS_MULTIPLE_CONTENT_LENGTH): Unknown error.
I only see three extra headers; which one should I remove to make it display in Chrome?
EDIT: I did eventually get rid of this problem, but I can't remember how, and I don't have access to that system anymore. I'm starting a bounty; maybe somebody can explain to me what was going on here.

Check out this version of the Chromium source. It looks like if you do not specify "Transfer-Encoding" and you include multiple content lengths, it will throw this very error. Later revisions added a check that the content-length values must actually differ before the error is thrown. It seems to have been added as a security precaution: duplicate Content-Length headers with conflicting values are a classic HTTP-smuggling vector.
You would probably never have seen this error with a newer version of Chromium.
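If you run into something like this today, a quick first step is to look at the raw headers the cache emits and count the Content-Length lines yourself; a minimal sketch (assuming the Varnish instance from the question is still listening on port 8000):
$ curl -sv http://localhost:8000/ -o /dev/null 2>&1 | grep -ci '^< content-length'
A count greater than 1 means the response really does carry duplicate Content-Length headers.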

You might try disabling DNS prefetching in the Chromium settings. Go to Preferences > Under the Hood and un-check "Use DNS pre-fetching to improve page load times".

Related

A strange problem with a size limit in the HTTP header

Context: I maintain a kind of web-service server, but with a particular implementation: all data sent by the web services is located in the HTTP header. That means the response consists only of HTTP headers (no body part). The web service runs as a Windows service. The consumer is my PHP code, which invokes the web service via the cURL library. All this has been in production for 3 years and works fine. I recently had to build a development environment.
I have the web service on a Windows 7 Pro machine, as a Windows service.
I have my PHP consumer on another Windows 7 Pro machine (WAMP + cURL).
My PHP code invokes the web service and displays the raw response.
In this context the problem occurs: if the response contains more than 1215 characters, I get an empty response (but no error message).
I installed my PHP code (exactly the same) on a new Ubuntu Linux machine: I have the same problem.
I installed my PHP code (exactly the same) on a new CentOS Linux machine: I DON'T HAVE THE PROBLEM.
I have read a lot on the internet about size limits on HTTP headers, and I now think that is not the cause of the problem.
I examined all the size-limit parameters in Apache, PHP and cURL, but I didn't find anything relevant.
If someone has any information, all leads are welcome. Thanks
Not an answer, but I want to say that using PHP 7.2.5 under mod_php with Apache 2.4.33, I am unable to reproduce your issue: I have no problem sending anything from 1 byte to 10,000 to even 100,000 bytes in headers.
Here is my producer.php:
<?php
// Send back a header of the requested size so header-size limits can be probed.
$size = (int)($_GET['s'] ?? 1);
header("X-size: {$size}");
// X-data carries $size bytes of payload in the header itself.
$data = str_repeat("a", $size);
header("X-data: {$data}");
http_response_code(204); // 204 No Content: headers only, no body
Whether I hit http://127.0.0.1/producer.php?s=1 or http://127.0.0.1/producer.php?s=10000 or even http://127.0.0.1/producer.php?s=100000, the data is returned without issue. Can you reproduce the issue using my producer.php code?
BTW, interestingly, when I try 1 million bytes, I get this error from curl:
$ curl -I http://127.0.0.1/producer.php?s=1000000
HTTP/1.1 204 No Content
Date: Wed, 16 Jan 2019 20:11:25 GMT
Server: Apache/2.4.33 (Win32) OpenSSL/1.1.0h PHP/7.2.5
X-Powered-By: PHP/7.2.5
X-size: 1000000
curl: (27) Rejected 104960 bytes header (max is 102400)!
Hanshenrik,
I also used CURLOPT_VERBOSE as you said. Here are the 2 curl logs.
The only difference is the line
<* stopped the pause stream!> in the Ubuntu curl log.
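For reference, a roughly equivalent capture from the command line (a sketch only: host, port, path and headers are taken from the logs below, and the 15-byte POST body is omitted):
$ curl -v -X POST \
    -H 'Accept: application/json' \
    -H 'Content-Type: text/xml; charset=utf-8' \
    'http://192.168.1.205:8084/datasnap/rest/TServerMethods/%22W_GetDashboard%22/' \
    2> curl-verbose.log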
curl log from Ubuntu, which has the problem:
* Trying 192.168.1.205...
* TCP_NODELAY set
* Connected to 192.168.1.205 (192.168.1.205) port 8084 (#0)
> POST /datasnap/rest/TServerMethods/%22W_GetDashboard%22/ HTTP/1.1
Host: 192.168.1.205:8084
Accept-Encoding: gzip,deflate
Accept: application/json
Content-Type: text/xml; charset=utf-8
Pragma: dssession=146326.909376.656191
Content-Length: 15
* upload completely sent off: 15 out of 15 bytes
< HTTP/1.1 200 OK
< Connection: close
< Content-Encoding: deflate
< Content-Type: application/json
< Content-Length: 348
< Date: Thu, 17 Jan 2019 15:27:03 GMT
< Pragma: dssession=146326.909376.656191,dssessionexpires=3600000
<
* stopped the pause stream!
* Closing connection 0
curl log from CentOS, which does NOT have the problem:
* About to connect() to 192.168.1.205 port 8084 (#1)
* Trying 192.168.1.205...
* Connected to 192.168.1.205 (192.168.1.205) port 8084 (#1)
> POST /datasnap/rest/TServerMethods/%22W_GetDashboard%22/ HTTP/1.1
Host: 192.168.1.205:8084
Accept-Encoding: gzip,deflate
Accept: application/json
Content-Type: text/xml; charset=utf-8
Pragma: dssession=3812.553164.889594
Content-Length: 15
* upload completely sent off: 15 out of 15 bytes
< HTTP/1.1 200 OK
< Connection: close
< Content-Encoding: deflate
< Content-Type: application/json
< Content-Length: 348
< Date: Thu, 17 Jan 2019 15:43:39 GMT
< Pragma: dssession=3812.553164.889594,dssessionexpires=3600000
<
* Closing connection 1

Nagios check_http gives 'HTTP/1.0 503 Service Unavailable' for HAProxy site

Can't figure this one out!
OS: CentOS 6.6 (Up-To-Date)
I get the following 503 error when using my nagios check_http check (or curl) to query an SSL site served via HAProxy 1.5.
[root@nagios ~]# /usr/local/nagios/libexec/check_http -v -H example.com -S1
GET / HTTP/1.1
User-Agent: check_http/v2.0 (nagios-plugins 2.0)
Connection: close
Host: example.com
https://example.com:443/ is 212 characters
STATUS: HTTP/1.0 503 Service Unavailable
**** HEADER ****
Cache-Control: no-cache
Connection: close
Content-Type: text/html
**** CONTENT ****
<html><body><h1>503 Service Unavailable</h1>
No server is available to handle this request.
</body></html>
HTTP CRITICAL: HTTP/1.0 503 Service Unavailable - 212 bytes in 1.076 second response time |time=1.075766s;;;0.000000 size=212B;;;0
[root@nagios ~]# curl -I https://example.com
HTTP/1.0 503 Service Unavailable
Cache-Control: no-cache
Connection: close
Content-Type: text/html
However, I can access the site fine via any browser (200 OK), and also via curl -I https://example.com from another server:
root@localhost:~# curl -I https://example.com
HTTP/1.1 200 OK
Date: Wed, 18 Feb 2015 14:36:51 GMT
Server: Apache/2.4.6
Expires: Mon, 26 Jul 1997 05:00:00 GMT
Pragma: no-cache
Last-Modified: Wed, 18 Feb 2015 14:36:52 GMT
Content-Type: text/html; charset=UTF-8
Strict-Transport-Security: max-age=31536000;
The HAProxy server is running on pfSense 2.2.
I see that HAProxy returns HTTP/1.0 for nagios and HTTP/1.1 from elsewhere. So is it my check_http plugin causing this, or is it curl?
Is my server just not sending the HOST header? If so, how can I resolve this?
What check_http does is check whether an index.html file exists on the server. This means you might have HTTP running and working while the check still fails.
Regardless of whether creating an index.html file on the server resolves the issue, you might not want to bend the setup just so the check passes.
I suppose setting up a ping check for your example.com, plus a check via NRPE to see whether your HTTP service is running, would meet your requirements.
check_http has an option called --sni.
You need to use that option. (HAProxy is presumably routing on the TLS SNI extension; a client that does not send SNI matches no backend, so HAProxy answers with its generic 503 page.)
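For example, a sketch of the check with SNI enabled (-S turns on SSL/TLS; this assumes your plugin build is recent enough to have the --sni option):
$ /usr/local/nagios/libexec/check_http -v -H example.com -S --sni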

Is the HTTP version (1.0 or 1.1) defined by the web server? How does the HTTP protocol version get decided?

I have a quick question. I have already read RFC 2616, section 14.22, about the Host header, but I still don't understand what should be changed in httpd.conf or the configuration file of a web server. Please correct me if I'm wrong.
Look at the following two HTTP GETs I did against an Apache server. The first one is a GET using HTTP/1.0, the other one a GET using HTTP/1.1. See the output:
HTTP/1.0 200 OK
Date: Thu, 24 Oct 2013 03:46:22 GMT
Server: Apache/1.3.41 (Unix) mod_gzip/1.3.26.1a PHP/5.2.9 mod_throttle/3.1.2 mod_psoft_traffic/0.2 mod_ssl/2.8.31 OpenSSL/0.9.8b
Vary: *
Last-Modified: Fri, 10 Aug 2012 20:22:30 GMT
ETag: "17c815b-3b-50256d86"
Accept-Ranges: bytes
Content-Length: 59
Connection: close
Content-Type: text/html
<html>
<body>
<center>webli7</center>
</body>
</html>
HTTP/1.1 400 Bad Request
Date: Thu, 24 Oct 2013 04:04:40 GMT
Server: Apache/1.3.41 (Unix) mod_gzip/1.3.26.1a PHP/5.2.9 mod_throttle/3.1.2 mod_psoft_traffic/0.2 mod_ssl/2.8.31 OpenSSL/0.9.8b
Connection: close
Transfer-Encoding: chunked
Content-Type: text/html; charset=iso-8859-1
16e
The HTTP protocol version is decided dynamically, not through configuration files. The client sends a request specifying the highest protocol version that it supports. The server must then respond with either the version requested by the client, or any earlier version that it prefers.
Since Apache does support HTTP/1.1, it should therefore match exactly the version provided by the client.
There exists a flag that you may set in Apache's config to force Apache to use HTTP/1.0 in certain situations, even though the browser requested HTTP/1.1. It is used to work around bugs in the HTTP/1.1 handling of some very old browsers. Today, you should not need to play with this flag.
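For completeness, a sketch of what that workaround classically looks like in httpd.conf, using the downgrade-1.0 and force-response-1.0 environment variables (the Mozilla/2 pattern is just the traditional example for an ancient browser):
BrowserMatch "Mozilla/2" nokeepalive downgrade-1.0 force-response-1.0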
As for your error, I would suggest that you make sure your GET provides the Host: header. This header is required in HTTP/1.1, yet optional in HTTP/1.0, and leaving it out would certainly result in a 400 error.
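You can check this by hand with a well-formed HTTP/1.1 request (www.example.com is a placeholder host):
$ printf 'HEAD / HTTP/1.1\r\nHost: www.example.com\r\nConnection: close\r\n\r\n' | nc www.example.com 80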

WGET 401 Unauthorized

I'm trying to use a batch file with WGET to download the public FCC file from here
http://wireless.fcc.gov/uls/data/complete/l_micro.zip
When I initially run the batch file with parameters
wget --server-response -owget.log http://wireless.fcc.gov/uls/data/complete/l_micro.zip
It fails with an HTTP 401 Unauthorized error. I can retry at this point and it keeps failing. However, I noticed that if I open up IE, start a download and cancel when prompted to save, I can then rerun the batch file and it executes perfectly!
Here is my detailed server response from the log
--2012-02-06 14:32:24-- http://wireless.fcc.gov/uls/data/complete/l_micro.zip
Resolving wireless.fcc.gov (wireless.fcc.gov)... 192.104.54.158
Connecting to wireless.fcc.gov (wireless.fcc.gov)|192.104.54.158|:80... connected.
HTTP request sent, awaiting response...
HTTP/1.1 302 Found
Location: REMOVED - appears to have my IP
Cache-Control: no-cache
Pragma: no-cache
Content-Type: text/html; charset=utf-8
Connection: close
Content-Length: 513
Location: REMOVED [following]
--2012-02-06 14:32:24-- REMOVED
Resolving REMOVED... 192.168.2.11
Connecting to REMOVED|192.168.2.11|:80... connected.
HTTP request sent, awaiting response...
HTTP/1.1 401 Unauthorized
Cache-Control: no-cache
Pragma: no-cache
WWW-Authenticate: NTLM
WWW-Authenticate: BASIC realm="AD_BCAAA"
Content-Type: text/html; charset=utf-8
Proxy-Connection: close
Set-Cookie: BCSI-CS-8ECFB6B4AA642EF0=2; Path=/
Connection: close
Content-Length: 575
Authorization failed.
Here is the log after doing my little IE procedure and getting it to work
--2012-02-08 15:52:43-- http://wireless.fcc.gov/uls/data/complete/l_micro.zip
Resolving wireless.fcc.gov (wireless.fcc.gov)... 192.104.54.158
Connecting to wireless.fcc.gov (wireless.fcc.gov)|192.104.54.158|:80... connected.
HTTP request sent, awaiting response...
HTTP/1.1 200 OK
Server: Sun-Java-System-Web-Server/7.0
Date: Fri, 27 Jan 2012 18:37:51 GMT
Content-type: application/zip
Last-modified: Sun, 22 Jan 2012 11:18:09 GMT
Etag: "46fa95c-4f1bf071"
Accept-ranges: bytes
Content-length: 74426716
Connection: Keep-Alive
Age: 1045014
Length: 74426716 (71M) [application/zip]
Saving to: `l_micro.zip'
Any help is appreciated!
If the website simply has an htpasswd setup, you can try:
wget --user=admin --ask-password https://www.yourwebsite.com/file.zip
I used --auth-no-challenge and this exact error got solved.
You have a Blue Coat secure web gateway on your network, as evidenced by the line in the response:
Set-Cookie: BCSI-CS-8ECFB6B4AA642EF0=2; Path=/
It looks like it wants you to authenticate, presumably with your domain credentials. Try passing them with --http-user and --http-password.
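A sketch of what that might look like (DOMAIN\user and secret are placeholders; the URL is the one from the question):
$ wget --http-user='DOMAIN\user' --http-password='secret' http://wireless.fcc.gov/uls/data/complete/l_micro.zip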
I had a similar issue with an xwiki-based site. After several attempts I found a combination that worked just fine for me:
wget --no-check-certificate --auth-no-challenge -k -nc -p -l 1 -r https://user:password@host.domain
I think the key was --auth-no-challenge
Try using this extension for Firefox. It generates a wget or a curl command that can be copied and run from bash.
I came here trying to find out why wget was giving a 401 Unauthorized message when on another system the problem did not occur.
After installing a later version of wget from source (a binary was not available in my distro), it worked. I can't explain why, except that it must be some kind of bug, so if none of the above fixes your problem, consider upgrading wget.
Try setting a user-agent string with wget, e.g.:
--user-agent="Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)"
(Note the quotes: the string contains spaces, so it has to be quoted on the command line.)
it's entirely feasible for a site to reject requests from certain user agents, particularly if they look to be circumventing the "usual" routes to information (i.e. through webpages).
Although this doesn't explain your problem, it's a good idea anyway. Perhaps the site implements a mechanism whereby when you browse with a "known" browser (e.g. IE) it then caches your IP as "safe" then allows any user agent from your IP to download anything :)

Determine supported HTTP version by the web server

Is there a way to check whether a web server supports HTTP 1.0 or 1.1? If so, how is this done?
You could issue a:
curl --head www.test.com
that will print out the HTTP version in the first line of the output...
e.g.
HTTP/1.1 200 OK
Content-Length: 28925
Content-Type: text/html
Last-Modified: Fri, 26 Jun 2009 16:08:04 GMT
Accept-Ranges: bytes
ETag: "a41944978f6c91:0"
Server: Microsoft-IIS/7.0
X-Powered-By: ASP.NET
Date: Fri, 31 Jul 2009 06:13:25 GMT
In Google Chrome you can see the protocol of each request like this:
open the developer tools with F12
go to the Network tab
right-click anywhere in the column headers (for example on Name) and from the context menu select Protocol to be displayed as a new column
you will then see values like h2 (HTTP/2) or http/1.1 in the Protocol column
This should work on any platform that includes a telnet client:
telnet <host> 80
Then you have to type one of the following blindly:
HEAD / HTTP/1.0
or
GET /
and hit enter twice.
The first line returned should output the HTTP version supported:
telnet www.stackoverflow.com 80
HEAD / HTTP/1.0
HTTP/1.1 404 Not Found
Content-Length: 315
Content-Type: text/html; charset=us-ascii
Server: Microsoft-HTTPAPI/2.0
Date: Fri, 31 Jul 2009 15:15:15 GMT
Connection: close
Read the release notes or the documentation of the web server to check that. For example, the Apache Tomcat documentation says that it supports HTTP/1.1.
Which web server are you asking about?
Also, are you asking if this can be checked programmatically?
In Google Chrome and Brave, you can easily use the Developer tools (F12, or Command + Option + I). Open the Network tab, find the request, click the Headers tab, scroll down to "Response Headers", and click view source. The first line should show the HTTP version, e.g. HTTP/1.1 200 OK.
If that view-source link is missing, the connection is HTTP/2: there is no readable source because HTTP/2 headers are binary-encoded rather than plain text.
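As another check, a reasonably recent curl (7.50 or later) can print the negotiated version directly via its --write-out variable; a quick sketch (the URL is a placeholder):
$ curl -sI -o /dev/null -w '%{http_version}\n' https://www.example.com/
This prints e.g. 1.1 or 2.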
Alternatively, you can also use netcat so that you don't have to type it blindly as in telnet.
user@linux:~$ nc www.stackoverflow.com 80
HEAD / HTTP
HTTP/1.1 400 Bad Request
Connection: close
Content-Length: 0
user@linux:~$
$ curl --head https://url:port -k
You get a result something like:
HTTP/1.1 200 OK
blah....blah.
blah...blah..
$
So the first line shows the version it supports.
