A strange problem with a size limit in the HTTP header - http

Context: I maintain a kind of web service server, but with a particular implementation: all the data sent by the web services is carried in the HTTP headers, which means the response consists of headers only (no body). The web service runs as a Windows service. The consumer is my PHP code, which invokes the web service via the cURL library. All of this has been in production for 3 years and works fine. I recently had to build a development environment.
I have the web service on a Windows 7 Pro machine, running as a Windows service.
I have my PHP consumer on another Windows 7 Pro machine (WAMP + cURL).
My PHP code invokes the web service and displays the raw response.
In this context the problem occurs: if the response contains more than 1215 characters, I get an empty response (but no error message).
I installed my PHP code (exactly the same) on a new Ubuntu Linux machine: I have the same problem.
I installed my PHP code (exactly the same) on a new CentOS Linux machine: I DON'T HAVE THE PROBLEM.
I have read a lot on the internet about size limits on HTTP headers, and I don't think that is the cause of the problem.
I examined all the size-limit parameters in Apache, PHP and cURL, but I didn't find anything relevant.
If someone has some information, all leads are welcome. Thanks.
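For what it's worth, here is a minimal sketch of what such a header-only consumer looks like in PHP with cURL; the endpoint URL and port below are placeholders rather than the actual service, and the error check is only there to surface a cURL error instead of a silently empty response:
<?php
// Sketch of a header-only web-service consumer (URL and port are placeholders).
$ch = curl_init('http://192.168.1.205:8084/some/endpoint');
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_HEADER         => true,  // keep the response headers in the returned string
]);
$raw = curl_exec($ch);

if ($raw === false) {
    // Surface the cURL error instead of silently displaying an empty response.
    echo 'cURL error ' . curl_errno($ch) . ': ' . curl_error($ch) . "\n";
} else {
    echo $raw;  // the raw response is headers only, since the service sends no body
}
curl_close($ch);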

Not an answer, but I want to say that using PHP 7.2.5 under mod_php with Apache 2.4.33, I am unable to reproduce your issue: I have no problem sending anything from 1 byte to 10,000 or even 100,000 bytes in headers.
Here is my producer.php:
<?php
$size = (int)($_GET['s'] ?? 1);
header("X-size: {$size}");
$data = str_repeat("a", $size);
header("X-data: {$data}");
http_response_code(204); // 204 No Content - headers only, no body
Whether I hit http://127.0.0.1/producer.php?s=1, http://127.0.0.1/producer.php?s=10000 or even http://127.0.0.1/producer.php?s=100000, the data is returned without issue. Can you reproduce the issue using my producer.php code?
By the way, interestingly, when I try 1 million bytes, I get this error from curl:
$ curl -I http://127.0.0.1/producer.php?s=1000000
HTTP/1.1 204 No Content
Date: Wed, 16 Jan 2019 20:11:25 GMT
Server: Apache/2.4.33 (Win32) OpenSSL/1.1.0h PHP/7.2.5
X-Powered-By: PHP/7.2.5
X-size: 1000000
curl: (27) Rejected 104960 bytes header (max is 102400)!

Hanshenrik,
I also used CURLOPT_VERBOSE as you suggested. Here are the two cURL logs.
The only difference is the line "* stopped the pause stream!" in the Ubuntu cURL log.
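For reference, a rough sketch of how the verbose trace can be captured to a file from PHP; the payload and log path below are placeholders, only the URL is taken from the logs:
<?php
// Sketch: capture libcurl's verbose trace from PHP (payload and log path are placeholders).
$ch = curl_init('http://192.168.1.205:8084/datasnap/rest/TServerMethods/%22W_GetDashboard%22/');
$trace = fopen('/tmp/curl_trace.log', 'w');
curl_setopt_array($ch, [
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => '...',   // the 15-byte payload, elided here
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_HEADER         => true,    // keep the response headers in the result
    CURLOPT_VERBOSE        => true,    // emit the "* ..." trace lines
    CURLOPT_STDERR         => $trace,  // redirect the trace to the log file
]);
$raw = curl_exec($ch);
fclose($trace);
curl_close($ch);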
cURL log from Ubuntu, which has the problem:
* Trying 192.168.1.205...
* TCP_NODELAY set
* Connected to 192.168.1.205 (192.168.1.205) port 8084 (#0)
> POST /datasnap/rest/TServerMethods/%22W_GetDashboard%22/ HTTP/1.1
Host: 192.168.1.205:8084
Accept-Encoding: gzip,deflate
Accept: application/json
Content-Type: text/xml; charset=utf-8
Pragma: dssession=146326.909376.656191
Content-Length: 15
* upload completely sent off: 15 out of 15 bytes
< HTTP/1.1 200 OK
< Connection: close
< Content-Encoding: deflate
< Content-Type: application/json
< Content-Length: 348
< Date: Thu, 17 Jan 2019 15:27:03 GMT
< Pragma: dssession=146326.909376.656191,dssessionexpires=3600000
<
* stopped the pause stream!
* Closing connection 0
cURL log from CentOS, which does NOT have the problem:
* About to connect() to 192.168.1.205 port 8084 (#1)
* Trying 192.168.1.205...
* Connected to 192.168.1.205 (192.168.1.205) port 8084 (#1)
> POST /datasnap/rest/TServerMethods/%22W_GetDashboard%22/ HTTP/1.1
Host: 192.168.1.205:8084
Accept-Encoding: gzip,deflate
Accept: application/json
Content-Type: text/xml; charset=utf-8
Pragma: dssession=3812.553164.889594
Content-Length: 15
* upload completely sent off: 15 out of 15 bytes
< HTTP/1.1 200 OK
< Connection: close
< Content-Encoding: deflate
< Content-Type: application/json
< Content-Length: 348
< Date: Thu, 17 Jan 2019 15:43:39 GMT
< Pragma: dssession=3812.553164.889594,dssessionexpires=3600000
<
* Closing connection 1

Related

How to successfully resend a POST request and get the correct response without knowing the actual payload

I've been working on capturing multiple POST requests from an Android app for testing purposes.
Unfortunately, I'm stuck on finding a way to get the actual payload of the request when using a request sender to resend it. I get a 200 status code, but the response I receive is wrong and not what I expected. I'm hoping to get some advice here if possible.
The request is sent via the POST method.
The request address looks like this (from my perspective it doesn't have a body, does it?):
http://proxy.ABC.ABC.com/ABC/qryunreadmsgcount.do?d=2&m=1&t=803514
Please correct me if the description or the title needs further editing.
Cheers
=========================================================================
Edit:
This is the response that I got (the returndesc value is Chinese for "required parameter [clientrequest]"):
Preview: {
"respbase": {
"status": "false",
"returncode": "000002",
"returndesc": "必填参数[clientrequest]"
}
}
Server: nginx/1.14.0
Date: Thu, 19 Jul 2018 01:44:38 GMT
Content-Type: application/json;charset=UTF-8
Content-Length: 4
Connection: keep-alive
Access-Control-Allow-Origin: *
Access-Control-Allow-Headers: DNT,X-Mx-ReqToken,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type, Accept-Language, Origin, Accept-Encoding
X-Frame-Options: SAMEORIGIN
* Preparing request to http://proxy.ABC.ABC.com/vboxserver/qryunreadmsgcount.do?d=2&m=1&t=803514
* Using libcurl/7.54.0 LibreSSL/2.0.20 zlib/1.2.11 nghttp2/1.24.0
* Enable automatic URL encoding
* Enable SSL validation
* Enable cookie sending with jar of 7 cookies
* Trying 101.XXX.XXX.XXX...
* TCP_NODELAY set
* Connected to proxy.ABC.ABC.com (101.xxx.xxx.xxx) port 80 (#75)
> POST /ABC/qryunreadmsgcount.do?d=2&m=1&t=803514 HTTP/1.1
> Host: proxy.ABC.ABC.com
> User-Agent: insomnia/5.16.6
> Accept: */*
> Content-Length: 0
< HTTP/1.1 200 OK
< Server: nginx/1.14.0
< Date: Thu, 19 Jul 2018 02:13:24 GMT
< Content-Type: text/plain
< Content-Length: 96
< Connection: keep-alive
< X-Frame-Options: SAMEORIGIN
* Received 88 B chunk
* Connection #75 to host proxy.ABC.ABC.com left intact
And the request sender I've been using is Insomnia, as shown in the verbose log above.
For those who might want to know the answer:
Burp Suite is very handy for dealing with this.
:)

POST from REST console perceived as GET by server

I'm using both the Chrome REST API console and Postman to send a POST request to my server (running nginx and Symfony2).
It's a very simple request: just a POST to a URL with an empty body. If this request is made from another server via an HTTP request, it registers as POST. Trying to POST from the API consoles registers as GET in my nginx access logs and returns a 405 Method Not Allowed.
If I use curl I initially get a 301 Moved Permanently, so I have to use -L to follow redirects. I'm not sure if this is standard Symfony behaviour or whether it is affecting the request.
I've found some problems with the curl request, but am unsure how to resolve them.
$ curl -v -L -d "1EepG1a63X" xxx.io/api/convert_mov/
* Trying xx.76.9.82...
* Connected to xxx.io (xx.76.9.82) port 80 (#0)
> POST /api/convert_mov/ HTTP/1.1
> Host: xxx.io
> User-Agent: curl/7.43.0
> Accept: */*
> Content-Length: 10
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 10 out of 10 bytes
< HTTP/1.1 301 Moved Permanently
< Server: nginx/1.6.2
< Date: Tue, 15 Sep 2015 09:00:43 GMT
< Content-Type: text/html
< Content-Length: 184
< Connection: keep-alive
< Location: https://xxx.io/api/convert_mov/
<
* Ignoring the response-body
* Connection #0 to host xxx.io left intact
* Issue another request to this URL: 'https://xxx.io/api/convert_mov/'
* Switch from POST to GET
* Found bundle for host xxx.io: 0x7fcad9c14e70
* Trying xx.76.9.82...
* Connected to xxx.io (xx.76.9.82) port 443 (#1)
* TLS 1.2 connection using TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
* Server certificate: xxx.io
* Server certificate: DigiCert SHA2 Secure Server CA
* Server certificate: DigiCert Global Root CA
> GET /api/convert_mov/ HTTP/1.1
> Host: xxx.io
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 404 Not Found
< Server: nginx/1.6.2
< Content-Type: text/html; charset=UTF-8
< Transfer-Encoding: chunked
< Connection: keep-alive
< X-Powered-By: PHP/5.5.25
< Cache-Control: no-cache
< Date: Tue, 15 Sep 2015 09:00:43 GMT
If you look closer, you will see that your request is made over plain HTTP. Your server then sends a redirect to your HTTPS site, and a 301 redirect does not preserve the request method (note the "Switch from POST to GET" line in the trace above). You MUST issue all your requests directly against HTTPS.
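A hedged sketch of the fix from PHP with cURL, assuming the client can reach the API directly: either post to the https:// URL so no redirect happens at all, or tell libcurl to keep the POST method when following the redirect. The URL and payload are the anonymized ones from the question.
<?php
// Option 1: avoid the redirect entirely by posting to HTTPS directly.
$ch = curl_init('https://xxx.io/api/convert_mov/');
curl_setopt_array($ch, [
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => '1EepG1a63X',
    CURLOPT_RETURNTRANSFER => true,
    // Option 2: if you must start from http://, keep POST across 301/302/303 redirects.
    CURLOPT_FOLLOWLOCATION => true,
    CURLOPT_POSTREDIR      => CURL_REDIR_POST_ALL,
]);
$response = curl_exec($ch);
curl_close($ch);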

Heroku router closing persistent http/1.1 connection requests?

I have an application hosted on Heroku, and it seems to be adding Connection: close to the response headers of HTTP/1.1 requests, which prevents us from re-using a persistent HTTP/1.1 connection. This works for other apps I have on Heroku, but I can't figure out why it happens for this app. Any clues?
So if I test with curl, for example:
curl -v "http://myapp.herokuapp.com/api/posts/trending" "http://myapp.com/api/posts/trending"
* Connected to myapp.herokuapp.com () port 80 (#0)
> GET /api/posts/trending HTTP/1.1
> User-Agent: curl/7.37.1
> Host: myapp.herokuapp.com
> Accept: */*
>
< HTTP/1.1 200 OK
< Connection: close
< Date: Mon, 09 Mar 2015 20:54:15 GMT
< Cache-Control: no-cache, no-store, must-revalidate
< Pragma: no-cache
< Expires: 0
< Content-Type: application/json;charset=UTF-8
* Server Jetty(9.2.7.v20150116) is not blacklisted
< Server: Jetty(9.2.7.v20150116)
< Via: 1.1 vegur
...response...
* Closing connection 0
Answering my own question here: according to Heroku Support, it is a known limitation of the Heroku router, since Jetty by design doesn't send Connection: keep-alive in the response to an HTTP/1.1 request. No suggested workarounds at this time.

HTTP 400 Bad Request Error

I am trying to post an XML file (test1.xml) to a web service API and receive its output, but I get an HTTP/1.1 400 Bad Request error. This is the code below.
myheader=c(Connection="close",
'Content-Type' = "application/xml",
'Content-length' =nchar("test1.xml"))
data = getURL("http://abcd/efg/requests/",
userpwd="m12345:123456", httpauth = 1L,
postfields="test1.xml",
httpheader=myheader,
verbose=TRUE)
This is the output
* Hostname was NOT found in DNS cache
* Trying 123.456.789.123...
* Connected to rcftomdev1 (123.456.789.123) port 8086 (#0)
* Server auth using Basic with user 'm12345'
> POST /dart/requests/ HTTP/1.1
Authorization: Basic bTEzNDQ4M#$#$#$#$%5==
Host: abcd:8086
Accept: */*
Connection: close
Content-Type: application/xml
Content-length: 9
* upload completely sent off: 9 out of 9 bytes
< HTTP/1.1 400 Bad Request
< Server: Apache-Coyote/1.1
< Content-Type: text/html;charset=utf-8
< Content-Length: 968
< Date: Tue, 20 Jan 2015 07:50:05 GMT
< Connection: close
<
* Closing connection 0
Not sure where I am going wrong; I need help.
If that was indeed exactly what was sent in the Authorization: header for the given user+password, then something seriously wrong happened, as your combo should've created bTEyMzQ1OjEyMzQ1Ngo=.
The other aspects of the request look fine, so only reading up on the server-side requirements for this server and API will give you the answer.
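For reference, the Basic credential is just base64("user:password"); a quick PHP check of what the header should carry for the user and password from the question (the trailing "o=" in the value quoted above just encodes a trailing newline):
<?php
// What the Authorization header should contain for the credentials in the question.
$token = base64_encode('m12345:123456');
echo "Authorization: Basic {$token}\n";  // prints: Authorization: Basic bTEyMzQ1OjEyMzQ1Ng==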

Chrome MULTIPLE_CONTENT_LENGTH error

If I access my page directly, I get:
$ wget http://localhost:8010/ --save-headers -O -
--2010-10-29 18:30:24-- http://localhost:8010/
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:8010... connected.
HTTP request sent, awaiting response... 200 OK
Length: 950 [text/html]
Saving to: `STDOUT'
HTTP/1.1 200 OK
Server: gunicorn/0.11.1
Date: Fri, 29 Oct 2010 16:30:24 GMT
Connection: keep-alive
Vary: Accept-Language, Cookie, Accept-Encoding
Content-Length: 950
Content-Type: text/html; charset=utf-8
Content-Language: en-us
If I access it via the cache:
$ wget http://localhost:8000/ --save-headers -O -
--2010-10-29 18:30:31-- http://localhost:8000/
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:8000... connected.
HTTP request sent, awaiting response... 200 OK
Length: 950 [text/html]
Saving to: `STDOUT'
HTTP/1.1 200 OK
Server: gunicorn/0.11.1
Vary: Accept-Language, Cookie, Accept-Encoding
Content-Type: text/html; charset=utf-8
Content-Language: en-us
Content-Length: 950
Date: Fri, 29 Oct 2010 16:30:31 GMT
X-Varnish: 818233557
Age: 0
Via: 1.1 varnish
Connection: keep-alive
When I open the latter in Chromium (8.0.552.18 (0)), I get this error:
Error 346 (net::ERR_RESPONSE_HEADERS_MULTIPLE_CONTENT_LENGTH): Unknown error.
I only see three extra headers; which one should I remove to make it display in Chrome?
EDIT: I eventually got rid of this problem, but I can't remember how, and I don't have access to that system anymore. I'm starting a bounty; maybe somebody can explain to me what was going on here.
Check out this version of the Chromium source. It looks like if you do not specify Transfer-Encoding and you include multiple Content-Length values, it will throw this very error. Later revisions added a check that the content lengths must actually differ before throwing the error; it seems to have been added as a security precaution.
You would probably never see this error with a newer version of Chromium.
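For anyone debugging a similar cache setup, here is a rough PHP sketch that prints every Content-Length header a URL returns, so a duplicate coming from the backend and the cache is easy to spot (the URL is the cached one from the question):
<?php
// Print every Content-Length header in the response; duplicates are what trigger Chromium's error.
$headers = get_headers('http://localhost:8000/');  // raw header lines, duplicates preserved
foreach ($headers as $line) {
    if (stripos($line, 'Content-Length:') === 0) {
        echo $line, "\n";
    }
}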
You might try disabling DNS prefetching in Chromium's settings: go to Preferences > Under the Hood and un-check "Use DNS pre-fetching to improve page load times".
