Server utility: receive HTTPS POST requests, cat the data

To test something, I want to run a simple web server that:
Listens for HTTPS POST requests
Prints the POST data received to STDOUT (along with other stuff, potentially; it's fine if it just cats the whole HTTP request)
Is there a quick way to set something like this up? I've tried using OpenSSL's s_server, but it only seems to want to respond to GET requests.

Since s_server does not support POST requests, you should use socat instead of openssl s_server:
# socat -v OPENSSL-LISTEN:443,cert=mycert.pem,key=key.pem,verify=0,fork 'SYSTEM:/bin/echo HTTP/1.1 200 OK;/bin/echo;/bin/echo this-is-the-content-of-the-http-answer'
Here are the essential parameters:
fork: keep listening and handle multiple requests instead of exiting after the first one
-v: display the POST data (and everything else on the wire) on STDOUT
verify=0: do not request mutual (client certificate) authentication
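If you don't already have a certificate, a self-signed one suitable for this kind of testing can be generated with OpenSSL first (a minimal sketch; the file names match the cert= and key= options used above):
# openssl req -x509 -newkey rsa:2048 -nodes -days 365 -keyout key.pem -out mycert.pem -subj "/CN=localhost"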
Now, here is an example:
We use the following POST request:
% wget -O - --post-data=abcdef --no-check-certificate https://localhost/
[...]
this-is-the-content-of-the-http-answer
We see the following socat output:
# socat -v OPENSSL-LISTEN:443,cert=mycert.pem,key=key.pem,verify=0,fork 'SYSTEM:/bin/echo HTTP/1.1 200 OK;/bin/echo;/bin/echo this-is-the-content-of-the-http-answer'
> 2017/08/05 03:13:04.346890 length=212 from=0 to=211
POST / HTTP/1.1\r
User-Agent: Wget/1.19.1 (freebsd10.3)\r
Accept: */*\r
Accept-Encoding: identity\r
Host: localhost:443\r
Connection: Keep-Alive\r
Content-Type: application/x-www-form-urlencoded\r
Content-Length: 6\r
\r
< 2017/08/05 03:13:04.350299 length=16 from=0 to=15
HTTP/1.1 200 OK
> 2017/08/05 03:13:04.350516 length=6 from=212 to=217
abcdef< 2017/08/05 03:13:04.351549 length=1 from=16 to=16
< 2017/08/05 03:13:04.353019 length=39 from=17 to=55
this-is-the-content-of-the-http-answer

Related

Curl ends with "curl: (6) Could not resolve host: HTTP"

I've been following a blog post on how to compile ModSecurity with nginx. To verify that everything works, I created the file /etc/nginx/conf.d/echo.conf, which contains:
server {
    listen localhost:8085;
    location / {
        default_type text/plain;
        return 200 "Thank you for requesting ${request_uri}\n";
    }
}
I ran the following at the command line:
sudo nginx -s reload
curl -D - http://localhost:8085 HTTP/1.1 200 OK
and I got
HTTP/1.1 200 OK
Server: nginx/1.19.0
Date: Wed, 10 Jun 2020 19:31:08 GMT
Content-Type: text/plain
Content-Length: 27
Connection: keep-alive
Thank you for requesting /
curl: (6) Could not resolve host: HTTP
I have been on this for hours and can't figure out what to do. The two solutions I've found were
IPv6 enabled
Wrong DNS server
I ran the command with --ipv4 (curl --ipv4 -D - http://localhost:8085 HTTP/1.1 200 OK), with no success.
I also changed the nameserver in /etc/resolv.conf to 8.8.8.8 instead of 127.0.0.53 which also didn't work.
Any clues on what to do?
That error message is caused by the command syntax you used. With curl it is enough to run:
curl -D - http://localhost:8085
to make an HTTP request to the web server you specify (localhost in this case). curl treats any additional arguments that do not parse as options as extra URLs to query, so it is trying to fetch HTTP as if you had typed http://HTTP, which simply will not work, at least until you define an entry for a host named HTTP in your /etc/hosts, for example.
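You can see this parsing behavior directly by passing two URLs on purpose; curl simply requests each one in sequence (a quick illustration, reusing the test server from the question):
curl -D - http://localhost:8085 http://localhost:8085
This prints two responses, one per URL argument.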

curl uses POST for all requests after redirect

According to the documentation and some similar questions on SO, curl should follow a redirect using the GET method unless --post30x is specified as a parameter. However, this is the result of my testing:
curl -kvv -b /tmp/tmp.BEo6w3GKDq -c /tmp/tmp.BEo6w3GKDq -X POST -H "Accept: application/json" -L https://localhost/api/v1/resource
> POST /api/v1/resource HTTP/1.1
> User-Agent: curl/7.29.0
> Host: localhost
> Cookie: JSESSIONIDSSO=AB59F2FD09D38EDBAACB726CF212EA2E; JSESSIONID=743FD68B520840094B6D283A81CF3CFA
> Accept: application/json
>
< HTTP/1.1 302 Found
< Server: Apache-Coyote/1.1
< Strict-Transport-Security: max-age=15768000; includeSubDomains
< Cache-control: no-cache, no-store
< Pragma: no-cache
< Location: https://testserver.int/api/v1/resource
< Content-Length: 0
< Date: Fri, 27 Jan 2017 08:41:05 GMT
<
> POST /api/v1/resource HTTP/1.1
> User-Agent: curl/7.29.0
> Host: testserver.int
> Cookie: JSESSIONID=1tcxpkul4qyqh1hycpf9insei9
> Accept: application/json
I would expect the second request to actually be using GET instead of POST.
curl's man page says:
When curl follows a redirect and the request is not a plain GET (for
example POST or PUT), it will do the following request with a GET if
the HTTP response was 301, 302, or 303. If the response code was any
other 3xx code, curl will re-send the following request using the same
unmodified method.
You can tell curl to not change the non-GET request method to GET
after a 30x response by using the dedicated options for that:
--post301, --post302 and --post303.
Unfortunately that's not what I'm seeing, and there is no --get30x option.
So my question is - how to make curl follow a redirect response (301/302/303) with a GET request to the Location as it is written in the documentation?
I've tested it with curl/7.29.0 as well as curl/7.50.3.
Problem: You are telling curl to do that with your use of -X POST. As the man page section for -X explains:
The method string you set with -X, --request will be used for all requests, which
if you for example use -L, --location may cause unintended side-effects when curl
doesn't change request method according to the HTTP 30x response codes - and
similar.
Fix: Remove -X POST from your command line. Use -d "" instead to send an empty POST body; curl will then switch to the proper method after a redirect.
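Applied to the command from the question, the fixed invocation would be (everything else unchanged):
curl -kvv -b /tmp/tmp.BEo6w3GKDq -c /tmp/tmp.BEo6w3GKDq -d "" -H "Accept: application/json" -L https://localhost/api/v1/resource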
More: Explanation and rant in my blog post unnecessary use of curl -X.

How to make an HTTP GET request manually with netcat?

So, I have to retrieve the temperature of any one of the cities from http://www.rssweather.com/dir/Asia/India.
Let's assume I want to retrieve Kanpur's.
How to make an HTTP GET request with Netcat?
I'm doing something like this.
nc -v rssweather.com 80
GET http://www.rssweather.com/wx/in/kanpur/wx.php HTTP/1.1
I don't know if I'm even heading in the right direction. I haven't been able to find any good tutorials on making an HTTP GET request with netcat, so I'm asking here.
Of course you could dig into the standards or search Google, but if you only want to fetch a single URL, it isn't worth the effort.
You could also start a netcat in listening mode on a port:
nc -l 64738
(Sometimes nc -l -p 64738 is the correct argument list)
...and then make a request to this port with a real browser: just type http://localhost:64738 into the address bar and see what the browser sends.
In your actual case the problem is that HTTP/1.1 doesn't close the connection automatically; it waits for the next URL you want to retrieve. The solution is simple:
Use HTTP/1.0:
GET /this/url/you/want/to/get HTTP/1.0
Host: www.rssweather.com
<empty line>
or use a Connection: request header to tell the server you want to close the connection after this request:
GET /this/url/you/want/to/get HTTP/1.1
Host: www.rssweather.com
Connection: close
<empty line>
Extension: after the GET verb, write only the path part of the URL. The hostname you want to get data from belongs in a Host: header, as you can see in my examples. This is because multiple websites can run on the same web server, so the browser needs to tell the server which site it wants to load the page from.
This works for me:
$ nc www.rssweather.com 80
GET /wx/in/kanpur/wx.php HTTP/1.0
Host: www.rssweather.com
And then hit <enter> twice, i.e. once for the remote HTTP server and once for the nc command.
source: pentesterlabs
You don't even need to use/install netcat; bash can open the TCP socket itself:
Create a TCP socket via an unused file descriptor (88 is used here)
Write the request into it
Read the response back from the fd
exec 88<>/dev/tcp/rssweather.com/80
echo -e "GET /dir/Asia/India HTTP/1.1\r\nHost: www.rssweather.com\r\nConnection: close\r\n\r\n" >&88
sed 's/<[^>]*>/ /g' <&88
On MacOS, you need the -c flag as follows:
Little-Net:~ minfrin$ nc -c rssweather.com 80
GET /wx/in/kanpur/wx.php HTTP/1.1
Host: rssweather.com
Connection: close
[empty line]
The response then appears as follows:
HTTP/1.1 200 OK
Date: Thu, 23 Aug 2018 13:20:49 GMT
Server: Apache
Connection: close
Transfer-Encoding: chunked
Content-Type: text/html
The -c flag is described as "Send CRLF as line-ending".
To be HTTP/1.1 compliant, you need the Host header, as well as the "Connection: close" if you want to disable keepalive.
Test it out locally with python3 http.server
This is also a fun way to test it out. On one shell, launch a local file server:
python3 -m http.server 8000
Then on the second shell, make a request:
printf 'GET / HTTP/1.1\r\nHost: localhost\r\n\r\n' | nc localhost 8000
The Host: header is required in HTTP 1.1.
This shows an HTML listing of the directory, just as you would see from:
firefox http://localhost:8000
Next you can try to list files and directories and observe the response:
printf 'GET /my-subdir/ HTTP/1.1\r\nHost: localhost\r\n\r\n' | nc localhost 8000
printf 'GET /my-file HTTP/1.1\r\nHost: localhost\r\n\r\n' | nc localhost 8000
Every time you make a successful request, the server prints:
127.0.0.1 - - [05/Oct/2018 11:20:55] "GET / HTTP/1.1" 200 -
confirming that it was received.
example.com
This IANA-maintained domain is another good test URL:
printf 'GET / HTTP/1.1\r\nHost: example.com\r\n\r\n' | nc example.com 80
and compare with: http://example.com/
https SSL
nc does not seem to be able to handle https URLs. Instead, you can use:
sudo apt-get install nmap
printf 'GET / HTTP/1.1\r\nHost: github.com\r\n\r\n' | ncat --ssl github.com 443
See also: https://serverfault.com/questions/102032/connecting-to-https-with-netcat-nc/650189#650189
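If ncat is not installed, openssl s_client can presumably serve the same purpose (a sketch; -quiet suppresses the certificate and handshake chatter):
printf 'GET / HTTP/1.1\r\nHost: github.com\r\nConnection: close\r\n\r\n' | openssl s_client -quiet -connect github.com:443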
If you try nc, it just hangs:
printf 'GET / HTTP/1.1\r\nHost: github.com\r\n\r\n' | nc github.com 443
and trying port 80:
printf 'GET / HTTP/1.1\r\nHost: github.com\r\n\r\n' | nc github.com 80
just gives a redirect response to the https version:
HTTP/1.1 301 Moved Permanently
Content-Length: 0
Location: https://github.com/
Connection: keep-alive
Tested on Ubuntu 18.04.

Why does curl not work, but wget works?

I am using both curl and wget to get this url: http://opinionator.blogs.nytimes.com/2012/01/19/118675/
For curl, it returns no output at all, but with wget, it returns the entire HTML source:
Here are the two commands. I've used the same user agent, both requests come from the same IP, and both follow redirects. The URL is exactly the same. curl returns immediately, after about 1 second, so I know it's not a timeout issue.
curl -L -s "http://opinionator.blogs.nytimes.com/2012/01/19/118675/" --max-redirs 10000 --location --connect-timeout 20 -m 20 -A "Mozilla/5.0 (Windows NT 5.2; rv:2.0.1) Gecko/20100101 Firefox/4.0.1" 2>&1
wget http://opinionator.blogs.nytimes.com/2012/01/19/118675/ --user-agent="Mozilla/5.0 (Windows NT 5.2; rv:2.0.1) Gecko/20100101 Firefox/4.0.1"
If the NY Times might be cloaking and not returning the source to curl, what could be different in the headers curl is sending? I assumed that since the user agent is the same, the requests should look exactly the same. What other "footprints" should I check?
The way to solve this is to analyze your curl request by running curl -v ... and your wget request by running wget -d ..., which shows that curl is redirected to a login page:
> GET /2012/01/19/118675/ HTTP/1.1
> User-Agent: Mozilla/5.0 (Windows NT 5.2; rv:2.0.1) Gecko/20100101 Firefox/4.0.1
> Host: opinionator.blogs.nytimes.com
> Accept: */*
>
< HTTP/1.1 303 See Other
< Date: Wed, 08 Jan 2014 03:23:06 GMT
* Server Apache is not blacklisted
< Server: Apache
< Location: http://www.nytimes.com/glogin?URI=http://opinionator.blogs.nytimes.com/2012/01/19/118675/&OQ=_rQ3D0&OP=1b5c69eQ2FCinbCQ5DzLCaaaCvLgqCPhKP
< Content-Length: 0
< Content-Type: text/plain; charset=UTF-8
followed by a loop of redirections (which you must have noticed, because you have already set the --max-redirs flag).
On the other hand, wget follows the same sequence except that it sends back the cookie set by nytimes.com with its subsequent request(s):
---request begin---
GET /2012/01/19/118675/?_r=0 HTTP/1.1
User-Agent: Mozilla/5.0 (Windows NT 5.2; rv:2.0.1) Gecko/20100101 Firefox/4.0.1
Accept: */*
Host: opinionator.blogs.nytimes.com
Connection: Keep-Alive
Cookie: NYT-S=0MhLY3awSMyxXDXrmvxADeHDiNOMaMEZFGdeFz9JchiAIUFL2BEX5FWcV.Ynx4rkFI
The request sent by curl never includes the cookie.
The easiest way I see to modify your curl command and obtain the desired resource is to add -c cookiefile to it. This stores the cookie in the otherwise-unused temporary "cookie jar" file called "cookiefile", thereby enabling curl to send the needed cookie(s) with its subsequent requests.
For example, I added the flag -c x directly after "curl " and I obtained the output just like from wget (except that wget writes it to a file and curl prints it on STDOUT).
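In other words, the question's curl command works once the cookie engine is enabled (identical except for the added -c x):
curl -c x -L -s "http://opinionator.blogs.nytimes.com/2012/01/19/118675/" --max-redirs 10000 --location --connect-timeout 20 -m 20 -A "Mozilla/5.0 (Windows NT 5.2; rv:2.0.1) Gecko/20100101 Firefox/4.0.1" 2>&1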
In my case it was because the https_proxy environment variable that curl reads needs the port set in the proxy URL, for example:
Does not work with curl:
https_proxy=http://proxyapp.net.com/
Works with curl:
https_proxy=http://proxyapp.net.com:80/
The wget utility works with and without the port in the URL, but curl needs it; if the port is not set, curl returns the error "(56) Proxy CONNECT aborted".
With verbose output (curl -v) you can see that curl uses port 1080 as the default when no port is set in the proxy URL.
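For example, setting the variable inline for a single request (proxyapp.net.com is the placeholder proxy host from above):
https_proxy=http://proxyapp.net.com:80/ curl -v https://example.com/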

Unable to test HTTP PUT-based file upload via Squid Proxy

I can upload a file to my Apache web server using Curl just fine:
echo "[$(date)] file contents." | curl -T - http://WEB-SERVER/upload/sample.put
However, if I put a Squid proxy server in between, then I am not able to:
echo "[$(date)] file contents." | curl -x http://SQUID-PROXY:3128 -T - http://WEB-SERVER/upload/sample.put
Curl reports the following error:
Note: This error response was in HTML format, but I've removed the tags for ease of reading.
ERROR: The requested URL could not be retrieved
ERROR
The requested URL could not be retrieved
While trying to retrieve the URL:
http://WEB-SERVER/upload/sample.put
The following error was encountered:
Unsupported Request Method and Protocol
Squid does not support all request methods for all access protocols.
For example, you can not POST a Gopher request.
Your cache administrator is root.
My squid.conf doesn't seem to have any ACL/rule that would disallow the request based on the src or dst IP addresses, the protocol, or the HTTP method, as I can do an HTTP POST just fine between the same client and web server, with the same proxy sitting in between.
To see the request and response traffic actually occurring in the failing HTTP PUT case, I placed a netcat process between curl and Squid, and this is what I saw:
Request:
PUT http://WEB-SERVER/upload/sample.put HTTP/1.1
User-Agent: curl/7.15.5 (i686-redhat-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5
Host: WEB-SERVER
Pragma: no-cache
Accept: */*
Proxy-Connection: Keep-Alive
Transfer-Encoding: chunked
Expect: 100-continue
Response:
HTTP/1.0 501 Not Implemented
Server: squid/2.6.STABLE21
Date: Sun, 13 May 2012 02:11:39 GMT
Content-Type: text/html
Content-Length: 1078
Expires: Sun, 13 May 2012 02:11:39 GMT
X-Squid-Error: ERR_UNSUP_REQ 0
X-Cache: MISS from SQUID-PROXY-FQDN
X-Cache-Lookup: NONE from SQUID-PROXY-FQDN:3128
Via: 1.0 SQUID-PROXY-FQDN:3128 (squid/2.6.STABLE21)
Proxy-Connection: close
<SNIPPED the HTML error response already shown earlier above>
Note: I have anonymized the IP addresses and server names throughout for readability reasons.
Thanks to Amos Jeffries for answering this on the squid-users forum. The issue is basically that Squid before version 3.1 does not implement HTTP/1.1 and thus rejects the chunked transfer encoding.
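One workaround, short of upgrading Squid, is to avoid the chunked encoding altogether: when curl uploads a regular file instead of stdin (-T -), it knows the size in advance and sends a Content-Length header rather than Transfer-Encoding: chunked. A sketch using the same placeholder names:
echo "[$(date)] file contents." > /tmp/sample.put
curl -x http://SQUID-PROXY:3128 -T /tmp/sample.put http://WEB-SERVER/upload/sample.put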
