I've been following a blog post on how to compile ModSecurity with nginx. I tried to verify that everything works by creating the file /etc/nginx/conf.d/echo.conf, which contains:
server {
    listen localhost:8085;
    location / {
        default_type text/plain;
        return 200 "Thank you for requesting ${request_uri}\n";
    }
}
I ran the following on the command line:
sudo nginx -s reload
curl -D - http://localhost:8085 HTTP/1.1 200 OK
and I got
HTTP/1.1 200 OK
Server: nginx/1.19.0
Date: Wed, 10 Jun 2020 19:31:08 GMT
Content-Type: text/plain
Content-Length: 27
Connection: keep-alive
Thank you for requesting /
curl: (6) Could not resolve host: HTTP
I have been on this for hours and can't figure out what to do. The two possible causes I've found were:
IPv6 enabled
Wrong DNS server
I ran the command again with --ipv4: curl --ipv4 -D - http://localhost:8085 HTTP/1.1 200 OK, with no success.
I also changed the nameserver in /etc/resolv.conf to 8.8.8.8 instead of 127.0.0.53 which also didn't work.
Any clues on what to do?
That error message appears because of the command syntax you used. When using curl, it should be enough to run:
curl -D - http://localhost:8085
to make an HTTP request to the web server you specify (localhost in this case). Any additional arguments that cannot be parsed as options are treated as extra URLs to query, so curl is trying to query HTTP as if you had typed http://HTTP, which simply will not work, at least until you define an entry for a host named HTTP in your /etc/hosts, for example.
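For illustration (this is not from the original answer), curl treats every extra non-option argument as another URL to fetch, so a command like the following requests the same page twice, just as your stray HTTP/1.1 200 OK tokens were treated as three more "URLs":
curl -D - http://localhost:8085 http://localhost:8085   # two non-option arguments, two requests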
Related
To test something, I want to run a simple web server that:
Will listen for HTTPS POST requests
Print the POST data received to STDOUT (along with other stuff, potentially, so it's fine if it just cats the whole HTTP request)
Is there a quick way to set something like this up? I've tried using OpenSSL's s_server, but it only seems to want to respond to GET requests.
Since s_server does not support POST requests, you should use socat instead of openssl s_server:
# socat -v OPENSSL-LISTEN:443,cert=mycert.pem,key=key.pem,verify=0,fork 'SYSTEM:/bin/echo HTTP/1.1 200 OK;/bin/echo;/bin/echo this-is-the-content-of-the-http-answer'
Here are the essential parameters:
fork: to loop for many requests
-v: to display the POST data (and other stuff) to STDOUT
verify=0: do not ask for mutual authentication
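The socat command above assumes that mycert.pem and key.pem already exist. If you don't have a certificate at hand, a self-signed pair can be generated first, for example (a sketch, not part of the original answer):
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out mycert.pem -days 365 -subj '/CN=localhost'
# writes an unencrypted private key (key.pem) and a self-signed certificate (mycert.pem) valid for one year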
Now, here is an example:
We use the following POST request:
% wget -O - --post-data=abcdef --no-check-certificate https://localhost/
[...]
this-is-the-content-of-the-http-answer
We see the following socat output:
# socat -v OPENSSL-LISTEN:443,cert=mycert.crt,key=key.pem,verify=0,fork 'SYSTEM:/bin/echo HTTP/1.1 200 OK;/bin/echo;/bin/echo this-is-the-content-of-the-http-answer'
> 2017/08/05 03:13:04.346890 length=212 from=0 to=211
POST / HTTP/1.1\r
User-Agent: Wget/1.19.1 (freebsd10.3)\r
Accept: */*\r
Accept-Encoding: identity\r
Host: localhost:443\r
Connection: Keep-Alive\r
Content-Type: application/x-www-form-urlencoded\r
Content-Length: 6\r
\r
< 2017/08/05 03:13:04.350299 length=16 from=0 to=15
HTTP/1.1 200 OK
> 2017/08/05 03:13:04.350516 length=6 from=212 to=217
abcdef< 2017/08/05 03:13:04.351549 length=1 from=16 to=16
< 2017/08/05 03:13:04.353019 length=39 from=17 to=55
this-is-the-content-of-the-http-answer
In a book I'm reading, the author explains what HTTP headers mean. In particular, he says that some servers host multiple web sites.
Let's do this:
ping fideloper.com
We can see the IP address: 198.211.113.202.
Now let's use the IP address only:
curl -I 198.211.113.202
We get:
$ curl -I 198.211.113.202
HTTP/1.1 301 Moved Permanently
Server: nginx
Date: Thu, 03 Aug 2017 14:48:33 GMT
Content-Type: text/html
Content-Length: 178
Connection: keep-alive
Location: https://book.serversforhackers.com/
Let’s next see what happens when we add a Host header to the HTTP request:
$ curl -I -H "Host: fideloper.com" 198.211.113.202
HTTP/1.1 200 OK
Server: nginx
Content-Type: text/html; charset=UTF-8
Connection: keep-alive
Vary: Accept-Encoding
Cache-Control: max-age=86400, public
Date: Thu, 03 Aug 2017 13:23:58 GMT
Last-Modified: Fri, 30 Dec 2016 22:32:12 GMT
X-Frame-Options: SAMEORIGIN
Set-Cookie: laravel_session=eyJpdiI6IjhVQlk2UWcyRExsaDllVEpJOERaT3dcL2d2aE9mMHV4eUduSjFkQTRKU0R3PSIsInZhbHVlIjoiMmcwVUpNSjFETWs1amJaNzhGZXVGZjFPZ3hINUZ1eHNsR0dBV1FvdE9mQ1RFak5IVXBKUEs2aEZzaEhpRHRodE1LcGhFbFI3OTR3NzQxZG9YUlN5WlE9PSIsIm1hYyI6ImRhNTVlZjM5MDYyYjUxMTY0MjBkZjZkYTQ1ZTQ1YmNlNjU3ODYzNGNjZTBjZWUyZWMyMjEzYjZhOWY1MWYyMDUifQ%3D%3D; expires=Thu, 03-Aug-2017 15:23:58 GMT; Max-Age=7200; path=/; httponly
X-Fastcgi-Cache: HIT
This means that serversforhackers.com is the default site.
Then the author said that we could request Servers for Hackers on the same server:
$ curl -I -H "Host: serversforhackers.com" 198.211.113.202
In the book, an HTTP/1.1 200 OK is received here.
But I receive this:
curl -I -H "Host: serversforhackers.com" 198.211.113.202
HTTP/1.1 301 Moved Permanently
Server: nginx
Date: Thu, 03 Aug 2017 14:55:14 GMT
Content-Type: text/html
Content-Length: 178
Connection: keep-alive
Location: https://book.serversforhackers.com/
So the author has since set up a 301 redirect and uses HTTPS now.
I could do this:
curl -I https://serversforhackers.com
But this doesn't illustrate the whole idea of what a default site is and how the Host header can address a particular site on a shared IP address.
Is it still possible somehow to get a 200 OK when addressing the server by IP address?
In HTTP/1.1, without HTTPS, the Host header is the only place where the hostname is sent to the server.
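To see this concretely, you can send the request bytes yourself. A minimal sketch (assuming the IP address from your question still serves that site) is:
printf 'HEAD / HTTP/1.1\r\nHost: fideloper.com\r\nConnection: close\r\n\r\n' | nc 198.211.113.202 80
# the Host line is the only part of this plain-HTTP exchange that names the site you want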
With HTTPS, things are more interesting.
First, your client will normally try to check the server’s TLS certificate against the expected name:
$ curl -I -H "Host: book.serversforhackers.com" https://198.211.113.202
curl: (51) SSL: certificate subject name (book.serversforhackers.com) does not match target host name '198.211.113.202'
Most clients provide a way to override this check. curl has the -k/--insecure option for that:
$ curl -k -I -H "Host: book.serversforhackers.com" https://198.211.113.202
HTTP/1.1 200 OK
Server: nginx
[...]
But then there’s the second issue. I can’t illustrate it with your example server, but here’s one I found on the Internet:
$ curl -k -I https://analytics.usa.gov
HTTP/1.1 200 OK
Content-Type: text/html
[...]
$ host analytics.usa.gov | head -n 1
analytics.usa.gov has address 54.240.184.142
$ curl -k -I -H "Host: analytics.usa.gov" https://54.240.184.142
curl: (35) gnutls_handshake() failed: Handshake failed
This is caused by server name indication (SNI) — a feature of TLS (HTTPS) whereby the hostname is also sent in the TLS handshake. It is necessary because the server needs to present the right certificate (for the right hostname) before it can receive any HTTP headers at all. In the example above, when we use https://54.240.184.142, curl doesn’t send the correct SNI, and the server refuses the handshake. Other servers might accept the connection but route it to a wrong place, where the Host header will end up being ignored.
With curl, you can’t set SNI with a separate option like you set the Host header. curl will always take it from the request URL. But curl has a special --resolve option:
Provide a custom address for a specific host and port pair. Using this, you can make the curl requests(s) use a specified address and prevent the otherwise normally resolved address to be used. Consider it a sort of /etc/hosts alternative provided on the command line.
In this case:
$ curl -I --resolve analytics.usa.gov:443:54.240.184.142 https://analytics.usa.gov
HTTP/1.1 200 OK
Content-Type: text/html
[...]
(443 is the standard TCP port for HTTPS)
If you want to experiment at a lower level, you can use the openssl tool to establish a raw TLS connection with the right SNI:
$ openssl s_client -connect 54.240.184.142:443 -servername analytics.usa.gov -crlf
You will then be able to type an HTTP request and see the right response:
HEAD / HTTP/1.1
Host: analytics.usa.gov
HTTP/1.1 200 OK
Content-Type: text/html
[...]
Lastly, note that in HTTP/2, there’s a special header named :authority (yes, with a colon) that may be used instead of Host by some clients. The distinction between them exists for backward compatibility with HTTP/1.1 and proxies: see RFC 7540 § 8.1.2.3 and RFC 7230 § 5.3 for details.
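If you want to see the :authority pseudo-header on the wire, one option (an illustration only, assuming the nghttp client from the nghttp2 project is installed; nghttp2.org is just a convenient HTTP/2-capable server) is:
nghttp -nv https://nghttp2.org
# -v prints the HTTP/2 frames, including the request HEADERS frame with its :method, :path, :scheme and :authority pseudo-headers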
So, I have to retrieve the temperature of any one of the cities listed at http://www.rssweather.com/dir/Asia/India.
Let's assume I want to retrieve Kanpur's.
How to make an HTTP GET request with Netcat?
I'm doing something like this.
nc -v rssweather.com 80
GET http://www.rssweather.com/wx/in/kanpur/wx.php HTTP/1.1
I don't know if I'm even heading in the right direction. I'm not able to find any good tutorials on how to make an HTTP GET request with netcat, so I'm posting it here.
Of course you could dig into the standards or search Google, but if you only want to fetch a single URL, it isn't worth the effort.
You could also start a netcat in listening mode on a port:
nc -l 64738
(Sometimes nc -l -p 64738 is the correct argument list)
...and then make a request to this port with a real browser. Just type http://localhost:64738 into your browser and see what arrives.
In your case the problem is that HTTP/1.1 doesn't close the connection automatically; it waits for the next URL you want to retrieve. The solution is simple:
Use HTTP/1.0:
GET /this/url/you/want/to/get HTTP/1.0
Host: www.rssweather.com
<empty line>
or use a Connection: close request header to tell the server you want to close the connection after this request:
GET /this/url/you/want/to/get HTTP/1.1
Host: www.rssweather.com
Connection: close
<empty line>
Extension: after GET, write only the path part of the request. The hostname from which you want to get data belongs in the Host: header, as you can see in my examples. This is because multiple websites can run on the same web server, so the browser needs to tell the server which site it wants to load the page from.
This works for me:
$ nc www.rssweather.com 80
GET /wx/in/kanpur/wx.php HTTP/1.0
Host: www.rssweather.com
Then hit <enter> twice, i.e. once for the remote HTTP server and once for the nc command.
source: pentesterlabs
You don't even need to use/install netcat.
Create a TCP socket via an unused file descriptor (I use 88 here).
Write the request into it.
Read the response from the fd.
exec 88<>/dev/tcp/rssweather.com/80   # open a bidirectional TCP connection to the server on fd 88
echo -e "GET /dir/Asia/India HTTP/1.1\r\nHost: www.rssweather.com\r\nConnection: close\r\n\r\n" >&88   # write the request
sed 's/<[^>]*>/ /g' <&88   # read the response and strip the HTML tags
On macOS, you need the -c flag, as follows:
Little-Net:~ minfrin$ nc -c rssweather.com 80
GET /wx/in/kanpur/wx.php HTTP/1.1
Host: rssweather.com
Connection: close
[empty line]
The response then appears as follows:
HTTP/1.1 200 OK
Date: Thu, 23 Aug 2018 13:20:49 GMT
Server: Apache
Connection: close
Transfer-Encoding: chunked
Content-Type: text/html
The -c flag is described as "Send CRLF as line-ending".
To be HTTP/1.1 compliant, you need the Host header, as well as the "Connection: close" if you want to disable keepalive.
Test it out locally with python3 http.server
This is also a fun way to test it out. On one shell, launch a local file server:
python3 -m http.server 8000
Then on the second shell, make a request:
printf 'GET / HTTP/1.1\r\nHost: localhost\r\n\r\n' | nc localhost 8000
The Host: header is required in HTTP 1.1.
This shows an HTML listing of the directory, just as you would see from:
firefox http://localhost:8000
Next you can try to list files and directories and observe the response:
printf 'GET /my-subdir/ HTTP/1.1\r\nHost: localhost\r\n\r\n' | nc localhost 8000
printf 'GET /my-file HTTP/1.1\r\nHost: localhost\r\n\r\n' | nc localhost 8000
Every time you make a successful request, the server prints:
127.0.0.1 - - [05/Oct/2018 11:20:55] "GET / HTTP/1.1" 200 -
confirming that it was received.
example.com
This IANA-maintained domain is another good test URL:
printf 'GET / HTTP/1.1\r\nHost: example.com\r\n\r\n' | nc example.com 80
and compare with: http://example.com/
https SSL
nc does not seem to be able to handle https URLs. Instead, you can use:
sudo apt-get install nmap
printf 'GET / HTTP/1.1\r\nHost: github.com\r\n\r\n' | ncat --ssl github.com 443
See also: https://serverfault.com/questions/102032/connecting-to-https-with-netcat-nc/650189#650189
If you try nc, it just hangs:
printf 'GET / HTTP/1.1\r\nHost: github.com\r\n\r\n' | nc github.com 443
and trying port 80:
printf 'GET / HTTP/1.1\r\nHost: github.com\r\n\r\n' | nc github.com 80
just gives a redirect response to the https version:
HTTP/1.1 301 Moved Permanently
Content-Length: 0
Location: https://github.com/
Connection: keep-alive
Tested on Ubuntu 18.04.
I am developing an HTTP proxy in Java. I forward all the data from the client to the server without touching it, but for some URLs (for example this one) the server returns a 404 error when I connect through my proxy.
The requested URL uses Varnish caching, so that might be the root of the problem. I cannot reconfigure it - it is not mine.
If I request that URL directly with a browser, the server returns 200 and the image is shown correctly.
I am stuck because I don't even know what to read or how to phrase a search query.
Thanks a lot.
Fix the Host: header of the re-issued request. The request going out from the proxy either has no Host header or a broken one (or only X-Host exists). Also note that the proxy application performs its own DNS lookup, which might yield a different IP address than the one resolved by your local computer, where you issued the original request.
This works:
> curl -s -D - -o /dev/null http://212.25.95.152/w/w-200/1902047-41.jpg -H "Host: msc.wcdn.co.il"
HTTP/1.1 200 OK
Content-Type: image/jpeg
Cache-Control: max-age = 315360000
magicmarker: 1
Content-Length: 27922
Accept-Ranges: bytes
Date: Sun, 05 Jul 2015 00:52:08 GMT
X-Varnish: 2508753650 2474246958
Age: 67952
Via: 1.1 varnish
Connection: keep-alive
X-Cache: HIT
I can upload a file to my Apache web server using Curl just fine:
echo "[$(date)] file contents." | curl -T - http://WEB-SERVER/upload/sample.put
However, if I put a Squid proxy server in between, then I am not able to:
echo "[$(date)] file contents." | curl -x http://SQUID-PROXY:3128 -T - http://WEB-SERVER/upload/sample.put
Curl reports the following error:
Note: This error response was in HTML format, but I've removed the tags for ease of reading.
ERROR: The requested URL could not be retrieved
ERROR
The requested URL could not be retrieved
While trying to retrieve the URL:
http://WEB-SERVER/upload/sample.put
The following error was encountered:
Unsupported Request Method and Protocol
Squid does not support all request methods for all access protocols.
For example, you can not POST a Gopher request.
Your cache administrator is root.
My squid.conf doesn't seem to have any ACL/rule that would disallow the request based on the source or destination IP addresses, the protocol, or the HTTP method, as I can do an HTTP POST just fine between the same client and web server, with the same proxy sitting in between.
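For reference, the kind of POST that goes through fine looks like this (a sketch; the endpoint name is hypothetical):
echo "[$(date)] file contents." | curl -x http://SQUID-PROXY:3128 --data-binary @- http://WEB-SERVER/upload/post-handler
# --data-binary @- reads stdin completely before sending, so curl uses a Content-Length header rather than chunked encoding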
In the failing HTTP PUT case, to see the request and response traffic that was actually occurring, I placed a netcat process between Curl and Squid, and this is what I saw:
Request:
PUT http://WEB-SERVER/upload/sample.put HTTP/1.1
User-Agent: curl/7.15.5 (i686-redhat-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5
Host: WEB-SERVER
Pragma: no-cache
Accept: */*
Proxy-Connection: Keep-Alive
Transfer-Encoding: chunked
Expect: 100-continue
Response:
HTTP/1.0 501 Not Implemented
Server: squid/2.6.STABLE21
Date: Sun, 13 May 2012 02:11:39 GMT
Content-Type: text/html
Content-Length: 1078
Expires: Sun, 13 May 2012 02:11:39 GMT
X-Squid-Error: ERR_UNSUP_REQ 0
X-Cache: MISS from SQUID-PROXY-FQDN
X-Cache-Lookup: NONE from SQUID-PROXY-FQDN:3128
Via: 1.0 SQUID-PROXY-FQDN:3128 (squid/2.6.STABLE21)
Proxy-Connection: close
<SNIPPED the HTML error response already shown earlier above>
Note: I have anonymized the IP addresses and server names throughout for readability reasons.
Thanks to Amos Jeffries for answering this on the squid-users forum. The issue is basically that Squid before version 3.1 does not implement HTTP/1.1 and thus rejects the chunked transfer encoding.
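Given that explanation, a possible workaround (a sketch using the same placeholder hosts, not part of the original answer) is to upload from a regular file instead of stdin, so curl knows the size in advance and sends a Content-Length header instead of chunked encoding:
echo "[$(date)] file contents." > /tmp/sample.put
curl -x http://SQUID-PROXY:3128 -T /tmp/sample.put http://WEB-SERVER/upload/sample.put
# with a regular file, curl sends Content-Length rather than Transfer-Encoding: chunked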