I can upload a file to my Apache web server using Curl just fine:
echo "[$(date)] file contents." | curl -T - http://WEB-SERVER/upload/sample.put
However, if I put a Squid proxy server in between, then I am not able to:
echo "[$(date)] file contents." | curl -x http://SQUID-PROXY:3128 -T - http://WEB-SERVER/upload/sample.put
Curl reports the following error:
Note: This error response was in HTML format, but I've removed the tags for ease of reading.
ERROR: The requested URL could not be retrieved
ERROR
The requested URL could not be retrieved
While trying to retrieve the URL:
http://WEB-SERVER/upload/sample.put
The following error was encountered:
Unsupported Request Method and Protocol
Squid does not support all request methods for all access protocols.
For example, you can not POST a Gopher request.
Your cache administrator is root.
My squid.conf doesn't seem to have any ACL/rule that would disallow requests based on the src or dst IP addresses, the protocol, or the HTTP method; an HTTP POST works just fine between the same client and the web server, with the same proxy sitting in between.
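For reference, the kind of squid.conf rule that would cause such a per-method block would look like the hypothetical pair of lines below; nothing resembling it is present in my configuration:
# hypothetical example only - not taken from the actual squid.conf
acl put_method method PUT
http_access deny put_method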
To see the request and response traffic that was actually occurring in the failing HTTP PUT case, I placed a netcat process between Curl and Squid, and this is what I saw:
Request:
PUT http://WEB-SERVER/upload/sample.put HTTP/1.1
User-Agent: curl/7.15.5 (i686-redhat-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5
Host: WEB-SERVER
Pragma: no-cache
Accept: */*
Proxy-Connection: Keep-Alive
Transfer-Encoding: chunked
Expect: 100-continue
Response:
HTTP/1.0 501 Not Implemented
Server: squid/2.6.STABLE21
Date: Sun, 13 May 2012 02:11:39 GMT
Content-Type: text/html
Content-Length: 1078
Expires: Sun, 13 May 2012 02:11:39 GMT
X-Squid-Error: ERR_UNSUP_REQ 0
X-Cache: MISS from SQUID-PROXY-FQDN
X-Cache-Lookup: NONE from SQUID-PROXY-FQDN:3128
Via: 1.0 SQUID-PROXY-FQDN:3128 (squid/2.6.STABLE21)
Proxy-Connection: close
<SNIPPED the HTML error response already shown earlier above>
Note: I have anonymized the IP addresses and server names throughout for readability reasons.
Thanks to Amos Jeffries for answering this on the squid-users forum. The issue is basically that Squid before version 3.1 does not implement HTTP/1.1 and therefore rejects the chunked transfer encoding.
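If upgrading Squid to 3.1 or later is not an option, one workaround (a sketch of my own, not from that answer) is to avoid the chunked upload altogether: write the data to a temporary file first, so curl knows the size and sends a Content-Length header instead of Transfer-Encoding: chunked.
# Buffer the data in a temp file so curl can send Content-Length
# instead of Transfer-Encoding: chunked (which pre-3.1 Squid rejects).
tmp=$(mktemp)
echo "[$(date)] file contents." > "$tmp"
curl -x http://SQUID-PROXY:3128 -T "$tmp" http://WEB-SERVER/upload/sample.put
rm -f "$tmp"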
For my research I need to cURL a list of FQDNs and record their status codes (for HTTP and HTTPS services). But some HTTP URLs open as HTTPS in the browser, although curl returns 200 (successful request, no redirect).
curl -I http://example.example.com/
HTTP/1.1 200 OK
Server: nginx
Date: Mon, 22 Nov 2021 10:43:32 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 64991
Connection: keep-alive
Keep-Alive: timeout=20
Vary: Accept-Encoding
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Pragma: no-cache
Link: <https://example.example.com/>; rel=shortlink
X-Powered-By: WP Engine
X-Cacheable: SHORT
Vary: Accept-Encoding,Cookie
Cache-Control: max-age=600, must-revalidate
X-Cache: HIT: 10
X-Cache-Group: normal
Accept-Ranges: bytes
As seen above, I get a 200 response with the curl request, but I can see a 307 code in my browser (shown below):
Request URL: http://example.example.com/
Request Method: GET
Status Code: 307 Internal Redirect
Referrer Policy: strict-origin-when-cross-origin
Can I detect the 307 code with curl? (The -L parameter doesn't help.) Any suggestions?
curl -w '%{response_code}\n' -so /dev/null $URL
It can be tested out like this:
curl -w '%{response_code}\n' -so /dev/null httpbin.org/status/307
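Since the question mentions checking a whole list of FQDNs, here is a minimal sketch of that (assuming a hypothetical input file fqdns.txt with one hostname per line):
# fqdns.txt is a hypothetical file, one hostname per line.
while read -r host; do
  code=$(curl -w '%{response_code}' -so /dev/null "http://$host/")
  echo "$host $code"
done < fqdns.txt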
So what is the 307 in the question?
As Stefan explains in a separate answer: that's an internal message from Chrome that informs you that it uses HSTS. It is not an actual response code, which is why curl can't show it. Chrome should make that clearer.
HSTS
HSTS is a way for an HTTPS server to ask clients not to contact it over clear-text HTTP again. curl also supports HSTS, but then you need to use --hsts, and curl will still not confusingly claim any 307 response codes.
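A small sketch of how curl's HSTS cache behaves (assuming curl 7.74.0 or later for the --hsts option, that the site actually sends a Strict-Transport-Security header, and an arbitrary cache-file name hsts.txt):
# The first request stores the site's HSTS policy in the cache file;
# curl then upgrades the second, plain-http request to https on its own
# (the -v output shows it connecting to port 443).
curl --hsts hsts.txt -sI https://example.example.com/ > /dev/null
curl --hsts hsts.txt -vI http://example.example.com/ > /dev/null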
The 307 http status isn't actually a response that is sent by a server. It's an internal redirect, something that your browser does for you before even sending the request to the server.
That's why it won't show up in curl. It's a feature of your browser. cURL is much more reliable when it comes to sending unaltered requests.
A 307 internal redirect (especially since you mention HTTPS redirects) is usually encountered with the security feature HSTS (HTTP Strict-Transport-Security), whose whole purpose is to make sure that you never send unencrypted HTTP requests to a server that wants to communicate only over encrypted HTTPS.
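A quick way to confirm that the site is indeed asking for this behaviour is to look for the Strict-Transport-Security header on the HTTPS response (hostname taken from the question):
curl -sI https://example.example.com/ | grep -i '^strict-transport-security'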
I have a simple file on my web server, and when I request it in a browser, it loads without problems:
http://example.server/report.php
But when I request the file with wget from a Raspberry Pi, I get this:
$ wget -d --spider http://example.server/report.php
Setting --spider (spider) to 1
DEBUG output created by Wget 1.18 on linux-gnueabihf.
Reading HSTS entries from /home/pi/.wget-hsts
URI encoding = 'ANSI_X3.4-1968'
converted 'http://example.server/report.php' (ANSI_X3.4-1968) -> 'http://example.server/report.php' (UTF-8)
Converted file name 'report.php' (UTF-8) -> 'report.php' (ANSI_X3.4-1968)
Spider mode enabled. Check if remote file exists.
--2018-06-03 07:29:29-- http://example.server/report.php
Resolving example.server (example.server)... 49.132.206.71
Caching example.server => 49.132.206.71
Connecting to example.server (example.server)|49.132.206.71|:80... connected.
Created socket 3.
Releasing 0x00832548 (new refcount 1).
---request begin---
HEAD /report.php HTTP/1.1
User-Agent: Wget/1.18 (linux-gnueabihf)
Accept: */*
Accept-Encoding: identity
Host: example.server
Connection: Keep-Alive
---request end---
HTTP request sent, awaiting response...
---response begin---
HTTP/1.1 406 Not Acceptable
Date: Fri, 15 Jun 2018 08:25:17 GMT
Server: Apache
Keep-Alive: timeout=3, max=200
Connection: Keep-Alive
Content-Type: text/html; charset=iso-8859-1
---response end---
406 Not Acceptable
Registered socket 3 for persistent reuse.
URI content encoding = 'iso-8859-1'
Remote file does not exist -- broken link!!!
I read somewhere that it might be an encoding problem, so I tried
$ wget -d --spider --header="Accept-encoding: *" http://example.server/report.php
but that gives me the exact same error.
That's because the server you're connecting to only serves certain User-Agents.
Change the user agent and it works fine:
wget -d --user-agent="Mozilla/5.0 (Windows NT x.y; rv:10.0) Gecko/20100101 Firefox/10.0" http://example.server/report.php
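If you prefer to double-check with curl, the equivalent (under the same assumption that the 406 is triggered by the User-Agent) would be:
# -A overrides curl's default User-Agent, which the server may reject too.
curl -I -A "Mozilla/5.0 (Windows NT x.y; rv:10.0) Gecko/20100101 Firefox/10.0" http://example.server/report.php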
In a book I'm reading, the author shows what HTTP headers mean. In particular, he says that there are servers that host multiple web sites.
Let's do this:
ping fideloper.com
We can see the IP address: 198.211.113.202.
Now let's use the IP address only:
curl -I 198.211.113.202
We get:
$ curl -I 198.211.113.202
HTTP/1.1 301 Moved Permanently
Server: nginx
Date: Thu, 03 Aug 2017 14:48:33 GMT
Content-Type: text/html
Content-Length: 178
Connection: keep-alive
Location: https://book.serversforhackers.com/
Let’s next see what happens when we add a Host header to the HTTP request:
$ curl -I -H "Host: fideloper.com" 198.211.113.202
HTTP/1.1 200 OK
Server: nginx
Content-Type: text/html; charset=UTF-8
Connection: keep-alive
Vary: Accept-Encoding
Cache-Control: max-age=86400, public
Date: Thu, 03 Aug 2017 13:23:58 GMT
Last-Modified: Fri, 30 Dec 2016 22:32:12 GMT
X-Frame-Options: SAMEORIGIN
Set-Cookie: laravel_session=eyJpdiI6IjhVQlk2UWcyRExsaDllVEpJOERaT3dcL2d2aE9mMHV4eUduSjFkQTRKU0R3PSIsInZhbHVlIjoiMmcwVUpNSjFETWs1amJaNzhGZXVGZjFPZ3hINUZ1eHNsR0dBV1FvdE9mQ1RFak5IVXBKUEs2aEZzaEhpRHRodE1LcGhFbFI3OTR3NzQxZG9YUlN5WlE9PSIsIm1hYyI6ImRhNTVlZjM5MDYyYjUxMTY0MjBkZjZkYTQ1ZTQ1YmNlNjU3ODYzNGNjZTBjZWUyZWMyMjEzYjZhOWY1MWYyMDUifQ%3D%3D; expires=Thu, 03-Aug-2017 15:23:58 GMT; Max-Age=7200; path=/; httponly
X-Fastcgi-Cache: HIT
This means that serversforhackers.com is the default site.
Then the author said that we could request Servers for Hackers on the same server:
$ curl -I -H "Host: serversforhackers.com" 198.211.113.202
Here in the book, HTTP/1.1 200 OK is received.
But I receive this:
curl -I -H "Host: serversforhackers.com" 198.211.113.202
HTTP/1.1 301 Moved Permanently
Server: nginx
Date: Thu, 03 Aug 2017 14:55:14 GMT
Content-Type: text/html
Content-Length: 178
Connection: keep-alive
Location: https://book.serversforhackers.com/
Apparently the author has since set up a 301 redirect and uses HTTPS now.
I could do this:
curl -I https://serversforhackers.com
But this doesn't illustrate the whole idea of what the default site is and how the Host header can address a specific site on a shared IP address.
Is it still possible to somehow get a 200 OK while addressing the server by IP address?
In HTTP/1.1, without HTTPS, the Host header is the only place where the hostname is sent to the server.
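To see this at the raw protocol level, here is a minimal sketch (assuming nc is available and the server still answers plain HTTP on port 80 at that address):
# The Host header is the only place the hostname appears in the request.
printf 'HEAD / HTTP/1.1\r\nHost: fideloper.com\r\nConnection: close\r\n\r\n' \
  | nc 198.211.113.202 80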
With HTTPS, things are more interesting.
First, your client will normally try to check the server’s TLS certificate against the expected name:
$ curl -I -H "Host: book.serversforhackers.com" https://198.211.113.202
curl: (51) SSL: certificate subject name (book.serversforhackers.com) does not match target host name '198.211.113.202'
Most clients provide a way to override this check. curl has the -k/--insecure option for that:
$ curl -k -I -H "Host: book.serversforhackers.com" https://198.211.113.202
HTTP/1.1 200 OK
Server: nginx
[...]
But then there’s the second issue. I can’t illustrate it with your example server, but here’s one I found on the Internet:
$ curl -k -I https://analytics.usa.gov
HTTP/1.1 200 OK
Content-Type: text/html
[...]
$ host analytics.usa.gov | head -n 1
analytics.usa.gov has address 54.240.184.142
$ curl -k -I -H "Host: analytics.usa.gov" https://54.240.184.142
curl: (35) gnutls_handshake() failed: Handshake failed
This is caused by server name indication (SNI) — a feature of TLS (HTTPS) whereby the hostname is also sent in the TLS handshake. It is necessary because the server needs to present the right certificate (for the right hostname) before it can receive any HTTP headers at all. In the example above, when we use https://54.240.184.142, curl doesn’t send the correct SNI, and the server refuses the handshake. Other servers might accept the connection but route it to a wrong place, where the Host header will end up being ignored.
With curl, you can’t set SNI with a separate option like you set the Host header. curl will always take it from the request URL. But curl has a special --resolve option:
Provide a custom address for a specific host and port pair. Using this, you can make the curl requests(s) use a specified address and prevent the otherwise normally resolved address to be used. Consider it a sort of /etc/hosts alternative provided on the command line.
In this case:
$ curl -I --resolve analytics.usa.gov:443:54.240.184.142 https://analytics.usa.gov
HTTP/1.1 200 OK
Content-Type: text/html
[...]
(443 is the standard TCP port for HTTPS)
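A related option in newer curl releases (7.49.0 and later) is --connect-to, which redirects only the TCP connection and leaves the URL, and therefore both SNI and Host, intact:
# Connect to the given IP:port while still requesting (and verifying)
# analytics.usa.gov over TLS.
$ curl -I --connect-to analytics.usa.gov:443:54.240.184.142:443 https://analytics.usa.gov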
If you want to experiment at a lower level, you can use the openssl tool to establish a raw TLS connection with the right SNI:
$ openssl s_client -connect 54.240.184.142:443 -servername analytics.usa.gov -crlf
You will then be able to type an HTTP request and see the right response:
HEAD / HTTP/1.1
Host: analytics.usa.gov
HTTP/1.1 200 OK
Content-Type: text/html
[...]
Lastly, note that in HTTP/2, there’s a special header named :authority (yes, with a colon) that may be used instead of Host by some clients. The distinction between them exists for backward compatibility with HTTP/1.1 and proxies: see RFC 7540 § 8.1.2.3 and RFC 7230 § 5.3 for details.
I am developing an HTTP proxy in Java. I resend all the data from client to server without touching it, but for some URLs (for example this one) the server returns a 404 error if I connect through my proxy.
The requested URL uses Varnish caching, so that might be the root of the problem. I cannot reconfigure it - it is not mine.
If I request that URL directly with a browser, the server returns 200 and the image is shown correctly.
I am stuck because I don't even know what to read or how to phrase a search query.
Thanks a lot.
Fix the Host: header of the re-issued request. The request going out from the proxy either has no Host header or it is broken (or only X-Host exists). Also take note that the proxy application will execute its own DNS lookup and that might yield a different IP address than your local computer (where you issued the original request).
This works:
> curl -s -D - -o /dev/null http://212.25.95.152/w/w-200/1902047-41.jpg -H "Host: msc.wcdn.co.il"
HTTP/1.1 200 OK
Content-Type: image/jpeg
Cache-Control: max-age = 315360000
magicmarker: 1
Content-Length: 27922
Accept-Ranges: bytes
Date: Sun, 05 Jul 2015 00:52:08 GMT
X-Varnish: 2508753650 2474246958
Age: 67952
Via: 1.1 varnish
Connection: keep-alive
X-Cache: HIT
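For contrast, the same request without the Host header, which is roughly what the proxy was effectively sending; per the diagnosis above, expect the 404 from the question rather than a 200:
> curl -s -D - -o /dev/null http://212.25.95.152/w/w-200/1902047-41.jpg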
I'm trying to use a batch file with WGET to download the public FCC file from here
http://wireless.fcc.gov/uls/data/complete/l_micro.zip
When I initially run the batch file with these parameters
wget --server-response -owget.log http://wireless.fcc.gov/uls/data/complete/l_micro.zip
It fails with an HTTP 401 Unauthorized error, and it keeps failing when I retry. However, I noticed that if I open up IE, start a download, and cancel when prompted to save, I can rerun the batch file and it executes perfectly!
Here is my detailed server response from the log
--2012-02-06 14:32:24-- http://wireless.fcc.gov/uls/data/complete/l_micro.zip
Resolving wireless.fcc.gov (wireless.fcc.gov)... 192.104.54.158
Connecting to wireless.fcc.gov (wireless.fcc.gov)|192.104.54.158|:80... connected.
HTTP request sent, awaiting response...
HTTP/1.1 302 Found
Location: REMOVED - appears to have my IP
Cache-Control: no-cache
Pragma: no-cache
Content-Type: text/html; charset=utf-8
Connection: close
Content-Length: 513
Location: REMOVED [following]
--2012-02-06 14:32:24-- REMOVED
Resolving REMOVED... 192.168.2.11
Connecting to REMOVED|192.168.2.11|:80... connected.
HTTP request sent, awaiting response...
HTTP/1.1 401 Unauthorized
Cache-Control: no-cache
Pragma: no-cache
WWW-Authenticate: NTLM
WWW-Authenticate: BASIC realm="AD_BCAAA"
Content-Type: text/html; charset=utf-8
Proxy-Connection: close
Set-Cookie: BCSI-CS-8ECFB6B4AA642EF0=2; Path=/
Connection: close
Content-Length: 575
Authorization failed.
Here is the log after doing my little IE procedure and getting it to work
--2012-02-08 15:52:43-- http://wireless.fcc.gov/uls/data/complete/l_micro.zip
Resolving wireless.fcc.gov (wireless.fcc.gov)... 192.104.54.158
Connecting to wireless.fcc.gov (wireless.fcc.gov)|192.104.54.158|:80... connected.
HTTP request sent, awaiting response...
HTTP/1.1 200 OK
Server: Sun-Java-System-Web-Server/7.0
Date: Fri, 27 Jan 2012 18:37:51 GMT
Content-type: application/zip
Last-modified: Sun, 22 Jan 2012 11:18:09 GMT
Etag: "46fa95c-4f1bf071"
Accept-ranges: bytes
Content-length: 74426716
Connection: Keep-Alive
Age: 1045014
Length: 74426716 (71M) [application/zip]
Saving to: `l_micro.zip'
Any help is appreciated!
If the website simply has an htpasswd setup, you can try:
wget --user=admin --ask-password https://www.yourwebsite.com/file.zip
I used --auth-no-challenge and the exact error got solved.
You have a Blue Coat secure web gateway on your network, as evidenced by the line in the response:
Set-Cookie: BCSI-CS-8ECFB6B4AA642EF0=2; Path=/
It looks like it wants you to authenticate, presumably with your domain credentials. Try passing them with --http-user and --http-passwd.
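A sketch of what that could look like (the domain and username are placeholders; --http-passwd is the older spelling, newer wget releases call the same option --http-password):
wget --http-user='DOMAIN\your.username' --ask-password --server-response -o wget.log http://wireless.fcc.gov/uls/data/complete/l_micro.zip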
I had a similar issue with an XWiki-based site. After several attempts I found a combination that worked for me just fine:
wget --no-check-certificate --auth-no-challenge -k -nc -p -l 1 -r https://user:password@host.domain
I think the key was --auth-no-challenge
Try using a browser extension for Firefox that generates a wget or a curl command which can be copied and run from bash.
I came here trying to find out why wget was giving a 401 unauthorized message when on another system the problem did not occur.
After installing a later version of wget from source (a binary was not available in my distro), it worked. I can't explain why, except that it must be some kind of bug, so if none of the above fixes your problem, consider upgrading wget.
Try setting a user-agent string with wget, e.g.:
--user-agent="Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)"
(Quote the value: it contains spaces and semicolons, so the shell would otherwise split it across arguments.)
It's entirely feasible for a site to reject requests from certain user agents, particularly if they look to be circumventing the "usual" routes to information (i.e. through web pages).
Although this doesn't explain your problem, it's a good idea anyway. Perhaps the site implements a mechanism whereby browsing with a "known" browser (e.g. IE) caches your IP as "safe" and then allows any user agent from your IP to download anything :)