WGET 401 Unauthorized - http

I'm trying to use a batch file with WGET to download the public FCC file from here
http://wireless.fcc.gov/uls/data/complete/l_micro.zip
When I initially run the batch file with parameters
wget --server-response -owget.log http://wireless.fcc.gov/uls/data/complete/l_micro.zip
It fails with an HTTP 401 Unauthorized error. I can retry at this point and it keeps failing. However, I noticed that if I open up IE, start a download, and cancel when prompted to save, I can then rerun the batch file and it executes perfectly!
Here is my detailed server response from the log
--2012-02-06 14:32:24-- http://wireless.fcc.gov/uls/data/complete/l_micro.zip
Resolving wireless.fcc.gov (wireless.fcc.gov)... 192.104.54.158
Connecting to wireless.fcc.gov (wireless.fcc.gov)|192.104.54.158|:80... connected.
HTTP request sent, awaiting response...
HTTP/1.1 302 Found
Location: REMOVED - appears to have my IP
Cache-Control: no-cache
Pragma: no-cache
Content-Type: text/html; charset=utf-8
Connection: close
Content-Length: 513
Location: REMOVED [following]
--2012-02-06 14:32:24-- REMOVED
Resolving REMOVED... 192.168.2.11
Connecting to REMOVED|192.168.2.11|:80... connected.
HTTP request sent, awaiting response...
HTTP/1.1 401 Unauthorized
Cache-Control: no-cache
Pragma: no-cache
WWW-Authenticate: NTLM
WWW-Authenticate: BASIC realm="AD_BCAAA"
Content-Type: text/html; charset=utf-8
Proxy-Connection: close
Set-Cookie: BCSI-CS-8ECFB6B4AA642EF0=2; Path=/
Connection: close
Content-Length: 575
Authorization failed.
Here is the log after doing my little IE procedure and getting it to work
--2012-02-08 15:52:43-- http://wireless.fcc.gov/uls/data/complete/l_micro.zip
Resolving wireless.fcc.gov (wireless.fcc.gov)... 192.104.54.158
Connecting to wireless.fcc.gov (wireless.fcc.gov)|192.104.54.158|:80... connected.
HTTP request sent, awaiting response...
HTTP/1.1 200 OK
Server: Sun-Java-System-Web-Server/7.0
Date: Fri, 27 Jan 2012 18:37:51 GMT
Content-type: application/zip
Last-modified: Sun, 22 Jan 2012 11:18:09 GMT
Etag: "46fa95c-4f1bf071"
Accept-ranges: bytes
Content-length: 74426716
Connection: Keep-Alive
Age: 1045014
Length: 74426716 (71M) [application/zip]
Saving to: `l_micro.zip'
Any help is appreciated!

If the website simply has an htpasswd setup, you can try:
wget --user=admin --ask-password https://www.yourwebsite.com/file.zip

I used --auth-no-challenge and this exact error was solved.
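Applied to the command from the question, that would look something like this (USER and PASS are placeholders for whatever credentials the gateway expects; --auth-no-challenge makes wget send the Basic credentials up front instead of waiting for the 401 challenge):
wget --auth-no-challenge --http-user=USER --http-password=PASS --server-response -owget.log http://wireless.fcc.gov/uls/data/complete/l_micro.zip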

You have a Blue Coat secure web gateway on your network, as evidenced by the line in the response:
Set-Cookie: BCSI-CS-8ECFB6B4AA642EF0=2; Path=/
It looks like it wants you to authenticate, presumably with your domain credentials. Try passing them with --http-user and --http-passwd.
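A sketch of what that could look like, assuming a domain account (DOMAIN\jsmith and the password are placeholders; --http-passwd is the older spelling, current wget releases use --http-password):
wget --http-user="DOMAIN\jsmith" --http-password="secret" --server-response -owget.log http://wireless.fcc.gov/uls/data/complete/l_micro.zip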

I had a similar issue with an XWiki-based site. After several attempts I found a combination that worked for me just fine:
wget --no-check-certificate --auth-no-challenge -k -nc -p -l 1 -r https://user:password@host.domain
I think the key was --auth-no-challenge

Try using this extension for Firefox. It generates a wget or curl command that can be copied and run from bash.

I came here trying to find out why wget was giving a 401 Unauthorized message when the problem did not occur on another system.
After installing a later version of wget from source (a binary was not available in my distro) it worked. I can't explain why, except that it must be some kind of bug, so if none of the above fixes your problem, consider upgrading wget.

Try setting a user-agent string with wget - e.g. (note the quotes, since the value contains spaces):
--user-agent="Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)"
It's entirely feasible for a site to reject requests from certain user agents, particularly if they look to be circumventing the "usual" routes to information (i.e. through webpages).
Although this doesn't explain your problem, it's a good idea anyway. Perhaps the site implements a mechanism whereby, when you browse with a "known" browser (e.g. IE), it caches your IP as "safe" and then allows any user agent from your IP to download anything :)
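For the FCC download from the question, that would end up looking something like this (kept on one line; the user-agent value is just an example string):
wget --user-agent="Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)" --server-response -owget.log http://wireless.fcc.gov/uls/data/complete/l_micro.zip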

Related

Can cURL detect 307 response?

For my research I need to cURL FQDNs and get their status codes (for HTTP and HTTPS services). But some HTTP URLs open as HTTPS in the browser, although curl returns 200 (successful request, no redirect).
curl -I http://example.example.com/
HTTP/1.1 200 OK
Server: nginx
Date: Mon, 22 Nov 2021 10:43:32 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 64991
Connection: keep-alive
Keep-Alive: timeout=20
Vary: Accept-Encoding
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Pragma: no-cache
Link: <https://example.example.com/>; rel=shortlink
X-Powered-By: WP Engine
X-Cacheable: SHORT
Vary: Accept-Encoding,Cookie
Cache-Control: max-age=600, must-revalidate
X-Cache: HIT: 10
X-Cache-Group: normal
Accept-Ranges: bytes
As seen above, I get a 200 response with the curl request, but I can see the 307 code in my browser's network panel:
Request URL: http://example.example.com/
Request Method: GET
Status Code: 307 Internal Redirect
Referrer Policy: strict-origin-when-cross-origin
Can I detect the 307 code with curl? (The -L parameter doesn't work.) Any suggestions?
curl -w '%{response_code}\n' -so /dev/null $URL
It can be tested out like this:
curl -w '%{response_code}\n' -so /dev/null httpbin.org/status/307
So what is the 307 in the question?
As Stefan explains here in a separate answer: that's an internal message from Chrome that informs you that it uses HSTS. It is not an actual response code. Which is why curl can't show it. Chrome should make that clearer.
HSTS
HSTS is a way for an HTTPS server to ask clients not to contact it over clear-text HTTP again. curl also supports HSTS, but then you need to use --hsts - and curl will still not confusingly claim any 307 response codes.
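A minimal sketch of that behaviour (the cache file name is arbitrary, --hsts needs curl 7.74.0 or newer, and it only takes effect if the server actually sends a Strict-Transport-Security header): one HTTPS visit stores the host's HSTS entry, and later plain-http requests to it are upgraded to https by curl itself, still without any invented 307.
curl --hsts hsts.cache -sI https://example.example.com/ -o /dev/null
curl --hsts hsts.cache -w '%{response_code}\n' -so /dev/null http://example.example.com/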
The 307 http status isn't actually a response that is sent by a server. It's an internal redirect, something that your browser does for you before even sending the request to the server.
That's why it won't show up in curl. It's a feature of your browser. cURL is much more reliable when it comes to sending unaltered requests.
A 307 internal redirect (especially since you mention https redirects) is usually encountered with the HSTS security feature (HTTP Strict Transport Security), whose whole purpose is to make sure that you never send unencrypted http requests to a server that wants to communicate via encrypted https.
See this.
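To check whether a host asks for this browser-side upgrade at all, you can look for the Strict-Transport-Security header on the HTTPS response (example.example.com stands in for the real host):
curl -sI https://example.example.com/ | grep -i strict-transport-security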

wget Fails to Download Website (ERROR 0: no description)

I'm trying to mirror the whole website at http://opposedforces.com/parts/impreza/en_g11/type_63/
Accessing it through a browser (Firefox, w3m) or Postman works fine and returns the HTML file.
Accessing through wget, cURL, the Python requests module and HTTrack all fail.
wget specifically fails with:
↪ wget --mirror -p --convert-links "http://opposedforces.com/parts/impreza/en_g11/type_63/"
--2021-02-03 20:48:29-- http://opposedforces.com/parts/impreza/en_g11/type_63/
Resolving opposedforces.com (opposedforces.com)... 138.201.30.59
Connecting to opposedforces.com (opposedforces.com)|138.201.30.59|:80... connected.
HTTP request sent, awaiting response... 0
2021-02-03 20:48:29 ERROR 0: (no description).
Converted links in 0 files in 0 seconds.
It seemingly returns no information. Originally I thought some JavaScript was generating the html, but I can't find any JS using Firefox developer tools, and I would assume Postman would not work in this case.
Any ideas how to get around this? Ideally I can use wget to download this and all sub-pages, but alternative solutions are also welcome.
This is one of those times when the website is completely and absolutely broken.
It is unfortunate that web browsers go to great lengths to support such broken web pages.
The problem is that the server sends a broken response. This is the response I see:
---response begin---
HTTP/1.1 000
Cache-Control: no-cache
Pragma: no-cache
Content-Length: 44892
Expires: -1
Server: Microsoft-IIS/7.5
X-AspNet-Version: 2.0.50727
Set-Cookie: ASP.NET_SessionId=gxhoir45jpd43545iujdpiru; path=/; HttpOnly
X-Powered-By: ASP.NET
Date: Fri, 05 Feb 2021 09:26:26 GMT
See? It returns an HTTP/1.1 000 response, which doesn't exist in the spec. Web browsers seem to just accept it as a 200 response and move on. Wget doesn't.
But you can get around it by using the --content-on-error option, which asks Wget to download the content irrespective of the response code.
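Applied to the command in the question, that would be roughly the following (--content-on-error saves the body of the failed response; whether wget then recurses into links of a response it considers an error is a separate question):
wget --content-on-error --mirror -p --convert-links "http://opposedforces.com/parts/impreza/en_g11/type_63/"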

Getting 404 error if requesting a page through proxy, but 200 if connecting directly

I am developing an HTTP proxy in Java. I resend all the data from client to server without touching it, but for some URLs (for example this one) the server returns a 404 error if I connect through my proxy.
The requested URL uses Varnish caching, so that might be the root of the problem. I cannot reconfigure it - it is not mine.
If I request that URL directly with browser, the server returns 200 and the image is shown correctly.
I am stuck because I do not even know what to read or how to compose a search query.
Thanks a lot.
Fix the Host: header of the re-issued request. The request going out from the proxy either has no Host header or it is broken (or only X-Host exists). Also take note that the proxy application will execute its own DNS lookup and that might yield a different IP address than your local computer (where you issued the original request).
This works:
> curl -s -D - -o /dev/null http://212.25.95.152/w/w-200/1902047-41.jpg -H "Host: msc.wcdn.co.il"
HTTP/1.1 200 OK
Content-Type: image/jpeg
Cache-Control: max-age = 315360000
magicmarker: 1
Content-Length: 27922
Accept-Ranges: bytes
Date: Sun, 05 Jul 2015 00:52:08 GMT
X-Varnish: 2508753650 2474246958
Age: 67952
Via: 1.1 varnish
Connection: keep-alive
X-Cache: HIT

Unable to test HTTP PUT-based file upload via Squid Proxy

I can upload a file to my Apache web server using Curl just fine:
echo "[$(date)] file contents." | curl -T - http://WEB-SERVER/upload/sample.put
However, if I put a Squid proxy server in between, then I am not able to:
echo "[$(date)] file contents." | curl -x http://SQUID-PROXY:3128 -T - http://WEB-SERVER/upload/sample.put
Curl reports the following error:
Note: This error response was in HTML format, but I've removed the tags for ease of reading.
ERROR: The requested URL could not be retrieved
ERROR
The requested URL could not be retrieved
While trying to retrieve the URL:
http://WEB-SERVER/upload/sample.put
The following error was encountered:
Unsupported Request Method and Protocol
Squid does not support all request methods for all access protocols.
For example, you can not POST a Gopher request.
Your cache administrator is root.
My squid.conf doesn't seem to have any ACL/rule that would disallow the request based on the src or dst IP addresses, the protocol, or the HTTP method... as I can do an HTTP POST just fine between the same client and the web server, with the same proxy sitting in between.
In case of the failing HTTP PUT case, to see the request and response traffic that was actually occurring, I placed a netcat process in between Curl and Squid, and this is what I saw:
Request:
PUT http://WEB-SERVER/upload/sample.put HTTP/1.1
User-Agent: curl/7.15.5 (i686-redhat-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5
Host: WEB-SERVER
Pragma: no-cache
Accept: */*
Proxy-Connection: Keep-Alive
Transfer-Encoding: chunked
Expect: 100-continue
Response:
HTTP/1.0 501 Not Implemented
Server: squid/2.6.STABLE21
Date: Sun, 13 May 2012 02:11:39 GMT
Content-Type: text/html
Content-Length: 1078
Expires: Sun, 13 May 2012 02:11:39 GMT
X-Squid-Error: ERR_UNSUP_REQ 0
X-Cache: MISS from SQUID-PROXY-FQDN
X-Cache-Lookup: NONE from SQUID-PROXY-FQDN:3128
Via: 1.0 SQUID-PROXY-FQDN:3128 (squid/2.6.STABLE21)
Proxy-Connection: close
<SNIPPED the HTML error response already shown earlier above>
Note: I have anonymized the IP addresses and server names throughout for readability reasons.
Thanks to Amos Jeffries for answering this on the squid-users forum. The issue is basically that Squid before version 3.1 does not implement HTTP/1.1 and thus rejects the chunked transfer encoding.
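If upgrading that Squid is not an option, a possible client-side workaround is to avoid chunked encoding altogether: let curl upload from a regular file instead of stdin, so it knows the size and sends a Content-Length header instead, e.g.:
echo "[$(date)] file contents." > /tmp/sample.put
curl -x http://SQUID-PROXY:3128 -T /tmp/sample.put http://WEB-SERVER/upload/sample.put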

Chrome MULTIPLE_CONTENT_LENGTH error

If I access my page directly, I get:
$ wget http://localhost:8010/ --save-headers -O -
--2010-10-29 18:30:24-- http://localhost:8010/
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:8010... connected.
HTTP request sent, awaiting response... 200 OK
Length: 950 [text/html]
Saving to: `STDOUT'
HTTP/1.1 200 OK
Server: gunicorn/0.11.1
Date: Fri, 29 Oct 2010 16:30:24 GMT
Connection: keep-alive
Vary: Accept-Language, Cookie, Accept-Encoding
Content-Length: 950
Content-Type: text/html; charset=utf-8
Content-Language: en-us
If I access it via the cache:
$ wget http://localhost:8000/ --save-headers -O -
--2010-10-29 18:30:31-- http://localhost:8000/
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:8000... connected.
HTTP request sent, awaiting response... 200 OK
Length: 950 [text/html]
Saving to: `STDOUT'
HTTP/1.1 200 OK
Server: gunicorn/0.11.1
Vary: Accept-Language, Cookie, Accept-Encoding
Content-Type: text/html; charset=utf-8
Content-Language: en-us
Content-Length: 950
Date: Fri, 29 Oct 2010 16:30:31 GMT
X-Varnish: 818233557
Age: 0
Via: 1.1 varnish
Connection: keep-alive
When I open the latter in Chromium (8.0.552.18 (0)), I get this error:
Error 346 (net::ERR_RESPONSE_HEADERS_MULTIPLE_CONTENT_LENGTH): Unknown error.
I only see three extra headers; which one should I remove to make it display in Chrome?
EDIT: I eventually got rid of this problem, but I can't remember how, and I don't have access to that system anymore. I'm starting a bounty; maybe somebody will explain to me what was going on here.
Check out this version of the Chromium source. It looks like if you do not specify "Transfer-Encoding" and you include multiple lengths, it will throw this very error. Later revisions added a check that the content-length values must differ for the error to be thrown; it seems to have been added as a security precaution.
You probably would never have seen this error with a newer version of Chromium.
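If you still have a setup like this, one way to confirm the diagnosis is to count how many Content-Length headers the cached response actually carries, e.g.:
curl -sD - -o /dev/null http://localhost:8000/ | grep -ic '^content-length'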
You might try disabling the DNS prefetching in the Chromium settings. Go to Preferences > Under the Hood and un-check "Use DNS pre-fetching to improve page load times".
