Icecast header response is both 400 and 200

I have Icecast 2.4.4 running on a Windows box at sub.domain.org. My website is on a different server at domain.org.
When I SSH into my Linux host's shell and run curl against the mount point I get a 400 response, but if I use wget I get a 200. How can this be?
# wget https://sub.domain.org/live.mp3
--2018-12-19 17:52:58--  https://sub.domain.org/live.mp3
Resolving sub.domain.org... 111.111.111.111
Connecting to sub.domain.org|111.111.111.111|:443... connected.
HTTP request sent, awaiting response... **200 OK**
Length: unspecified [audio/mpeg]
Saving to: `live.mp3'
[ <=> ] 96,600 3.93K/s ^C
# curl --head https://sub.domain.org/live.mp3
HTTP/1.0 **400 Bad Request**
Server: Icecast 2.4.4
Connection: Close
Date: Thu, 20 Dec 2018 00:53:32 GMT
Content-Type: text/html; charset=utf-8
Cache-Control: no-cache, no-store
Expires: Mon, 26 Jul 1997 05:00:00 GMT
Pragma: no-cache
Access-Control-Allow-Origin: *

Because in the case of cURL you are passing the --head parameter, which tells cURL to make an HTTP HEAD request instead of the HTTP GET request that wget performs.
Icecast does not support HTTP HEAD requests, so the HTTP 400 response is entirely expected.
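If you want to see the Icecast response headers from curl without switching to HEAD, you can let curl perform a normal GET, dump the headers, and throw the stream body away. A minimal sketch, assuming the same mount point URL; --max-time is only there so curl gives up on the endless stream:
# curl -s -D - -o /dev/null --max-time 5 https://sub.domain.org/live.mp3
This prints the 200 status line and headers for the successful GET, then aborts the download after five seconds.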

Related

calling a http post api via curl with proxy server successfully but no content

I am calling an HTTP POST API via curl, through a proxy server.
The call succeeds with response code 200 and Content-Length: 157, but no body or content is shown.
My curl command:
curl -v -I -x proxyServer:Port -X POST apiUrl
response:
HTTP/1.1 200 OK
Date: Thu, 18 Jul 2019 10:58:01 GMT
Server: Apache-Coyote/1.1
Content-Type: application/json
Content-Length: 157
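A likely explanation, given the command above: the -I flag makes curl do a header-only transfer, so even though the server advertises Content-Length: 157, curl never downloads or prints the body. A minimal sketch of the same call that also shows the body (proxyServer:Port and apiUrl stand for the values used above):
curl -sS -i -x proxyServer:Port -X POST apiUrl
Here -i includes the response headers in the output instead of suppressing the body the way -I does.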

Does Firebase support HTTP HEAD request?

My app has files in Firebase Storage that I need to serve to another service. The files are publicly accessible, but this service likes to make a HEAD request first, which is denied by Firebase (error 400).
Can this be configured somehow? I believe that Google Cloud Storage supports this.
e.g. a plain GET is fine:
$ curl https://firebasestorage.googleapis.com/v0/b/test-451f9.appspot.com/o/temp%2Fhello.txt?alt=media -o -
Hello
but the HEAD request:
$ curl --head https://firebasestorage.googleapis.com/v0/b/test-451f9.appspot.com/o/temp%2Fhello.txt?alt=media -o -
HTTP/2 400
x-guploader-uploadid: AEnB2UqWsCbhq_AKpXh29El8_aiJnZqDEUeGsn2i1j0ZPQie0-OB2AQjnKqi_ya50hIw7Yb4WmlKV19ilYQBk9KGdndj4oX9oQ
x-content-type-options: nosniff
content-type: application/json; charset=UTF-8
access-control-expose-headers: Content-Range, X-Firebase-Storage-XSRF
access-control-allow-origin: *
date: Mon, 10 Dec 2018 17:20:52 GMT
expires: Mon, 10 Dec 2018 17:20:52 GMT
cache-control: private, max-age=0
server: UploadServer
alt-svc: quic=":443"; ma=2592000; v="44,43,39,35"
fails.

Getting strange http response codes, but the site is actually working

When I view either of the URLs below in a browser the page displays fine, and I don't see anything unusual in the network tab when I press F12. But with the code below I get response codes 403 or 400. The response code checker at http://httpstatus.io/ comes back fine with a 200 response for both URLs.
I get a 403 for http://psychsignal.com/ using my code below.
URL u = new URL("http://www.nasdaqomxnordic.com/"); //returns 400 response code
//u.toURI(); //to check the syntax
HttpURLConnection huc = (HttpURLConnection)u.openConnection();
huc.setRequestMethod("GET");
//huc.setRequestMethod("HEAD");
huc.connect();
System.out.println(huc.getResponseCode());
Thanks if anyone has any ideas! This is actually my first post!
My guess is that there are some restrictions placed on the User-Agent of the client. Some testing seems to support my theory:
If I use the curl default user agent:
# curl -I -H "User-Agent: curl/7.35.0" "http://www.nasdaqomxnordic.com/"
HTTP/1.1 400 Bad Request
Content-Type: text/html; charset=UTF-8
Cache-Control: no-cache
Pragma: no-cache
Expires: 0
Connection: close
If I use a hacked up standard browser agent string:
# curl -I -H "User-Agent: Mozilla/5.0" -0 "http://www.nasdaqomxnordic.com/"
HTTP/1.1 200 OK
Cache-Control: no-cache
Pragma: no-cache
Content-Length: 0
Content-Type: text/html;charset=UTF-8
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Server: Microsoft-IIS/7.5
X-Powered-By: ASP.NET
Date: Wed, 22 Jul 2015 15:06:22 GMT
Connection: close
And then if I use a Java agent string (which is my guess as to what you're using):
# curl -I -H "User-Agent: Java/1.6.0_26" "http://www.nasdaqomxnordic.com/"
HTTP/1.1 400 Bad Request
Content-Type: text/html; charset=UTF-8
Cache-Control: no-cache
Pragma: no-cache
Expires: 0
Connection: close
Only the "browser" user agent gets through. I'd try tweaking your code to set the user agent string to something commonly found in a web browser.

Nagios check_http gives 'HTTP/1.0 503 Service Unavailable' for HAProxy site

Can't figure this one out!
OS: CentOS 6.6 (Up-To-Date)
I get the following 503 error when using my nagios check_http check (or curl) to query an SSL site served via HAProxy 1.5.
[root@nagios ~]# /usr/local/nagios/libexec/check_http -v -H example.com -S1
GET / HTTP/1.1
User-Agent: check_http/v2.0 (nagios-plugins 2.0)
Connection: close
Host: example.com
https://example.com:443/ is 212 characters
STATUS: HTTP/1.0 503 Service Unavailable
**** HEADER ****
Cache-Control: no-cache
Connection: close
Content-Type: text/html
**** CONTENT ****
<html><body><h1>503 Service Unavailable</h1>
No server is available to handle this request.
</body></html>
HTTP CRITICAL: HTTP/1.0 503 Service Unavailable - 212 bytes in 1.076 second response time |time=1.075766s;;;0.000000 size=212B;;;0
[root@nagios ~]# curl -I https://example.com
HTTP/1.0 503 Service Unavailable
Cache-Control: no-cache
Connection: close
Content-Type: text/html
However, I can access the site fine via any browser (200 OK), and also via curl -I https://example.com from another server:
root@localhost:~# curl -I https://example.com
HTTP/1.1 200 OK
Date: Wed, 18 Feb 2015 14:36:51 GMT
Server: Apache/2.4.6
Expires: Mon, 26 Jul 1997 05:00:00 GMT
Pragma: no-cache
Last-Modified: Wed, 18 Feb 2015 14:36:52 GMT
Content-Type: text/html; charset=UTF-8
Strict-Transport-Security: max-age=31536000;
The HAProxy server is running on pfSense 2.2.
I see that HAProxy returns HTTP/1.0 for nagios and HTTP/1.1 from elsewhere. So is it my check_http plugin causing this, or is it curl?
Is my server just not sending the Host header? If so, how can I resolve this?
What check_http does is check whether an index.html file exists on the server. This means you might have HTTP running and working while the check still fails.
Regardless of whether creating an index.html file on the server resolves the issue, you might not want to create the circumstances needed just to make the check pass.
I suppose setting up a ping check for example.com and a check via NRPE to see whether your HTTP service is running would meet your requirements.
check_http has an option called --sni
You need to use that option
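Assuming a reasonably recent nagios-plugins/monitoring-plugins build, the check from the question would then look something like this; --sni makes check_http send the hostname in the TLS handshake, which HAProxy needs if the frontend routes requests by SNI:
[root@nagios ~]# /usr/local/nagios/libexec/check_http -v -H example.com -S1 --sni
The same reasoning applies to curl on that box: if its TLS library is not sending SNI, HAProxy cannot match a frontend and answers with the 503.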

Chrome MULTIPLE_CONTENT_LENGTH error

If I access my page directly, I get:
$ wget http://localhost:8010/ --save-headers -O -
--2010-10-29 18:30:24-- http://localhost:8010/
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:8010... connected.
HTTP request sent, awaiting response... 200 OK
Length: 950 [text/html]
Saving to: `STDOUT'
HTTP/1.1 200 OK
Server: gunicorn/0.11.1
Date: Fri, 29 Oct 2010 16:30:24 GMT
Connection: keep-alive
Vary: Accept-Language, Cookie, Accept-Encoding
Content-Length: 950
Content-Type: text/html; charset=utf-8
Content-Language: en-us
If I access it via the cache:
$ wget http://localhost:8000/ --save-headers -O -
--2010-10-29 18:30:31-- http://localhost:8000/
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:8000... connected.
HTTP request sent, awaiting response... 200 OK
Length: 950 [text/html]
Saving to: `STDOUT'
HTTP/1.1 200 OK
Server: gunicorn/0.11.1
Vary: Accept-Language, Cookie, Accept-Encoding
Content-Type: text/html; charset=utf-8
Content-Language: en-us
Content-Length: 950
Date: Fri, 29 Oct 2010 16:30:31 GMT
X-Varnish: 818233557
Age: 0
Via: 1.1 varnish
Connection: keep-alive
When I open the latter in Chromium (8.0.552.18 (0)), I get this error:
Error 346 (net::ERR_RESPONSE_HEADERS_MULTIPLE_CONTENT_LENGTH): Unknown error.
I only see three extra headers; which one should I remove to make it display in Chrome?
EDIT: I eventually got rid of this problem, but I can't remember how, and I don't have access to that system anymore. I'm starting a bounty; maybe somebody can explain to me what was going on here.
Check out this version of the Chromium source. It looks like if you do not specify "Transfer-Encoding" and you include multiple content lengths, it will throw this very error. Later revisions added a check that the content length values must actually differ before the error is thrown; it seems it was added as a security precaution.
You probably would never have seen this error with a newer version of Chromium.
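To confirm what Chromium is objecting to, it can help to dump the raw header block from the cached response and count the Content-Length lines yourself; curl and wget will happily accept a response with duplicate Content-Length headers, while Chromium refuses it. A minimal sketch against the Varnish port from the question:
$ curl -s -D - -o /dev/null http://localhost:8000/ | grep -ci '^content-length'
A count of 2 or more means more than one Content-Length header is actually reaching the browser, which matches the MULTIPLE_CONTENT_LENGTH complaint.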
You might try disabling the DNS prefetching in the Chromium settings. Go to Preferences > Under the Hood and un-check "Use DNS pre-fetching to improve page load times".
