I manually add `Content-Encoding: br` and it does not work - http

Using homemade proxy software that adds and removes headers as specified (yes, I am violating proxy standards), I add Content-Encoding: br to a Brotli-compressed file served by the upstream:
docker run --net=host --rm proxy /root/proxy/target/release/proxy --port 8080 http://localhost:1633 \
-A "Content-Encoding: br" -R "Accept-Ranges" -R "Content-Length" -R "Decompressed-Content-Length"
(-A flags add headers, -R flags remove headers).
Then I add one more proxy layer, Apache (to add SSL):
ProxyPass "/" "http://localhost:8080/bzz/"
A request https://test.vporton.name/008e1e5b3e2f4f2cf04a48e49c2fdafeac6e9a01f0159c6881812e919f4f8476/index.html to Apache returns:
< HTTP/1.1 200 OK
< Date: Sun, 05 Jun 2022 03:43:53 GMT
< Server: Apache/2.4.52 (Ubuntu)
< content-encoding: br
< x-forwarded-server: test.vporton.name
< user-agent: curl/7.81.0
< x-forwarded-host: test.vporton.name
< accept: */*
< host: localhost:8080
< x-forwarded-for: 87.71.212.18
< Transfer-Encoding: chunked
< Content-Type: text/html
When I try to open it in a browser, Firefox complains about a bad page encoding and Chrome shows an empty page. What did I do wrong? (Clearing the cache and restarting the browser does not help.)
I am sure the file is correctly Brotli-encoded:
curl https://test.vporton.name/008e1e5b3e2f4f2cf04a48e49c2fdafeac6e9a01f0159c6881812e919f4f8476/index.html | brotli --test
returns no error. Moreover,
curl https://test.vporton.name/008e1e5b3e2f4f2cf04a48e49c2fdafeac6e9a01f0159c6881812e919f4f8476/index.html | brotli --decompress
produces an HTML file as intended.
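To narrow down which hop alters the response, it can help to compare the headers returned by each layer directly. This is only a debugging sketch using the hosts and ports from the setup above; the /bzz/ paths for the first two requests are inferred from the ProxyPass line:
curl -s -o /dev/null -D - http://localhost:1633/bzz/008e1e5b3e2f4f2cf04a48e49c2fdafeac6e9a01f0159c6881812e919f4f8476/index.html
curl -s -o /dev/null -D - http://localhost:8080/bzz/008e1e5b3e2f4f2cf04a48e49c2fdafeac6e9a01f0159c6881812e919f4f8476/index.html
curl -s -o /dev/null -D - https://test.vporton.name/008e1e5b3e2f4f2cf04a48e49c2fdafeac6e9a01f0159c6881812e919f4f8476/index.html
(-D - dumps the received headers to stdout while -o /dev/null discards the body.) If the header set or body changes between two adjacent hops, that hop is where to look.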

Related

wget says 406 Not acceptable

I have a simple file on my web server, and when I request it in a browser, it loads without problems:
http://example.server/report.php
But when I request the file with wget from a Raspberry Pi, I get this:
$ wget -d --spider http://example.server/report.php
Setting --spider (spider) to 1
DEBUG output created by Wget 1.18 on linux-gnueabihf.
Reading HSTS entries from /home/pi/.wget-hsts
URI encoding = 'ANSI_X3.4-1968'
converted 'http://example.server/report.php' (ANSI_X3.4-1968) -> 'http://example.server/report.php' (UTF-8)
Converted file name 'report.php' (UTF-8) -> 'report.php' (ANSI_X3.4-1968)
Spider mode enabled. Check if remote file exists.
--2018-06-03 07:29:29-- http://example.server/report.php
Resolving example.server (example.server)... 49.132.206.71
Caching example.server => 49.132.206.71
Connecting to example.server (example.server)|49.132.206.71|:80... connected.
Created socket 3.
Releasing 0x00832548 (new refcount 1).
---request begin---
HEAD /report.php HTTP/1.1
User-Agent: Wget/1.18 (linux-gnueabihf)
Accept: */*
Accept-Encoding: identity
Host: example.server
Connection: Keep-Alive
---request end---
HTTP request sent, awaiting response...
---response begin---
HTTP/1.1 406 Not Acceptable
Date: Fri, 15 Jun 2018 08:25:17 GMT
Server: Apache
Keep-Alive: timeout=3, max=200
Connection: Keep-Alive
Content-Type: text/html; charset=iso-8859-1
---response end---
406 Not Acceptable
Registered socket 3 for persistent reuse.
URI content encoding = 'iso-8859-1'
Remote file does not exist -- broken link!!!
I read somewhere that it might be an encoding problem, so I tried
$ wget -d --spider --header="Accept-encoding: *" http://example.server/report.php
but that gives me the exact same error.
That's because the server you're connecting to only serves certain User-Agents.
Change the user agent and it works fine:
wget -d --user-agent="Mozilla/5.0 (Windows NT x.y; rv:10.0) Gecko/20100101 Firefox/10.0" http://example.server/report.php
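If this is needed for more than a one-off download, the same override can be made persistent in ~/.wgetrc; the user-agent string below is just the example value from the command above:
# ~/.wgetrc
# Send a browser-like User-Agent with every wget request
user_agent = Mozilla/5.0 (Windows NT x.y; rv:10.0) Gecko/20100101 Firefox/10.0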

Multiple sites share one IP address: I can't reach special site using Host header

In a book I'm reading, the author explains what HTTP headers mean. In particular, he says that some servers host multiple web sites.
Let's do this:
ping fideloper.com
We can see the IP address: 198.211.113.202.
Now let's use the IP address only:
curl -I 198.211.113.202
We get:
$ curl -I 198.211.113.202
HTTP/1.1 301 Moved Permanently
Server: nginx
Date: Thu, 03 Aug 2017 14:48:33 GMT
Content-Type: text/html
Content-Length: 178
Connection: keep-alive
Location: https://book.serversforhackers.com/
Let’s next see what happens when we add a Host header to the HTTP request:
$ curl -I -H "Host: fideloper.com" 198.211.113.202
HTTP/1.1 200 OK
Server: nginx
Content-Type: text/html; charset=UTF-8
Connection: keep-alive
Vary: Accept-Encoding
Cache-Control: max-age=86400, public
Date: Thu, 03 Aug 2017 13:23:58 GMT
Last-Modified: Fri, 30 Dec 2016 22:32:12 GMT
X-Frame-Options: SAMEORIGIN
Set-Cookie: laravel_session=eyJpdiI6IjhVQlk2UWcyRExsaDllVEpJOERaT3dcL2d2aE9mMHV4eUduSjFkQTRKU0R3PSIsInZhbHVlIjoiMmcwVUpNSjFETWs1amJaNzhGZXVGZjFPZ3hINUZ1eHNsR0dBV1FvdE9mQ1RFak5IVXBKUEs2aEZzaEhpRHRodE1LcGhFbFI3OTR3NzQxZG9YUlN5WlE9PSIsIm1hYyI6ImRhNTVlZjM5MDYyYjUxMTY0MjBkZjZkYTQ1ZTQ1YmNlNjU3ODYzNGNjZTBjZWUyZWMyMjEzYjZhOWY1MWYyMDUifQ%3D%3D; expires=Thu, 03-Aug-2017 15:23:58 GMT; Max-Age=7200; path=/; httponly
X-Fastcgi-Cache: HIT
This means that serversforhackers.com is the default site.
Then the author said that we could request Servers for Hackers on the same server:
$ curl -I -H "Host: serversforhackers.com” 198.211.113.202
Here in the book HTTP/1.1 200 OK is received.
But I receve this:
curl -I -H "Host: serversforhackers.com" 198.211.113.202
HTTP/1.1 301 Moved Permanently
Server: nginx
Date: Thu, 03 Aug 2017 14:55:14 GMT
Content-Type: text/html
Content-Length: 178
Connection: keep-alive
Location: https://book.serversforhackers.com/
Apparently, the author has since set up a 301 redirect and now uses HTTPS.
I could do this:
curl -I https://serversforhackers.com
But this doesn't illustrate the whole idea of what the default site is and how the Host header can select a particular site on a shared IP address.
Is it still possible to get 200 OK while addressing the server by its IP address?
In HTTP/1.1, without HTTPS, the Host header is the only place where the hostname is sent to the server.
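You can see this with a raw HTTP/1.1 request: the hostname appears only in the Host header line. As a sketch, using netcat as a plain TCP client against the IP from the example above:
printf 'HEAD / HTTP/1.1\r\nHost: fideloper.com\r\nConnection: close\r\n\r\n' | nc 198.211.113.202 80
The server has nothing else to go on, so whatever you put in that Host line decides which virtual host answers.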
With HTTPS, things are more interesting.
First, your client will normally try to check the server’s TLS certificate against the expected name:
$ curl -I -H "Host: book.serverforhackers.com" https://198.211.113.202
curl: (51) SSL: certificate subject name (book.serversforhackers.com) does not match target host name '198.211.113.202'
Most clients provide a way to override this check. curl has the -k/--insecure option for that:
$ curl -k -I -H "Host: book.serverforhackers.com" https://198.211.113.202
HTTP/1.1 200 OK
Server: nginx
[...]
But then there’s the second issue. I can’t illustrate it with your example server, but here’s one I found on the Internet:
$ curl -k -I https://analytics.usa.gov
HTTP/1.1 200 OK
Content-Type: text/html
[...]
$ host analytics.usa.gov | head -n 1
analytics.usa.gov has address 54.240.184.142
$ curl -k -I -H "Host: analytics.usa.gov" https://54.240.184.142
curl: (35) gnutls_handshake() failed: Handshake failed
This is caused by server name indication (SNI) — a feature of TLS (HTTPS) whereby the hostname is also sent in the TLS handshake. It is necessary because the server needs to present the right certificate (for the right hostname) before it can receive any HTTP headers at all. In the example above, when we use https://54.240.184.142, curl doesn’t send the correct SNI, and the server refuses the handshake. Other servers might accept the connection but route it to a wrong place, where the Host header will end up being ignored.
With curl, you can’t set SNI with a separate option like you set the Host header. curl will always take it from the request URL. But curl has a special --resolve option:
Provide a custom address for a specific host and port pair. Using this, you can make the curl requests(s) use a specified address and prevent the otherwise normally resolved address to be used. Consider it a sort of /etc/hosts alternative provided on the command line.
In this case:
$ curl -I --resolve analytics.usa.gov:443:54.240.184.142 https://analytics.usa.gov
HTTP/1.1 200 OK
Content-Type: text/html
[...]
(443 is the standard TCP port for HTTPS)
If you want to experiment at a lower level, you can use the openssl tool to establish a raw TLS connection with the right SNI:
$ openssl s_client -connect 54.240.184.142:443 -servername analytics.usa.gov -crlf
You will then be able to type an HTTP request and see the right response:
HEAD / HTTP/1.1
Host: analytics.usa.gov
HTTP/1.1 200 OK
Content-Type: text/html
[...]
Lastly, note that in HTTP/2, there’s a special header named :authority (yes, with a colon) that may be used instead of Host by some clients. The distinction between them exists for backward compatibility with HTTP/1.1 and proxies: see RFC 7540 § 8.1.2.3 and RFC 7230 § 5.3 for details.

Why does curl repeat headers in the output?

Options I used:
-I, --head
(HTTP/FTP/FILE) Fetch the HTTP-header only! HTTP-servers feature
the command HEAD which this uses to get nothing but the header
of a document. When used on an FTP or FILE file, curl displays
the file size and last modification time only.
-L, --location
(HTTP/HTTPS) If the server reports that the requested page has moved to a different location (indicated with a Location: header and a 3XX response code), this option will make curl redo the request on the new place. If used together with -i, --include or -I, --head, headers from all requested pages will be shown. When authentication is used, curl only sends its credentials to the initial host. If a redirect takes curl to a different host, it won't be able to intercept the user+password. See also --location-trusted on how to change this. You can limit the amount of redirects to follow by using the --max-redirs option.
When curl follows a redirect and the request is not a plain GET (for example POST or PUT), it will do the following request with a GET if the HTTP response was 301, 302, or 303. If the response code was any other 3xx code, curl will re-send the following request using the same unmodified method.
You can tell curl to not change the non-GET request method to GET after a 30x response by using the dedicated options for that: --post301, --post302 and --post303.
-v, --verbose
Be more verbose/talkative during the operation. Useful for debugging and seeing what's going on
"under the hood". A line starting with '>' means "header data" sent by curl, '<' means "header data"
received by curl that is hidden in normal cases, and a line starting with '*' means additional info
provided by curl.
Note that if you only want HTTP headers in the output, -i, --include might be the option you're
looking for.
If you think this option still doesn't give you enough details, consider using --trace or --trace-ascii instead.
This option overrides previous uses of --trace-ascii or --trace.
Use -s, --silent to make curl quiet.
Below is the output I'm wondering about. In the response containing the redirect (301), all the headers are displayed twice, but only one of the duplicates has a < in front of it. How am I supposed to interpret that?
$ curl -ILv http://www.mail.com
* Rebuilt URL to: http://www.mail.com/
* Trying 74.208.122.4...
* Connected to www.mail.com (74.208.122.4) port 80 (#0)
> HEAD / HTTP/1.1
> Host: www.mail.com
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 301 Moved Permanently
HTTP/1.1 301 Moved Permanently
< Date: Sun, 28 May 2017 22:02:16 GMT
Date: Sun, 28 May 2017 22:02:16 GMT
< Server: Apache
Server: Apache
< Location: https://www.mail.com/
Location: https://www.mail.com/
< Vary: Accept-Encoding
Vary: Accept-Encoding
< Connection: close
Connection: close
< Content-Type: text/html; charset=iso-8859-1
Content-Type: text/html; charset=iso-8859-1
<
* Closing connection 0
* Issue another request to this URL: 'https://www.mail.com/'
* Trying 74.208.122.4...
* Connected to www.mail.com (74.208.122.4) port 443 (#1)
* TLS 1.2 connection using TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
* Server certificate: *.mail.com
* Server certificate: thawte SSL CA - G2
* Server certificate: thawte Primary Root CA
> HEAD / HTTP/1.1
> Host: www.mail.com
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 200 OK
HTTP/1.1 200 OK
< Date: Sun, 28 May 2017 22:02:16 GMT
Date: Sun, 28 May 2017 22:02:16 GMT
< Server: Apache
Server: Apache
< Vary: X-Forwarded-Proto,Host,Accept-Encoding
Vary: X-Forwarded-Proto,Host,Accept-Encoding
< Set-Cookie: cookieKID=kid%40autoref%40mail.com; Domain=.mail.com; Expires=Tue, 27-Jun-2017 22:02:16 GMT; Path=/
Set-Cookie: cookieKID=kid%40autoref%40mail.com; Domain=.mail.com; Expires=Tue, 27-Jun-2017 22:02:16 GMT; Path=/
< Set-Cookie: cookiePartner=kid%40autoref%40mail.com; Domain=.mail.com; Expires=Tue, 27-Jun-2017 22:02:16 GMT; Path=/
Set-Cookie: cookiePartner=kid%40autoref%40mail.com; Domain=.mail.com; Expires=Tue, 27-Jun-2017 22:02:16 GMT; Path=/
< Cache-Control: no-cache, no-store, must-revalidate
Cache-Control: no-cache, no-store, must-revalidate
< Pragma: no-cache
Pragma: no-cache
< Expires: Thu, 01 Jan 1970 00:00:00 GMT
Expires: Thu, 01 Jan 1970 00:00:00 GMT
< Set-Cookie: JSESSIONID=F0BEF03C92839D69057FFB57C7FAA789; Path=/mailcom-webapp/; HttpOnly
Set-Cookie: JSESSIONID=F0BEF03C92839D69057FFB57C7FAA789; Path=/mailcom-webapp/; HttpOnly
< Content-Language: en-US
Content-Language: en-US
< Content-Length: 85237
Content-Length: 85237
< Connection: close
Connection: close
< Content-Type: text/html;charset=UTF-8
Content-Type: text/html;charset=UTF-8
<
* Closing connection 1
Best guess: with -v you tell curl to be verbose (it writes that debug info to stderr); with -I you tell curl to dump the headers to stdout. Your terminal shows stdout and stderr interleaved by default. Separate stdout and stderr, and you'll avoid the confusion.
curl -ILv http://www.mail.com >stdout.log 2>stderr.log ; cat stdout.log
Use:
curl -ILv http://www.mail.com 2>&1 | grep '^[<>\*].*$'
When cURL is called with the verbose command line flag, it sends the verbose output to stderr instead of stdout. The above command redirects stderr to stdout (2>&1), then we pipe the combined output to grep and use the above regex to only return the lines that begin with *, <, or >. All of the other lines in the output (including the dupes you were first concerned with) are removed from the output.
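Alternatively, if you only want a single clean copy of the headers, you can discard stderr entirely, since that is where the verbose duplicate goes (same URL as above, just a sketch):
curl -ILv http://www.mail.com 2>/dev/null
Dropping -v has the same effect: the duplication only appears when the -I output (stdout) and the -v output (stderr) end up interleaved on the same terminal.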

apachebench symfony2 logged in pages benchmark

I'm trying to benchmark a Symfony application, and therefore I also need to benchmark the pages that are restricted to logged-in users.
I want to benchmark the application with the apachebench tool, so I wrote a small shell script that logs in with curl, fetches the PHPSESSID returned by the request, and sets it as a cookie in the apachebench command.
Here is what the shellscript looks like:
#!/bin/bash
COOKIE_JAR="/var/www/apachebench/test.jar"
curl -c $COOKIE_JAR --data "_email=admin%40dummy.at&_password=test&_target_path=%2Fbackend" http://symfony.local/login
PHPSESSID=$(cat $COOKIE_JAR | grep PHPSESSID | cut -f 7)
ab -n 10 -p /var/www/apachebench/albumpostfile.txt -T application/x-www-form-urlencoded -C PHPSESSID=$PHPSESSID -k http://symfony.local/album/add
The apachebench command should post a form and store the data in a database. However, it looks like I'm not getting logged in, because the data isn't stored. I tried the command before with a PHPSESSID copied from my browser, and it worked perfectly fine. I have also already disabled the CSRF protection globally.
I also checked that the PHPSESSID returned by curl is correctly passed to the ab command, and it is.
I have no clue what I'm doing wrong, as posting the exact same data to the login page through the Chrome extension "Postman" works there.
The cookie-jar file from the curl request looks as follows:
# Netscape HTTP Cookie File
# http://curl.haxx.se/docs/http-cookies.html
# This file was generated by libcurl! Edit at your own risk.
symfony.local FALSE / FALSE 0 PHPSESSID t2glc67hlf6lrlik2ieg9r7rv7
Thanks in advance.
EDIT:
That is the output of apachebench when I add -v 3 to the command
WARNING: Response code not 2xx (302)
LOG: header received:
HTTP/1.0 302 Found
Date: Fri, 22 May 2015 12:03:32 GMT
Server: Apache/2.4.7 (Ubuntu)
X-Powered-By: PHP/5.5.9-1ubuntu4.9
Cache-Control: private, must-revalidate
Location: http://symfony.local/login
pragma: no-cache
expires: -1
Connection: close
Content-Type: text/html; charset=UTF-8
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8" />
<meta http-equiv="refresh" content="1;url=http://symfony.local/login" />
<title>Redirecting to http://symfony.local/login</title>
</head>
<body>
Redirecting to http://symfony.local/login.
</body>
</html>
WARNING: Response code not 2xx (302)
LOG: header received:
HTTP/1.0 302 Found
Date: Fri, 22 May 2015 12:03:32 GMT
Server: Apache/2.4.7 (Ubuntu)
X-Powered-By: PHP/5.5.9-1ubuntu4.9
Cache-Control: private, must-revalidate
Location: http://symfony.local/login
pragma: no-cache
expires: -1
Connection: close
Content-Type: text/html; charset=UTF-8
EDIT2:
This is my new shell script; I altered it so that the curl command sends a PHPSESSID cookie with its request. Below you can see the output of both versions. The second one seems to work, since it states the correct URL in the "Redirecting to" part, but this time the apachebench command isn't doing anything at all; it just gets stuck.
#!/bin/bash
COOKIE_JAR="/var/www/apachebench/test.jar"
#curl -c $COOKIE_JAR -v -d "_email=admin%40dummy.at&_password=test&_target_path=%2Fbackend" -b "PHPSESSID=1hrfrnud407n5j42oki13655g7" http://symfony.local/login
curl -c $COOKIE_JAR -v -d "_email=admin%40dummy.at&_password=test&_target_path=%2Fbackend" http://symfony.local/login
PHPSESSID=$(cat $COOKIE_JAR | grep PHPSESSID | cut -f 7)
ab -n 10 -p /var/www/apachebench/albumpostfile.txt -T application/x-www-form-urlencoded -C PHPSESSID=$PHPSESSID http://symfony.local/album/add
OLD-CURL-OUTPUT:
* Hostname was NOT found in DNS cache
* Trying 127.0.1.1...
* Connected to symfony.local (127.0.1.1) port 80 (#0)
> POST /login HTTP/1.1
> User-Agent: curl/7.35.0
> Host: symfony.local
> Accept: */*
> Content-Length: 62
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 62 out of 62 bytes
< HTTP/1.1 302 Found
< Date: Fri, 22 May 2015 12:39:05 GMT
* Server Apache/2.4.7 (Ubuntu) is not blacklisted
< Server: Apache/2.4.7 (Ubuntu)
< X-Powered-By: PHP/5.5.9-1ubuntu4.9
* Added cookie PHPSESSID="l2pfvtum211bd8tnpp1i0vpcj1" for domain symfony.local, path /, expire 0
< Set-Cookie: PHPSESSID=l2pfvtum211bd8tnpp1i0vpcj1; path=/
< Cache-Control: no-cache
< Location: http://symfony.local/login
< Transfer-Encoding: chunked
< Content-Type: text/html; charset=UTF-8
<
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8" />
<meta http-equiv="refresh" content="1;url=http://symfony.local/login" />
<title>Redirecting to http://symfony.local/login</title>
</head>
<body>
Redirecting to http://symfony.local/login.
</body>
* Connection #0 to host symfony.local left intact
</html>
NEW-CURL-OUTPUT:
* Hostname was NOT found in DNS cache
* Trying 127.0.1.1...
* Connected to symfony.local (127.0.1.1) port 80 (#0)
> POST /login HTTP/1.1
> User-Agent: curl/7.35.0
> Host: symfony.local
> Accept: */*
> Cookie: PHPSESSID=1hrfrnud407n5j42oki13655g7
> Content-Length: 62
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 62 out of 62 bytes
< HTTP/1.1 302 Found
< Date: Fri, 22 May 2015 12:40:07 GMT
* Server Apache/2.4.7 (Ubuntu) is not blacklisted
< Server: Apache/2.4.7 (Ubuntu)
< X-Powered-By: PHP/5.5.9-1ubuntu4.9
* Added cookie PHPSESSID="3ehl5ldkbd4ngl2er663899km1" for domain symfony.local, path /, expire 0
< Set-Cookie: PHPSESSID=3ehl5ldkbd4ngl2er663899km1; path=/
< Cache-Control: no-cache
< Location: http://symfony.local/backend
< Transfer-Encoding: chunked
< Content-Type: text/html; charset=UTF-8
<
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8" />
<meta http-equiv="refresh" content="1;url=http://symfony.local/backend" />
<title>Redirecting to http://symfony.local/backend</title>
</head>
<body>
Redirecting to http://symfony.local/backend.
</body>
* Connection #0 to host symfony.local left intact
</html>
This is ApacheBench, Version 2.3 <$Revision: 1528965 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking symfony.local (be patient)...
The last line of the output is where the ab command gets stuck.
I figured it out and it is working now.
The second script was fine: you really do need to pass a PHPSESSID cookie to the login request when you benchmark Symfony.
The reason the apachebench command hung wasn't related to the shell script, but to a missing { in the code that I accidentally deleted while trying to debug. Looking into the apache2 error.log file let me figure that out.
So if anyone else runs into the same problem: add a PHPSESSID cookie to your curl command, the login will work, and you can benchmark pages that require a login.
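Putting that together, a minimal sketch of the working script; the paths, credentials, and pre-seeded session ID are the placeholder values from the question:
#!/bin/bash
COOKIE_JAR="/var/www/apachebench/test.jar"
# Log in with curl, sending a pre-seeded PHPSESSID so Symfony ties the login
# to an existing session, and store the resulting cookies in the jar.
curl -c $COOKIE_JAR -b "PHPSESSID=1hrfrnud407n5j42oki13655g7" \
     -d "_email=admin%40dummy.at&_password=test&_target_path=%2Fbackend" \
     http://symfony.local/login
# Extract the session ID from the Netscape-format cookie jar (7th field, the value).
PHPSESSID=$(grep PHPSESSID $COOKIE_JAR | cut -f 7)
# Benchmark the restricted page, replaying the authenticated session cookie.
ab -n 10 -p /var/www/apachebench/albumpostfile.txt \
   -T application/x-www-form-urlencoded \
   -C PHPSESSID=$PHPSESSID http://symfony.local/album/add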

Nagios check_http gives 'HTTP/1.0 503 Service Unavailable' for HAProxy site

Can't figure this one out!
OS: CentOS 6.6 (Up-To-Date)
I get the following 503 error when using my nagios check_http check (or curl) to query an SSL site served via HAProxy 1.5.
[root@nagios ~]# /usr/local/nagios/libexec/check_http -v -H example.com -S1
GET / HTTP/1.1
User-Agent: check_http/v2.0 (nagios-plugins 2.0)
Connection: close
Host: example.com
https://example.com:443/ is 212 characters
STATUS: HTTP/1.0 503 Service Unavailable
**** HEADER ****
Cache-Control: no-cache
Connection: close
Content-Type: text/html
**** CONTENT ****
<html><body><h1>503 Service Unavailable</h1>
No server is available to handle this request.
</body></html>
HTTP CRITICAL: HTTP/1.0 503 Service Unavailable - 212 bytes in 1.076 second response time |time=1.075766s;;;0.000000 size=212B;;;0
[root@nagios ~]# curl -I https://example.com
HTTP/1.0 503 Service Unavailable
Cache-Control: no-cache
Connection: close
Content-Type: text/html
However, I can access the site fine from any browser (200 OK), and also via curl -I https://example.com from another server:
root@localhost:~# curl -I https://example.com
HTTP/1.1 200 OK
Date: Wed, 18 Feb 2015 14:36:51 GMT
Server: Apache/2.4.6
Expires: Mon, 26 Jul 1997 05:00:00 GMT
Pragma: no-cache
Last-Modified: Wed, 18 Feb 2015 14:36:52 GMT
Content-Type: text/html; charset=UTF-8
Strict-Transport-Security: max-age=31536000;
The HAProxy server is running on pfSense 2.2.
I see that HAProxy returns HTTP/1.0 for Nagios and HTTP/1.1 elsewhere. So is it my check_http plugin causing this, or is it curl?
Is my server just not sending the Host header? If so, how can I resolve this?
What check_http does is check whether an index.html file exists on the server. This means you might have HTTP running and working while the check still fails.
Regardless of whether creating an index.html file on the server resolves the issue, you might not want to engineer the circumstances just so that the check works.
I suppose setting up a ping check for example.com and a check via NRPE to see whether your HTTP service is running will meet your requirements.
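As a rough sketch of what that could look like in a Nagios service definition (the generic-service template, the check_nrpe command, and the remote check_http_proc command name are assumptions about your setup):
define service {
    use                   generic-service
    host_name             example.com
    service_description   PING
    check_command         check_ping!100.0,20%!500.0,60%
}
define service {
    use                   generic-service
    host_name             example.com
    service_description   HTTP process
    check_command         check_nrpe!check_http_proc
}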
check_http has an option called --sni; you need to use that option.
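For example, a sketch based on the check from the question with --sni added; --sni makes check_http send the hostname in the TLS handshake so HAProxy can pick the right backend:
/usr/local/nagios/libexec/check_http -v -H example.com -S1 --sni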

Resources