Do not allow insecure connections on nginx

I installed a Certbot certificate for nginx:
sudo certbot --nginx -d example.com
and redirected all HTTP traffic to HTTPS:
# Redirect non-https traffic to https
if ($scheme != "https") {
return 301 https://$host$request_uri;
} # managed by Certbot
It works from the browser, but I can still make an insecure connection via
curl --insecure example.com
Here are the main configurations in nginx.conf:
server {
listen 80;
server_name example.com;
if ($scheme != "https") {
return 301 https://$host$request_uri;
}
location / {
root /www/html/;
...
proxy_pass http://127.0.0.1:80;
}
listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
proxy_ssl_trusted_certificate /etc/letsencrypt/live/example.com/cert.pem;
proxy_ssl_verify on;
proxy_ssl_verify_depth 2;
}
When I issue curl -iI https://example.com, it returns:
HTTP/1.1 200 OK
Server: nginx/1.10.3 (Ubuntu)
Date: Wed, 04 Jul 2018 09:19:35 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 1218
Connection: keep-alive
X-Powered-By: Express
Accept-Ranges: bytes
Cache-Control: public, max-age=0
Last-Modified: Tue, 01 Jul 2018 12:10:25 GMT
ETag: W/"Zwtf1TTMBhoSbg9LZvHbCg=="
Strict-Transport-Security: max-age=31536000; includeSubDomains

It should return HTTP/1.1 301 Moved Permanently, in response to which the user agent may or may not follow the redirect to the new location.
Use the -L or --location switch in your curl command to automatically follow redirects.
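For example, hitting the plain-HTTP URL and following the redirect (same placeholder domain as above) would look roughly like this:
curl -iL http://example.com
The first response should be the 301 with a Location: https://example.com/ header, followed by the content served over HTTPS.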
Edit 2018-07-05:
Here are the main configurations in nginx.conf:
Though that's not a bad config, use of the if directive is discouraged.
You'd better split the config into two separate server blocks, one for HTTP and the other for HTTPS.
Something like:
server {
listen 80;
server_name example.com;
# log your http request if you need to
error_log /var/log/nginx/example-com_error.log notice;
access_log /var/log/nginx/example-com_access.log combined;
# certbot endpoint
location ~ ^/\.well-known/ {
root /var/www/certbot/;
access_log off;
}
# other requests should end up here
location / {
return 301 https://$host$request_uri;
}
}
server {
listen 443 ssl;
server_name example.com;
# log your http request if you need to
error_log /var/log/nginx/example-com_error.log notice;
access_log /var/log/nginx/example-com_access.log combined;
# default document root and document index
root /var/www/html;
index index.html;
# SSL cert, private key, and configurations.
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
# https configurations
location / {
proxy_pass http://127.0.0.1:80; # why would you proxy_pass back to nginx again?
# you only need this if your proxy_pass uses https, not http like this example.
proxy_ssl_trusted_certificate /etc/letsencrypt/live/example.com/cert.pem;
proxy_ssl_verify on;
proxy_ssl_verify_depth 2;
}
}
should suffice.
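After reloading nginx, a quick sanity check along these lines (same placeholder domain) should show the redirect on port 80 and a direct answer on 443:
sudo nginx -t && sudo nginx -s reload
curl -sI http://example.com | head -n 1 # expect: HTTP/1.1 301 Moved Permanently
curl -sI https://example.com | head -n 1 # expect: HTTP/1.1 200 OK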
When I issue curl -iI https://example.com, it returns:
Yes, why wouldn't it return HTTP/1.1 200 OK?
The "insecure" part of the --insecure flag in cURL only disables HTTPS certificate validation, i.e. you can use an invalid SSL certificate in your HTTPS request (bad CN, bad SAN, bad expiry date, bad CA, self-signed, etc.) and cURL will still complete the request instead of failing hard.
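For example (hypothetical hostnames; the self-signed one is made up for illustration):
curl https://self-signed.internal.example # fails with a certificate verification error
curl --insecure https://self-signed.internal.example # completes despite the untrusted certificate
curl --insecure http://example.com # still plain HTTP on port 80; --insecure changes nothing here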

Need help in simulating (and blocking) HTTP_HOST spoofing attacks

I have an nginx reverse proxy serving multiple small web services. Each of the servers has a different domain name and is individually protected with SSL using Certbot. The installation for these was pretty standard, as provided by Ubuntu 20.04.
I have a default server block to catch requests and return a 444 when the hostname does not match one of my server names. However, about 3-5 times per day a request gets through to my first server (which happens to be Django), which then throws the "Not in ALLOWED_HOSTS" message. Since this is the first server block, I'm assuming something in my ruleset doesn't match any of the blocks and the request is sent upstream to serverA.
Since the failure is rare, in order to simulate this HTTP_HOST spoofing attack I have tried curl as well as netcat with raw text files to mimic the situation, but I am not able to get past nginx, i.e. I get a 444 back as expected.
Can you help me 1) simulate an attack with the right tools and 2) identify how to fix it? Since this is reaching my server, I'm assuming it is coming over HTTPS?
My sanitized sudo nginx -T, and an example of an attack are shown below.
ubuntu@ip-A.B.C.D:/etc/nginx/conf.d$ sudo nginx -T
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
# configuration file /etc/nginx/nginx.conf:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 768;
}
http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
# SSL Settings
ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
# Logging Settings
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
# Gzip Settings
gzip on;
# Virtual Host Configs
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
# configuration file /etc/nginx/modules-enabled/50-mod-http-image-filter.conf:
load_module modules/ngx_http_image_filter_module.so;
# configuration file /etc/nginx/modules-enabled/50-mod-http-xslt-filter.conf:
load_module modules/ngx_http_xslt_filter_module.so;
# configuration file /etc/nginx/modules-enabled/50-mod-mail.conf:
load_module modules/ngx_mail_module.so;
# configuration file /etc/nginx/modules-enabled/50-mod-stream.conf:
load_module modules/ngx_stream_module.so;
# configuration file /etc/nginx/mime.types:
types {
text/html html htm shtml;
text/css css;
# Many more here.. removed to shorten list
video/x-msvideo avi;
}
# configuration file /etc/nginx/conf.d/serverA.conf:
upstream serverA {
server 127.0.0.1:8000;
keepalive 256;
}
server {
server_name serverA.com www.serverA.com;
client_max_body_size 10M;
location / {
proxy_pass http://serverA;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
}
listen 443 ssl; # managed by Certbot
ssl_certificate ...; # managed by Certbot
ssl_certificate_key ...; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
if ($host = serverA.com) {
return 301 https://$host$request_uri;
} # managed by Certbot
if ($host = www.serverA.com) {
return 301 https://$host$request_uri;
} # managed by Certbot
listen 80;
server_name serverA.com www.serverA.com;
return 404; # managed by Certbot
}
# configuration file /etc/letsencrypt/options-ssl-nginx.conf:
# This file contains important security parameters. If you modify this file
# manually, Certbot will be unable to automatically provide future security
# updates. Instead, Certbot will print and log an error message with a path to
# the up-to-date file that you will need to refer to when manually updating
# this file.
ssl_session_cache shared:le_nginx_SSL:10m;
ssl_session_timeout 1440m;
ssl_session_tickets off;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers off;
ssl_ciphers "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-SHA";
# configuration file /etc/nginx/conf.d/serverB.conf:
upstream serverB {
server 127.0.0.1:8002;
keepalive 256;
}
server {
server_name serverB.com fsn.serverB.com www.serverB.com;
client_max_body_size 10M;
location / {
proxy_pass http://serverB;
... as above ...
}
listen 443 ssl; # managed by Certbot
... as above ...
}
server {
if ($host = serverB.com) {
return 301 https://$host$request_uri;
} # managed by Certbot
if ($host = www.serverB.com) {
return 301 https://$host$request_uri;
} # managed by Certbot
if ($host = fsn.serverB.com) {
return 301 https://$host$request_uri;
} # managed by Certbot
server_name serverB.com fsn.serverB.com www.serverB.com;
listen 80;
return 404; # managed by Certbot
}
# Another similar serverC, serverD etc.
# Default server configuration
#
server {
listen 80 default_server;
listen [::]:80 default_server;
# server_name "";
return 444;
}
Here is the request data from a request that successfully got past nginx and reached serverA (Django), where it throws an error. (Note that the path will 404, and the HTTP_HOST header is not one of my server names. More often, the HTTP_HOST comes in with my static IP address as well.)
Exception Type: DisallowedHost at /movie/bCZgaGBj
Exception Value: Invalid HTTP_HOST header: 'www.tvmao.com'. You may need to add 'www.tvmao.com' to ALLOWED_HOSTS.
Request information:
USER: [unable to retrieve the current user]
GET: No GET data
POST: No POST data
FILES: No FILES data
COOKIES: No cookie data
META:
HTTP_ACCEPT = '*/*'
HTTP_ACCEPT_LANGUAGE = 'zh-cn'
HTTP_CACHE_CONTROL = 'no-cache'
HTTP_CONNECTION = 'Upgrade'
HTTP_HOST = 'www.tvmao.com'
HTTP_REFERER = '/movie/bCZgaGBj'
HTTP_USER_AGENT = 'Mozilla/5.0 (iPhone; CPU iPhone OS 13_2_3 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/13.0.3 Mobile/15E148 Safari/604.1'
HTTP_X_FORWARDED_FOR = '27.124.12.23'
HTTP_X_REAL_IP = '27.124.12.23'
PATH_INFO = '/movie/bCZgaGBj'
QUERY_STRING = ''
REMOTE_ADDR = '127.0.0.1'
REMOTE_HOST = '127.0.0.1'
REMOTE_PORT = 44058
REQUEST_METHOD = 'GET'
SCRIPT_NAME = ''
SERVER_NAME = '127.0.0.1'
SERVER_PORT = '8000'
wsgi.multiprocess = True
wsgi.multithread = True
Here's how I've tried to simulate the attack using raw HTTP requests and netcat:
me@linuxmachine:~$ cat raw.http
GET /dashboard/ HTTP/1.1
Host: serverA.com
Host: test.com
Connection: close
me@linuxmachine:~$ cat raw.http | nc A.B.C.D 80
HTTP/1.1 400 Bad Request
Server: nginx/1.18.0 (Ubuntu)
Date: Fri, 27 Jan 2023 15:05:13 GMT
Content-Type: text/html
Content-Length: 166
Connection: close
<html>
<head><title>400 Bad Request</title></head>
<body>
<center><h1>400 Bad Request</h1></center>
<hr><center>nginx/1.18.0 (Ubuntu)</center>
</body>
</html>
If I send my correct serverA.com as the host header, I get a 301 (redirecting to https).
If I send an incorrect host header (e.g. test.com), I get an empty response (expected).
If I send two host headers (correct and incorrect), I get a 400 Bad Request.
If I send the correct host, but to port 443, I get a 400 "plain HTTP sent to HTTPS port"...
How do I simulate a request to get past nginx to my upstream serverA like the bots do? And how do I block it with nginx?
Thanks!
There is something magical about asking SO. The process of writing makes the answer appear :)
To my first question above, of simulating the spoof, I was able to just use curl in the following way:
me@linuxmachine:~$ curl -H "Host: A.B.C.D" https://example.com
I'm pretty sure I've tried this before, but I'm not sure why I didn't try this exact incantation (perhaps I was sending a different header, like Http-Host: or something).
With this call, I was able to trigger the error as before, which made it easy to test the nginx configuration and answer the second question.
It was clear that the spoof was coming in on 443, which led me to this very informative post on Stack Exchange.
This also explained why we can't just listen on 443 and respond with a 444 before the SSL handshake has completed, due to the way SSL works.
The three options suggested (haproxy, a fake certificate, and the if ($host ...) directive) might all work, but the simplest, I think, is the last one. Since this if () is not within the location context, I believe this to be OK.
My new serverA block looks like this:
server {
server_name serverA.com www.serverA.com;
client_max_body_size 10M;
## This fixes it
if ( $http_host !~* ^(serverA\.com|www\.serverA\.com)$ ) {
return 444;
}
## and it's not inside the location context...
location / {
proxy_pass http://upstream;
proxy_http_version 1.1;
...
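To confirm the new rule, sending the same kind of bogus Host header again (www.tvmao.com is the value from the log above) should now get the connection closed instead of a proxied response:
curl -s -o /dev/null -w '%{http_code}\n' -H "Host: www.tvmao.com" https://serverA.com
# expected output: 000, because return 444 makes nginx drop the connection without sending a response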

Reroute/canonicalize website domains to single domain with redirect

For some reason I can't get this simple thing to work: https://www.nginx.com/blog/creating-nginx-rewrite-rules/
I have this:
# cat sites-available/custom_default
server {
listen 80;
listen [::]:80;
server_name _;
include hardening;
location /.well-known/acme-challenge/ {
root /var/www/acme-challenge/;
default_type "text/plain";
}
location / {
return 301 https://$host$request_uri;
}
}
Which provides the ACME well-known dir over port 80 and then redirects everything else to port 443.
Then I have a bunch of vhosts:
# ll sites-enabled/
total 0
lrwxrwxrwx 1 root root 35 sep 5 15:15 chapters -> /etc/nginx/sites-available/chapters
lrwxrwxrwx 1 root root 41 jul 15 09:21 custom_default -> /etc/nginx/sites-available/custom_default
lrwxrwxrwx 1 root root 36 jul 15 12:24 discourse -> /etc/nginx/sites-available/discourse
lrwxrwxrwx 1 root root 30 sep 5 15:15 map -> /etc/nginx/sites-available/map
I first want to change the domain of chapters:
# cat sites-enabled/chapters
server {
server_name chapters.example.community;
return 302 https://chapters.example.one$request_uri;
}
server {
server_name chapters.example.one;
ssl_certificate /etc/letsencrypt/live/chapters.example.community/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/chapters.example.community/privkey.pem;
include tls_params;
include hardening;
location ~ /(emoji.css|index.html|list.js|main.js|resources|robots.txt|script.js|style.min.css|errorpages/example-logo.png|errorpages/sob.png|errorpages/generic_offline.html) {
limit_except GET HEAD { deny all; }
root /var/www/chapters;
error_page 403 404 502 =404 /errorpages/generic_offline.html;
}
location /errorpages/ {
alias /var/www/errorpages/;
}
location ~* \.(css|gif|jpg|js|png|ico|otf|sng|xls|doc|exe|jpeg|tgx)$ {
access_log off;
expires 1d;
}
}
For some reason this doesn't work, maybe I'm too used to Apache. I'm expecting the first server block to listen to the old domain, then redirect to the new domain. The next server block then listens to the new domain and serves the website. But now both domains work just fine, and the old domain does not redirect traffic to the new domain.
For full context, here are the included configs:
# cat tls_params
listen 443 ssl http2;
listen [::]:443 ssl http2;
ssl_session_timeout 1d;
ssl_session_cache shared:MozSSL:10m;
ssl_session_tickets off;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
ssl_prefer_server_ciphers on;
add_header Strict-Transport-Security "max-age=63072000" always;
ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /etc/ssl/certs/ca-certificates.crt;
# cat hardening
server_tokens off;
If I put both server names in one server block and the redirect to the new domain, I get an infinite redirect loop. My guess is that it may have to do with my custom_default block. Any thoughts?
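One detail to check against the expectation above: the chapters.example.community block has no listen 443, so an HTTPS request for the old name lands on whichever server block is the default for port 443, which can be the chapters.example.one block itself. A sketch of a redirect block that also terminates TLS for the old name, assuming the existing certificate covers chapters.example.community (its live path suggests it does), would be:
server {
    server_name chapters.example.community;
    ssl_certificate /etc/letsencrypt/live/chapters.example.community/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/chapters.example.community/privkey.pem;
    include tls_params;
    include hardening;
    return 302 https://chapters.example.one$request_uri;
}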

Updating Nginx block to do two things that aren't working

For Nginx, I'm attempting to do two things:
Redirect anyone who comes in to the site from www to non-www.
The server defaults to error 500 instead of 404 when the URL path doesn't exist.
This is my current configuration of my server:
server {
root /var/www/project/;
index index.php index.html index.htm index.nginx-debian.html;
server_name example.com www.example.com;
location / {
try_files $uri $uri/ /index.php$is_args$args;
}
location ~ \.php$ {
include snippets/fastcgi-php.conf;
fastcgi_pass unix:/var/run/php/php8.0-fpm.sock;
}
location ~ /\.ht {
deny all;
}
listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
if ($host = www.example.com) {
return 301 https://$host$request_uri;
} # managed by Certbot
if ($host = example.com) {
return 301 https://$host$request_uri;
} # managed by Certbot
listen 80;
server_name example.com www.example.com;
return 404; # managed by Certbot
}
I attempted adding a redirect to send any www traffic to non-www, but that didn't do anything (I think because of Certbot). I also tried updating the config to return 404 instead of 500, but that made everything outside of the homepage a 404, so I reset it to what you see above.
Redirect anyone who comes in to the site from www to non-www.
The cleanest solution is to define a separate server block only for this purpose:
server {
listen 80;
server_name www.example.com;
return 301 http://example.com$request_uri;
}
You can verify that it works by putting
127.0.0.1 localhost example.com www.example.com
in your computer's hosts file.
$ curl -I http://www.example.com
HTTP/1.1 301 Moved Permanently
Server: nginx
Date: Fri, 27 Aug 2021 06:14:43 GMT
Content-Type: text/html
Content-Length: 162
Connection: keep-alive
Location: http://example.com/
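Since the site is also served over HTTPS, a request for https://www.example.com will still hit the main 443 block above. Assuming the certificate also covers www.example.com (the Certbot paths suggest a single certificate under /etc/letsencrypt/live/example.com/), a matching sketch on 443 keeps HTTPS visitors on the bare domain as well:
server {
    listen 443 ssl;
    server_name www.example.com;
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
    return 301 https://example.com$request_uri;
}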
The server defaults to error 500 instead of 404 when the URL path doesn't exist.
The 500 HTTP status code is probably caused by your index.php script.
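A way to see why: the try_files in location / sends every path that doesn't match a real file to /index.php, so for unknown URLs the status code is whatever the PHP application decides to send. If nginx itself were to answer, the fallback would have to go, e.g. (a sketch; note this breaks front-controller routing for frameworks that expect everything to flow through index.php):
location / {
    # nginx answers 404 for paths that match no file or directory
    try_files $uri $uri/ =404;
}
So the usual fix is to make the application return 404 for unknown routes instead of a 500.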

Nginx resolving wrong host when upgrading to HTTPS

I have 3 nginx hosts that I only want to serve over HTTPS. Two of them work correctly; however, one of them resolves to the wrong host. Here's all of the info.
Nginx virtual hosts
# cat alpha.domain-a.tld
server {
listen 80;
server_name alpha.domain-a.tld;
return 301 https://alpha.domain-a.tld$request_uri;
}
server {
listen 443;
ssl on;
ssl_certificate /etc/nginx/certs/alpha.domain-a.tld.pem;
ssl_certificate_key /etc/nginx/certs/alpha.domain-a.tld.key;
ssl_client_certificate /etc/nginx/certs/cloudflare.crt;
ssl_verify_client on;
root /var/www/alpha.domain-a.tld/;
index index.html;
server_name alpha.domain-a.tld;
location / {
try_files $uri $uri/ $uri.html =404;
}
}
# cat mike.domain-a.tld
server {
listen 80;
server_name mike.domain-a.tld;
return 301 https://mike.domain-a.tld$request_uri;
}
server {
listen 443;
ssl on;
ssl_certificate /etc/nginx/certs/domain-a.tld.pem;
ssl_certificate_key /etc/nginx/certs/domain-a.tld.key;
ssl_client_certificate /etc/nginx/certs/cloudflare.crt;
ssl_verify_client on;
root /var/www/mike.domain-a.tld/;
index index.html;
server_name mike.domain-a.tld;
location / {
try_files $uri $uri/ $uri.html =404;
}
}
# cat juliet.domain-b.tld
server {
listen 80;
server_name juliet.dommain-b.tld;
return 301 https://juliet.domain-b.tld$request_uri;
}
server {
listen 443;
ssl on;
ssl_certificate /etc/nginx/certs/domain-b.tld.pem;
ssl_certificate_key /etc/nginx/certs/domain-b.tld.key;
ssl_client_certificate /etc/nginx/certs/cloudflare.crt;
ssl_verify_client on;
root /var/www/juliet.domain-b.tld;
index index.html;
server_name juliet.domain-b.tld;
location / {
try_files $uri $uri/ $uri.html =404;
}
}
Alpha and mike resolve correctly; however, when I try to access http://juliet, it redirects me to alpha rather than https://juliet, as shown below:
# curl -I --resolve alpha.domain-a.tld:80:127.0.0.1 http://alpha.domain-a.tld/
HTTP/1.1 301 Moved Permanently
Server: nginx
Date: #OMITTED
Content-Type: text/html
Content-Length: #OMITTED
Connection: keep-alive
Location: https://alpha.domain-a.tld/
# curl -I --resolve mike.domain-a.tld:80:127.0.0.1 http://mike.domain-a.tld/
HTTP/1.1 301 Moved Permanently
Server: nginx
Date: #OMITTED
Content-Type: text/html
Content-Length: #OMITTED
Connection: keep-alive
Location: https://mike.domain-a.tld/
# curl -I --resolve juliet.domain-b.tld:80:127.0.0.1 http://juliet.domain-b.tld/
HTTP/1.1 301 Moved Permanently
Server: nginx
Date: #OMITTED
Content-Type: text/html
Content-Length: #OMITTED
Connection: keep-alive
Location: https://alpha.domain-a.tld/
Could anyone help me find out why juliet is resolving the first alphanumeric host (alpha) rather than juliet?
Look at the server_name of juliet:
# cat juliet.domain-b.tld
server {
listen 80;
server_name juliet.dommain-b.tld;
return 301 https://juliet.dommain-b.tld$request_uri;
}
juliet.dommain-b.tld probably doesn't exist? I think your curl command is correct (with the correct URL), but in your nginx config you wrote the wrong name. Your nginx server doesn't know the domain, but the DNS request resolves correctly to your server, so your server returns the first entry of your nginx config.
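A sketch of the corrected block, changing only the misspelled name:
server {
    listen 80;
    server_name juliet.domain-b.tld;
    return 301 https://juliet.domain-b.tld$request_uri;
}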

Can't connect to my webserver from within the local network

It works from outside (IPv4).
My nginx configuration has to be messed up, since when I browse to 192.168.xxx.xxx (the address of my webserver), I get redirected to my homepage's domain. Even if I use "localhost" or "0.0.0.0" in the browser bar on the webserver itself, it doesn't work.
Can anyone tell me how to properly solve this? If I put in anything other than "cooldomain.com", it won't be reachable from the outside, right? But there has to be a solution.
The nginx server is running in a docker container, which is based on the official nginx image.
This is my nginx config file:
server {
listen 80;
listen 443 ssl http2;
server_name cooldomain.com;
ssl_protocols TLSv1.2;
ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';
ssl_prefer_server_ciphers On;
ssl_certificate /usr/share/nginx/fullchain.pem;
ssl_certificate_key /usr/share/nginx/privkey.pem;
ssl_trusted_certificate /usr/share/nginx/chain.pem;
ssl_session_cache shared:SSL:128m;
add_header Strict-Transport-Security "max-age=31557600; includeSubDomains";
ssl_stapling on;
ssl_stapling_verify on;
# Your favorite resolver may be used instead of the Google one below
# resolver 8.8.8.8;
# /usr/share/nginx/html;
# index index.html;
# charset koi8-r;
# access_log /var/log/nginx/host.access.log main;
location / {
if ($scheme = http) {
return 301 https://$server_name$request_uri;
}
root /usr/share/nginx/html;
# index index.html index.htm;
try_files $uri$args $uri$args/ /index.html;
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
Edit:
Output of docker ps:
faXXXXX nginx "nginx -g 'daemon off" 14 minutes ago, up 14 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp webserver
(this is sadly not a copy paste)
Output of curl -v http://127.0.0.1:
$ curl -v http://127.0.0.1
Rebuilt URL to: http://127.0.0.1/
Trying 127.0.0.1...
Connected to 127.0.0.1 (127.0.0.1) port 80 (#0)
GET / HTTP/1.1
Host: 127.0.0.1
User-Agent: curl/7.47.0
Accept:
HTTP/1.1 301 Moved Permanently
Server: nginx/1.13.3
Date: Wed, 20 Sep 2017 15:46:55 GMT
Content-Type: text/html
Content-Length: 185
Connection: keep-alive
Location: https://cooldomain.com/
Strict-Transport-Security: max-age=31557600; includeSubDomains
Connection #0 to host 127.0.0.1 left intact
I managed to work around it. I don't know if this is the right way to do it, but it does the job.
I added another server block before my main server block, with the default_server prefix.
If you have a better idea, feel free to write an answer. :)
This is how my config file looks now. Pay attention to the first block:
server {
listen 80;
server_name 127.0.0.1 default_server;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
server {
listen 80;
listen 443 ssl http2;
server_name cooldomain.com;
ssl_protocols TLSv1.2;
ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';
ssl_prefer_server_ciphers On;
ssl_certificate /usr/share/nginx/fullchain.pem;
ssl_certificate_key /usr/share/nginx/privkey.pem;
ssl_trusted_certificate /usr/share/nginx/chain.pem;
ssl_session_cache shared:SSL:128m;
add_header Strict-Transport-Security "max-age=31557600; includeSubDomains";
ssl_stapling on;
ssl_stapling_verify on;
# Your favorite resolver may be used instead of the Google one below
# resolver 8.8.8.8;
# /usr/share/nginx/html;
# index index.html;
# charset koi8-r;
# access_log /var/log/nginx/host.access.log main;
location / {
if ($scheme = http) {
return 301 https://$server_name$request_uri;
}
root /usr/share/nginx/html;
# index index.html index.htm;
try_files $uri$args $uri$args/ /index.html;
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
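With the extra default block in place, the same local check from the question should now serve the static page instead of redirecting (expected outcome, not a captured transcript):
curl -I http://127.0.0.1
# expect: HTTP/1.1 200 OK with content from /usr/share/nginx/html, rather than the earlier 301 to https://cooldomain.com/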
