Docker nginx HTTPS setup: connection refused

I have tried many ways to set up the nginx HTTPS configuration in a Docker environment, but nothing shows up in the Docker nginx logs.
The website returns "connection refused" / "this site refused to connect".
docker-compose file:
version: "3"
services:
  nginx:
    container_name: nginx
    build:
      context: .
      dockerfile: ./nginx/Dockerfile
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - .:/work
    depends_on:
      - django
  django:
    container_name: django
    build:
      context: .
      dockerfile: ./Dockerfile
    expose:
      - "8000"
    volumes:
      - .:/work
    command: uwsgi --ini ./uwsgi.ini
In the nginx conf:
server {
    listen 80;
    server_name www.canarytechnologies.com;
    rewrite ^(.*)$ https://$server_name$request_uri? permanent;
}
server {
    listen 443 ssl;
    ssl on;
    server_name www.canarytechnologies.com;

    ssl_certificate /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers ALL:!aNULL:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;

    charset utf-8;
    client_max_body_size 10m;

    location / {
        include uwsgi_params;
        uwsgi_pass django:8000;
    }
}
Without HTTPS it works fine on port 80. But when I add the HTTPS 443 server, connections are refused and there are no logs in the Docker nginx output.
I have successfully set up this server without Docker; all of the configuration works outside the Docker environment.
I wonder why adding HTTPS / the 443 port makes the server refuse connections.

I made a mistake: the nginx configuration file must have a .conf extension to be loaded from /etc/nginx/conf.d/.
So in the Dockerfile it should be
ADD ./nginx/conf/web.conf /etc/nginx/conf.d/web.conf
instead of
ADD ./nginx/conf/web /etc/nginx/conf.d/web
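The reason the extension matters: the stock nginx.conf ends with `include /etc/nginx/conf.d/*.conf;`, so a file without the .conf suffix is silently skipped and nginx starts fine but never listens on 443. A minimal sketch of the glob behaviour (the /tmp paths are only illustrative):

```shell
# Simulate nginx's `include /etc/nginx/conf.d/*.conf;` glob:
# a file named "web" is invisible to it, "web.conf" is picked up.
mkdir -p /tmp/demo-conf.d
touch /tmp/demo-conf.d/web /tmp/demo-conf.d/web.conf
ls /tmp/demo-conf.d/*.conf   # lists only web.conf
```

Inside the container, `nginx -T` dumps every file nginx actually parsed, which makes this kind of mistake easy to spot.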

Related

docker-compose nginx/certbot website does not load

I'm setting up a very simple docker-compose script. It should start nginx, create a Let's Encrypt certificate, and then serve the nginx default website to the browser over HTTPS.
However, when I go to the website it loads for a long time and then doesn't give me any useful error message, other than "yourfootprint.dk took too long to respond".
Creating certificates works, so I know that the certbot part is fine.
I also know that the server and the domain work: if I run a plain nginx container without the docker-compose file and nginx.dev.conf, the nginx default website is served fine.
I have a hunch that my nginx.dev.conf file is wrong and incoming requests end up in an infinite loop.
./docker-compose.yml
version: '3'
services:
  webserver:
    image: nginx:stable
    ports:
      - 80:80
      - 443:443
    restart: unless-stopped
    volumes:
      - ./data/nginx/nginx.dev.conf:/etc/nginx/conf.d/default.conf:ro
      - ./data/certbot/www:/var/www/certbot/:ro
      - ./data/certbot/conf/:/etc/nginx/ssl/:ro
  certbot:
    image: certbot/certbot:latest
    volumes:
      - ./data/certbot/www/:/var/www/certbot/:rw
      - ./data/certbot/conf/:/etc/letsencrypt/:rw
./data/nginx/nginx.dev.conf
server {
    listen 80;
    listen [::]:80;
    server_name yourfootprint.dk www.yourfootprint.dk;
    server_tokens off;

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        return 301 https://yourfootprint.dk$request_uri;
    }
}
server {
    listen 443 default_server ssl http2;
    listen [::]:443 ssl http2;
    server_name yourfootprint.dk;

    ssl_certificate /etc/nginx/ssl/live/yourfootprint.dk/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/live/yourfootprint.dk/privkey.pem;

    location / {
        # ...
    }
}
If you already have the certificate in /data/certbot/conf then the solution is easy:
./docker-compose.yaml
services:
  app:
    {{your_configurations_here}}
  {{other_services...}}:
    {{other_services_configurations}}
  nginx:
    container_name: nginx
    image: nginx:latest
    restart: always
    environment:
      - DOMAIN
    depends_on:
      - app
    ports:
      - 80:80
      - 443:443
    volumes:
      - /data/nginx/templates:/etc/nginx/templates:ro
      - /data/certbot/www:/var/www/certbot/
      - /data/certbot/conf/:/etc/letsencrypt/
  certbot:
    container_name: certbot
    image: certbot/certbot:latest
    depends_on:
      - nginx
    command: >-
      certonly --reinstall --webroot --webroot-path=/var/www/certbot
      --email ${EMAIL} --agree-tos --no-eff-email
      -d ${DOMAIN}
    volumes:
      - /data/certbot/conf/:/etc/letsencrypt/
      - /data/certbot/www:/var/www/certbot/
/data/nginx/templates/default.conf.template
server {
    listen [::]:80;
    listen 80;
    server_name $DOMAIN;  # $DOMAIN must be defined in the environment
    return 301 https://$host$request_uri;
}
./etc/nginx/templates/default.conf.template
server {
    listen [::]:443 ssl http2;
    listen 443 ssl http2;
    server_name $DOMAIN;

    ssl_certificate /etc/letsencrypt/live/$DOMAIN/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/$DOMAIN/privkey.pem;

    location ~ /.well-known/acme-challenge {
        allow all;
        root /var/www/html;
    }

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_pass http://app:80;
    }
}
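For context on the template file names used above (based on the nginx image's documented entrypoint behaviour): every file in /etc/nginx/templates/ ending in .template is run through envsubst and written to /etc/nginx/conf.d/ with the suffix stripped, so a template must be named e.g. default.conf.template to produce default.conf. The suffix rule in shell terms:

```shell
# The entrypoint strips a trailing ".template" to derive the output
# name under /etc/nginx/conf.d/; a file without that suffix is ignored.
f="default.conf.template"
echo "${f%.template}"   # prints default.conf
```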
If not, then I think you need to split the process into two phases: an initiation phase and a production phase. I describe that in detail here. The idea is to have one docker-compose file to obtain the Let's Encrypt certificate for the first time, and another docker-compose file to run the system and renew the certificate.
So without further ado, here is the file structure and content that is working really well for me (you still need to adapt the file locations and content to suit your needs):
./setup.sh
./docker-compose-initiate.yaml
./docker-compose.yaml
./etc/nginx/templates/default.conf.template
./etc/nginx/templates-initiate/default.conf.template
The setup in 2 phases:
In the first phase, "the initiation phase", we run an nginx container and a certbot container just to obtain the SSL certificate for the first time and store it in the host's ./etc/letsencrypt folder.
In the second phase, "the operation phase", we run all the services the app needs, including nginx, which this time uses the letsencrypt folder to serve HTTPS on port 443. A certbot container also runs (on demand) to renew the certificate; we can add a cron job for that. The setup.sh script is a simple convenience wrapper that runs the commands one after another:
#!/bin/bash
# the script expects two arguments:
# - the domain name for which we are obtaining the ssl certificate
# - the email address associated with the ssl certificate
echo DOMAIN=$1 >> .env
echo EMAIL=$2 >> .env

# Phase 1 "Initiation"
docker-compose -f ./docker-compose-initiate.yaml up -d nginx
docker-compose -f ./docker-compose-initiate.yaml up certbot
docker-compose -f ./docker-compose-initiate.yaml down

# Phase 2 "Operation"
crontab ./etc/crontab
docker-compose -f ./docker-compose.yaml up -d
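The ./etc/crontab file installed by the script is not shown in the answer; a hedged sketch of what it could contain (the /srv/app path and the twice-daily schedule are assumptions, not part of the original setup):

```shell
# ./etc/crontab -- renew the certificate twice a day and reload nginx
# so it picks up the new files; adjust the project path to your own.
0 3,15 * * * cd /srv/app && docker-compose run --rm certbot && docker-compose exec -T nginx nginx -s reload
```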
Phase 1: The ssl certificate initiation phase:
./docker-compose-initiate.yaml
version: "3"
services:
  nginx:
    container_name: nginx
    image: nginx:latest
    environment:
      - DOMAIN
    ports:
      - 80:80
    volumes:
      - ./etc/nginx/templates-initiate:/etc/nginx/templates:ro
      - ./etc/letsencrypt:/etc/letsencrypt:ro
      - ./certbot/data:/var/www/certbot
  certbot:
    container_name: certbot
    image: certbot/certbot:latest
    depends_on:
      - nginx
    command: >-
      certonly --reinstall --webroot --webroot-path=/var/www/certbot
      --email ${EMAIL} --agree-tos --no-eff-email
      -d ${DOMAIN}
    volumes:
      - ./etc/letsencrypt:/etc/letsencrypt
      - ./certbot/data:/var/www/certbot
./etc/nginx/templates-initiate/default.conf.template
server {
    listen [::]:80;
    listen 80;
    server_name $DOMAIN;

    location ~ /.well-known/acme-challenge {
        allow all;
        root /var/www/certbot;
    }
}
Phase 2: The operation phase
./docker-compose.yaml
services:
  app:
    {{your_configurations_here}}
  {{other_services...}}:
    {{other_services_configurations}}
  nginx:
    container_name: nginx
    image: nginx:latest
    restart: always
    environment:
      - DOMAIN
    depends_on:
      - app
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./etc/nginx/templates:/etc/nginx/templates:ro
      - ./etc/letsencrypt:/etc/letsencrypt
      - ./certbot/data:/var/www/certbot
      - /var/log/nginx:/var/log/nginx
  certbot:
    container_name: certbot
    image: certbot/certbot:latest
    depends_on:
      - nginx
    command: >-
      certonly --reinstall --webroot --webroot-path=/var/www/certbot
      --email ${EMAIL} --agree-tos --no-eff-email
      -d ${DOMAIN}
    volumes:
      - ./etc/letsencrypt:/etc/letsencrypt
      - ./certbot/data:/var/www/certbot
./etc/nginx/templates/default.conf.template
server {
    listen [::]:80;
    listen 80;
    server_name $DOMAIN;
    return 301 https://$host$request_uri;
}
server {
    listen [::]:443 ssl http2;
    listen 443 ssl http2;
    server_name $DOMAIN;

    ssl_certificate /etc/letsencrypt/live/$DOMAIN/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/$DOMAIN/privkey.pem;

    location ~ /.well-known/acme-challenge {
        allow all;
        root /var/www/html;
    }

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_pass http://app:80;
    }
}

Redirect HTTP to HTTPS with nginx: HTTP gives 'This site can't be reached'

I have a Flask app running with gunicorn and nginx. HTTPS works normally.
However, I really struggle to find a way to redirect HTTP to HTTPS. I have tried multiple solutions from the internet but none seems to work in my case.
project.conf
server {
    listen 443 default_server;
    server_name example.com www.example.com;

    ssl on;
    ssl_certificate certs/fullchain.pem;
    ssl_certificate_key certs/privkey.pem;

    location / {
        proxy_pass http://websitecontainer:8000;
        # Do not change this
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /static {
        rewrite ^/static(.*) /$1 break;
        root /static;
    }
}
server {
    listen 80;
    server_name example.com;
    rewrite ^(.*) https://example.com/$1 permanent;
}
server {
    server_name www.example.com;
    rewrite ^(.*) https://example.com/$1 permanent;
}
nginx.conf
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    # Define the maximum number of simultaneous connections that can be opened by a worker process
    worker_connections 1024;
}

http {
    # Include the file defining the list of file types that are supported by NGINX
    include /etc/nginx/mime.types;
    # Define the default file type that is returned to the user
    default_type text/html;

    # Define the format of log messages.
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    # Define the location of the log of access attempts to NGINX
    access_log /var/log/nginx/access.log main;

    # Define the parameters to optimize the delivery of static content
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;

    # Define the timeout value for keep-alive connections with the client
    keepalive_timeout 65;

    # Define the usage of the gzip compression algorithm to reduce the amount of data to transmit
    #gzip on;

    # Include additional parameters for virtual host(s)/server(s)
    include /etc/nginx/conf.d/*.conf;
}
Additional info:
nginx and the Flask app with gunicorn run in two different containers, and I use docker-compose to build them.
HTTPS works as expected, but HTTP gives 'This site can't be reached'.
Any idea what's wrong with my config file? Any insight would be helpful. Thanks.
Edit:
Sharing also the docker-compose file in case something is wrong there:
version: '2'
services:
  websitecontainer:
    build: ./webapp
    container_name: websitecontainer
    restart: always
    command: >
      gunicorn -b 0.0.0.0:8000
      --timeout 120
      --access-logfile gunicorn-access.log
      --error-logfile gunicorn-error.log
      --reload
      "app:create_app()"
    environment:
      PYTHONUNBUFFERED: 'true'
    ports:
      - '8000:8000'
  nginx:
    restart: always
    build: ./nginx
    ports:
      - "443:443"
    depends_on:
      - websitecontainer
This happens because in your docker-compose file you only publish port 443 for nginx, so requests arriving on port 80 never reach the container.
Your docker-compose should look like this:
nginx:
  restart: always
  build: ./nginx
  ports:
    - "80:80"
    - "443:443"
  depends_on:
    - websitecontainer
My configuration for the HTTP server block is as follows and works like a charm.
location / {
    return 301 https://$host$request_uri;
}
Also, you should delete your third server block.
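Putting the fix together: with port 80 published, the HTTP-to-HTTPS redirect lives in its own server block. A sketch of the complete block this implies (using the question's example.com placeholders):

```nginx
# Catch plain-HTTP requests for both hostnames and send a permanent
# redirect to HTTPS; $host preserves whichever name was requested.
server {
    listen 80;
    server_name example.com www.example.com;

    location / {
        return 301 https://$host$request_uri;
    }
}
```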

How do I fix an nginx reverse proxy to a WordPress Docker container redirecting the port for the user?

So I have a Raspberry Pi web server I have been experimenting with, which runs nginx to serve multiple sites. I want to run WordPress in a Docker container as a blog, but I am having trouble configuring the nginx + Docker WordPress setup correctly.
Here is my docker-compose.yml:
version: "3"
services:
  db:
    image: hypriot/rpi-mysql
    restart: always
    volumes:
      - db_data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: password
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: <password>
    networks:
      - wp
  wordpress:
    depends_on:
      - db
    image: wordpress
    restart: always
    volumes:
      - ./:/var/www/html/wp-content
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: <password>
    ports:
      - 8082:80
    networks:
      - wp
networks:
  wp:
volumes:
  db_data:
Here is my current nginx .conf for example.com:
server {
    client_max_body_size 32M;
    # Listen HTTP
    listen 80;
    server_name www.example.com example.com;
    # Redirect HTTP to HTTPS
    return 301 https://$host$request_uri;
}
server {
    client_max_body_size 32M;
    # Listen HTTPS
    listen 443 ssl;
    server_name example.com www.example.com;

    # SSL config
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # does not fix the issue
    port_in_redirect off;

    # Proxy Config
    location / {
        # My attempts at fixing the port issue (did not work in any combination)
        proxy_bind $host:443;
        proxy_redirect off;
        port_in_redirect off;
        absolute_redirect off;
        proxy_set_header Location $host:443;
        proxy_set_header Host $http_host:443;
        proxy_set_header X-Forwarded-Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://localhost:8082/;
        # an extra try despite my 8082 port not being open
        proxy_redirect https://example.com:8082/ https://example.com/;
    }

    # testing and looking at just the /wp-login.php "works" but without any of the content
    location ~ \.php {
        proxy_pass http://127.0.0.1:8082;
    }
}
My issue: upon visiting my example.com domain, I am redirected to example.com:8082 and don't get any of the content, and I've had a lot of trouble figuring out how to fix it. I have also tried just using HTTP on port 80, but that doesn't make a difference (unless I am on the local network, in which case it fetches the files locally).
Is there a simple thing that I am missing from the above nginx setup?
Is there a way to make Docker forward it on a different virtual port?
OK, so it looks like the problem was not so much with the Docker/nginx setup, but with WordPress itself. I made the mistake of completing the initial WordPress setup over [rpi.local.ip.address]:8082, and this address was saved in the configuration.
I ended up just resetting the volumes with docker-compose down --volumes, though this deletes all your data.
The real answer is the solution to the problem found here:
Docker: I can't map ports other than 80 to my WordPress container
I also made some modifications to the files, so the ones that work are below:
If these do not work for you either, then you can:
- reset the container with docker-compose down --volumes,
- remove ports: - 8082:80,
- forward to the IP address found by docker inspect [id-of-wordpress-container] with proxy_pass http://[docker-ip]:80/;
- then set up the WordPress install, and only afterwards re-add ports: - 8082:80, as this IP can change after a reboot.
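As an alternative to wiping the volumes, the two options WordPress saved during setup can usually be corrected in place. A hedged sketch of the query to run against the MySQL container (this assumes the default wp_ table prefix, and example.com stands in for your real domain):

```sql
-- siteurl and home are the values WordPress recorded from the
-- [rpi.local.ip.address]:8082 setup; pointing them at the public
-- domain stops the :8082 redirects without deleting any data.
UPDATE wp_options
SET option_value = 'https://example.com'
WHERE option_name IN ('siteurl', 'home');
```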
docker-compose.yml
version: "3"
services:
  db:
    image: mysql/mysql-server:8.0
    restart: always
    volumes:
      - db_data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: password
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: VNz5EHiZkec9mn
    networks:
      - wp
    command: '--default-authentication-plugin=mysql_native_password'
  wordpress:
    depends_on:
      - db
    image: wordpress
    restart: always
    volumes:
      - ./wp-content/:/var/www/html/wp-content
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: VNz5EHiZkec9mn
    networks:
      - wp
    ports:
      - 8082:80
networks:
  wp:
volumes:
  db_data:
/etc/nginx/sites-available/example.com.conf
There is an added redirect in case the 301 redirect was cached in the browser:
server {
    client_max_body_size 32M;
    # Listen HTTP
    listen 80;
    server_name www.example.com example.com;
    # Redirect HTTP to HTTPS
    return 301 https://$http_host$request_uri;
}
server {
    listen 8082 ssl;
    server_name example.com www.example.com;
    # SSL config
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    return 301 https://example.com;
}
server {
    client_max_body_size 32M;
    # Listen HTTPS
    listen 443 ssl;
    server_name example.com www.example.com;

    # SSL config
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_stapling on;
    ssl_stapling_verify on;

    port_in_redirect off;

    # Proxy Config
    location / {
        proxy_pass http://localhost:8082;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

nginx timeout after https proxy to localhost

I want to run one docker-compose stack with nginx that acts only as a proxy to services from other docker-compose stacks.
Here is the docker-compose.yml for my proxy:
version: '2'
services:
  storage:
    image: nginx:1.11.13
    entrypoint: /bin/true
    volumes:
      - ./config/nginx/conf.d:/etc/nginx/conf.d
      - /path_to_ssl_cert:/path_to_ssl_cert
  proxy:
    image: nginx:1.11.13
    ports:
      - "80:80"
      - "443:443"
    volumes_from:
      - storage
    network_mode: "host"
It should grab all connections to ports 80 and 443 and proxy them to the services specified in the ./config/nginx/conf.d directory.
Here is an example service, ./config/nginx/conf.d/domain_name.conf:
server {
    listen 80;
    listen 443 ssl;
    server_name domain_name.com;

    ssl_certificate /path_to_ssl_cert/cert;
    ssl_certificate_key /path_to_ssl_cert/privkey;

    return 301 https://www.domain_name.com$request_uri;
}
server {
    listen 80;
    server_name www.domain_name.com;
    return 301 https://www.domain_name.com$request_uri;

    # If you uncomment this section and comment out the return line, it works
    # location ~ {
    #     proxy_pass http://localhost:8888;
    #     # or proxy to https, doesn't matter
    #     #proxy_pass https://localhost:4433;
    # }
}
server {
    listen 443 ssl;
    server_name www.domain_name.com;

    ssl on;
    ssl_certificate /path_to_ssl_cert/cert;
    ssl_certificate_key /path_to_ssl_cert/privkey;

    location ~ {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Client-Verify SUCCESS;
        proxy_set_header X-Client-DN $ssl_client_s_dn;
        proxy_set_header X-SSL-Subject $ssl_client_s_dn;
        proxy_set_header X-SSL-Issuer $ssl_client_i_dn;
        proxy_pass https://localhost:4433;
        # like before
        # proxy_pass http://localhost:8888;
    }
}
It redirects all requests for http://domain_name.com, https://domain_name.com and http://www.domain_name.com to https://www.domain_name.com and proxies them to the specific localhost service.
Here is the docker-compose.yml for my specific service:
version: '2'
services:
  storage:
    image: nginx:1.11.13
    entrypoint: /bin/true
    volumes:
      - /path_to_ssl_cert:/path_to_ssl_cert
      - ./config/nginx/conf.d:/etc/nginx/conf.d
      - ./config/php:/usr/local/etc/php
      - ./config/php-fpm.d:/usr/local/etc/php-fpm.d
      - php-socket:/var/share/php-socket
  www:
    build:
      context: .
      dockerfile: ./Dockerfile_www
    image: domain_name_www
    ports:
      - "8888:80"
      - "4433:443"
    volumes_from:
      - storage
    links:
      - php
  php:
    build:
      context: .
      dockerfile: ./Dockerfile_php
    image: domain_name_php
    volumes_from:
      - storage
volumes:
  php-socket:
So when you go to http://www.domain_name.com:8888 or https://www.domain_name.com:4433 you get content. When you curl localhost:8888 or https://localhost:4433 from the server where Docker is running, you get content too.
And now my issue:
When I go to the browser and type domain_name.com, www.domain_name.com or https://www.domain_name.com, nothing happens. Even when I curl this domain from my local machine I get a timeout.
I have searched for "nginx proxy https to localhost" but nothing works for me.
I have a solution!
When I set network_mode: "host" in docker-compose.yml for my proxy, I assumed the ports: entries still applied, but they don't.
The proxy now runs in the host's network, so it binds local ports directly and the ports: entries in docker-compose.yml are ignored. That means I had to open ports 80 and 443 on my server manually.
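In other words, with network_mode: "host" the ports: section can simply be dropped, since Compose ignores (and warns about) port mappings in host mode; nginx binds 80 and 443 on the host directly. A sketch of the cleaned-up service:

```yaml
proxy:
  image: nginx:1.11.13
  network_mode: "host"   # container shares the host's network stack
  volumes_from:
    - storage
  # no ports: section -- published ports have no effect in host mode
```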

docker-compose error with ghost and an nginx proxy

So, I'm getting started with docker-compose. Right now, I'm having an issue with nginx proxying requests.
I have a container which uses the ghost image and is exposed on port 2368:
ghostblog:
  container_name: ghostblog
  image: ghost
  restart: always
  ports:
    - 2368:2368
  env_file:
    - ./config.env
  volumes:
    - "./petemsGhost/content/themes:/usr/src/ghost/content/themes"
    - "./petemsGhost/content/apps:/usr/src/ghost/content/apps"
    - "./petemsGhost/content/images:/usr/src/ghost/content/images"
    - "./petemsGhost/content/data:/usr/src/ghost/content/data"
    - "./petemsGhost/config:/var/lib/ghost"
And I'm linking that to an nginx container that is proxying requests to the container:
ghost_nginx:
  restart: always
  build: ./ghostNginx/
  ports:
    - 80:80
    - 443:443
  links:
    - 'ghostblog:ghostblog'
Inside that build, I copy over a bunch of stuff, keys, config etc:
Dockerfile
FROM centos:centos6

RUN yum install epel-release -y
RUN yum install -y nginx curl

# Delete defaults
RUN rm /etc/nginx/nginx.conf
RUN rm /etc/nginx/conf.d/default.conf

COPY nginx.conf /etc/nginx/nginx.conf
COPY sites-enabled/petersouter.co.uk.conf /etc/nginx/sites-available/petersouter.co.uk.conf
COPY conf.d/ghost_blog_petersouter.co.uk-upstream.conf /etc/nginx/conf.d/ghost_blog_petersouter.co.uk-upstream.conf
COPY petersouter.co.uk.crt /etc/nginx/petersouter.co.uk.crt
COPY petersouter.co.uk.key /etc/nginx/petersouter.co.uk.key

EXPOSE 80 443
CMD ["nginx", "-g", "daemon off;"]
/etc/nginx/conf.d/ghost_blog_petersouter.co.uk-upstream.conf
upstream ghost_blog_petersouter.co.uk {
    server ghostblog:2368 fail_timeout=10s;
}
/etc/nginx/sites-enabled/petersouter.co.uk.conf
# Redirect all non-SSL to SSL
server {
    listen 0.0.0.0:80;
    return 301 https://$server_name$request_uri;
}

# Main SSL Config Block
server {
    listen 0.0.0.0:443 ssl;

    ssl on;
    ssl_certificate /etc/nginx/petersouter.co.uk.crt;
    ssl_certificate_key /etc/nginx/petersouter.co.uk.key;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 5m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA;
    ssl_prefer_server_ciphers on;

    index index.html index.htm index.php;

    access_log /var/log/nginx/ssl-petersouter.co.uk.access.log combined;
    error_log /var/log/nginx/ssl-petersouter.co.uk.error.log;

    location / {
        proxy_pass http://ghost_blog_petersouter.co.uk;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_buffering off;
    }
}
And the linking is working, because I can see it from inside the nginx container:
$ docker exec -i -t petersouterblogcompose_ghost_nginx_1 bash
$ curl ghostblog:2368
Moved Permanently. Redirecting to https://petersouter.co.uk/
And outside of the container I can curl the ghost instance directly:
$ curl 0.0.0.0:2368
Moved Permanently. Redirecting to https://petersouter.co.uk/
But when I try to go to port 80, which should redirect, I get no response:
$ curl 0.0.0.0:80
curl: (52) Empty reply from server
I'm guessing that I've messed something up in the nginx config somewhere, as everything else seems to be working as intended.
Worked it out, it's always the simple things!
Note this line of the nginx Dockerfile:
COPY sites-enabled/petersouter.co.uk.conf /etc/nginx/sites-available/petersouter.co.uk.conf
I was copying into the sites-available folder, so the conf was never getting loaded! Fixed that:
COPY sites-enabled/petersouter.co.uk.conf /etc/nginx/sites-enabled/petersouter.co.uk.conf
And everything worked! :)
