nginx timeout after https proxy to localhost

I want to run one docker-compose stack with Nginx that acts only as a proxy to services from other docker-compose stacks.
Here is my docker-compose.yml with proxy:
version: '2'
services:
  storage:
    image: nginx:1.11.13
    entrypoint: /bin/true
    volumes:
      - ./config/nginx/conf.d:/etc/nginx/conf.d
      - /path_to_ssl_cert:/path_to_ssl_cert
  proxy:
    image: nginx:1.11.13
    ports:
      - "80:80"
      - "443:443"
    volumes_from:
      - storage
    network_mode: "host"
So it grabs all connections on ports 80 and 443 and proxies them to the services specified in the ./config/nginx/conf.d directory.
Here is an example service, ./config/nginx/conf.d/domain_name.conf:
server {
    listen 80;
    listen 443 ssl;
    server_name domain_name.com;
    ssl_certificate /path_to_ssl_cert/cert;
    ssl_certificate_key /path_to_ssl_cert/privkey;
    return 301 https://www.domain_name.com$request_uri;
}
server {
    listen 80;
    server_name www.domain_name.com;
    return 301 https://www.domain_name.com$request_uri;
    # If you uncomment this section and comment out the return line, it works
    # location ~ {
    #     proxy_pass http://localhost:8888;
    #     # or proxy to https, doesn't matter
    #     #proxy_pass https://localhost:4433;
    # }
}
server {
    listen 443 ssl;
    server_name www.domain_name.com;
    ssl on;
    ssl_certificate /path_to_ssl_cert/cert;
    ssl_certificate_key /path_to_ssl_cert/privkey;
    location ~ {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Client-Verify SUCCESS;
        proxy_set_header X-Client-DN $ssl_client_s_dn;
        proxy_set_header X-SSL-Subject $ssl_client_s_dn;
        proxy_set_header X-SSL-Issuer $ssl_client_i_dn;
        proxy_pass https://localhost:4433;
        # like before
        # proxy_pass http://localhost:8888;
    }
}
This redirects every request to http://domain_name.com, https://domain_name.com and http://www.domain_name.com to https://www.domain_name.com, and proxies it to the specific localhost service.
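The intended behaviour can be checked with curl once the proxy is up (a hedged illustration of the redirect the configuration above should produce):
curl -sI http://domain_name.com | grep -iE '^HTTP|^Location'
# expected: HTTP/1.1 301 Moved Permanently
# expected: Location: https://www.domain_name.com/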
Here is the docker-compose.yml of that specific service:
version: '2'
services:
  storage:
    image: nginx:1.11.13
    entrypoint: /bin/true
    volumes:
      - /path_to_ssl_cert:/path_to_ssl_cert
      - ./config/nginx/conf.d:/etc/nginx/conf.d
      - ./config/php:/usr/local/etc/php
      - ./config/php-fpm.d:/usr/local/etc/php-fpm.d
      - php-socket:/var/share/php-socket
  www:
    build:
      context: .
      dockerfile: ./Dockerfile_www
    image: domain_name_www
    ports:
      - "8888:80"
      - "4433:443"
    volumes_from:
      - storage
    links:
      - php
  php:
    build:
      context: .
      dockerfile: ./Dockerfile_php
    image: domain_name_php
    volumes_from:
      - storage
volumes:
  php-socket:
So when you go to http://www.domain_name.com:8888 or https://www.domain_name.com:4433 you get content. When you curl localhost:8888 or https://localhost:4433 from the server where Docker is running, you get content too.
And now my issue.
When I go to the browser and type domain_name.com, www.domain_name.com or https://www.domain_name.com, nothing happens. Even when I curl this domain from my local machine, I get a timeout.
I have searched for "nginx proxy https to localhost", but nothing I found works for me.

I have a solution!
When I set network_mode: "host" in the docker-compose.yml for my proxy, I assumed the ports: entries would still apply, but they do not.
The proxy now runs in my host's network, so it uses local ports directly and the ports: entries in docker-compose.yml are ignored. That means I have to open ports 80 and 443 on my server manually.
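For reference, a minimal sketch of how the proxy service ends up looking with host networking (my own summary of the fix, not a file from the original question), followed by one way to open the ports, assuming a firewalld-based host:
proxy:
  image: nginx:1.11.13
  # the container shares the host's network stack, so nginx binds ports 80/443 directly
  network_mode: "host"
  # no ports: section - port mappings are ignored in host network mode
  volumes_from:
    - storage

# open 80/443 in the host firewall (firewalld example; use ufw/iptables equivalents otherwise)
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-service=https
sudo firewall-cmd --reload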

Related

Redirected to login page after logging in to the MinIO console

I am running a service with MinIO, started with docker-compose:
version: '3.7'
services:
  service_minio:
    image: quay.io/minio/minio:latest
    container_name: service_minio
    restart: always
    ports:
      - 9000:9000
      - 9001:9001
    volumes:
      - ./volumes/minio/data:/data
    environment:
      - MINIO_ROOT_USER=minioadmin
      - MINIO_ROOT_PASSWORD=minioadmin
    command: server /data --console-address ":9001"
    networks:
      service_network:
        ipv4_address: 172.15.10.5
networks:
  service_network:
    external: true
Then I serve it with Nginx using the configuration below:
server {
    listen 3000 default_server;
    listen [::]:3000 default_server;
    server_name mydomain.com;
    location / {
        access_log /var/log/nginx/minio_access.log;
        error_log /var/log/nginx/minio_error.log;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_pass http://172.15.10.5:9001;
    }
}
The problem: when I log in to the MinIO Console, I am redirected back to the /login page without any error. I enter the correct credentials but end up back on the /login page.
Do you know where the problem is?
In most setups with a load balancer/proxy in front, MINIO_SERVER_URL needs to be set.
Please have a look at the documentation:
Environment Variable Documentation
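A minimal sketch of how that could be added to the compose file above (the URLs are assumptions based on this setup; MINIO_BROWSER_REDIRECT_URL is the companion variable for the console when it sits behind a proxy):
    environment:
      - MINIO_ROOT_USER=minioadmin
      - MINIO_ROOT_PASSWORD=minioadmin
      # assumed external URL of the S3 API (port 9000 is published directly here)
      - MINIO_SERVER_URL=http://mydomain.com:9000
      # assumed external URL of the console as served by nginx on port 3000
      - MINIO_BROWSER_REDIRECT_URL=http://mydomain.com:3000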

docker-compose nginx/certbot website does not load

I'm setting up a very simple Docker Compose script. It should set up nginx, create a Let's Encrypt certificate and then serve the nginx default website to the browser over HTTPS.
However, when I go to the website it loads for a long time and then doesn't give me any useful error message other than "yourfootprint.dk took too long to respond".
Creating certificates works, so I know that the certbot part is fine.
I also know that the server and the domain work: if I run a plain nginx container without the docker-compose setup and the nginx.dev.conf, the nginx default website is served fine.
I have a hunch that my nginx.dev.conf file is wrong and incoming requests run in an infinite loop.
./docker-compose.yml
version: '3'
services:
  webserver:
    image: nginx:stable
    ports:
      - 80:80
      - 443:443
    restart: unless-stopped
    volumes:
      - ./data/nginx/nginx.dev.conf:/etc/nginx/conf.d/default.conf:ro
      - ./data/certbot/www:/var/www/certbot/:ro
      - ./data/certbot/conf/:/etc/nginx/ssl/:ro
  certbot:
    image: certbot/certbot:latest
    volumes:
      - ./data/certbot/www/:/var/www/certbot/:rw
      - ./data/certbot/conf/:/etc/letsencrypt/:rw
./data/nginx/nginx.dev.conf
server {
    listen 80;
    listen [::]:80;
    server_name yourfootprint.dk www.yourfootprint.dk;
    server_tokens off;
    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }
    location / {
        return 301 https://yourfootprint.dk$request_uri;
    }
}
server {
    listen 443 default_server ssl http2;
    listen [::]:443 ssl http2;
    server_name yourfootprint.dk;
    ssl_certificate /etc/nginx/ssl/live/yourfootprint.dk/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/live/yourfootprint.dk/privkey.pem;
    location / {
        # ...
    }
}
If you already have the certificate in /data/certbot/conf then the solution is easy:
./docker-compose.yaml
services:
  app:
    {{your_configurations_here}}
  {{other_services...}}:
    {{other_services_configurations}}
  nginx:
    container_name: nginx
    image: nginx:latest
    restart: always
    environment:
      - DOMAIN
    depends_on:
      - app
    ports:
      - 80:80
      - 443:443
    volumes:
      - /data/nginx/templates:/etc/nginx/templates:ro
      - /data/certbot/www:/var/www/certbot/
      - /data/certbot/conf/:/etc/letsencrypt/
  certbot:
    container_name: certbot
    image: certbot/certbot:latest
    depends_on:
      - nginx
    command: >-
      certonly --reinstall --webroot --webroot-path=/var/www/certbot
      --email ${EMAIL} --agree-tos --no-eff-email
      -d ${DOMAIN}
    volumes:
      - /data/certbot/conf/:/etc/letsencrypt/
      - /data/certbot/www:/var/www/certbot/
/data/nginx/templates/default.template.conf
server {
    listen [::]:80;
    listen 80;
    server_name $DOMAIN; # $DOMAIN must be defined in the environment
    return 301 https://$host$request_uri;
}
./etc/nginx/templates/default.conf.template
server {
    listen [::]:443 ssl http2;
    listen 443 ssl http2;
    server_name $DOMAIN;
    ssl_certificate /etc/letsencrypt/live/$DOMAIN/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/$DOMAIN/privkey.pem;
    location ~ /.well-known/acme-challenge {
        allow all;
        root /var/www/html;
    }
    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_pass http://app:80;
    }
}
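These files rely on the templating support built into the official nginx image: at startup, files under /etc/nginx/templates ending in .template are passed through envsubst (only for variables defined in the container environment, such as DOMAIN here) and written to /etc/nginx/conf.d. A quick way to check the substituted result from the host (a hedged illustration; the service name matches the compose file above):
docker-compose exec nginx cat /etc/nginx/conf.d/default.conf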
If not, then I think you need to split the process into two phases: an initiation phase and a production phase. I describe that in detail here. The idea is to have one docker-compose file to initiate the Let's Encrypt certificate, and another docker-compose file to run the system and renew the certificate.
So without further ado, here is the file structure and content that is working really well for me (you still need to adapt the file locations and content to suit your needs):
./setup.sh
./docker-compose-initiate.yaml
./docker-compose.yaml
./etc/nginx/templates/default.conf.template
./etc/nginx/templates-initiate/default.conf.template
The setup has two phases:
In the first phase, "the initiation phase", we run an nginx container and a certbot container just to obtain the SSL certificate for the first time and store it in the host's ./etc/letsencrypt folder.
In the second phase, "the operation phase", we run all the services the app needs, including nginx, which this time uses the letsencrypt folder to serve HTTPS on port 443; a certbot container also runs (on demand) to renew the certificate, and we can add a cron job for that (a sample crontab is sketched after the script below). The setup.sh script is a simple convenience script that runs the commands one after another:
#!/bin/bash
# the script expects two arguments:
# - the domain name for which we are obtaining the ssl certificate
# - the email address associated with the ssl certificate
echo DOMAIN=$1 >> .env
echo EMAIL=$2 >> .env

# Phase 1 "Initiation"
docker-compose -f ./docker-compose-initiate.yaml up -d nginx
docker-compose -f ./docker-compose-initiate.yaml up certbot
docker-compose -f ./docker-compose-initiate.yaml down

# Phase 2 "Operation"
crontab ./etc/crontab
docker-compose -f ./docker-compose.yaml up -d
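The contents of ./etc/crontab are not shown here; a hypothetical sketch of what it could contain, assuming the compose project lives in /opt/app and renewal should run weekly:
# hypothetical ./etc/crontab (adjust the path and schedule to your setup)
# every Monday at 03:00: re-run the certbot service to renew the certificate,
# then reload nginx so it picks up the new certificate files
0 3 * * 1 docker-compose -f /opt/app/docker-compose.yaml up certbot && docker-compose -f /opt/app/docker-compose.yaml exec -T nginx nginx -s reload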
Phase 1: The ssl certificate initiation phase:
./docker-compose-initiate.yaml
version: "3"
services:
  nginx:
    container_name: nginx
    image: nginx:latest
    environment:
      - DOMAIN
    ports:
      - 80:80
    volumes:
      - ./etc/nginx/templates-initiate:/etc/nginx/templates:ro
      - ./etc/letsencrypt:/etc/letsencrypt:ro
      - ./certbot/data:/var/www/certbot
  certbot:
    container_name: certbot
    image: certbot/certbot:latest
    depends_on:
      - nginx
    command: >-
      certonly --reinstall --webroot --webroot-path=/var/www/certbot
      --email ${EMAIL} --agree-tos --no-eff-email
      -d ${DOMAIN}
    volumes:
      - ./etc/letsencrypt:/etc/letsencrypt
      - ./certbot/data:/var/www/certbot
./etc/nginx/templates-initiate/default.conf.template
server {
    listen [::]:80;
    listen 80;
    server_name $DOMAIN;
    location ~ /.well-known/acme-challenge {
        allow all;
        root /var/www/certbot;
    }
}
Phase 2: The operation phase
./docker-compose.yaml
services:
  app:
    {{your_configurations_here}}
  {{other_services...}}:
    {{other_services_configurations}}
  nginx:
    container_name: nginx
    image: nginx:latest
    restart: always
    environment:
      - DOMAIN
    depends_on:
      - app
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./etc/nginx/templates:/etc/nginx/templates:ro
      - ./etc/letsencrypt:/etc/letsencrypt
      - ./certbot/data:/var/www/certbot
      - /var/log/nginx:/var/log/nginx
  certbot:
    container_name: certbot
    image: certbot/certbot:latest
    depends_on:
      - nginx
    command: >-
      certonly --reinstall --webroot --webroot-path=/var/www/certbot
      --email ${EMAIL} --agree-tos --no-eff-email
      -d ${DOMAIN}
    volumes:
      - ./etc/letsencrypt:/etc/letsencrypt
      - ./certbot/data:/var/www/certbot
./etc/nginx/templates/default.conf.template
server {
    listen [::]:80;
    listen 80;
    server_name $DOMAIN;
    return 301 https://$host$request_uri;
}
server {
    listen [::]:443 ssl http2;
    listen 443 ssl http2;
    server_name $DOMAIN;
    ssl_certificate /etc/letsencrypt/live/$DOMAIN/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/$DOMAIN/privkey.pem;
    location ~ /.well-known/acme-challenge {
        allow all;
        root /var/www/html;
    }
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_pass http://app:80;
    }
}

How do I fix an nginx reverse proxy to a WordPress Docker container redirecting the port for the user?

So I have a Raspberry Pi web server I have been experimenting with, which runs nginx to serve multiple sites. I want to run WordPress in a Docker container as a blog, but I am having trouble configuring the nginx + Docker WordPress setup correctly.
Here is my docker-compose.yml:
version: "3"
services:
  db:
    image: hypriot/rpi-mysql
    restart: always
    volumes:
      - db_data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: password
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: <password>
    networks:
      - wp
  wordpress:
    depends_on:
      - db
    image: wordpress
    restart: always
    volumes:
      - ./:/var/www/html/wp-content
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: <password>
    ports:
      - 8082:80
    networks:
      - wp
networks:
  wp:
volumes:
  db_data:
Here is my current nginx .conf for example.com:
server {
    client_max_body_size 32M;
    # Listen HTTP
    listen 80;
    server_name www.example.com example.com;
    # Redirect HTTP to HTTPS
    return 301 https://$host$request_uri;
}
server {
    client_max_body_size 32M;
    # Listen HTTPS
    listen 443 ssl;
    server_name example.com www.example.com;
    # SSL config
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    # does not fix the issue
    port_in_redirect off;
    # Proxy config
    location / {
        # My attempts at fixing the port issue (did not work in any combination)
        proxy_bind $host:443;
        proxy_redirect off;
        port_in_redirect off;
        absolute_redirect off;
        proxy_set_header Location $host:443;
        proxy_set_header Host $http_host:443;
        proxy_set_header X-Forwarded-Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://localhost:8082/;
        # an extra try despite my 8082 port not being open
        proxy_redirect https://example.com:8082/ https://example.com/;
    }
    # testing and looking at just /wp-login.php "works", but without any of the content
    location ~ \.php {
        proxy_pass http://127.0.0.1:8082;
    }
}
My issue: upon visiting my example.com domain, I am redirected to example.com:8082 and don't get any of the content, and I've had a lot of trouble figuring out how to fix it. I have also tried just using HTTP on port 80, but that doesn't make a difference (unless I am on the local network, in which case it gets the files locally).
Is there a simple thing that I am missing in the nginx setup above?
Is there a way to make Docker forward it on a different virtual port?
OK, so it looks like the problem was not so much with the Docker/nginx setup, but with WordPress. I made the mistake of completing the initial WordPress setup over [rpi.local.ip.address]:8082, and this was saved in the configuration.
I ended up just resetting the volumes with docker-compose down --volumes, though this deletes all your data.
The real answer is the solution to the problem found here:
Docker: I can't map ports other than 80 to my WordPress container
I also made some modifications to the files, so the ones that work are below.
If these do not work for you either, then you can:
- reset the container with docker-compose down --volumes,
- remove ports: - 8082:80,
- forward to the IP address found by docker inspect [id-of-wordpress-container] (see the sketch after this list) with proxy_pass http://[docker-ip]:80/;,
- then set up the WordPress install, and only afterwards re-add ports: - 8082:80, as this IP can change after a reboot.
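That IP lookup can be done like this (a small sketch; replace the container name with your WordPress container's ID or name):
# prints the IP address of the container's first attached network
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' wordpress-container-id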
docker-compose.yml
version: "3"
services:
  db:
    image: mysql/mysql-server:8.0
    restart: always
    volumes:
      - db_data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: password
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: VNz5EHiZkec9mn
    networks:
      - wp
    command: '--default-authentication-plugin=mysql_native_password'
  wordpress:
    depends_on:
      - db
    image: wordpress
    restart: always
    volumes:
      - ./wp-content/:/var/www/html/wp-content
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: VNz5EHiZkec9mn
    networks:
      - wp
    ports:
      - 8082:80
networks:
  wp:
volumes:
  db_data:
/etc/nginx/sites-available/example.com.conf
There is an added redirect in case the 301 redirect was cached in the browser:
server {
    client_max_body_size 32M;
    # Listen HTTP
    listen 80;
    server_name www.example.com example.com;
    # Redirect HTTP to HTTPS
    return 301 https://$http_host$request_uri;
}
server {
    listen 8082 ssl;
    server_name example.com www.example.com;
    # SSL config
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    return 301 https://example.com;
}
server {
    client_max_body_size 32M;
    # Listen HTTPS
    listen 443 ssl;
    server_name example.com www.example.com;
    # SSL config
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_stapling on;
    ssl_stapling_verify on;
    port_in_redirect off;
    # Proxy config
    location / {
        proxy_pass http://localhost:8082;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

POST requests to Nexus through Nginx in a customized web context return error 400 "POST is not supported"

I'm trying to set up Nexus 3 behind an Nginx reverse proxy. Nexus and Nginx run in Docker containers launched with docker-compose on a CentOS 7.3 host. All Docker images are the latest available.
Nexus listens on its default port 8081 inside its container; this port is exposed as 18081 on the Docker host. Nexus is configured to use the /nexus web context.
Nginx listens on port 80 inside its container, which is also exposed on the Docker host.
I just want to reach the Nexus Repository Manager in a local Firefox at "localhost/nexus".
Here is the configuration:
docker-compose.yml:
version: '2'
networks:
  picnetwork:
    driver: bridge
services:
  nginx:
    image: nginx:latest
    restart: always
    hostname: nginx
    ports:
      - "80:80"
    networks:
      - picnetwork
    volumes:
      - /opt/cip/nginx:/etc/nginx/conf.d
    depends_on:
      - nexus
  nexus:
    image: sonatype/nexus3:latest
    restart: always
    hostname: nexus
    ports:
      - "18081:8081"
    networks:
      - picnetwork
    volumes:
      - /opt/cip/nexus:/nexus-data
    environment:
      - NEXUS_CONTEXT=nexus
Nginx default.conf (/opt/cip/nginx/default.conf on the Docker host, which is /etc/nginx/conf.d/default.conf in the Nginx container):
proxy_send_timeout 120;
proxy_read_timeout 300;
proxy_buffering off;
tcp_nodelay on;
server {
    listen 80;
    server_name localhost;
    client_max_body_size 1G;
    location /nexus {
        proxy_pass http://nexus:8081/nexus/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
The strange thing is that when the web context is / (location /, proxy_pass without /nexus, NEXUS_CONTEXT=) it works fine, but when the web context is /nexus (the configuration shown here) POST requests return "400 HTTP method POST is not supported by this URL". However, if I use "localhost:18081/nexus" in the second case, it works fine.
Is this a Nexus bug, an Nginx bug, or am I missing something?
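One hedged guess worth checking is the prefix/trailing-slash pairing. With location /nexus and proxy_pass http://nexus:8081/nexus/, a request for /nexus/some/path is forwarded as /nexus//some/path (nginx replaces the matched prefix /nexus with /nexus/), and some backends reject such URLs. A sketch of a consistent pairing, assuming that double slash is the culprit:
location /nexus/ {
    # prefix and proxied URI both end in a slash, so the remaining path is passed through unchanged
    proxy_pass http://nexus:8081/nexus/;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
# optional: send bare /nexus to /nexus/
location = /nexus {
    return 301 /nexus/;
}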

Flask, Gunicorn, NGINX, Docker: what is the proper way to configure SERVER_NAME and proxy_pass?

I set up a Docker project using Flask, Gunicorn and NGINX, and it works fine as long as I don't set SERVER_NAME in Flask's setting.py.
The current config is :
gunicorn
gunicorn -b 0.0.0.0:5000
docker-compose.yml
services:
  application:
    #restart: always
    build: .
    expose:
      - "5000"
    ports:
      - "5000:5000"
    volumes:
      - .:/code
    links:
      - db
  nginx:
    restart: always
    build: ./nginx
    links:
      - application
    expose:
      - 8080
    ports:
      - "8880:8080"
NGINX .conf
server {
    listen 8080;
    server_name application;
    charset utf-8;
    location / {
        proxy_pass http://application:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
Then I set SERVER_NAME in Flask's setting.py:
SERVER_NAME = '0.0.0.0:5000'
When I enter the URL 0.0.0.0:8880 in my browser, I get a 404 response from nginx. What should SERVER_NAME in Flask's setting.py properly be?
Thanks in advance.
I finally found the solution:
I have to specify the port in proxy_set_header:
proxy_set_header Host $host:5000;
It doesn't make sense to set an IP for SERVER_NAME. SERVER_NAME matches requests against that hostname, and is useful for handling subdomains and for supporting URL generation within an application context (for instance, say you have a background thread that needs to generate URLs but has no request context).
SERVER_NAME should match the domain where the application is deployed.
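A hedged sketch tying the two answers together on the nginx side (assuming SERVER_NAME stays '0.0.0.0:5000' as in the question; the forwarded Host header is what Flask matches against SERVER_NAME):
location / {
    proxy_pass http://application:5000;
    # Flask compares the incoming Host header with SERVER_NAME,
    # so forward it with the :5000 suffix used in setting.py (assumption for this setup)
    proxy_set_header Host $host:5000;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}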
