Like many people, I have a problem with the following error when I open the website (blog.mydomain.de):
502 Bad Gateway
nginx/1.14.2
2020/03/14 23:59:08 [error] 7#7: *1 connect() failed (111: Connection refused) while connecting to upstream, client: $IP, server: blog.mydomain.de, request: "GET / HTTP/2.0", upstream: "https://192.168.160.5:443/", host: "blog.mydomain.de"
So my problem is with WordPress. I'm also showing the NextCloud config because it works without any problems. I know the WordPress nginx config should normally contain more, but I wanted to see whether I still get this error with a minimal config that omits fastcgi and the rest.
worker_processes auto;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 1024;
}
http {
gzip on;
gzip_min_length 1000;
gzip_proxied expired no-cache no-store private auth;
gzip_types text/plain text/css text/xml
application/javascript application/json application/xml application/rss+xml image/svg+xml;
server_names_hash_bucket_size 64;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
more_clear_headers 'server';
ssl_certificate /etc/letsencrypt/live/mydomain.de-0001/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/mydomain.de-0001/privkey.pem;
ssl_trusted_certificate /etc/letsencrypt/live/mydomain.de-0001/chain.pem;
ssl_dhparam /etc/ssl/dhparam.pem;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
ssl_ciphers "EECDH-AESGCM:EDH+ESGCM:AES256+EECDH:AES256+EDH";
ssl_prefer_server_ciphers on;
add_header Strict-Transport-Security "max-age=31536000; includeSubdomains; preload";
server {
listen 80;
listen [::]:80;
server_name blog.mydomain.de cloud.mydomain.de;
return 301 https://$host$request_uri;
#return 301 https://$server_name$request_uri;
}
# NextCloudPi
server {
server_name cloud.mydomain.de;
listen 443 ssl http2;
listen [::]:443 ssl http2;
client_max_body_size 100G;
underscores_in_headers on;
location / {
proxy_headers_hash_max_size 512;
proxy_headers_hash_bucket_size 64;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
add_header Front-End-Https on;
proxy_pass https://nextcloudpi;
}
}
# NextCloudPi Konfiguration Web-Interface
server {
server_name cloud.mydomain.de;
listen 4443 ssl http2;
listen [::]:4433 ssl http2;
location / {
more_clear_headers 'upgrade';
more_clear_headers 'Strict-Transport-Security';
proxy_ssl_verify off;
proxy_pass https://nextcloudpi:4443;
proxy_pass_header Authorization;
proxy_set_header 'X-Forwarded-Host' cloud.mydomain.de;
proxy_set_header 'X-Forwarded-Proto' https;
proxy_set_header 'X-Forwarded-For' $remote_addr;
proxy_set_header 'X-Forwarded-IP' $remote_addr;
}
}
# WordPress
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name blog.mydomain.de;
client_max_body_size 200m;
underscores_in_headers on;
location / {
proxy_pass http://wordpress;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
}
}
}
I'm using this with Docker containers. The compose file looks like this:
version: "3"
networks:
nextcloudpi:
services:
nginx:
restart: always
container_name: nginx
image: cptdaydreamer/nginx:latest
ports:
- 80:80
- 443:443
- 4443:4443
- 6800:6800
volumes:
- /media/storage/nginx:/var/log/nginx
- /etc/ssl:/etc/ssl
- /etc/letsencrypt/live:/etc/letsencrypt/live
- /etc/letsencrypt/archive:/etc/letsencrypt/archive
links:
- wordpress
depends_on:
- nextcloudpi
networks:
- nextcloudpi
- default
nextcloudpi:
restart: always
container_name: nextcloudpi
image: cptdaydreamer/nextcloudpi:latest
expose:
- 80
- 443
- 4443
- 6800
volumes:
- /media/storage/data:/data
- /etc/localtime:/etc/localtime:ro
networks:
- nextcloudpi
portainer:
image: portainer/portainer
command: -H unix:///var/run/docker.sock
restart: always
ports:
- 9001:9000
- 8000:8000
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /media/storage/portainer:/data
container_name: portainer
db:
container_name: mariadb
image: mariadb:latest
#ports:
# - 3306:3306
volumes:
- /media/storage/mariadb:/var/lib/mysql
restart: always
environment:
MYSQL_ROOT_PASSWORD: $PRIVATE
MYSQL_DATABASE: $PRIVATE
MYSQL_USER: $PRIVATE
MYSQL_PASSWORD: $PRIVATE
wordpress:
container_name: wordpress
links:
- db
#ports:
# - 9000:9000
depends_on:
- db
image: wordpress:latest
expose:
- "80"
restart: always
volumes:
- /media/storage/wordpress:/var/www/html
environment:
WORDPRESS_DB_HOST: db:3306
#WORDPRESS_DB_HOST: db
WORDPRESS_DB_USER: $PRIVATE
WORDPRESS_DB_PASSWORD: $PRIVATE
WORDPRESS_DB_NAME: $PRIVATE
WORDPRESS_TABLE_PREFIX: $PRIVATE
I don't know what the exact problem is. The logs of the WordPress Docker container show:
[15-Mar-2020 00:50:24] NOTICE: fpm is running, pid 1
[15-Mar-2020 00:50:24] NOTICE: ready to handle connections
Any ideas?
Updated on request:
The WordPress image is now latest instead of 7.3-fpm.
The nginx.conf above is the currently used one.
Try changing the upstream to wordpress:9000 (fastcgi_pass instead of proxy_pass) and change the Nginx config to this:
location / {
# First attempt to serve request as file, then
# as directory, then fall back to displaying a 404.
try_files $uri $uri/ /index.php?$args;
}
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass wordpress:9000;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param SCRIPT_NAME $fastcgi_script_name;
}
There's a mistake in your understanding. The wordpress php-fpm image only exposes the PHP-FPM service on port 9000; nothing is running at https://wordpress:443, so Nginx throws the 502 status. That's why you should use fastcgi_pass to connect to PHP-FPM instead of proxy_pass, whereas the NextCloud API already exposes HTTPS on port 4443.
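One practical point when switching to fastcgi_pass: nginx resolves $document_root$fastcgi_script_name itself, so the nginx container has to see the same WordPress files as the PHP-FPM container. A minimal sketch of how the two services could share them, assuming an fpm variant of the image and a named volume wp_data (both are assumptions, not taken from your compose file):
services:
  nginx:
    image: cptdaydreamer/nginx:latest
    volumes:
      # same files as the wordpress container, so try_files and
      # SCRIPT_FILENAME resolve to real paths inside the nginx container
      - wp_data:/var/www/html:ro
  wordpress:
    image: wordpress:fpm          # an fpm variant listens on 9000; the default Apache image does not
    expose:
      - "9000"                    # PHP-FPM port targeted by fastcgi_pass wordpress:9000
    volumes:
      - wp_data:/var/www/html
volumes:
  wp_data: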
When you split your stacks into two docker-compose.yml files, everything becomes clearer and better separated.
-- wordpress/
--- docker-compose.yml
--- data/
-- nextcloud/
--- docker-compose.yml
--- data/
This is how Docker works.
From my experience, when using Docker, keep each stack standalone: Nextcloud with its own database, then another stack with WordPress and another database instance. That takes full advantage of Docker and keeps the applications separate from each other.
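For instance, the WordPress stack on its own could look roughly like the sketch below (service names, volume paths, and the placeholder credentials are illustrative, not a drop-in file):
# wordpress/docker-compose.yml: one self-contained stack (illustrative)
version: "3"
services:
  db:
    image: mariadb:latest
    restart: always
    volumes:
      - ./data/db:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: change-me
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: change-me
  wordpress:
    image: wordpress:fpm
    restart: always
    depends_on:
      - db
    expose:
      - "9000"
    volumes:
      - ./data/html:/var/www/html
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: change-me
      WORDPRESS_DB_NAME: wordpress
The nextcloud/ directory would then hold its own docker-compose.yml and database service in the same way.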
Related
I have a local Docker setup with nginx as a reverse proxy, self-signed SSL certs, MariaDB, and WordPress.
Everything works well except when fetching resources on the local domain.
Let's say the domain name is myapp.local. I have added it to /etc/hosts and the site loads on this domain over https.
The problem occurs when PHP functions like file_get_contents() or simplexml_load_file() fetch local assets.
For example: file_get_contents('https://myapp.local/icon.svg');
Then I get a warning:
Failed to open stream: Connection refused
Here is my docker-compose file:
${DOMAIN} is set to myapp.local in the .env file.
version: '3.6'
services:
nginx:
container_name: myapp-nginx
image: nginx:latest
ports:
- 80:80
- 443:443
volumes:
- ./config/nginx.conf:/tmp/default.template
- ./certs:/etc/certs
- wp_data:/var/www/html:rw,cached
- ./www:/var/www/html/wp-content
depends_on:
- wordpress
restart: always
entrypoint: /bin/bash -c 'cat /tmp/default.template | sed "s/\\\$$domain/${DOMAIN}/g" > /etc/nginx/conf.d/default.conf && nginx -g "daemon off;"'
networks:
webnet:
aliases:
- myapp.local
mysql:
container_name: myapp-mysql
image: mariadb:latest
volumes:
- ./db_data:/var/lib/mysql
- ./config/db.cnf:/etc/mysql/conf.d/db.cnf
environment:
MYSQL_ROOT_PASSWORD: root
MYSQL_USER: root
MYSQL_PASSWORD: root
MYSQL_DATABASE: myapp
restart: always
ports:
- 3306:3306
networks:
- webnet
wordpress:
container_name: myapp-wordpress
image: wordpress:php8.0-fpm
volumes:
- ./config/php.ini:/usr/local/etc/php/conf.d/php.ini
- wp_data:/var/www/html:rw,cached
- ./www:/var/www/html/wp-content
depends_on:
- mysql
restart: always
environment:
WORDPRESS_DB_NAME: myapp
WORDPRESS_TABLE_PREFIX: wp_
WORDPRESS_DB_HOST: mysql
WORDPRESS_DB_USER: root
WORDPRESS_DB_PASSWORD: root
WORDPRESS_DEBUG: 1
networks:
- webnet
extra_hosts:
- "myapp.local:127.0.0.1"
networks:
webnet:
external: true
driver: bridge
volumes:
db_data: {}
wp_data: {}
nginx conf:
server {
listen 80;
listen [::]:80;
server_name $domain;
return 301 https://$host$request_uri;
}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name $domain www.$domain;
ssl_certificate /etc/certs/$domain.pem;
ssl_certificate_key /etc/certs/$domain-key.pem;
add_header Strict-Transport-Security "max-age=31536000" always;
ssl_session_cache shared:SSL:20m;
ssl_session_timeout 10m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers "ECDH+AESGCM:ECDH+AES256:ECDH+AES128:!ADH:!AECDH:!MD5;";
root /var/www/html;
index index.php;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
gzip on;
gzip_disable "msie6";
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_min_length 0;
gzip_types text/plain application/javascript text/css text/xml application/xml application/xml+rss text/javascript application/vnd.ms-fontobject application/x-font-ttf font/opentype;
client_max_body_size 100M;
location ~ /.well-known/acme-challenge {
allow all;
root /var/www/html;
}
location / {
try_files $uri $uri/ /index.php$is_args$args;
}
location ~ \.php$ {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass wordpress:9000;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $fastcgi_path_info;
fastcgi_read_timeout 300;
}
location ~ /\.ht {
deny all;
}
location = /favicon.ico {
log_not_found off; access_log off;
}
location = /robots.txt {
log_not_found off; access_log off; allow all;
}
location ~* \.(css|gif|ico|jpeg|jpg|js|png)$ {
expires max;
log_not_found off;
}
}
I've been struggling with this for weeks. I've tried numerous options and I'm stuck. What am I missing here? Any help is appreciated.
P.S.: rewriting or swapping the functions isn't an option, since these calls come from third-party plugins.
I have successfully set up a WordPress site running behind a dockerized nginx. When the site is up and running, I can go to the home page https://my_domain.com, or to any link under wp-admin/..., without any problem.
But when I go to https://my_domain.com/sample-page or https://my_domain.com/post-id, it immediately redirects to the root domain http://my_domain.com.
In short: with WordPress behind nginx, post and page URLs automatically redirect to the root domain.
The exception is /wp-admin/, which redirects correctly to https://my_domain.com/wp-admin/login.php if not logged in and to https://my_domain.com/wp-admin/ if logged in.
Here is my nginx config at /nginx/default.conf:
server {
listen 80;
listen [::]:80;
server_name my_domain.com www.my_domain.com;
location / {
return 301 https://my_domain.com$request_uri;
}
}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name my_domain.com www.my_domain.com;
index index.php index.html index.htm;
root /var/www/html/wordpress;
ssl on;
server_tokens off;
ssl_certificate /etc/nginx/ssl/live/my_domain.com/fullchain.pem;
ssl_certificate_key /etc/nginx/ssl/live/my_domain.com/privkey.pem;
ssl_dhparam /etc/nginx/dhparam/dhparam-2048.pem;
ssl_buffer_size 8k;
ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
ssl_prefer_server_ciphers on;
ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header X-Content-Type-Options "nosniff" always;
add_header Referrer-Policy "no-referrer-when-downgrade" always;
add_header Content-Security-Policy "default-src * data: 'unsafe-eval' 'unsafe-inline'" always;
# add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
# enable strict transport security only if you understand the implications
location / {
try_files $uri $uri/ /index.php$is_args$args;
proxy_pass http://wordpress_host:80;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-Proto $scheme;
}
location ~ \.php$ {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
proxy_pass http://wordpress_host:80;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $fastcgi_path_info;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-Proto $scheme;
}
location ~ /\.ht {
deny all;
}
location = /favicon.ico {
log_not_found off; access_log off;
}
location = /robots.txt {
log_not_found off; access_log off; allow all;
}
location ~* \.(css|gif|ico|jpeg|jpg|js|png)$ {
expires max;
log_not_found off;
}
}
I also have this configuration in wp-config.php:
define('FORCE_SSL_ADMIN', true);
if ( isset( $_SERVER['HTTP_X_FORWARDED_PROTO'] ) && $_SERVER['HTTP_X_FORWARDED_PROTO'] == 'https')
$_SERVER['HTTPS']='on';
define('WP_SITEURL', 'https://www.my_domain.com/');
define('WP_HOME', 'https://www.my_domain.com/');
Update:
Here is the docker-compose file:
version: '3'
services:
nginx:
image: nginx:stable-alpine
ports:
- "80:80" # nginx listen on 80
- "443:443"
volumes:
- ./nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
- ./wordpress/app:/var/www/html/wordpress
db:
image: mysql:8.0
container_name: db-example
restart: unless-stopped
env_file: ./wordpress/app/.env
environment:
- MYSQL_DATABASE=example
volumes:
- ./wordpress/dbdata:/var/lib/mysql
#- ./wordpress/db/db.sql:/docker-entrypoint-initdb.d/install_wordpress.sql #if you have db.sql of project input here
command: '--default-authentication-plugin=mysql_native_password'
wordpress_host:
depends_on:
- db
image: wordpress
container_name: wordpress_host
ports:
- "8080:80"
restart: unless-stopped
env_file: ./wordpress/app/.env
environment:
- WORDPRESS_DB_HOST=db:3306
- WORDPRESS_DB_USER=root
- WORDPRESS_DB_PASSWORD=root
- WORDPRESS_DB_NAME=example
volumes:
- ./wordpress/app:/var/www/html/wordpress
volumes:
wordpress-host:
dbdata:
.env file:
MYSQL_ROOT_PASSWORD=root
MYSQL_USER=example
MYSQL_PASSWORD=password
I have a fresh installation of Artifactory 7.2.1 (Docker based), which is working fine, but I want to access it via an nginx proxy, and that's not working.
My Artifactory is running at http://192.168.211.207:8082/.
The custom base URL is set to http://192.168.211.207:8081/artifactory, which redirects me to http://192.168.211.207:8082/.
Now, I have an nginx server running on the same machine, also via Docker.
When I try to access:
http://192.168.211.207 -> redirects me to https://192.168.211.207/artifactory + 502 Bad Gateway
https://192.168.211.207 -> redirects me to https://192.168.211.207/ui + 502 Bad Gateway
http://192.168.211.207/artifactory -> redirects to https + 502 Bad Gateway
https://192.168.211.207/artifactory -> 502 Bad Gateway
I do not really understand what is behind port 8081, since I am not able to use it under any circumstances. Port 8082 is working, but not behind the nginx proxy.
Here is my docker-compose file:
version: '2'
services:
artifactory:
image: docker.bintray.io/jfrog/artifactory-pro:7.2.1
container_name: artifactory
ports:
- 8081:8081
- 8082:8082
volumes:
- /data/artifactory:/var/opt/jfrog/artifactory
restart: always
ulimits:
nproc: 65535
nofile:
soft: 32000
hard: 40000
nginx:
image: docker.bintray.io/jfrog/nginx-artifactory-pro:7.2.1
container_name: nginx
ports:
- 80:80
- 443:443
depends_on:
- artifactory
links:
- artifactory
volumes:
- /data/nginx:/var/opt/jfrog/nginx
environment:
- ART_BASE_URL=http://localhost:8081/artifactory
- SSL=true
# Set SKIP_AUTO_UPDATE_CONFIG=true to disable auto loading of NGINX conf
#- SKIP_AUTO_UPDATE_CONFIG=true
restart: always
ulimits:
nproc: 65535
nofile:
soft: 32000
hard: 40000
and here is my nginx config file:
ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
ssl_certificate /var/opt/jfrog/nginx/ssl/example.crt;
ssl_certificate_key /var/opt/jfrog/nginx/ssl/example.key;
ssl_session_cache shared:SSL:1m;
ssl_prefer_server_ciphers on;
## server configuration
server {
listen 443 ssl;
listen 80 ;
server_name ~(?<repo>.+)\.artifactory artifactory;
if ($http_x_forwarded_proto = '') {
set $http_x_forwarded_proto $scheme;
}
## Application specific logs
## access_log /var/log/nginx/artifactory-access.log timing;
## error_log /var/log/nginx/artifactory-error.log;
if ( $repo != "" ){
rewrite ^/(v1|v2)/(.*) /artifactory/api/docker/$repo/$1/$2;
}
rewrite ^/$ /ui/ redirect;
rewrite ^/ui$ /ui/ redirect;
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
chunked_transfer_encoding on;
client_max_body_size 0;
location / {
proxy_read_timeout 2400s;
proxy_pass_header Server;
proxy_cookie_path ~*^/.* /;
proxy_pass http://localhost:8082;
proxy_set_header X-JFrog-Override-Base-Url $http_x_forwarded_proto://$host:$server_port;
proxy_set_header X-Forwarded-Port $server_port;
proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
location ~ ^/artifactory/ {
proxy_pass http://localhost:8082;
}
}
}
I can't figure out what I am doing wrong here, but it's possible I'm missing something, since I am not an nginx expert.
Does anyone spot the issue?
Does anyone have an example config file for nginx and Artifactory 7.x?
Thank you all for the answers. I was able to get in touch with support, and after talking with a specialist they confirmed that version 7.x no longer supports the web context; therefore, in my case, the only way to run two Artifactory instances was to create separate subdomains.
To be clear for future visitors of this topic: JFrog Support confirmed to me that starting with version 7.0, Artifactory no longer supports the /webcontext feature, and they don't plan to support it.
Therefore mydomain.com/artifactory-one and mydomain.com/artifactory-two are no longer possible; you have to do it with subdomains:
mydomain.com/artifactory-one -> artifactory-one.mydomain.com
mydomain.com/artifactory-two -> artifactory-two.mydomain.com
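A rough sketch of what the subdomain-based nginx side could look like, assuming two separate Artifactory containers reachable as artifactory-one and artifactory-two (hypothetical names), each serving on port 8082; SSL certificate directives omitted for brevity:
server {
    listen 443 ssl;
    server_name artifactory-one.mydomain.com;
    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://artifactory-one:8082;   # first instance
    }
}
server {
    listen 443 ssl;
    server_name artifactory-two.mydomain.com;
    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://artifactory-two:8082;   # second instance
    }
}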
The issue is probably here. Since you are running nginx in a Docker container, it can't correctly resolve this: proxy_pass http://localhost:8082; (inside the nginx container, localhost refers to the nginx container itself, not to Artifactory).
Use an IP instead. It worked for me.
Try this:
location ~ ^/artifactory/ {
proxy_pass http://127.0.0.1:8081;
}
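An alternative that avoids hard-coding an address: since nginx and Artifactory are on the same compose network (and even linked), the compose service name can be used, as the later answer's config also does; Docker's embedded DNS resolves it to the container's current IP:
location ~ ^/artifactory/ {
    # "artifactory" is the service name from the docker-compose file above
    proxy_pass http://artifactory:8081;
}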
Here is my nginx reverse proxy configuration, with an AWS NLB in front of the nginx reverse proxy:
ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
ssl_certificate /var/opt/jfrog/nginx/ssl/tls.crt;
ssl_certificate_key /var/opt/jfrog/nginx/ssl/tls.key;
ssl_session_cache shared:SSL:1m;
ssl_prefer_server_ciphers on;
## server configuration
server {
listen 443 ssl;
listen 80;
server_name ~(?<repo>.+)\.artifactory artifactory;
if ($http_x_forwarded_proto = '') {
set $http_x_forwarded_proto $scheme;
}
## Application specific logs
## access_log /var/log/nginx/artifactory-access.log timing;
## error_log /var/log/nginx/artifactory-error.log;
rewrite ^/$ /ui/ redirect;
rewrite ^/ui$ /ui/ redirect;
rewrite ^/artifactory/?$ / redirect;
if ( $repo != "" ) {
rewrite ^/(v1|v2)/(.*) /artifactory/api/docker/$repo/$1/$2 break;
}
chunked_transfer_encoding on;
client_max_body_size 0;
location / {
proxy_read_timeout 2400;
proxy_pass_header Server;
proxy_cookie_path ~*^/.* /;
proxy_buffer_size 128k;
proxy_buffers 40 128k;
proxy_busy_buffers_size 128k;
proxy_pass http://artifactory:8082/;
proxy_set_header X-JFrog-Override-Base-Url $http_x_forwarded_proto://$host;
proxy_set_header Host $http_host;
add_header Strict-Transport-Security always;
location /artifactory/ {
if ( $request_uri ~ ^/artifactory/(.*)$ ) {
proxy_pass http://artifactory:8081/artifactory/$1;
}
proxy_pass http://artifactory:8081/artifactory/;
}
}
}
I set up docker-nginx with docker-gen in a docker-compose file:
version: '2'
services:
nginx:
image: nginx
labels:
com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy: "true"
container_name: nginx
restart: unless-stopped
ports:
- "80:80"
- "443:443"
volumes:
- ${NGINX_FILES_PATH}/conf.d:/etc/nginx/conf.d
- ${NGINX_FILES_PATH}/vhost.d:/etc/nginx/vhost.d
- ${NGINX_FILES_PATH}/html:/usr/share/nginx/html
- ${NGINX_FILES_PATH}/certs:/etc/nginx/certs:ro
nginx-gen:
image: jwilder/docker-gen
command: -notify-sighup nginx -watch -wait 5s:30s /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf
container_name: nginx-gen
restart: unless-stopped
volumes:
- ${NGINX_FILES_PATH}/conf.d:/etc/nginx/conf.d
- ${NGINX_FILES_PATH}/vhost.d:/etc/nginx/vhost.d
- ${NGINX_FILES_PATH}/html:/usr/share/nginx/html
- ${NGINX_FILES_PATH}/certs:/etc/nginx/certs:ro
- /var/run/docker.sock:/tmp/docker.sock:ro
- ./nginx.tmpl:/etc/docker-gen/templates/nginx.tmpl:ro
nginx-letsencrypt:
image: jrcs/letsencrypt-nginx-proxy-companion
container_name: nginx-letsencrypt
restart: unless-stopped
volumes:
- ${NGINX_FILES_PATH}/conf.d:/etc/nginx/conf.d
- ${NGINX_FILES_PATH}/vhost.d:/etc/nginx/vhost.d
- ${NGINX_FILES_PATH}/html:/usr/share/nginx/html
- ${NGINX_FILES_PATH}/certs:/etc/nginx/certs:rw
- /var/run/docker.sock:/var/run/docker.sock:ro
environment:
NGINX_DOCKER_GEN_CONTAINER: "nginx-gen"
NGINX_PROXY_CONTAINER: "nginx"
networks:
default:
external:
name: nginx-proxy
Everything works fine, and a default.conf is generated depending on my other containers. Here it is:
# If we receive X-Forwarded-Proto, pass it through; otherwise, pass along the
# scheme used to connect to this server
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
default $http_x_forwarded_proto;
'' $scheme;
}
# If we receive X-Forwarded-Port, pass it through; otherwise, pass along the
# server port the client connected to
map $http_x_forwarded_port $proxy_x_forwarded_port {
default $http_x_forwarded_port;
'' $server_port;
}
# If we receive Upgrade, set Connection to "upgrade"; otherwise, delete any
# Connection header that may have been passed to this server
map $http_upgrade $proxy_connection {
default upgrade;
'' close;
}
# Apply fix for very long server names
server_names_hash_bucket_size 128;
# Default dhparam
ssl_dhparam /etc/nginx/dhparam/dhparam.pem;
# Set appropriate X-Forwarded-Ssl header
map $scheme $proxy_x_forwarded_ssl {
default off;
https on;
}
gzip_types text/plain text/css application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
log_format vhost '$host $remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent"';
access_log off;
# HTTP 1.1 support
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
proxy_set_header X-Forwarded-Ssl $proxy_x_forwarded_ssl;
proxy_set_header X-Forwarded-Port $proxy_x_forwarded_port;
# Mitigate httpoxy attack (see README for details)
proxy_set_header Proxy "";
server {
server_name _; # This is just an invalid value which will never trigger on a real hostname.
listen 80;
access_log /var/log/nginx/access.log vhost;
return 503;
}
# bnbkeeper.thibautduchene.fr
upstream bnbkeeper.thibautduchene.fr {
## Can be connect with "nginx-proxy" network
# bnbkeeper
server 172.20.0.12:8080;
}
server {
server_name bnbkeeper.thibautduchene.fr;
listen 80 ;
access_log /var/log/nginx/access.log vhost;
location / {
proxy_pass http://bnbkeeper.thibautduchene.fr;
}
}
# gags.thibautduchene.fr
upstream gags.thibautduchene.fr {
## Can be connect with "nginx-proxy" network
# gogs
server 172.20.0.7:3000;
}
server {
server_name gags.thibautduchene.fr;
listen 80 ;
access_log /var/log/nginx/access.log vhost;
location / {
proxy_pass http://gags.thibautduchene.fr;
}
}
# portainer.thibautduchene.fr
upstream portainer.thibautduchene.fr {
## Can be connect with "nginx-proxy" network
# portainer
server 172.20.0.9:9000;
}
server {
server_name portainer.thibautduchene.fr;
listen 80 ;
access_log /var/log/nginx/access.log vhost;
location / {
proxy_pass http://portainer.thibautduchene.fr;
}
}
However, when I reach any of these proxied addresses, the server doesn't exist and nginx doesn't even catch the request...
It looks like nginx is not even aware of my subdomains.
OK, for those who are as silly as me: don't forget to add the subdomains at your DNS provider; nginx doesn't handle that by itself.
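Concretely, each subdomain needs its own DNS record pointing at the server (the IP below is a placeholder), or a single wildcard record such as *.thibautduchene.fr if the provider supports it:
bnbkeeper.thibautduchene.fr.   IN  A  203.0.113.10
gags.thibautduchene.fr.        IN  A  203.0.113.10
portainer.thibautduchene.fr.   IN  A  203.0.113.10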
I'm launching the following docker-compose:
version: '2'
services:
wp_db:
image: mysql:5.7
container_name: imaxinaria_mysql2
volumes:
- "./.data/db:/var/lib/mysql"
restart: always
environment:
MYSQL_ROOT_PASSWORD: password
MYSQL_DATABASE: wordpress
MYSQL_USER: wordpress
MYSQL_PASSWORD: password
wp_web:
image: nginx
restart: always
ports:
- 80:80
- 443:443
#log_driver: syslog
links:
- wordpress
volumes:
- ./wp:/var/www/html
- ./etc/nginx/nginx.conf:/etc/nginx/nginx.conf:ro
- ./var/log/nginx:/var/log/nginx
- ./etc/letsencrypt:/etc/letsencrypt
- ./etc/nginx/certs/dhparam.pem:/etc/nginx/certs/dhparam.pem
wordpress:
depends_on:
- wp_db
image: wordpress:latest
container_name: imaxinaria2
volumes:
- "./wp:/var/www/html"
- "./uploads.ini:/usr/local/etc/php/conf.d/uploads.ini"
links:
- wp_db:mysql
expose:
- 80
- 443
restart: always
environment:
WORDPRESS_DB_HOST: db:3306
WORDPRESS_DB_PASSWORD: password
WORDPRESS_DB_USER: wordpress
WORDPRESS_DB_NAME: wordpress
And I'm getting the following error in the WordPress container log:
Warning: mysqli::mysqli(): (HY000/2002): php_network_getaddresses: getaddrinfo failed: Name or service not known in - on line 10
Connection Error: (2002) php_network_getaddresses: getaddrinfo failed: Name or service not known
my nginx.conf:
worker_processes 1;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
gzip on;
gzip_disable "msie6";
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
server {
listen 80;
server_name lab.imaxinaria.org;
return 301 https://$server_name$request_uri;
}
server {
listen 443 ssl http2;
ssl_certificate /etc/letsencrypt/live/lab.imaxinaria.org/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/lab.imaxinaria.org/privkey.pem;
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;
ssl_session_tickets off;
ssl_protocols TLSv1.1 TLSv1.2;
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK';
ssl_prefer_server_ciphers on;
ssl_dhparam /etc/nginx/certs/dhparam.pem;
add_header Strict-Transport-Security max-age=15768000;
ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /etc/letsencrypt/live/lab.imaxinaria.org/chain.pem;
resolver 8.8.8.8 8.8.4.4 valid=86400;
root /var/www/html;
index index.php;
location / {
try_files $uri $uri/ /index.php?$args;
}
rewrite /wp-admin$ $scheme://$host$uri/ permanent;
location ~* ^.+\.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|rss|atom|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
access_log off; log_not_found off; expires max;
}
location ~ [^/]\.php(/|$) {
fastcgi_split_path_info ^(.+?\.php)(/.*)$;
if (!-f $document_root$fastcgi_script_name) {
return 404;
}
root /var/www/html;
fastcgi_pass wp_db:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME /var/www/html$fastcgi_script_name;
include fastcgi_params;
}
}
}
Could you help me solve this? My goal is to launch several WordPress instances with persistence behind an nginx proxy with SSL. If there is a better way, let me know.
Thanks in advance.
UPDATE: I tried using this image as well, https://github.com/docker-library/wordpress, but I get the same results.
I also checked wp-config.php and everything seems alright with DB_USER, DB_PASSWORD and DB_HOST.
I also found that this error can be caused by bad linking between the MySQL and WordPress containers, but they are supposed to be linked, as the rule is given in docker-compose.yml.
Solved by erasing WORDPRESS_DB_HOST: db:3306 from docker-compose.yml.
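For reference, the relevant part of the service after that fix would look roughly like this (a sketch; the database service in this compose file is named wp_db and is linked as mysql, so presumably the image's fallback host matches that link alias once the explicit override is gone):
wordpress:
  depends_on:
    - wp_db
  image: wordpress:latest
  links:
    - wp_db:mysql
  environment:
    # WORDPRESS_DB_HOST removed: there is no service named "db", so db:3306
    # could never resolve. If you prefer an explicit host, the actual service
    # name would be used instead, e.g. WORDPRESS_DB_HOST: wp_db:3306
    WORDPRESS_DB_PASSWORD: password
    WORDPRESS_DB_USER: wordpress
    WORDPRESS_DB_NAME: wordpress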