Nginx proxy (jwilder/nginx-proxy) Connection reset by peer (502 Bad Gateway) - nginx

I have a simple docker-compose.yml in which I would like to use nginx as a proxy to the containers. For now I have two containers, admin and api, which I later want to have talk to each other.
Right now, with the configuration presented below, when I try to access api.host.dev I get this:
nginx-proxy | nginx.1 | 2017/04/19 15:18:35 [error] 26#26: *1 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 192.168.60.1, server: api.host.dev, request: "GET / HTTP/1.1", upstream: "http://172.18.0.4:9000/", host: "api.host.dev"
nginx-proxy | nginx.1 | api.host.dev 192.168.60.1 - - [19/Apr/2017:15:18:35 +0000] "GET / HTTP/1.1" 502 576 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36"
Right now I'm out of ideas. Here is the full configuration:
version: '2'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
  admin:
    container_name: admin
    image: php:7.1-fpm
    restart: on-failure
    volumes:
      - ../admin:/var/www/admin
    working_dir: /var/www
    env_file:
      - ./variables/dev-admin.env
  api:
    container_name: api
    image: php:7.1-fpm
    restart: on-failure
    volumes:
      - ../api:/var/www/api
    working_dir: /var/www
    env_file:
      - ./variables/dev-api.env
Content of *.env files:
dev-api.env:
APP_ENV=DEV
VIRTUAL_HOST=api.host.dev
VIRTUAL_PORT=9000
dev-admin.env:
APP_ENV=DEV
VIRTUAL_HOST=admin.host.dev
VIRTUAL_PORT=9000
Content of /etc/nginx/conf.d/default.conf:
# admin.host.dev
upstream admin.host.dev {
    ## Can be connect with "env_default" network
    # admin
    server 172.18.0.3:9000;
}
server {
    server_name admin.host.dev;
    listen 80;
    access_log /var/log/nginx/access.log vhost;
    location / {
        proxy_pass http://admin.host.dev;
    }
}
# api.host.dev
upstream api.host.dev {
    ## Can be connect with "env_default" network
    # api
    server 172.18.0.4:9000;
}
server {
    server_name api.host.dev;
    listen 80;
    access_log /var/log/nginx/access.log vhost;
    location / {
        proxy_pass http://api.host.dev;
    }
}
Full output of docker-compose up:
sudo docker-compose up --remove-orphans
Recreating admin
Recreating nginx-proxy
Recreating api
Attaching to admin, api, nginx-proxy
admin | [19-Apr-2017 15:18:24] NOTICE: fpm is running, pid 1
admin | [19-Apr-2017 15:18:24] NOTICE: ready to handle connections
api | [19-Apr-2017 15:18:24] NOTICE: fpm is running, pid 1
api | [19-Apr-2017 15:18:24] NOTICE: ready to handle connections
nginx-proxy | forego | starting dockergen.1 on port 5000
nginx-proxy | forego | starting nginx.1 on port 5100
nginx-proxy | dockergen.1 | 2017/04/19 15:18:25 Generated '/etc/nginx/conf.d/default.conf' from 3 containers
nginx-proxy | dockergen.1 | 2017/04/19 15:18:25 Running 'nginx -s reload'
nginx-proxy | dockergen.1 | 2017/04/19 15:18:25 Watching docker events
nginx-proxy | dockergen.1 | 2017/04/19 15:18:25 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification 'nginx -s reload'
nginx-proxy | nginx.1 | 2017/04/19 15:18:35 [error] 26#26: *1 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 192.168.60.1, server: api.host.dev, request: "GET / HTTP/1.1", upstream: "http://172.18.0.4:9000/", host: "api.host.dev"
nginx-proxy | nginx.1 | api.host.dev 192.168.60.1 - - [19/Apr/2017:15:18:35 +0000] "GET / HTTP/1.1" 502 576 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36"
nginx-proxy | nginx.1 | 2017/04/19 15:18:45 [error] 26#26: *3 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 192.168.60.1, server: admin.host.dev, request: "GET / HTTP/1.1", upstream: "http://172.18.0.3:9000/", host: "admin.host.dev"
nginx-proxy | nginx.1 | admin.host.dev 192.168.60.1 - - [19/Apr/2017:15:18:45 +0000] "GET / HTTP/1.1" 502 576 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36"
nginx-proxy | nginx.1 | 2017/04/19 15:24:47 [error] 26#26: *5 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 192.168.60.1, server: api.host.dev, request: "GET / HTTP/1.1", upstream: "http://172.18.0.4:9000/", host: "api.host.dev"
nginx-proxy | nginx.1 | api.host.dev 192.168.60.1 - - [19/Apr/2017:15:24:47 +0000] "GET / HTTP/1.1" 502 576 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36"

When using compose, each service is exposed to the other containers using its service name, as if it were a DNS hostname. So you want to change references to e.g. admin.host.dev to just admin. For example, use this:
# admin.host.dev
upstream admin.host.dev {
    ## Can be connect with "env_default" network
    # admin
    server admin:9000;
}
Notice in the server statement it now uses the hostname admin. This is automatically resolved to the container IP of your admin container.
(But note I didn't change the upstream's name - that is an internal name for nginx, and you don't necessarily need to change it.)
You will want to change the server entry in the other upstream (api) in the same way.
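Applied to both upstreams, the relevant part of default.conf would end up looking roughly like this (a sketch only; the upstream names and the rest of the generated file are left as they were):
# admin.host.dev
upstream admin.host.dev {
    # admin
    server admin:9000;
}
server {
    server_name admin.host.dev;
    listen 80;
    access_log /var/log/nginx/access.log vhost;
    location / {
        proxy_pass http://admin.host.dev;
    }
}
# api.host.dev
upstream api.host.dev {
    # api
    server api:9000;
}
server {
    server_name api.host.dev;
    listen 80;
    access_log /var/log/nginx/access.log vhost;
    location / {
        proxy_pass http://api.host.dev;
    }
}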

Related

How to solve error 503 in Kubernetes NGINX Ingress

I'm trying to configure Kubernetes Dashboard using NGINX INGRESS but for some reason I'm getting a 503 error.
I'm running Kubernetes locally on my MacBook with Docker Desktop.
The first thing I did was apply/install the NGINX ingress controller:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.47.0/deploy/static/provider/cloud/deploy.yaml
The second step was to apply/install the Kubernetes dashboard YAML file:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml
The third step was to apply the Ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard-ingress
  namespace: kubernetes-dashboard
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/configuration-snippet: |-
      proxy_ssl_server_name on;
      proxy_ssl_name $host;
spec:
  rules:
    - http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: kubernetes-dashboard
                port:
                  number: 433
When I try to access http://localhost and/or https://localhost I get a 503 Service Temporarily Unavailable error from nginx.
Not sure what I'm doing wrong.
Here is part of the log from the NGINX pod:
I0630 23:36:42.049398 10 main.go:112] "successfully validated configuration, accepting" ingress="dashboard-ingress/kubernetes-dashboard"
I0630 23:36:42.055306 10 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"kubernetes-dashboard", Name:"dashboard-ingress", UID:"85e7bd9e-308d-4848-8b70-4a3591415464", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"47868", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I0630 23:36:42.056435 10 controller.go:146] "Configuration changes detected, backend reload required"
I0630 23:36:42.124850 10 controller.go:163] "Backend successfully reloaded"
I0630 23:36:42.125333 10 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-5b74bc9868-gplcq", UID:"bbd70716-b843-403b-a8f9-2add0f63f63f", APIVersion:"v1", ResourceVersion:"46315", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
192.168.65.3 - - [30/Jun/2021:23:36:44 +0000] "GET / HTTP/1.1" 400 54 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36" 657 0.003 [kubernetes-dashboard-kubernetes-dashboard-80] [] 10.1.0.25:8443 48 0.002 400 395aec46af3b21e79cd650f2f86722f3
2021/06/30 23:36:44 [error] 1222#1222: *17477 recv() failed (104: Connection reset by peer) while sending to client, client: 192.168.65.3, server: _, request: "GET / HTTP/1.1", upstream: "http://10.1.0.25:8443/", host: "localhost"
2021/06/30 23:36:45 [error] 1222#1222: *17512 recv() failed (104: Connection reset by peer) while sending to client, client: 192.168.65.3, server: _, request: "GET / HTTP/1.1", upstream: "http://10.1.0.25:8443/", host: "localhost"
192.168.65.3 - - [30/Jun/2021:23:36:45 +0000] "GET / HTTP/1.1" 400 54 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36" 657 0.002 [kubernetes-dashboard-kubernetes-dashboard-80] [] 10.1.0.25:8443 48 0.001 400 a15e1e48987948cb93503b494d188654
2021/07/01 00:09:31 [error] 1224#1224: *49299 recv() failed (104: Connection reset by peer) while reading upstream, client: 192.168.65.3, server: _, request: "GET / HTTP/1.1", upstream: "http://10.1.0.25:8443/", host: "localhost"
192.168.65.3 - - [01/Jul/2021:00:09:31 +0000] "GET / HTTP/1.1" 400 54 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36" 657 0.002 [kubernetes-dashboard-kubernetes-dashboard-80] [] 10.1.0.25:8443 48 0.001 400 ac6b88ca52b73358c39371cb4422761d
2021/07/01 00:09:32 [error] 1221#1221: *49336 recv() failed (104: Connection reset by peer) while sending to client, client: 192.168.65.3, server: _, request: "GET / HTTP/1.1", upstream: "http://10.1.0.25:8443/", host: "localhost"
192.168.65.3 - - [01/Jul/2021:00:09:32 +0000] "GET / HTTP/1.1" 400 54 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36" 657 0.001 [kubernetes-dashboard-kubernetes-dashboard-80] [] 10.1.0.25:8443 48 0.001 400 2c5cd2d9403a8e50a77fdc897c694792
2021/07/01 00:09:33 [error] 1221#1221: *49338 recv() failed (104: Connection reset by peer) while sending to client, client: 192.168.65.3, server: _, request: "GET / HTTP/1.1", upstream: "http://10.1.0.25:8443/", host: "localhost"
192.168.65.3 - - [01/Jul/2021:00:09:33 +0000] "GET / HTTP/1.1" 400 54 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36" 657 0.001 [kubernetes-dashboard-kubernetes-dashboard-80] [] 10.1.0.25:8443 48 0.000 400 f1f630c886d20b9b9c59bd9e0e0e3860
2021/07/01 00:09:33 [error] 1224#1224: *49344 recv() failed (104: Connection reset by peer) while reading upstream, client: 192.168.65.3, server: _, request: "GET / HTTP/1.1", upstream: "http://10.1.0.25:8443/", host: "localhost"
192.168.65.3 - - [01/Jul/2021:00:09:33 +0000] "GET / HTTP/1.1" 400 54 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36" 657 0.001 [kubernetes-dashboard-kubernetes-dashboard-80] [] 10.1.0.25:8443 48 0.001 400 2ab6774dec6e2a89599c4745d24b9661
192.168.65.3 - - [01/Jul/2021:00:09:33 +0000] "GET / HTTP/1.1" 400 54 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36" 657 0.001 [kubernetes-dashboard-kubernetes-dashboard-80] [] 10.1.0.25:8443 48 0.000 400 c9147e08203d9ec8e7b0d0debab8d556
2021/07/01 00:09:33 [error] 1222#1222: *49360 recv() failed (104: Connection reset by peer) while sending to client, client: 192.168.65.3, server: _, request: "GET / HTTP/1.1", upstream: "http://10.1.0.25:8443/", host: "localhost"
I0701 00:10:19.024220 10 main.go:112] "successfully validated configuration, accepting" ingress="dashboard-ingress/kubernetes-dashboard"
I0701 00:10:19.026772 10 controller.go:146] "Configuration changes detected, backend reload required"
I0701 00:10:19.027392 10 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"kubernetes-dashboard", Name:"dashboard-ingress", UID:"85e7bd9e-308d-4848-8b70-4a3591415464", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"50637", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I0701 00:10:19.102759 10 controller.go:163] "Backend successfully reloaded"
I0701 00:10:19.103246 10 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-5b74bc9868-gplcq", UID:"bbd70716-b843-403b-a8f9-2add0f63f63f", APIVersion:"v1", ResourceVersion:"46315", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
192.168.65.3 - - [01/Jul/2021:00:11:27 +0000] "GET / HTTP/1.1" 503 592 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36" 657 0.000 [kubernetes-dashboard-kubernetes-dashboard-433] [] - - - - c449f6e8082761ddc3432f956f4701f2
192.168.65.3 - - [01/Jul/2021:00:11:29 +0000] "GET / HTTP/1.1" 503 592 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36" 657 0.000 [kubernetes-dashboard-kubernetes-dashboard-433] [] - - - - 3a41974b01c5e63e734fce6e37b98e4c
192.168.65.3 - - [01/Jul/2021:00:11:56 +0000] "GET / HTTP/2.0" 503 592 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36" 408 0.000 [kubernetes-dashboard-kubernetes-dashboard-433] [] - - - - c01f7bec83d3be6b26703b8808f9922a
192.168.65.3 - - [01/Jul/2021:00:11:58 +0000] "GET / HTTP/2.0" 503 592 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36" 24 0.000 [kubernetes-dashboard-kubernetes-dashboard-433] [] - - - - dc39bcddd4ecfdefe931bf16fe3c1557
192.168.65.3 - - [01/Jul/2021:00:16:36 +0000] "GET / HTTP/1.1" 503 190 "-" "curl/7.64.1" 73 0.000 [kubernetes-dashboard-kubernetes-dashboard-433] [] - - - - 82aad4321afbccb3fc54ac75d96b66ee
192.168.65.3 - - [01/Jul/2021:00:31:47 +0000] "GET / HTTP/2.0" 503 592 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36" 417 0.000 [kubernetes-dashboard-kubernetes-dashboard-433] [] - - - - c4ab3d2f272be4d38df62c0ffd50bfe9
I0701 00:48:02.059067 10 main.go:112] "successfully validated configuration, accepting" ingress="dashboard-ingress/kubernetes-dashboard"
I0701 00:48:02.062292 10 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"kubernetes-dashboard", Name:"dashboard-ingress", UID:"85e7bd9e-308d-4848-8b70-4a3591415464", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"53737", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I0701 00:48:02.062876 10 controller.go:146] "Configuration changes detected, backend reload required"
I0701 00:48:02.131494 10 controller.go:163] "Backend successfully reloaded"
I0701 00:48:02.131787 10 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-5b74bc9868-gplcq", UID:"bbd70716-b843-403b-a8f9-2add0f63f63f", APIVersion:"v1", ResourceVersion:"46315", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
192.168.65.3 - - [01/Jul/2021:00:48:12 +0000] "GET / HTTP/2.0" 503 592 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36" 417 0.000 [kubernetes-dashboard-kubernetes-dashboard-433] [] - - - - d50e3bb0db3a5fa7581c405b8c50d5c8
192.168.65.3 - - [01/Jul/2021:00:48:14 +0000] "GET / HTTP/2.0" 503 592 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36" 15 0.000 [kubernetes-dashboard-kubernetes-dashboard-433] [] - - - - c8d8752fb4d79d5bc084839ef9a767b2
I0701 00:49:50.908720 10 main.go:112] "successfully validated configuration, accepting" ingress="dashboard-ingress/kubernetes-dashboard"
I0701 00:49:50.911044 10 controller.go:146] "Configuration changes detected, backend reload required"
I0701 00:49:50.911350 10 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"kubernetes-dashboard", Name:"dashboard-ingress", UID:"85e7bd9e-308d-4848-8b70-4a3591415464", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"53896", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I0701 00:49:50.979935 10 controller.go:163] "Backend successfully reloaded"
I0701 00:49:50.980213 10 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-5b74bc9868-gplcq", UID:"bbd70716-b843-403b-a8f9-2add0f63f63f", APIVersion:"v1", ResourceVersion:"46315", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
192.168.65.3 - - [01/Jul/2021:00:50:55 +0000] "GET / HTTP/2.0" 503 592 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36" 417 0.000 [kubernetes-dashboard-kubernetes-dashboard-433] [] - - - - d62a8012bc23bbc35a47621d54d68a62
192.168.65.3 - - [01/Jul/2021:00:51:00 +0000] "GET / HTTP/2.0" 503 592 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36" 15 0.000 [kubernetes-dashboard-kubernetes-dashboard-433] [] - - - - 0cbfd2274ad687fc1aaff76dbc483659
Here is the log for the Kubernetes Dashboard pod:
kubectl logs kubernetes-dashboard-78c79f97b4-w5pw9 -n kubernetes-dashboard
2021/06/30 23:01:40 Starting overwatch
2021/06/30 23:01:40 Using namespace: kubernetes-dashboard
2021/06/30 23:01:40 Using in-cluster config to connect to apiserver
2021/06/30 23:01:40 Using secret token for csrf signing
2021/06/30 23:01:40 Initializing csrf token from kubernetes-dashboard-csrf secret
2021/06/30 23:01:40 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2021/06/30 23:01:40 Successful initial request to the apiserver, version: v1.21.1
2021/06/30 23:01:40 Generating JWE encryption key
2021/06/30 23:01:40 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2021/06/30 23:01:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2021/06/30 23:01:41 Initializing JWE encryption key from synchronized object
2021/06/30 23:01:41 Creating in-cluster Sidecar client
2021/06/30 23:01:41 Auto-generating certificates
2021/06/30 23:01:41 Successful request to sidecar
2021/06/30 23:01:41 Successfully created certificates
2021/06/30 23:01:41 Serving securely on HTTPS port: 8443
Here are the endpoints for the kubernetes-dashboard namespace:
kubectl get ep -n kubernetes-dashboard
NAME                        ENDPOINTS        AGE
dashboard-metrics-scraper   10.1.0.24:8000   11h
kubernetes-dashboard        10.1.0.25:8443   11h
Any help would be greatly appreciated.
I was able to fix this issue.
In my Ingress YAML file I had a typo: the port number was set to 433 instead of 443.
As soon as I made and applied that change, I was able to access the dashboard login page at https://localhost and http://localhost.
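For clarity, the corrected backend section of the Ingress (only the port number changes, everything else stays as above):
            backend:
              service:
                name: kubernetes-dashboard
                port:
                  number: 443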

docker wordpress and nginx user permission

Trying to set up WordPress and nginx in Docker containers while sharing a volume from WordPress to nginx. While doing so, nginx is unable to read the files from the volume because the users are different. How do I solve this?
This currently causes the error below:
wordpress_1 | 172.18.0.17 - 18/Feb/2018:15:39:27 +0000 "GET /index.php" 404
nginx_1 | 2018/02/18 15:39:27 [error] 7#7: *1 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 172.18.0.1, server: galaxycard.in, request: "GET / HTTP/1.1", upstream: "fastcgi://172.18.0.13:9000", host: "127.0.0.1:3000"
nginx_1 | 172.18.0.1 - - [18/Feb/2018:15:39:27 +0000] "GET / HTTP/1.1" 404 47 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36"
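For context, here is a minimal sketch of the kind of setup being described; the image tags, service names, and mount paths are assumptions for illustration, not the asker's actual files:
version: '2'
services:
  wordpress:
    image: wordpress:php7.1-fpm   # assumption: FPM variant, serves PHP on port 9000
    volumes:
      - wp_data:/var/www/html
  nginx:
    image: nginx:latest
    ports:
      - "3000:80"
    volumes:
      - wp_data:/var/www/html:ro  # same files mounted into nginx (read-only)
volumes:
  wp_data: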

docker stack deploy is not working properly but docker-compose is working properly

I am new to Docker Swarm and Docker Compose.
I built an application that uses nginx and Flask Docker containers; nginx acts as a reverse proxy.
When I build this entire application using docker-compose everything works fine.
My docker-compose.yml file:
version: '2'
services:
  web:
    restart: always
    build: ./web
    image: shivanand3939/web
    expose:
      - "8000"
    volumes:
      - ./output:/usr/src/app/static
    command: /usr/local/bin/gunicorn -w 2 -b :8000 --access-logfile - classifierv2RestEndPoint_ridge_NB:create_app()
  nginx:
    build: ./nginx/
    image: shivanand3939/nginx
    ports:
      - "80:80"
    volumes:
      - /www/static
    volumes_from:
      - web
    links:
      - web:web
  viz:
    image: dockersamples/visualizer
    ports:
      - 8080:8080/tcp
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - "constraint=node.role==manager"
Below is my output:
However, now I want to take it to the next level by deploying it on 3 AWS instances.
Here is my docker-stack.yml file:
version: '3'
networks:
  mybridge:
services:
  web:
    restart: always
    build: ./web
    image: shivanand3939/web
    expose:
      - "8000"
    volumes:
      - ./output:/usr/src/app/static
    command: /usr/local/bin/gunicorn -w 2 -b :8000 --access-logfile - classifierv2RestEndPoint_ridge_NB:create_app()
    networks:
      mybridge:
        aliases:
          - web
    deploy:
      replicas: 2
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
  nginx:
    restart: always
    build: ./nginx/
    image: shivanand3939/nginx
    ports:
      - "80:80"
    volumes:
      - /www/static
    networks:
      - mybridge
    deploy:
      replicas: 1
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
  viz:
    image: dockersamples/visualizer
    ports:
      - 8080:8080/tcp
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      placement:
        constraints: [node.role == manager]
But now when I deploy this application and check the URL I get this:
[incorrect home page]
I do not understand why in the first case there was communication between the web and nginx containers, but in the second case this communication stops.
Can anyone please guide me on this?
UPDATE 1:
Upon looking at the nginx service logs, I see:
classifierbot_nginx.1.qhi4b9c1yc3n#ip-172-31-16-132 | 2017/09/11 06:49:50 [error] 5#5: *10 "/usr/share/nginx/html/phpmyadmin2013/index.html" is not found (2: No such file or directory), client: 10.255.0.2, server: localhost, request: "HEAD http://35.154.66.136:80/phpmyadmin2013/ HTTP/1.1", host: "35.154.66.136"
classifierbot_nginx.1.qhi4b9c1yc3n#ip-172-31-16-132 | 10.255.0.2 - - [11/Sep/2017:06:49:50 +0000] "HEAD http://35.154.66.136:80/phpmyadmin2014/ HTTP/1.1" 404 0 "-" "Mozilla/5.0 Jorgee" "-"
classifierbot_nginx.1.qhi4b9c1yc3n#ip-172-31-16-132 | 2017/09/11 06:49:50 [error] 5#5: *10 "/usr/share/nginx/html/phpmyadmin2014/index.html" is not found (2: No such file or directory), client: 10.255.0.2, server: localhost, request: "HEAD http://35.154.66.136:80/phpmyadmin2014/ HTTP/1.1", host: "35.154.66.136"
classifierbot_nginx.1.qhi4b9c1yc3n#ip-172-31-16-132 | 10.255.0.2 - - [11/Sep/2017:06:49:51 +0000] "HEAD http://35.154.66.136:80/phpmyadmin2016/ HTTP/1.1" 404 0 "-" "Mozilla/5.0 Jorgee" "-"
classifierbot_nginx.1.qhi4b9c1yc3n#ip-172-31-16-132 | 2017/09/11 06:49:51 [error] 5#5: *16 "/usr/share/nginx/html/phpmyadmin2016/index.html" is not found (2: No such file or directory), client: 10.255.0.2, server: localhost, request: "HEAD http://35.154.66.136:80/phpmyadmin2016/ HTTP/1.1", host: "35.154.66.136"
classifierbot_nginx.1.qhi4b9c1yc3n#ip-172-31-16-132 | 2017/09/11 06:49:52 [error] 5#5: *17 "/usr/share/nginx/html/phpmyadmin2017/index.html" is not found (2: No such file or directory), client: 10.255.0.2, server: localhost, request: "HEAD http://35.154.66.136:80/phpmyadmin2017/ HTTP/1.1", host: "35.154.66.136"
classifierbot_nginx.1.qhi4b9c1yc3n#ip-172-31-16-132 | 10.255.0.2 - - [11/Sep/2017:06:49:52 +0000] "HEAD http://35.154.66.136:80/phpmyadmin2017/ HTTP/1.1" 404 0 "-" "Mozilla/5.0 Jorgee" "-"
classifierbot_nginx.1.qhi4b9c1yc3n#ip-172-31-16-132 | 10.255.0.2 - - [11/Sep/2017:06:49:55 +0000] "HEAD http://35.154.66.136:80/phpmyadmin2018/ HTTP/1.1" 404 0 "-" "Mozilla/5.0 Jorgee" "-"
classifierbot_nginx.1.qhi4b9c1yc3n#ip-172-31-16-132 | 2017/09/11 06:49:55 [error] 5#5: *18 "/usr/share/nginx/html/phpmyadmin2018/index.html" is not found (2: No such file or directory), client: 10.255.0.2, server: localhost, request: "HEAD http://35.154.66.136:80/phpmyadmin2018/ HTTP/1.1", host: "35.154.66.136"
classifierbot_nginx.1.qhi4b9c1yc3n#ip-172-31-16-132 | 10.255.0.2 - - [11/Sep/2017:06:49:57 +0000] "HEAD http://35.154.66.136:80/phpmanager/ HTTP/1.1" 404 0 "-" "Mozilla/5.0 Jorgee" "-"
classifierbot_nginx.1.qhi4b9c1yc3n#ip-172-31-16-132 | 2017/09/11 06:49:57 [error] 5#5: *19 "/usr/share/nginx/html/phpmanager/index.html" is not found (2: No such file or directory), client: 10.255.0.2, server: localhost, request: "HEAD http://35.154.66.136:80/phpmanager/ HTTP/1.1", host: "35.154.66.136"

Using nginx as a proxy to a Java web servlet

I'm trying to use nginx as a load balancer / proxy server that points to a series of Tomcat servers. This is my current nginx configuration:
server {
    listen 80;
    server_name _;
    rewrite ^ https://$http_host$request_uri? permanent;
}
server {
    listen 443;
    resolver 127.0.0.11 valid=5s;
    ssl on;
    ssl_certificate /etc/nginx/certs/default.crt;      # path to your cacert.pem
    ssl_certificate_key /etc/nginx/certs/default.key;  # path to your privkey.pem
    ssl_verify_client off;
    server_name localhost;
    fastcgi_param HTTPS on;
    fastcgi_param HTTP_SCHEME https;
    charset utf-8;
    client_max_body_size 200M;
    set $app https://app:8443;
    set $auth https://auth:8443/authentication/;
    set $discovery https://discovery:8443/discovery/;
    location / {
        proxy_pass $app;
    }
    location /authentication {
        proxy_pass $auth;
    }
    location /discovery {
        proxy_pass $discovery;
        proxy_set_header Host $http_host;
        proxy_set_header X_FORWARDED_PROTO https;
    }
}
This is dockerized, if that makes any difference. provisioning fails to resolve correctly while docs works fine; the only difference between them is that docs serves pure HTML files via Tomcat (the standard Tomcat 7 /docs/), while provisioning is an actual Java servlet (JAX-RS/Spring etc.).
If I hit the container directly it works as expected, while if I try to hit the same endpoint via nginx it fails to resolve.
My docker-compose configuration, for reference:
version: '2'
services:
  db:
    image: db:nodata
    expose:
      - 5433
  zk:
    image: zookeeper
    ports:
      - 2181:2181
  discovery:
    image: services_discovery:latest
    env_file: docker_environment
    expose:
      - 8443
    ports:
      - 8443:8443
    links:
      - db
      - zk
  app:
    image: tomcat-jsse-ssl:7-jdk8
    volumes:
      - ./app/www/:/usr/local/tomcat7/webapps/ROOT/
    expose:
      - 8443
    ports:
      - 8444:8443
  auth:
    image: tomcat-jsse-ssl:7-jdk8
    volumes:
      - ./authentication/www/authentication/:/usr/local/tomcat7/webapps/authentication/
    expose:
      - 8443
  proxy:
    build: ./proxy/
    depends_on:
      - 'auth'
      - 'app'
      - 'discovery'
    ports:
      - 443:443
    restart: always
With the containers running I can resolve the following URLs just fine:
https://localhost:8443/discovery/ready
https://localhost:8444/
i.e. both containers are running fine:
https://localhost/ loaded via nginx works fine.
https://localhost/authentication/ loaded via nginx works fine.
https://localhost/discovery/ready ==> 404.
Server Logs:
proxy_1 | 172.20.0.1 - - [24/Apr/2017:00:04:28 +0000] "GET /discovery/ready HTTP/1.1" 404 400 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36"
proxy_1 | 172.20.0.1 - - [24/Apr/2017:00:04:43 +0000] "GET /discovery/api/swagger.json HTTP/1.1" 404 400 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36"
proxy_1 | 172.20.0.1 - - [24/Apr/2017:00:04:57 +0000] "GET /discovery/ready HTTP/1.1" 404 400 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36"
Discovery Tomcat access log (when accessed directly):
172.20.0.1 - - [23/Apr/2017:00:02:38 +0000] "GET /discovery/ready HTTP/1.1" 200 70
172.20.0.2 - - [24/Apr/2017:00:04:28 +0000] "GET /discovery/ HTTP/1.0" 404 949
172.20.0.2 - - [24/Apr/2017:00:04:43 +0000] "GET /discovery/ HTTP/1.0" 404 949
172.20.0.2 - - [24/Apr/2017:00:04:57 +0000] "GET /discovery/ HTTP/1.0" 404 949
The first entry is when I hit the server directly via https://localhost:8443/discovery/ready; everything else is when nginx sends the request to the server. For some reason it's not translating the request correctly.
Any thoughts/suggestions would be appreciated.
Note: I simplified my example/config for the purposes of this question and any references to "provisioning" are now "discovery".
UPDATE: I figured out why it's breaking for the 'servlet'. It's actually breaking constantly. It's stripping away all of the URL except the base.
For example,
https://localhost/authentication?q=dummy
becomes
172.20.0.3 - - [24/Apr/2017:03:22:06 +0000] "GET /authentication/ HTTP/1.0" 200 28
note that the query parameters are stripped away.
The nginx documentation says that you are responsible for rebuilding the URL:
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass
So you can try to capture the rest of the URI using a regex and append it in the proxy_pass directive:
location ~* ^/discovery/(.*) {
    proxy_pass $discovery$1$is_args$args;
    .... other configs....
}
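Applied to the config above, the discovery block would look roughly like this (a sketch; the proxy_set_header lines are carried over from the original location):
set $discovery https://discovery:8443/discovery/;

location ~* ^/discovery/(.*) {
    # $1 is everything after /discovery/, so /discovery/ready
    # becomes https://discovery:8443/discovery/ready, with query args preserved
    proxy_pass $discovery$1$is_args$args;
    proxy_set_header Host $http_host;
    proxy_set_header X_FORWARDED_PROTO https;
}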

Docker link between varnish and wordpress not working

This is my docker-compose file:
version: '2'
services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: wordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    ports:
      - "8000:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_PASSWORD: wordpress
  varnish:
    image: eeacms/varnish
    depends_on:
      - wordpress
    ports:
      - 9000:6081
    environment:
      DNS_ENABLED: "true"
      BACKENDS: wordpress
      BACKENDS_PORT: 80
volumes:
  db_data:
wordpress is running on 0.0.0.0:8080 and on 172.17.0.1:8080
But the /etc/hosts of the varnish container looks like this:
root@4cc3dc214d69:/# cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.3 wordpress fd3f01c29d6a dockoor_wordpress_1
172.17.0.3 wordpress_1 fd3f01c29d6a dockoor_wordpress_1
172.17.0.3 dockoor_wordpress_1 fd3f01c29d6a
172.17.0.4 4cc3dc214d69
varnish is mapping wordpress to 172.17.0.3.
That's why, while trying to access 0.0.0.0:8000, I get:
Error 503 Backend fetch failed
Backend fetch failed
Guru Meditation:
XID: 3
Varnish cache server
Can someone please point out what's wrong with my compose file?
P.S. The docker-compose log shows that varnish does hit wordpress, but it's getting a 302 response.
wordpress_1 | 172.17.0.4 - - [25/Mar/2017:10:45:19 +0000] "GET / HTTP/1.1" 302 338 "-" "-"
wordpress_1 | 172.17.0.4 - - [25/Mar/2017:10:45:20 +0000] "GET / HTTP/1.1" 302 338 "-" "-"
wordpress_1 | 172.17.0.4 - - [25/Mar/2017:10:45:21 +0000] "GET / HTTP/1.1" 302 338 "-" "-"
wordpress_1 | 172.17.0.4 - - [25/Mar/2017:10:45:22 +0000] "GET / HTTP/1.1" 302 338 "-" "-"
wordpress_1 | 172.17.0.4 - - [25/Mar/2017:10:45:23 +0000] "GET / HTTP/1.1" 302 338 "-" "-"
wordpress_1 | 172.17.0.4 - - [25/Mar/2017:10:45:24 +0000] "GET / HTTP/1.1" 302 338 "-" "-"
wordpress_1 | 172.17.0.4 - - [25/Mar/2017:10:45:25 +0000] "GET / HTTP/1.1" 302 338 "-" "-"
wordpress_1 | 172.17.0.4 - - [25/Mar/2017:10:45:26 +0000] "GET / HTTP/1.1" 302 338 "-" "-"
wordpress_1 | 172.17.0.4 - - [25/Mar/2017:10:45:27 +0000] "GET / HTTP/1.1" 302 338 "-" "-"
wordpress_1 | 172.17.0.4 - - [25/Mar/2017:10:45:29 +0000] "GET / HTTP/1.1" 302 338 "-" "-"
wordpress_1 | 172.17.0.4 - - [25/Mar/2017:10:45:30 +0000] "GET / HTTP/1.1" 302 338 "-" "-"
wordpress_1 | 172.17.0.4 - - [25/Mar/2017:10:45:31 +0000] "GET / HTTP/1.1" 302 338 "-" "-"
wordpress_1 | 172.17.0.4 - - [25/Mar/2017:10:45:32 +0000] "GET / HTTP/1.1" 302 338 "-" "-"
wordpress_1 | 172.17.0.4 - - [25/Mar/2017:10:45:33 +0000] "GET / HTTP/1.1" 302 338 "-" "-"
wordpress_1 | 172.17.0.4 - - [25/Mar/2017:10:45:34 +0000] "GET / HTTP/1.1" 302 338 "-" "-"
wordpress_1 | 172.17.0.4 - - [25/Mar/2017:10:45:35 +0000] "GET / HTTP/1.1" 302 338 "-" "-"
wordpress_1 | 172.17.0.4 - - [25/Mar/2017:10:45:36 +0000] "GET / HTTP/1.1" 302 338 "-" "-"
wordpress_1 | 172.17.0.4 - - [25/Mar/2017:10:45:37 +0000] "GET / HTTP/1.1" 302 338 "-" "-"
wordpress_1 | 172.17.0.4 - - [25/Mar/2017:10:45:38 +0000] "GET / HTTP/1.1" 302 338 "-" "-"
wordpress_1 | 172.17.0.4 - - [25/Mar/2017:10:45:39 +0000] "G
Your link appears to be working as expected. 0.0.0.0 is not an IP address you connect to; it's a listener address that tells the networking stack to listen on all interfaces rather than on a specific IP on the host. In your case, "all interfaces" includes 127.0.0.1 (loopback inside the container) and 172.17.0.3 (the IP reachable by other containers on that network).
Note that links are largely deprecated; it's preferred to configure the containers on a network (other than the default bridge) and use the built-in DNS discovery. Similarly, version 1 compose file formats are also largely deprecated; you should consider upgrading to at least the version 2 compose file format. With that format, a network is created by default for your containers to communicate over.
Here's an example of your compose file in version 2 format:
version: '2'
services:
  wordpress:
    image: wordpress
    ports:
      - 8080:80
  mysql:
    image: mariadb
    environment:
      MYSQL_ROOT_PASSWORD: examplepass
  varnish:
    image: eeacms/varnish
    ports:
      - "8000:6081"
    environment:
      DNS_ENABLED: "true"
      BACKENDS: "wordpress"
      BACKENDS_PORT: 8080
The HTTP 302 is a redirect; whatever you are running is able to see the URL but isn't following the redirect, or wordpress is not configured to give a correct redirect.
Update: The varnish error you are seeing is because you are probing / on the wordpress server, which responds with a 302 redirect. Varnish appears to need a 200 success code for the URL it is probing. For that, you can add a variable like the following to your varnish environment:
BACKENDS_PROBE_URL: /wp-includes/js/jquery/jquery.js
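In the compose file above that would mean the varnish service ends up with an environment block along these lines (a sketch reusing the values already shown):
  varnish:
    image: eeacms/varnish
    ports:
      - "8000:6081"
    environment:
      DNS_ENABLED: "true"
      BACKENDS: "wordpress"
      BACKENDS_PORT: 8080
      BACKENDS_PROBE_URL: /wp-includes/js/jquery/jquery.js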
