Nginx in Docker throws 502 Bad Gateway

I am trying to run a service called Grafana behind an Nginx web server, where both services are run from a single docker-compose file.
docker-compose.yml:
version: '3.1'
services:
  nginx:
    image: nginx
    ports: ['443:443', "80:80"]
    restart: always
    volumes:
      - ./etc/nginx.conf:/etc/nginx/nginx.conf:ro
      - /home/ec2-user/certs:/etc/ssl
  grafana:
    image: grafana/grafana
    restart: always
    ports: ["3000:3000"]
nginx.conf:
events {
    worker_connections 1024;
}
http {
    ssl_certificate /etc/ssl/cert.pem;
    ssl_certificate_key /etc/ssl/key.pem;

    server {
        listen 443 ssl;
        server_tokens off;

        location /grafana/ {
            rewrite /grafana/(.*) /$1 break;
            proxy_pass http://127.0.0.1:3000/;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_bind $server_addr;
        }
    }
}
The Grafana service is running on port 3000.
My goal is to access this Nginx server from outside (let's assume its public IP address is 1.1.1.1) at the address https://1.1.1.1/grafana. With the current configuration I get 502 Bad Gateway and the following error on the Nginx side:
(111: Connection refused) while connecting to upstream, client: <<my-public-ip-here>>,

Your containers are running on two separate IP addresses in the Docker network, usually in the 172.17.x.x range by default.
By using a proxy_pass like this in the nginx container:
proxy_pass http://127.0.0.1:3000/;
you are essentially telling it to look for a process listening on port 3000 inside the nginx container itself, because 127.0.0.1 is the container's own loopback address.
You need to point it at the Grafana container instead. Try running:
docker inspect <grafana container ID> | grep IPAddress
Then set the proxy_pass to that IP:
proxy_pass http://172.17.0.?:3000/;
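For reference, a minimal sketch of how the location block from the question would look with that change; the 172.17.0.3 address is only a placeholder for whatever docker inspect reports:
location /grafana/ {
    rewrite /grafana/(.*) /$1 break;
    # Placeholder address: use whatever docker inspect reported for the
    # Grafana container instead of 127.0.0.1 (the nginx container's loopback).
    proxy_pass http://172.17.0.3:3000/;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
Since both services are defined in the same docker-compose file, the Compose default network should also resolve the service name, so proxy_pass http://grafana:3000/; is worth trying as a more stable alternative to a hard-coded IP.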

I've solved the same issue using something like what @james suggested:
docker inspect <your inaccessible container id> | grep Gateway
Then use that IP address:
proxy_pass http://172.xx.0.1:3000/;
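If you prefer not to grep, docker inspect can also print just the address via a Go template; a small sketch (the container name is a placeholder):
# Gateway of the container's network(s):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.Gateway}} {{end}}' <container>
# Or the container's own IP address:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' <container>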

Related

Use Docker --net=host and connect to other containers by hostname

I would like to set up an Nginx reverse proxy, which works fine, but if I set network_mode: "host" it stops working because it is unable to resolve the hostnames of the other Docker containers. I have a web container and an nginx container.
I get the following error:
reverseproxy_1 | nginx: [emerg] host not found in upstream "web:80" in /etc/nginx/nginx.conf:10
My Nginx conf file is:
worker_processes 1;
events { worker_connections 1024; }
http {
    sendfile on;
    upstream docker-web {
        server web:80;
    }
    server {
        listen 8080;
        location / {
            proxy_pass http://docker-web;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
}
and my docker-compose.yml file is:
version: '2'
services:
  redis:
    image: "redis:alpine"
  web:
    depends_on:
      - redis
    build: .\app
    volumes:
      - .\app:/code
    restart: always
  reverseproxy:
    image: reverseproxy
    network_mode: "host"
    ports:
      - 8080:8080
    depends_on:
      - web
I need to set network_mode to host, otherwise the X-Forwarded-For header will be wrong.
I managed to get it working by using a Linux host instead of Windows, which meant I didn't need to use network_mode: "host". I also had to change my Python code to
request.environ.get('HTTP_X_REAL_IP', request.remote_addr)
from
request.environ['REMOTE_ADDR']
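For context, a minimal Flask sketch of that change, assuming Flask 1.1+ (dict return values are serialized to JSON) and that Nginx sets the X-Real-IP header as in the config above; the route name is just for illustration:
from flask import Flask, request

app = Flask(__name__)

@app.route('/whoami')
def whoami():
    # Behind the proxy, REMOTE_ADDR is the proxy's address, so prefer the
    # X-Real-IP header set by Nginx and fall back to remote_addr otherwise.
    client_ip = request.environ.get('HTTP_X_REAL_IP', request.remote_addr)
    return {'client_ip': client_ip}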

POST requests to Nexus through Nginx in a customized web context return error 400 POST is not supported

I'm trying to set up Nexus 3 behind an Nginx reverse proxy. Nexus and Nginx are in Docker containers launched with docker-compose on a CentOS 7.3 host. All Docker images are the latest available.
Nexus listens on its default port 8081 inside its container. This port is exposed as 18081 on the Docker host. Nexus is configured to use the /nexus web context.
Nginx listens on port 80 inside its container, which is also exposed on the Docker host.
I just want to access the Nexus Repository Manager from a local Firefox at the address "localhost/nexus".
Here is the configuration:
docker-compose.yml:
version: '2'
networks:
  picnetwork:
    driver: bridge
services:
  nginx:
    image: nginx:latest
    restart: always
    hostname: nginx
    ports:
      - "80:80"
    networks:
      - picnetwork
    volumes:
      - /opt/cip/nginx:/etc/nginx/conf.d
    depends_on:
      - nexus
  nexus:
    image: sonatype/nexus3:latest
    restart: always
    hostname: nexus
    ports:
      - "18081:8081"
    networks:
      - picnetwork
    volumes:
      - /opt/cip/nexus:/nexus-data
    environment:
      - NEXUS_CONTEXT=nexus
Nginx default.conf (/opt/cip/nginx/default.conf on the Docker host, which is /etc/nginx/conf.d/default.conf in the Nginx container):
proxy_send_timeout 120;
proxy_read_timeout 300;
proxy_buffering off;
tcp_nodelay on;

server {
    listen 80;
    server_name localhost;
    client_max_body_size 1G;

    location /nexus {
        proxy_pass http://nexus:8081/nexus/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
The strange thing is that when the web context is / (location /, proxy_pass without /nexus, NEXUS_CONTEXT=), it works fine; when the web context is /nexus (as in the configuration shown here), POST requests return "400 HTTP method POST is not supported by this URL". But if I use "localhost:18081/nexus" in the second case, it works fine.
Is this a Nexus bug, an Nginx bug, or am I missing something?

Flask, Gunicorn, NGINX, Docker: What is the proper way to configure SERVER_NAME and proxy_pass?

I set up a project using Flask, Gunicorn, NGINX and Docker, which works fine as long as I don't set SERVER_NAME in Flask's setting.py.
The current config is:
gunicorn
gunicorn -b 0.0.0.0:5000
docker-compose.yml
services:
  application:
    #restart: always
    build: .
    expose:
      - "5000"
    ports:
      - "5000:5000"
    volumes:
      - .:/code
    links:
      - db
  nginx:
    restart: always
    build: ./nginx
    links:
      - application
    expose:
      - 8080
    ports:
      - "8880:8080"
NGINX .conf
server {
    listen 8080;
    server_name application;
    charset utf-8;

    location / {
        proxy_pass http://application:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
Then I set the SERVER_NAME in Flask's setting.py
SERVER_NAME = '0.0.0.0:5000'
When I enter the URL 0.0.0.0:8880 in my browser, I get a 404 response from nginx. What should SERVER_NAME properly be in Flask's setting.py?
Thanks in advance.
I finally found the solution:
I have to specify the port in proxy_set_header:
proxy_set_header Host $host:5000;
It doesn't make sense to set an IP for SERVER_NAME. SERVER_NAME tells Flask which hostname (and port) the application is served under; it is used for matching incoming requests, supporting subdomains, and generating URLs from an application context (for instance, let's say you have a background thread that needs to generate URLs but has no request context).
SERVER_NAME should match the domain where the application is deployed.
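A minimal sketch of that background-thread use case, assuming a hypothetical domain example.com and Flask's url_for with an application context:
from flask import Flask, url_for

app = Flask(__name__)
# SERVER_NAME (plus the preferred scheme) lets url_for build absolute URLs
# even when there is no active request, e.g. from a background thread.
app.config['SERVER_NAME'] = 'example.com'
app.config['PREFERRED_URL_SCHEME'] = 'https'

@app.route('/reports/<int:report_id>')
def report(report_id):
    return f'report {report_id}'

with app.app_context():
    # No request context here, only an application context.
    print(url_for('report', report_id=42, _external=True))
    # -> https://example.com/reports/42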

nginx timeout after https proxy to localhost

I want to run one docker-compose stack with Nginx that acts only as a proxy to services from other docker-compose stacks.
Here is my docker-compose.yml with the proxy:
version: '2'
services:
  storage:
    image: nginx:1.11.13
    entrypoint: /bin/true
    volumes:
      - ./config/nginx/conf.d:/etc/nginx/conf.d
      - /path_to_ssl_cert:/path_to_ssl_cert
  proxy:
    image: nginx:1.11.13
    ports:
      - "80:80"
      - "443:443"
    volumes_from:
      - storage
    network_mode: "host"
It grabs all connections to ports 80 and 443 and proxies them to the services specified in the ./config/nginx/conf.d directory.
Here is an example service, ./config/nginx/conf.d/domain_name.conf:
server {
    listen 80;
    listen 443 ssl;
    server_name domain_name.com;

    ssl_certificate /path_to_ssl_cert/cert;
    ssl_certificate_key /path_to_ssl_cert/privkey;

    return 301 https://www.domain_name.com$request_uri;
}

server {
    listen 80;
    server_name www.domain_name.com;

    return 301 https://www.domain_name.com$request_uri;

    # If you uncomment this section and comment out the return line above, it works
    # location ~ {
    #     proxy_pass http://localhost:8888;
    #     # or proxy to https, doesn't matter
    #     #proxy_pass https://localhost:4433;
    # }
}

server {
    listen 443 ssl;
    server_name www.domain_name.com;

    ssl on;
    ssl_certificate /path_to_ssl_cert/cert;
    ssl_certificate_key /path_to_ssl_cert/privkey;

    location ~ {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Client-Verify SUCCESS;
        proxy_set_header X-Client-DN $ssl_client_s_dn;
        proxy_set_header X-SSL-Subject $ssl_client_s_dn;
        proxy_set_header X-SSL-Issuer $ssl_client_i_dn;

        proxy_pass https://localhost:4433;
        # like before
        # proxy_pass http://localhost:8888;
    }
}
It redirects all requests for http://domain_name.com, https://domain_name.com and http://www.domain_name.com to https://www.domain_name.com and proxies them to the specific localhost service.
Here is the docker-compose.yml for my specific service:
version: '2'
services:
  storage:
    image: nginx:1.11.13
    entrypoint: /bin/true
    volumes:
      - /path_to_ssl_cert:/path_to_ssl_cert
      - ./config/nginx/conf.d:/etc/nginx/conf.d
      - ./config/php:/usr/local/etc/php
      - ./config/php-fpm.d:/usr/local/etc/php-fpm.d
      - php-socket:/var/share/php-socket
  www:
    build:
      context: .
      dockerfile: ./Dockerfile_www
    image: domain_name_www
    ports:
      - "8888:80"
      - "4433:443"
    volumes_from:
      - storage
    links:
      - php
  php:
    build:
      context: .
      dockerfile: ./Dockerfile_php
    image: domain_name_php
    volumes_from:
      - storage
volumes:
  php-socket:
When you go to http://www.domain_name.com:8888 or https://www.domain_name.com:4433 you get content. When you curl localhost:8888 or https://localhost:4433 from the server where Docker is running, you get content too.
And now my issue:
When I open a browser and type domain_name.com, www.domain_name.com or https://www.domain_name.com, nothing happens. Even when I curl this domain from my local machine I get a timeout.
I have searched for "nginx proxy https to localhost" but nothing works for me.
I have a solution!
When I set network_mode: "host" in docker-compose.yml for my proxy, I assumed the ports: entries would still work, but they don't.
The proxy now runs in my host's network, so it uses the local ports directly and the ports: entries in docker-compose.yml are ignored. That means I had to manually open ports 80 and 443 on my server.
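A minimal sketch of what the proxy service ends up looking like under that assumption, based on the compose file above (the ports: mappings are dropped because host networking ignores them):
version: '2'
services:
  proxy:
    image: nginx:1.11.13
    # With host networking nginx binds ports 80/443 directly on the host,
    # so no ports: mappings are declared here (they would be ignored anyway).
    network_mode: "host"
    volumes:
      - ./config/nginx/conf.d:/etc/nginx/conf.d
      - /path_to_ssl_cert:/path_to_ssl_cert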

Starting NGINX Load Balancer with Docker Compose

I have been following a tutorial on how to build a load-balanced application using docker-compose and nginx. However, my load balancer/coordinator doesn't work. What I am trying to do is have nginx accept requests and split them between three workers, with nginx and the three workers each running in separate Docker containers, but I get the error below. My compilerwebservice_worker does work correctly: I can see all three workers in docker ps, and I can reach them with wget on the localhost port they are listening on.
The error message
$ docker-compose up
Starting compilerwebservice_worker1_1
Starting compilerwebservice_worker3_1
Starting compilerwebservice_worker2_1
Starting compilerwebservice_nginx_1
Attaching to compilerwebservice_worker1_1, compilerwebservice_worker3_1, compilerwebservice_worker2_1, compilerwebservice_nginx_1
nginx_1 | 2016/09/06 07:17:47 [emerg] 1#1: host not found in upstream "compiler-web-service" in /etc/nginx/nginx.conf:14
nginx_1 | nginx: [emerg] host not found in upstream "compiler-web-service" in /etc/nginx/nginx.conf:14
compilerwebservice_nginx_1 exited with code 1
NGINX Config
http {
    upstream compiler {
        least_conn;
        server worker1:4567;
        server worker2:4567;
        server worker3:4567;
    }

    server {
        listen 4567;

        location / {
            proxy_pass http://compiler;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }
    }
}
My Docker-compose file
nginx:
  build: ./src/main/nginx
  links:
    - worker2:worker2
    - worker3:worker3
    - worker1:worker1
  ports:
    - "4567:4567"
worker1:
  build: .
  ports:
    - "4567"
worker2:
  build: .
  ports:
    - "4567"
worker3:
  build: .
  ports:
    - "4567"
NGINX Dockerfile
# Set nginx base image
FROM nginx
# Copy custom configuration file from the current directory
COPY nginx.conf /etc/nginx/nginx.conf
In the demo below there are 2 Express apps running on ports 1111 and 2222 on localhost. When calling http://localhost:8080, nginx automatically picks one of the two ports; here nginx uses round robin.
index.js file
const express = require('express');
const app = express();

const appId = process.env.APPID;
const PORTNUMBER = appId;

app.get('/', (req, res) => {
    res.send({
        message: `Welcome to ${appId} home page running on port ${appId}`
    });
});

app.listen(PORTNUMBER, () => {
    console.log(`APP STARTED ON PORT ${appId} for APP id ${appId}`);
});
Express app Dockerfile
FROM node:12.13.0-alpine
WORKDIR /EXPRESSAPP
COPY ./API/package.json /EXPRESSAPP
RUN npm install
COPY ./API/. /EXPRESSAPP
CMD ["npm", "start"]
nginx file
http {
    upstream backend {
        server 127.0.0.1:1111;
        server 127.0.0.1:2222;
    }

    server {
        listen 8080 default_server;
        listen [::]:8080 default_server;
        # listen [::]:8080 default_server ipv6only=on;
        server_name localhost;

        proxy_read_timeout 5m;

        location / {
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_cache_bypass $http_upgrade;
            proxy_pass http://backend;
        }
    }
}
nginx Dockerfile
FROM nginx
COPY nginx.conf /etc/nginx/nginx.conf
docker-compose.yml file
version: '3'
services:
  myapp1:
    restart: always
    container_name: myapp1
    build: ./APICONTAINER
    environment:
      - APPID=1111
    ports:
      - "1111:1111"
    network_mode: host
  myapp2:
    restart: always
    container_name: myapp2
    build: ./APICONTAINER
    environment:
      - APPID=2222
    ports:
      - "2222:2222"
    network_mode: host
  myproxy:
    container_name: myproxy
    build: ./NGINXCONTAINER
    ports:
      - "127.0.0.1:8080:8080"
    depends_on:
      - myapp1
      - myapp2
    network_mode: host
To spin up the containers, use the command below:
sudo docker-compose down && sudo docker-compose up --build --force-recreate
Go to the link below to see the round-robin nginx load balancer:
http://localhost:8080
See the reference GitHub link for the full code.
I needed to rebuild with docker-compose build between configuration changes. Because I had changed the name of the app, the error message referred to a server whose name was the original one I had chosen, not the one I kept changing.
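A small sketch of that rebuild step, assuming the same compose project as above:
# Rebuild the images so the updated nginx.conf is copied into the nginx image,
# then recreate the containers from the fresh build.
docker-compose build
docker-compose up --force-recreate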
