I have been following a tutorial on how to make a load-balanced application using docker-compose and nginx, but my load balancer/coordinator doesn't work. What I am trying to do is have nginx accept requests and split them between three workers, with nginx and the three workers each running in a separate docker container. However, I get the error below. My compilerwebservice_worker does work correctly: I can see all three workers in docker ps, and I can reach them with wget on the localhost port they are listening on.
The error message
$ docker-compose up
Starting compilerwebservice_worker1_1
Starting compilerwebservice_worker3_1
Starting compilerwebservice_worker2_1
Starting compilerwebservice_nginx_1
Attaching to compilerwebservice_worker1_1, compilerwebservice_worker3_1, compilerwebservice_worker2_1, compilerwebservice_nginx_1
nginx_1 | 2016/09/06 07:17:47 [emerg] 1#1: host not found in upstream "compiler-web-service" in /etc/nginx/nginx.conf:14
nginx_1 | nginx: [emerg] host not found in upstream "compiler-web-service" in /etc/nginx/nginx.conf:14
compilerwebservice_nginx_1 exited with code 1
NGINX Config
http {
    upstream compiler {
        least_conn;
        server worker1:4567;
        server worker2:4567;
        server worker3:4567;
    }

    server {
        listen 4567;

        location / {
            proxy_pass http://compiler;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }
    }
}
My Docker-compose file
nginx:
  build: ./src/main/nginx
  links:
    - worker2:worker2
    - worker3:worker3
    - worker1:worker1
  ports:
    - "4567:4567"
worker1:
  build: .
  ports:
    - "4567"
worker2:
  build: .
  ports:
    - "4567"
worker3:
  build: .
  ports:
    - "4567"
NGINX Docker file
# Set nginx base image
FROM nginx
# Copy custom configuration file from the current directory
COPY nginx.conf /etc/nginx/nginx.conf
In the demo below there are two Express apps running on ports 1111 and 2222 of localhost. Calling http://localhost:8080 should automatically route the request to one of the two ports; here nginx uses round robin.
index.js file
const express = require('express');

const app = express();
// The APPID environment variable doubles as the listening port (1111 or 2222).
const appId = process.env.APPID;
const PORTNUMBER = appId;

app.get('/', (req, res) => {
  res.send({
    message: `Welcome to ${appId} home page running on port ${appId}`
  });
});

app.listen(PORTNUMBER, () => {
  console.log(`APP STARTED ON PORT ${appId} for APP id ${appId}`);
});
express app docker file
FROM node:12.13.0-alpine
WORKDIR /EXPRESSAPP
COPY ./API/package.json /EXPRESSAPP
RUN npm install
COPY ./API/. /EXPRESSAPP
CMD ["npm", "start"]
nginx file
# nginx requires an events block when replacing the whole nginx.conf
events {
    worker_connections 1024;
}

http {
    upstream backend {
        server 127.0.0.1:1111;
        server 127.0.0.1:2222;
    }

    server {
        listen 8080 default_server;
        listen [::]:8080 default_server;
        # listen [::]:8080 default_server ipv6only=on;
        server_name localhost;
        proxy_read_timeout 5m;

        location / {
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_cache_bypass $http_upgrade;
            proxy_pass http://backend;
        }
    }
}
nginx Dockerfile
FROM nginx
COPY nginx.conf /etc/nginx/nginx.conf
docker-compose.yml file
version: '3'
services:
  myapp1:
    restart: always
    container_name: myapp1
    build: ./APICONTAINER
    environment:
      - APPID=1111
    ports:
      - "1111:1111"
    network_mode: host
  myapp2:
    restart: always
    container_name: myapp2
    build: ./APICONTAINER
    environment:
      - APPID=2222
    ports:
      - "2222:2222"
    network_mode: host
  myproxy:
    container_name: myproxy
    build: ./NGINXCONTAINER
    ports:
      - "127.0.0.1:8080:8080"
    depends_on:
      - myapp1
      - myapp2
    network_mode: host
To spin up the containers, use the command below:
sudo docker-compose down && sudo docker-compose up --build --force-recreate
Go to the link below to see the round-robin nginx load balancer:
http://localhost:8080
See the reference GitHub link for the full code.
I needed to rebuild with docker-compose build between configuration changes. Because I had changed the name of the app, the error message referred to a server whose name was the one I originally chose, not the one I had changed it to.
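A sketch of that rebuild cycle (assuming the compose file above; the flags clear the stale image and containers):

```shell
# Rebuild the images so the nginx.conf baked into the nginx image
# matches the renamed services, then recreate the containers.
docker-compose build
docker-compose up --force-recreate
```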
I have an ASP.NET Web API application in a docker container, and it works fine with the commands below:
docker build -t hello-aspnetcore3 -f Api.Dockerfile .
docker run -d -p 5000:5000 --name hello-aspnetcore3 hello-aspnetcore3
Browsing to http://localhost:5000/weatherforecast works perfectly.
Now I am trying to use Nginx as a reverse proxy running in another container; here are the nginx conf and container files.
nginx.conf
worker_processes 1;
events { worker_connections 1024; }
http {
    sendfile on;

    upstream web-api {
        server api:5000;
    }

    server {
        listen 80;
        server_name $hostname;

        location / {
            proxy_pass http://web-api;
            proxy_redirect off;
            proxy_http_version 1.1;
            proxy_cache_bypass $http_upgrade;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection keep-alive;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
}
Nginx.Dockerfile
FROM nginx:latest
COPY nginx.conf /etc/nginx/nginx.conf
and here is my docker compose file,
docker-compose.yml
version: "3.7"
services:
  reverseproxy:
    build:
      context: ./Nginx
      dockerfile: Nginx.Dockerfile
    ports:
      - "80:80"
    restart: always
  api:
    depends_on:
      - reverseproxy
    build:
      context: ./HelloAspNetCore3.Api
      dockerfile: Api.Dockerfile
    expose:
      - "5000"
    restart: always
Building and running the containers works fine with docker-compose build and docker-compose up -d.
But when I browse to http://localhost/weatherforecast, I get an "HTTP Error 404. The requested resource is not found." error. What's wrong here?
Note: when I browse using the host IP address, http://192.168.0.103/weatherforecast, it works fine.
Docker-Compose ps output is here....
I am trying to run a service called Grafana behind an Nginx web server, with both services run from a docker-compose file.
docker-compose.yml:
version: '3.1'
services:
  nginx:
    image: nginx
    ports: ['443:443', "80:80"]
    restart: always
    volumes:
      - ./etc/nginx.conf:/etc/nginx/nginx.conf:ro
      - /home/ec2-user/certs:/etc/ssl
  grafana:
    image: grafana/grafana
    restart: always
    ports: ["3000:3000"]
nginx.conf:
events {
    worker_connections 1024;
}

http {
    ssl_certificate /etc/ssl/cert.pem;
    ssl_certificate_key /etc/ssl/key.pem;

    server {
        listen 443 ssl;
        server_tokens off;

        location /grafana/ {
            rewrite /grafana/(.*) /$1 break;
            proxy_pass http://127.0.0.1:3000/;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_bind $server_addr;
        }
    }
}
The grafana service is running on port 3000.
My goal is to access this nginx server from outside (let's assume its public IP address is 1.1.1.1) at the address https://1.1.1.1/grafana. With the current configuration I get 502 Bad Gateway, and this error on the nginx side:
(111: Connection refused) while connecting to upstream, client: <<my-public-ip-here>>,
Your containers are running on two separate IP addresses in the docker network, usually in the 172.17.0.0/16 range by default.
By using a proxy pass like this in the nginx container:
proxy_pass http://127.0.0.1:3000/
You are essentially telling it to look for a process on port 3000 local to itself, because of the 127.0.0.1, right?
You need to point it in the direction of the Grafana container, try doing:
docker inspect <grafana ID> | grep IPAddress
Then set the proxy pass to that IP:
proxy_pass http://172.0.0.?:3000/
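Alternatively, since both services are defined in the same docker-compose file, they share the compose default network, where the service name resolves via built-in DNS; a sketch using the grafana service name from the compose file above:

```nginx
# Resolve the upstream by compose service name instead of a hard-coded IP
proxy_pass http://grafana:3000/;
```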
I've solved the same issue using something like @james suggested:
docker inspect <your inaccessible container id> | grep Gateway
Then use this IP address:
proxy_pass http://172.xx.0.1:3000/
I would like to set up an Nginx reverse proxy, which works fine, but if I set network_mode: "host" it stops working because nginx is unable to find the hostnames of the other docker containers. I have a web container and an nginx container.
I get the following error:
reverseproxy_1 | nginx: [emerg] host not found in upstream "web:80" in /etc/nginx/nginx.conf:10
My Nginx conf file is:
worker_processes 1;
events { worker_connections 1024; }
http {
    sendfile on;

    upstream docker-web {
        server web:80;
    }

    server {
        listen 8080;

        location / {
            proxy_pass http://docker-web;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
}
and my docker-compose.yml file is:
version: '2'
services:
  redis:
    image: "redis:alpine"
  web:
    depends_on:
      - redis
    build: .\app
    volumes:
      - .\app:/code
    restart: always
  reverseproxy:
    image: reverseproxy
    network_mode: "host"
    ports:
      - 8080:8080
    depends_on:
      - web
I need to set network_mode to host, otherwise the X-Forwarded-For header will be wrong.
I managed to get it working by using a Linux host instead of Windows which meant I didn't need to use network_mode: "host". I also had to change my Python code to
request.environ.get('HTTP_X_REAL_IP', request.remote_addr)
from
request.environ['REMOTE_ADDR']
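That header fallback can be sketched as a plain function over the WSGI environ dict (the name client_ip is illustrative, not from the original code):

```python
# Prefer the X-Real-IP header set by nginx (seen as HTTP_X_REAL_IP in the
# WSGI environ); fall back to the direct peer address when it is absent.
def client_ip(environ):
    return environ.get('HTTP_X_REAL_IP', environ.get('REMOTE_ADDR'))

# Behind the proxy the header is present, so the real client IP wins:
print(client_ip({'REMOTE_ADDR': '172.17.0.2', 'HTTP_X_REAL_IP': '203.0.113.7'}))  # 203.0.113.7
```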
I have tried various options (e.g. the expose, bridge, and networks options of docker-compose) but can't get nginx to connect to the upstream gunicorn running in a separate container; I am receiving a 502 Bad Gateway error from nginx. I am not sure what exactly I am missing. Below is my docker-compose file:
version: "3"
services:
  web:
    build: .
    container_name: web
    command: bash -c "/start_web.sh"
    restart: always
    depends_on:
      - worker
    ports:
      - "80:80"
      - "443:443"
  worker:
    build: .
    container_name: worker
    command: bash -c "/start_worker.sh"
    restart: always
    ports:
      - "8000:8000"
nginx conf:
upstream worker {
    server 127.0.0.1:8000;
}

server {
    listen 80 default_server;

    location / {
        proxy_http_version 1.1;
        proxy_buffering off;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Url-Scheme $scheme;
        proxy_redirect off;
        # Mitigate httpoxy attack
        proxy_set_header Proxy "";
        proxy_pass http://worker;
    }
}
Gunicorn config:
import multiprocessing
import os
bind = '127.0.0.1:8000'
default_workers = multiprocessing.cpu_count() * 2 + 1
workers = os.getenv('GUNICORN_WORKERS', os.getenv('WEB_CONCURRENCY', default_workers))
worker_class = 'tornado'
# This is to fix issues with compressor package: broken offline manifest for
# custom domain. It randomly breaks, I think because of global variable inside.
preload_app = True
timeout = 200
graceful_timeout = 60
max_requests = 250
max_requests_jitter = max_requests
accesslog = '/tmp/gunicorn_access.log'
errorlog = '/tmp/gunicorn_error.log'
Circus ini files:
web.ini
[watcher:nginx]
cmd = /usr/sbin/nginx
stop_signal = QUIT
worker.ini
[watcher:gunicorn]
cmd = /usr/local/bin/gunicorn test:app -c /etc/gunicorn/app.py
working_dir = /opt/app
copy_env = True
uid = www-data
The whole code is also available on GitHub in the docker_test repository, for ease of testing.
Gunicorn config:
bind = '127.0.0.1:8000'
This binds to the loopback interface (localhost only). Change it to 0.0.0.0 to bind to every available interface in the container, which makes gunicorn reachable from nginx.
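A minimal sketch of the corrected line in the Gunicorn config module:

```python
# Bind on all interfaces inside the container so other containers
# (here, nginx) can reach gunicorn over the docker network.
bind = '0.0.0.0:8000'
```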
Nginx config:
upstream worker {
    server 127.0.0.1:8000;
}
You need to change the loopback IP to the DNS name or IP of the worker container. I recommend creating a user-defined network, putting all related containers on that network, and addressing them by DNS name. You won't have internal DNS in the default bridge network, so the following nginx config won't work there; on a user-defined network it will:
upstream worker {
    server worker:8000;
}
I have an application with a couple of docker containers (nginx, db, php, ...) connected together with docker-compose. Now I want to use Jenkins to build this app in production. I am not sure how to connect the jenkins container with nginx and limit it to localhost only.
nginx.conf
upstream jenkins {
    server jenkins:8080;
}
sites-enabled/default.conf
server {
    listen 80;
    server_name jenkins.example.com;

    location / {
        proxy_pass http://jenkins;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
jenkins Dockerfile
FROM jenkins
ENV JENKINS_OPTS --httpListenAddress=172.17.0.1
docker-compose.yml
jenkins:
  ports:
    - "8080:8080"
nginx:
  links:
    - jenkins
  ports:
    - "80:80"
...
I get a 502 error. When I change --httpListenAddress to 0.0.0.0 it works, but then access is not limited to localhost. 172.17.0.1 is the docker gateway.
Remove the ports entry from jenkins. The ports entry is only needed to expose a container's port to the localhost.
To expose the port to another docker container, linking them is sufficient. In your nginx service's links: you have already mentioned jenkins, so you don't need the ports entry under jenkins.
...
jenkins:
  ...
nginx:
  links:
    - jenkins
  ports:
    - "80:80"
...
Try using this:
jenkins:
  ports:
    - "127.0.0.1:8080:8080"
nginx:
  links:
    - jenkins
  ports:
    - "80:80"