docker running nginx as proxy to another webserver (on pi) - nginx

I am kind of in over my head with my current small project.
(although it should not be that hard)
I am trying to run multiple webpages using Docker on my Pi (for testing purposes), which should all be reachable using the Pi's IP.
I currently run a minimal lighttpd (based on the resin/rpi-raspbian image):
docker run -d -v <testconfig>:/etc/lighttpd -p <pi-ip>:8080:80 <image name>
(this server is reachable using the browser on pi and on other computers in the network)
For nginx I run another container with a simple config
(starting with http://nginx.org/en/docs/beginners_guide.html),
containing a webpage and images to test the container config.
This container is reachable using <pi-ip>:80.
Then I tried to add a proxy to the locations
(I played around, so now there are 3 locations for the same redirect):
location /prox1/ {
    proxy_pass http://<pi-ip>:8080;
}
location /prox2/ {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://<pi-ip>:8080;
}
location /prox3/ {
    fastcgi_pass <pi-ip>:8080;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param QUERY_STRING $query_string;
}
Versions 1 & 2 give a 404 (I tried adding a rewrite, but then nginx redirected onto itself because the /prox1/ prefix was cut off).
Version 3 yields a timeout.
Now I am not sure whether I still have to dig on the nginx side, or whether I have to add a connection between the containers on the Docker side.
PS: the Pi is running ArchForArm (using Xfce as desktop) because I couldn't find docker-compose in the Raspbian repository.
-- EDIT ---:
I currently start everything manually (so no compose file).
the LIGHTTPD is started with:
docker run -d --name mylighttpd -v <testconfig>:/etc/lighttpd -p <pi-ip>:8080:80 <image name>
If I understood it correctly, it is now listening on the local network (in the range of <pi-ip>) on port 8080, which maps to the test web server's port 80. (I have added --name so it is easier to stop it.)
the nginx is started like:
docker run --name mynginx --rm -p <pi-ip>:80:80 -v <config>:/data <image name>
Port 8080 was added via EXPOSE in the Dockerfile.
I currently think I misunderstood how two containers on the same machine connect to each other, and that I should add a virtual network; I am currently trying to find some docs on that.
PS: I am not using the already existing nginx-zeroconf from the repo because it tells me it can't read the installed Docker version. (And the only example for using it with compose also needs another container which seems unavailable for my architecture.)
-- edit2 --:
For the simple proxy_pass the problem could be the URL.
I added a deeper folder "prox1" in the "www" folder, containing an index file, and that one is shown when I request the page.
It seems like <pi-ip>:80/prox1/
is redirected to <pi-ip>:8080/prox1/
But if I try to rewrite it (inside "location /prox1/"), it seems to first delete the prox1 part and then decides the request now belongs to the original location
<pi-ip>:80/
PS: I am aware that it might be a better design to place the system inside a network other than "bridge" and only expose the proxy, but I am trying to learn this stuff in small steps.
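For comparison, this is the rewrite variant I was aiming for, as an untested sketch (same placeholder <pi-ip> as above); the break flag stops nginx from re-matching the rewritten URI against the locations, which is the behaviour I describe above:
location /prox1/ {
    # strip the /prox1/ prefix before handing the request to the backend
    rewrite ^/prox1/(.*)$ /$1 break;
    proxy_pass http://<pi-ip>:8080;
}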
-- edit3 --:
Trying compose now, but it seems I have encountered another part I don't understand (which is why I wanted to get it working without compose first).
I try to follow http://docs.master.dockerproject.org/compose/compose-file/#ipv4-address-ipv6-address
networks:
  backbone:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.16.238.0/16
          gateway: 172.16.238.1
services:
  nginx:
    image: <nginx-image>
    ports:
      - "80:80"
    volumes:
      - <config>:/data
    depends_on:
      - lighttpd
    networks:
      backbone:
        ipv4_address: 172.16.238.2
  lighttpd:
    image: <lighttpd-image>
    ports:
      - "8080:80"
    volumes:
      - <testconfig>:/etc/lighttpd
    networks:
      backbone:
        ipv4_address: 172.16.238.3
Now I have to find out why I get "User specified IP address is supported only when connecting to networks with user configured subnets"; I assume the top-level networks block creates a network called "backbone".
-- edit4 --:
It seems IP blocks have to be written differently from all the docs I have seen; the correct form is:
...
networks:
  backbone:
    ipv4_address: 172.16.0.2/16
...
Now I have to figure out how to drop that part of the URL, and I am good to go.
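(A hedged, untested sketch of one way to drop that part: giving proxy_pass a URI makes nginx replace the matched location prefix, so /prox1/index.html would reach the backend as /index.html. The trailing slash is what triggers the replacement:
location /prox1/ {
    proxy_pass http://lighttpd:80/;
}
)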

The core problem seems to have been the missing nginx parameter proxy_redirect, which I found while rambling through the docs. The current nginx.conf is:
(/data/www contains an index.html with a relative link to some images in /data/images)
worker_processes auto;
events {
    worker_connections 1024;
}
http {
    server {
        listen 80;
        location / {
            root /data/www;
        }
        location /images/ {
            root /data;
        }
        location /prox0/ {
            proxy_pass http://lighttpd:80;
            proxy_redirect default;
            proxy_buffering off;
        }
    }
}
Manually starting on the local IP seems to work, but docker-compose is easier.
(If compose is not used, replace lighttpd:80 with the IP & port used for starting the server.)
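For example, without compose the prox0 location would look roughly like this (untested sketch, using the mapping from the first edit):
location /prox0/ {
    proxy_pass http://<pi-ip>:8080;
    proxy_redirect default;
    proxy_buffering off;
}
The compose file itself: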
networks:
  backbone:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.16.238.0/16
          gateway: 172.16.238.1
services:
  nginx:
    image: <nginx-image>
    ports:
      - "80:80"
    volumes:
      - <config>:/data
    depends_on:
      - lighttpd
    networks:
      backbone:
        ipv4_address: 172.16.0.2
  lighttpd:
    image: <lighttpd-image>
    ports:
      - "8080:80"
    volumes:
      - <testconfig>:/etc/lighttpd
    networks:
      backbone:
        ipv4_address: 172.16.0.3

Related

How can I use an nginx proxy to scrape Prometheus metrics using a custom HTTP header?

I need to scrape Prometheus metrics from an endpoint that requires a custom HTTP header, x-service-token.
Prometheus does not include an option to scrape using a custom HTTP header, only the Authorization header.
One user shared a workaround that uses nginx as a forward proxy:
Just in case others come looking here for how to do this (there are at least 2 other issues on it), I've got a little nginx config that works. I'm not an nginx expert so don't mock! ;)
I run it in docker. A forward proxy config file for nginx listening on 9191:
http {
    map $request $targetport {
        ~^GET\ http://.*:([^/]*)/ "$1";
    }
    server {
        listen 0.0.0.0:9191;
        location / {
            proxy_redirect off;
            proxy_set_header NEW-HEADER-HERE "VALUE";
            proxy_pass $scheme://$host:$targetport$request_uri;
        }
    }
}
events {
}
Run the transparent forward proxy:
docker run -d --name=nginx --net=host -v /path/to/nginx.conf:/etc/nginx/nginx.conf:ro nginx
In your Prometheus job (or the global section) add the proxy_url key:
- job_name: 'somejob'
  metrics_path: '/something/here'
  proxy_url: 'http://proxyip:9191'
  scheme: 'http'
  static_configs:
    - targets:
        - '10.1.3.31:2004'
        - '10.1.3.31:2005'
Originally posted by @sra in https://github.com/prometheus/prometheus/issues/1724#issuecomment-282418757
I have tried configuring this, but without 'host' networking and using host.docker.internal instead of localhost; however, nginx is not able to connect:
nginx | 172.26.0.4 - - [31/Oct/2022:16:07:38 +0000] "GET http://host.docker.internal:8080/actuator/prometheus HTTP/1.1" 502 157 "-" "Prometheus/2.39.1"
This workaround also requires saving the API key in a file, which is not ideal, as this could accidentally be committed to a repo.
Prometheus locked the GitHub issue, so users are not able to ask for help or follow up questions.
There are two other StackOverflow questions on this topic, but the answers do not attempt to provide workarounds:
Prometheus scrape /metric with custom header
Adding custom header in HTTP request of prometheus
I've managed to get this working with an nginx proxy that runs on the same Docker network as the Prometheus instance.
.
├── config/
│   ├── nginx.conf
│   └── prometheus.yml
└── docker-compose.yml
Prometheus is configured to scrape the Prometheus metrics from nginx.
URLs
I have 3 environments, 'local', 'dev', and 'prod'.
The Prometheus metrics are available at the following URLs. Note that dev and prod require HTTPS and an API key, but local does not.
local - http://localhost:8080/metrics/prometheus
dev - https://dev.my-app.website.com/metrics/prometheus
prod - https://prod.my-app.website.com/metrics/prometheus
nginx config
The nginx server has been configured to forward the requests to each environment based on the port.
:9191 - local
:9192 - dev
:9193 - prod
I have manually defined the URLs for each environment in each nginx server { } block (except for 'localhost'), because nginx or Prometheus doesn't seem to like resolving the correct URL otherwise. It's a mystery.
http {
    resolver 127.0.0.11 ipv6=off; # use the docker DNS, to resolve host.docker.internal
    map $request $target_port {
        ~^GET\ http://.*:([^/]*)/ "$1";
    }
    # local
    server {
        listen 9191;
        location / {
            # no need for API key on local env
            # proxy_set_header x-api-key ...;
            proxy_set_header Host localhost;
            proxy_pass http://$host:$target_port$request_uri;
        }
    }
    # dev
    server {
        listen 9192;
        location / {
            proxy_set_header x-api-key DEV_API_KEY_123_ABC;
            proxy_set_header Host dev.my-app.website.com;
            proxy_pass https://dev.my-app.website.com:443$request_uri;
        }
    }
    # prod
    server {
        listen 9193;
        location / {
            proxy_set_header x-api-key PROD_API_KEY_999_XYZ;
            proxy_set_header Host prod.my-app.website.com;
            proxy_pass https://prod.my-app.website.com:443$request_uri;
        }
    }
}
events {
}
Prometheus config
Prometheus is configured to use the nginx container as a proxy URL.
Because nginx and Prometheus are running in the same Docker network, I can specify nginx by the container name.
global:
  scrape_interval: 15s
  evaluation_interval: 15s

scrape_configs:
  - job_name: "my-backend-local"
    proxy_url: "http://nginx:9191"
    metrics_path: "/monitor/prometheus"
    scrape_interval: 2s
    static_configs:
      - targets: [ "host.docker.internal:6060" ]
        labels:
          application: "my-backend"
          env: "local"
  - job_name: "my-backend-dev"
    proxy_url: "http://nginx:9192"
    metrics_path: "/monitor/prometheus"
    scrape_interval: 2s
    static_configs:
      - targets: [ "dev.my-app.website.com" ]
        labels:
          application: "my-backend"
          env: "dev"
  - job_name: "my-backend-prod"
    proxy_url: "http://nginx:9193"
    metrics_path: "/monitor/prometheus"
    scrape_interval: 2s
    static_configs:
      - targets: [ "prod.my-app.website.com" ]
        labels:
          application: "my-backend"
          env: "prod"
Docker Compose config
Finally, the Prometheus and nginx Docker instances are configured to read the ./config/prometheus.yml and ./config/nginx.conf files.
version: "3.9"
services:
prometheus:
image: prom/prometheus:v2.39.1
container_name: prometheus
volumes:
- "./config/prometheus.yml:/etc/prometheus/prometheus.yml"
- "./data/prometheus:/prometheus"
command:
- "--config.file=/etc/prometheus/prometheus.yml"
- "--storage.tsdb.path=/prometheus"
- "--web.console.libraries=/etc/prometheus/console_libraries"
- "--web.console.templates=/etc/prometheus/consoles"
- "--web.enable-lifecycle"
ports:
- "9090:9090"
backend-proxy:
image: nginx
container_name: nginx
restart: unless-stopped
volumes:
- "./config/nginx.conf:/etc/nginx/nginx.conf:ro"
I brought you a complete setup with an app, a forward proxy, and Prometheus in docker-compose. It's quite long, so I'm putting it after the explanation. Please note that, just as with your solution, it does not work with host.docker.internal, as it seems that nginx does not use /etc/hosts when resolving hosts: https://github.com/NginxProxyManager/nginx-proxy-manager/issues/259#issuecomment-1125197753 . All other hosts should work fine, and you can use the host's IP address instead of host.docker.internal if you need to.
You can run this by saving the contents into docker-compose.yml and running docker-compose up from the same directory. After 10 seconds or so you should see in logs how requests go through the proxy to the app and the app will show you the headers that it got. You can then proceed to Prometheus UI (localhost:9090) and query for metric the_answer_is to further check that everything is in place.
The proxy works as follows:
the target param (e.g. GET /?target=example.com) is the host or IP where the actual metrics are, this is the only mandatory parameter;
if there is a scheme param, use it as the protocol, default - "http";
if there is a host param, use it as Host header, default - the value of target param;
if there is a port param, use it as a TCP port, default is "" (determined by the scheme);
if there is a secret_token param, it gets injected into X-Custom-Header, default is "";
I recommend testing the proxy with curl, like this:
curl 'localhost/something/here?target=someapp&port=8000&secret_token=foo'
Now goes the docker-compose.yml:
version: '3'

networks:
  app:     # a network where the app is
  no_app:  # a network where there is no app so that prometheus can't reach it directly

services:
  # A basic http server that exposes one metric and prints some headers along the way
  someapp:
    image: tiangolo/uwsgi-nginx-flask:python3.8-alpine
    entrypoint: ["/usr/local/bin/python", "-c"]
    networks:
      - app
    ports:
      - 8000:8000
    command:
      - |-
        import json
        from flask import Flask, request
        app = Flask(__name__)
        import logging
        log = logging.getLogger('werkzeug')
        @app.route('/', defaults={'path': ''})
        @app.route('/<path:path>')
        def print_request(path):
            log.info(f"{'-'*78}\n{str(request.headers).strip()}")
            if request.path == "/something/here":
                return "the_answer_is 42\n"
            else:
                return "OK\n"
        app.run("0.0.0.0", port=8000)

  # A forward proxy for Prometheus
  forward_proxy:
    image: nginx
    networks:
      - app
      - no_app
    ports:
      - 80:80
    entrypoint: ["/bin/bash", "-c"]
    environment:
      # Note that single "$" is considered by docker-compose as its variable,
      # double "$$" is just an escape here
      config: |
        # Default scheme
        map $$arg_scheme $$target_scheme {
            ~.+      $$arg_scheme;
            default  http;
        }
        # Default host (header) from target
        map $$arg_host $$target_host {
            ~.+      $$arg_host;
            default  $$arg_target;
        }
        # This is to add ":" between target ip or host and port
        map $$arg_port $$has_port {
            ~.+      ":";
            default  "";
        }
        # use docker internal DNS to resolve "someapp"
        resolver 127.0.0.11 ipv6=off;
        server {
            listen 80;
            location / {
                proxy_set_header Host $$target_host;
                proxy_set_header X-Custom-Header $$arg_secret_token;
                proxy_pass $$target_scheme://$$arg_target$$has_port$$arg_port$$request_uri;
            }
        }
    command:
      - |-
        set -euo pipefail
        echo -e "$$config" >/etc/nginx/conf.d/default.conf
        echo -e "==== NGINX Config ====\n$$(cat /etc/nginx/conf.d/default.conf)"
        nginx -g 'daemon off;'

  prometheus:
    image: prom/prometheus:v2.29.2
    entrypoint: ["/bin/sh", "-c"]
    ports:
      - 9090:9090
    command:
      - |
        echo -e "$$config" > /etc/prometheus/prometheus.yml
        echo -e "==== Prometheus Config ====\n$$(cat /etc/prometheus/prometheus.yml)"
        /bin/prometheus --config.file=/etc/prometheus/prometheus.yml \
          --storage.tsdb.path=/prometheus \
          --web.console.libraries=/usr/share/prometheus/console_libraries \
          --web.console.templates=/usr/share/prometheus/consoles
    networks:
      - no_app
    environment:
      config: |
        scrape_configs:
          - job_name: 'somejob'
            scrape_interval: 10s
            metrics_path: '/something/here'
            # set params for the proxy ($$arg_NAME)
            params:
              port: ["8000"]
              secret_token: ["foo"]  # beware, this will be visible in Prometheus UI under "config" section
            static_configs:
              - targets:
                  - 'someapp'
            # Here we replace actual target with the address and port of our forward_proxy
            # If you're familiar with it, this is exactly the same as for blackbox exporter
            relabel_configs:
              - source_labels: [__address__]
                target_label: __param_target
              - source_labels: [__param_target]
                target_label: instance
              - target_label: __address__
                replacement: forward_proxy:80  # The forward proxy address and port

Wordpress Docker behind Nginx Reverse Proxy

I have used this site and its threads to solve problems for years, but now I have to ask a question.
I have tried to install the WordPress Docker image on my Vserver machine. It pretty much works, but only over HTTP.
To install the WordPress Docker I used the tutorial from the following link.
Additionally, I added --restart always to the docker run -e ... command.
Then I installed nginx 1.12.xxx to have a reverse proxy. But SSL didn't work. After that, I tried to install a newer version 1.15.xx from the nginx repository, with no better results.
I installed a certificate with Let's Encrypt and Certbot.
After that WordPress was running and the wp-admin.php was accessible.
But I can't get SSL/HTTPS working. I have already tried many configurations, and even my workmates couldn't find a solution.
I hope you can get one :)
I tried to configure wp-config.php to enable HTTPS with commands like "$_SERVER['HTTPS'] = 'on';" and others, with no working (rather destructive) effects.
I also tried to enable "X-Forwarded-Proto $scheme;" and "FastCGI", which didn't work either. I tried many variations of them.
I tried some SSL plugins for WordPress, but none of them worked.
https://www.bilder-upload.eu/upload/a0eb85-1554884646.png
https://www.bilder-upload.eu/upload/028dc9-1554883515.png
I hope it's a small mistake and you can help me easily.
First Install Docker on Ubuntu
Either you go with a Docker provider like Bluemix or you get a virtual machine from Softlayer or any other provider. In my case I have chosen a virtual server, so I had to install Docker on Ubuntu LTS, which is really easy. Basically you add a new repository entry to your apt sources and install the latest stable Docker packages. There is also a script available on get.docker.com, but I don't feel comfortable executing a shell script right from the net with root access. But it's up to you.
wget -qO- https://get.docker.com/ | sh
Docker on Linux does not include docker-compose, unlike the Docker installation on, for example, macOS. Installing docker-compose is straightforward; the script can be downloaded from GitHub here: https://github.com/docker/compose/releases.
Docker-compose
Docker-compose takes care of a Docker setup containing more than one Docker container, including networking and also basic monitoring. The following compose file builds and starts all Docker containers with nginx, MySQL and WordPress. It also exports the volumes to the host file system for easy backup and persistence across container rebuilds, and monitors whether the containers are up and running.
version: '3'

services:
  db:
    image: mysql:latest
    volumes:
      - ./db:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: easytoguess
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: eveneasier

  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    restart: always
    volumes:
      - ./wordpress:/var/www/html/wp-content
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: eveneasier
      WORDPRESS_DB_NAME: wordpress

  nginx:
    depends_on:
      - wordpress
    restart: always
    build:
      context: .
      dockerfile: Dockerfile-nginx
    ports:
      - "80:80"
MySQL is the first container we bring up, with environment variables for the database like username, password and database name. The volume mount ./db:/var/lib/mysql saves the database files outside the Docker container, so you can delete the container, start a new one and still have the same database up and running. Point this where you want to have it, in this case in "db" under the same directory. Also make sure you come up with decent passwords.
The second container is WordPress. Same here with the host folder (./wordpress:/var/www/html/wp-content). Furthermore, make sure you have the same user, password and db name configured as in the MySQL container configuration.
The last one is nginx as the internet-facing container. You expose port 80 here. While you just specify an image for the other two, for this one you configure a Dockerfile and a build context to customize nginx for the network setup. If you only want to host static files you can add these via volume mounts, but in our case we need to configure nginx itself, so we need a customized Dockerfile as described below.
Dockerfile for nginx setup
FROM nginx:latest
COPY default.conf /etc/nginx/conf.d/default.conf
VOLUME /var/log/nginx/log/
EXPOSE 80
This Dockerfile inherits everything from the latest nginx and copies the default.conf file into it. See the next section for how to set up the config file.
Nginx config file
server {
    listen 80;
    listen [::]:80;
    server_name www.23-5.eu ansi.23-5.eu;
    access_log /var/log/nginx/log/unsecure.access.log main;

    location / {
        proxy_read_timeout 90;
        proxy_connect_timeout 90;
        proxy_redirect off;
        proxy_pass http://wordpress;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
    }
}
The two listen directives configure the ports we want to listen on, one for IPv4 and one for IPv6. Important is the proxy configuration in the location block: proxy_pass forwards all calls to "/" (so without a path in the URL) to the server wordpress, and since we used docker-compose, Docker makes that address resolvable via its internal DNS server. The proxy_set_header lines rewrite the HTTP headers in order to map everything to the external URL; otherwise we would end up with auto-generated links pointing to http://wordpress.
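Since the original question is about HTTPS, here is a hedged sketch of how this server block is commonly extended for TLS termination; the certificate paths are placeholders and not part of the tutorial:
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name www.23-5.eu ansi.23-5.eu;
    # placeholder paths for the Let's Encrypt certificate and key
    ssl_certificate     /etc/letsencrypt/live/example.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.org/privkey.pem;
    location / {
        proxy_redirect off;
        proxy_pass http://wordpress;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }
}
The nginx service would also need to publish port 443 in the compose file, and WordPress itself has to be told to trust X-Forwarded-Proto (for example via the $_SERVER check mentioned in the question) so that it generates https links.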
Start the System
If everything is configured and the docker-compose.yml, default.conf, Dockerfile-nginx and the folders db and wordpress are in the same folder, we can start everything from this folder with:
docker-compose up --build -d
The parameter “-d” starts the setup in the background (daemon). For the very first run I would recommend using it without the “-d” parameter to see all debug messages.

docker-compose scale with sticky sessions

I have a webserver that requires a websocket connection in production. I deploy it using docker-compose with nginx as a proxy.
So my compose file looks like this:
version: '2'
services:
  app:
    restart: always
  nginx:
    restart: always
    ports:
      - "80:80"
Now if I scale the "app" service to multiple instances, docker-compose will perform round robin on each call to the internal DNS name "app".
Is there a way to tell the docker-compose load balancer to apply sticky sessions?
Alternatively, is there a way to solve it using nginx?
A possible solution that I don't like: multiple definitions of app:
version: '2'
services:
  app1:
    restart: always
  app2:
    restart: always
  nginx:
    restart: always
    ports:
      - "80:80"
(And then in the nginx config file I can define sticky sessions between app1 and app2.)
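For illustration, the manual nginx config I have in mind would be roughly this (untested sketch, assuming the apps listen on port 80 inside their containers):
upstream app_upstream {
    ip_hash;            # sticky sessions keyed on the client IP
    server app1:80;
    server app2:80;
}
server {
    listen 80;
    location / {
        proxy_pass http://app_upstream;
    }
}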
Best result I got from searching:
https://github.com/docker/dockercloud-haproxy
But this requires me to add another service (or maybe replace nginx?), and the docs there are pretty poor about sticky sessions.
I wish Docker would just allow configuring it with a simple line in the compose file.
Thanks!
Take a look at jwilder/nginx-proxy. This image provides an nginx reverse proxy that listens for containers that define the VIRTUAL_HOST variable and automatically updates its configuration on container creation and removal. tpcwang's fork allows you to use the IP_HASH directive on a container level to enable sticky sessions.
Consider the following Compose file:
nginx:
  image: tpcwang/nginx-proxy
  ports:
    - "80:80"
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro
app:
  image: tutum/hello-world
  environment:
    - VIRTUAL_HOST=<your_ip_or_domain_name>
    - USE_IP_HASH=1
Let's get it up and running and then scale app to three instances:
docker-compose up -d
docker-compose scale app=3
If you check the nginx configuration file you'll see something like this:
docker-compose exec nginx cat /etc/nginx/conf.d/default.conf
...
upstream 172.16.102.132 {
    ip_hash;
    # desktop_app_3
    server 172.17.0.7:80;
    # desktop_app_2
    server 172.17.0.6:80;
    # desktop_app_1
    server 172.17.0.4:80;
}
server {
    server_name 172.16.102.132;
    listen 80;
    access_log /var/log/nginx/access.log vhost;
    location / {
        proxy_pass http://172.16.102.132;
    }
}
The nginx container has automatically detected the three instances and has updated its configuration to route requests to all of them using sticky sessions.
If we try to access the app we can see that it always reports the same hostname on each refresh. If we remove the USE_IP_HASH environment variable we'll see that the hostname actually changes; that is, the nginx proxy is using round robin to balance our requests.

how to reach another container from a dockerised nginx

I have nginx in a docker container, and a nodejs webapp in another docker container.
The nodejs server is reachable from the host server on port 8080.
The nginx docker container is listening to port 80 (will do the certificate later, first this base must be working).
And now I want a subdomain to be forwarded to this nodejs app on port 8080, let's say app1.example.com.
From outside I can reach the app by the server IP (or hostname) and port 8080, but not on app1.example.com. It does work on app1.example.com:8080 (I have opened up port 8080 on the host server).
I get a bad gateway nginx message when approaching app1.example.com. So I do reach the nginx container, but how do I get back to the host server to proxy pass the request to port 8080 of the host server (and not port 8080 of the nginx container)? I am looking for the reverse EXPOSE syntax.
The main problem is, of course, that if I use the IP and port 127.0.0.1:8080 it will try it inside the nginx container...
So how do I let the nginx container route back to the host's 127.0.0.1:8080?
I have tried 0.0.0.0 and defining an upstream, I have been googling a lot and have tried a lot of configurations... but have not yet found a working one.
Edit
Just found out that this Docker command might help:
sudo docker network inspect bridge
This shows the IP address used inside the containers (in my case 172.17.0.2), but I am not sure this address stays the same every time Docker restarts (e.g. after a server reboot).
Edit
Following alkaline's answer I now have this (but it's still not working):
my docker-compose.yml file:
version: "2"
services:
nginx:
container_name: nginx
image: nginx_img
build: ../docker-nginx-1/
ports:
- "80:80"
networks:
- backbone
nodejs:
container_name: nodejs
image: merites/docker-simple-node-server
build: ../docker-simple-node-server/
networks:
- backbone
expose:
- 8080
networks:
backbone:
driver: bridge
and my nginx (skipped the include in the conf.d folder for simplicity):
worker_processes 1;
events { worker_connections 1024; }
http {
    sendfile on;
    upstream upsrv {
        server nodejs:8080;
    }
    server {
        listen 80;
        server_name app1.example.com;
        location / {
            proxy_pass http://upsrv;
        }
    }
}
edit 31-08-2016
This might be the problem: the network name is not backbone, but is prefixed with the folder from which the service was started:
sudo docker network ls
outputs:
NETWORK ID          NAME                       DRIVER    SCOPE
1167c2b0ec31        bridge                     bridge    local
d06ffaf26fe2        dockerservices1_backbone   bridge    local
5e4ec13d790a        host                       host      local
7d1f8c32f259        none                       null      local
edit 01-09-2016
It might be caused by the way I have my nginx Docker container set up?
this is the docker file I used:
############################################################
# Dockerfile to build Nginx Installed Containers
# Based on Ubuntu
############################################################
# Set the base image to Ubuntu
FROM ubuntu
# File Author / Maintainer
MAINTAINER Maintainer Name
# Install Nginx
# Add application repository URL to the default sources
# RUN echo "deb http://archive.ubuntu.com/ubuntu/ raring main universe" >> /etc/apt/sources.list
# Update the repository
RUN apt-get update
# Install necessary tools
RUN apt-get install -y nano wget dialog net-tools
# Download and Install Nginx
RUN apt-get install -y nginx
# Remove the default Nginx configuration file
RUN rm -v /etc/nginx/nginx.conf
# Copy a configuration file from the current directory
ADD nginx.conf /etc/nginx/
# Append "daemon off;" to the beginning of the configuration
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
# Expose ports
EXPOSE 80
# Set the default command to execute
# when creating a new container
CMD service nginx start
My final solution, 1st Sept. 2016
I used this compose file now:
version: "2"
services:
nginx:
image: nginx
container_name: nginx
volumes:
- ./nginx-configs:/etc/nginx/conf.d
ports:
- "80:80"
networks:
- backbone
nodejs:
container_name: nodejs
image: merites/docker-simple-node-server
build: ../docker-simple-node-server/
networks:
- backbone
expose:
- 8080
networks:
backbone:
driver: bridge
In the project folder, from which you run docker-compose up -d, I added a folder named nginx-configs. This folder will 'override' all the files in the nginx container's /etc/nginx/conf.d directory.
Therefore I copied the default.conf from the nginx container before I added this volume mount, using:
docker exec -t -i container_name /bin/bash
and then cat /etc/nginx/conf.d/default.conf
and added the same default.conf to the project folder with the nginx configs.
Besides the default I added app1.conf with this content:
upstream upsrv1 {
    server nodejs:8080;
}
server {
    listen 80;
    server_name app1.example.com;
    location / {
        proxy_pass http://upsrv1;
    }
}
This way, I can easily add a second app... third and so on.
So the basics are working now.
Here's a best practice. Only expose port 80 outside of the host. The nodejs app can be in a private network only accessible through nginx.
version: "2"
services:
nginx:
...
ports:
- "80:80"
networks:
- backbone
nodejs:
...
networks:
- backbone
expose:
- 8080
networks:
backbone:
driver: bridge
In your nginx.conf file, the upstream servers can be listed as nodejs:8080. The Docker daemon will resolve it to the correct internal IP.
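A minimal sketch of that upstream block (matching the service name from the compose file above):
upstream upsrv {
    # "nodejs" is the compose service name; Docker's embedded DNS resolves it
    server nodejs:8080;
}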

Docker Networking - nginx: [emerg] host not found in upstream

I have recently started migrating to Docker 1.9 and Docker-Compose 1.5's networking features to replace using links.
So far, with links there were no problems with nginx connecting to my php5-fpm fastcgi server located in a different container in the same group via docker-compose. Now, though, when I run docker-compose --x-networking up, my php-fpm, mongo and nginx containers boot up; however, nginx quits straight away with [emerg] 1#1: host not found in upstream "waapi_php_1" in /etc/nginx/conf.d/default.conf:16
However, if I run the docker-compose command again while the php and mongo containers are running (nginx exited), nginx starts and works fine from then on.
This is my docker-compose.yml file:
nginx:
  image: nginx
  ports:
    - "42080:80"
  volumes:
    - ./config/docker/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
php:
  build: config/docker/php
  ports:
    - "42022:22"
  volumes:
    - .:/var/www/html
  env_file: config/docker/php/.env.development
mongo:
  image: mongo
  ports:
    - "42017:27017"
  volumes:
    - /var/mongodata/wa-api:/data/db
  command: --smallfiles
This is my default.conf for nginx:
server {
    listen 80;
    root /var/www/test;
    error_log /dev/stdout debug;
    access_log /dev/stdout;

    location / {
        # try to serve file directly, fallback to app.php
        try_files $uri /index.php$is_args$args;
    }

    location ~ ^/.+\.php(/|$) {
        # Referencing the php service host (Docker)
        fastcgi_pass waapi_php_1:9000;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        # We must reference the document_root of the external server ourselves here.
        fastcgi_param SCRIPT_FILENAME /var/www/html/public$fastcgi_script_name;
        fastcgi_param HTTPS off;
    }
}
How can I get nginx to work with only a single docker-compose call?
This can be solved with the mentioned depends_on directive since it's implemented now (2016):
version: '2'
services:
  nginx:
    image: nginx
    ports:
      - "42080:80"
    volumes:
      - ./config/docker/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
    depends_on:
      - php
  php:
    build: config/docker/php
    ports:
      - "42022:22"
    volumes:
      - .:/var/www/html
    env_file: config/docker/php/.env.development
    depends_on:
      - mongo
  mongo:
    image: mongo
    ports:
      - "42017:27017"
    volumes:
      - /var/mongodata/wa-api:/data/db
    command: --smallfiles
Successfully tested with:
$ docker-compose version
docker-compose version 1.8.0, build f3628c7
Find more details in the documentation.
There is also a very interesting article dedicated to this topic: Controlling startup order in Compose
There is a possibility to use "volumes_from" as a workaround until the depends_on feature (discussed below) is introduced. All you have to do is change your docker-compose file as below:
nginx:
  image: nginx
  ports:
    - "42080:80"
  volumes:
    - ./config/docker/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
  volumes_from:
    - php
php:
  build: config/docker/php
  ports:
    - "42022:22"
  volumes:
    - .:/var/www/html
  env_file: config/docker/php/.env.development
mongo:
  image: mongo
  ports:
    - "42017:27017"
  volumes:
    - /var/mongodata/wa-api:/data/db
  command: --smallfiles
One big caveat in the above approach is that the volumes of php are exposed to nginx, which is not desired. But at the moment this is one docker specific workaround that could be used.
depends_on feature
This probably would be a futuristic answer, because the functionality is not yet implemented in Docker (as of 1.9).
There is a proposal to introduce "depends_on" in the new networking feature introduced by Docker, but there is a long-running debate about it at https://github.com/docker/compose/issues/374. Hence, once it is implemented, depends_on could be used to order the container start-up, but at the moment you would have to resort to one of the following:
make nginx retry until the php server is up - I would prefer this one
use the volumes_from workaround as described above - I would avoid using this, because of the volume leakage into unnecessary containers.
If you are still lost after reading the last comment: I have reached another solution.
The main problem is the way that you named the services.
In this case, if in your docker-compose.yml the service for php is called "api" or something like that, you must ensure that in the file nginx.conf the line that begins with fastcgi_pass has the same name as the php service, i.e. fastcgi_pass api:9000;
Let's say the php service name is php_service; then the code will be:
In the file docker-compose.yml
php_service:
  build:
    dockerfile: ./docker/php/Dockerfile
In the file nginx.conf
location ~ \.php$ {
    fastcgi_pass php_service:9000;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
}
You can set the max_fails and fail_timeout directives of nginx to indicate that nginx should retry a number of connection requests to the container before failing on upstream server unavailability.
You can tune these two numbers to match your infrastructure and the speed at which the whole setup comes up. You can read more details in the health checks section of the URL below:
http://nginx.org/en/docs/http/load_balancing.html
Following is the excerpt from http://nginx.org/en/docs/http/ngx_http_upstream_module.html#server
max_fails=number
sets the number of unsuccessful attempts to communicate with the
server that should happen in the duration set by the fail_timeout
parameter to consider the server unavailable for a duration also set
by the fail_timeout parameter. By default, the number of unsuccessful
attempts is set to 1. The zero value disables the accounting of
attempts. What is considered an unsuccessful attempt is defined by the
proxy_next_upstream, fastcgi_next_upstream, uwsgi_next_upstream,
scgi_next_upstream, and memcached_next_upstream directives.
fail_timeout=time
sets the time during which the specified number of unsuccessful
attempts to communicate with the server should happen to consider the
server unavailable; and the period of time the server will be
considered unavailable. By default, the parameter is set to 10
seconds.
To be precise, your modified nginx config file should be as follows (this config assumes that all the containers are up within 25 seconds at least; if not, please change the fail_timeout or max_fails in the upstream section below):
Note: I didn't test the script myself, so you could give it a try!
upstream phpupstream {
    server waapi_php_1:9000 fail_timeout=5s max_fails=5;
}
server {
    listen 80;
    root /var/www/test;
    error_log /dev/stdout debug;
    access_log /dev/stdout;

    location / {
        # try to serve file directly, fallback to app.php
        try_files $uri /index.php$is_args$args;
    }

    location ~ ^/.+\.php(/|$) {
        # Referencing the php service host (Docker)
        fastcgi_pass phpupstream;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        # We must reference the document_root of the external server ourselves here.
        fastcgi_param SCRIPT_FILENAME /var/www/html/public$fastcgi_script_name;
        fastcgi_param HTTPS off;
    }
}
Also, as per the following note from Docker (https://github.com/docker/docker.github.io/blob/master/compose/networking.md#update-containers), it is evident that the retry logic for checking the health of the other containers is not Docker's responsibility; rather, the containers should do the health check themselves.
Updating containers
If you make a configuration change to a service and run docker-compose
up to update it, the old container will be removed and the new one
will join the network under a different IP address but the same name.
Running containers will be able to look up that name and connect to
the new address, but the old address will stop working.
If any containers have connections open to the old container, they
will be closed. It is a container's responsibility to detect this
condition, look up the name again and reconnect.
My problem was that I forgot to specify the network alias in docker-compose.yml for php-fpm:
networks:
  - u-online
It works well now!
version: "3"
services:
php-fpm:
image: php:7.2-fpm
container_name: php-fpm
volumes:
- ./src:/var/www/basic/public_html
ports:
- 9000:9000
networks:
- u-online
nginx:
image: nginx:1.19.2
container_name: nginx
depends_on:
- php-fpm
ports:
- "80:8080"
- "443:443"
volumes:
- ./docker/data/etc/nginx/conf.d/default.conf:/etc/nginx/conf.d/default.conf
- ./docker/data/etc/nginx/nginx.conf:/etc/nginx/nginx.conf
- ./src:/var/www/basic/public_html
networks:
- u-online
#Docker Networks
networks:
u-online:
driver: bridge
I believe nginx doesn't take into account the Docker resolver (127.0.0.11), so please, can you try adding:
resolver 127.0.0.11;
in your nginx configuration file?
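A hedged sketch of what that could look like, combined with a variable so nginx defers the DNS lookup to request time (the service name php and port 9000 are placeholders):
resolver 127.0.0.11 valid=10s ipv6=off;
server {
    listen 80;
    location ~ \.php$ {
        # a variable forces nginx to resolve the name at request time instead of at startup
        set $php_upstream php:9000;
        fastcgi_pass $php_upstream;
        include fastcgi_params;
    }
}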
I had the same problem because there were two networks defined in my docker-compose.yml: one backend and one frontend.
When I changed that to run the containers on the same default network, everything started working fine.
I found a solution for services that may be disabled during local development: use a variable for the upstream host. This prevents nginx from exiting at startup and it works once the service becomes available.
server {
    location ^~ /api/ {
        # other config entries omitted for brevity
        set $upstream http://api.awesome.com:9000;
        # nginx will now start even if the host is not reachable
        fastcgi_pass $upstream;
        fastcgi_index index.php;
    }
}
source: https://sandro-keil.de/blog/let-nginx-start-if-upstream-host-is-unavailable-or-down/
Had the same problem and solved it. Please add the following lines to the docker-compose.yml nginx section:
links:
  - php:waapi_php_1
The host used in the fastcgi_pass section of the nginx config should be linked inside the docker-compose.yml nginx configuration.
At first glance, I missed that my "web" service didn't actually start, which is why nginx couldn't find any host:
web_1 | python3: can't open file '/var/www/app/app/app.py': [Errno 2] No such file or directory
web_1 exited with code 2
nginx_1 | [emerg] 1#1: host not found in upstream "web:4044" in /etc/nginx/conf.d/nginx.conf:2
Two things worth mentioning:
using the same network bridge
using links to add host resolution
My example:
version: '3'
services:
  mysql:
    image: mysql:5.7
    restart: always
    container_name: mysql
    volumes:
      - ./mysql-data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: tima#123
    network_mode: bridge
  ghost:
    image: ghost:2
    restart: always
    container_name: ghost
    depends_on:
      - mysql
    links:
      - mysql
    environment:
      database__client: mysql
      database__connection__host: mysql
      database__connection__user: root
      database__connection__password: xxxxxxxxx
      database__connection__database: ghost
      url: https://www.itsfun.tk
    volumes:
      - ./ghost-data:/var/lib/ghost/content
    network_mode: bridge
  nginx:
    image: nginx
    restart: always
    container_name: nginx
    depends_on:
      - ghost
    links:
      - ghost
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./nginx/conf.d:/etc/nginx/conf.d
      - ./nginx/letsencrypt:/etc/letsencrypt
    network_mode: bridge
If you don't specify a special network bridge, all of them will use the same default one.
Add the links section to your nginx container configuration.
You have to make the php container visible to the nginx container.
nginx:
  image: nginx
  ports:
    - "42080:80"
  volumes:
    - ./config/docker/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
  links:
    - php:waapi_php_1
With links there is an order of container startup being enforced. Without links the containers can start in any order (or really all at once).
I think the old setup could have hit the same issue, if the waapi_php_1 container was slow to startup.
I think to get it working, you could create an nginx entrypoint script that polls and waits for the php container to be started and ready.
I'm not sure if nginx has any way to retry the connection to the upstream automatically, but if it does, that would be a better option.
You have to use something like docker-gen to dynamically update nginx configuration when your backend is up.
See:
https://hub.docker.com/r/jwilder/docker-gen/
https://github.com/jwilder/nginx-proxy
I believe Nginx+ (premium version) contains a resolve parameter too (http://nginx.org/en/docs/http/ngx_http_upstream_module.html#upstream)
Perhaps the best choice to avoid container linking issues is the Docker networking feature.
To make this work, Docker creates entries in /etc/hosts for each container, using the names assigned to each container.
With docker-compose --x-networking up the names are something like:
[docker_compose_folder]-[service]-[incremental_number]
To avoid depending on unexpected changes in these names, you should use the parameter
container_name
in your docker-compose.yml as follows:
php:
  container_name: waapi_php_1
  build: config/docker/php
  ports:
    - "42022:22"
  volumes:
    - .:/var/www/html
  env_file: config/docker/php/.env.development
Make sure that it is the same name assigned to this service in your nginx configuration file. I'm pretty sure there are better ways to do this, but it is a good approach to start with.
My Workaround (after much trial and error):
In order to get around this issue, I had to get the full name of the 'upstream' Docker container, found by running docker network inspect my-special-docker-network and getting the full name property of the upstream container as such:
"Containers": {
"39ad8199184f34585b556d7480dd47de965bc7b38ac03fc0746992f39afac338": {
"Name": "my_upstream_container_name_1_2478f2b3aca0",
Then I used this in the nginx my-network.local.conf file, in the proxy_pass property of the location block (note the addition of the GUID to the container name):
location / {
    proxy_pass http://my_upstream_container_name_1_2478f2b3aca0:3000;
As opposed to the previously working, but now broken:
location / {
    proxy_pass http://my_upstream_container_name_1:3000;
Most likely cause is a recent change to Docker Compose, in their default naming scheme for containers, as listed here.
This seems to be happening for me and my team at work, with latest versions of the Docker nginx image:
I've opened issues with them on the docker/compose GitHub here
This error appeared for me because my php-fpm image had cron enabled, and I have no idea why.
In my case it was nginx: [emerg] host not found in upstream as well, and I managed to solve it by adding a depends_on directive to the nginx service in the docker-compose.yml file.
(new to nginx)
In my case it was the wrong folder name.
For the config
upstream serv {
    server ex2_app_1:3000;
}
make sure the app folder is inside the ex2 folder:
ex2/app/...
