Use environment vars of container in command key of docker-compose - nginx

I have two services in my docker-compose.yml: docker-gen and nginx. Docker-gen is linked to nginx. In order for docker-gen to work I must pass the actual name or hash of nginx container so that docker-gen can restart nginx on change.
When I link docker-gen to nginx, a set of environment variables appears in the docker-gen container, the most interesting to me is NGINX_NAME – it's the name of nginx container.
So it should be straightforward to put $NGINX_NAME in the command field of the service and get it to work. But $NGINX_NAME doesn't expand when I start the services. Looking through the docker-gen logs I see these lines:
2015/04/24 12:54:27 Sending container '$NGINX_NAME' signal '1'
2015/04/24 12:54:27 Error sending signal to container: No such container: $NGINX_NAME
My docker-compose.yml is as follows:
nginx:
  image: nginx:latest
  ports:
    - '80:80'
  volumes:
    - /tmp/nginx:/etc/nginx/conf.d
dockergen:
  image: jwilder/docker-gen:latest
  links:
    - nginx
  volumes_from:
    - nginx
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock
    - ./extra:/etc/docker-gen/templates
    - /etc/nginx/certs
  tty: true
  command: >
    -watch
    -only-exposed
    -notify-sighup "$NGINX_NAME"
    /etc/docker-gen/templates/nginx.tmpl
    /etc/nginx/conf.d/default.conf
Is there a way to put environment variable placeholder in command so it could expand to actual value when the container is up?

I've added an entrypoint setting to the dockergen service and changed the command a bit:
dockergen:
  image: jwilder/docker-gen:latest
  links:
    - nginx
  volumes_from:
    - nginx
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock
    - ./extra:/etc/docker-gen/templates
    - /etc/nginx/certs
  tty: true
  entrypoint: ["/bin/sh", "-c"]
  command: >
    "
    docker-gen
    -watch
    -only-exposed
    -notify-sighup $(echo $NGINX_NAME | tail -c +2)
    /etc/docker-gen/templates/nginx.tmpl
    /etc/nginx/conf.d/default.conf
    "
Container names injected by Docker linking start with '/', but when I send SIGHUP to a container name with the leading slash, the signal doesn't arrive:
$ docker kill -s SIGHUP /myproject_dockergen_1/nginx
If I strip it, though, nginx restarts as it should. So the $(echo $NGINX_NAME | tail -c +2) part is there to remove the first character from $NGINX_NAME.
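As a lighter-weight alternative to piping through tail, POSIX parameter expansion can strip the leading slash directly inside the same /bin/sh -c entrypoint; a small sketch (the container name /myproject_nginx_1 is a hypothetical example of what the link injects):

```shell
# ${VAR#pattern} removes the shortest match of 'pattern' from the front of $VAR,
# so ${NGINX_NAME#/} drops the leading slash Docker prepends to linked container names.
NGINX_NAME="/myproject_nginx_1"   # hypothetical value injected by the nginx link
stripped="${NGINX_NAME#/}"
echo "$stripped"
```

This avoids spawning two extra processes per notification and works in any POSIX shell.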

Related

How to configure nginx proxy inside a docker-compose?

I want to configure client_max_body_size of a dockerized nginx proxy directly inside the docker-compose.yml.
I found this resource: https://github.com/nginx-proxy/nginx-proxy/issues/690, but it seems none of the solutions fit.
Here is my docker-compose.yml (version: '3.7'):
web-proxy:
  container_name: test-proxy
  image: nginxproxy/nginx-proxy
  depends_on:
    - web
  volumes:
    - /nginx/conf:/etc/nginx/conf.d
    - /nginx/vhost:/etc/nginx/vhost.d
    - /nginx/html:/usr/share/nginx/html
    - /nginx/dhparam:/etc/nginx/dhparam
    - /nginx/certs:/etc/nginx/certs:ro
    - /var/run/docker.sock:/tmp/docker.sock:ro
  ports:
    - 80:80
    - 443:443
An alternative would be to add an additional configuration file to the volume that is already mounted, but I would prefer configuring it entirely inside the docker-compose file.
This is from the image docs, under Custom Nginx Configuration:
Proxy-wide
To add settings on a proxy-wide basis, add your configuration file under /etc/nginx/conf.d using a name ending in .conf.
This can be done in a derived image by creating the file in a RUN command or by COPYing the file into conf.d:
FROM nginxproxy/nginx-proxy
RUN { \
echo 'server_tokens off;'; \
echo 'client_max_body_size 100m;'; \
} > /etc/nginx/conf.d/my_proxy.conf
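Since the compose file in the question already mounts /nginx/conf over /etc/nginx/conf.d, the same proxy-wide settings can also be dropped in as a file on the host instead of building a derived image (my_proxy.conf is just an example name):

```nginx
# /nginx/conf/my_proxy.conf on the host appears as
# /etc/nginx/conf.d/my_proxy.conf inside the container
server_tokens off;
client_max_body_size 100m;
```

Note that nginx only reads conf.d at startup or on reload, so after adding the file restart the proxy container or send nginx a reload signal.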

Docker + jwilder/nginx-proxy + jwilder/docker-gen + jrcs/letsencrypt-nginx-proxy-companion + php:7-fpm + wordpress:fpm

I know I'm really close on this, but I can't get the last part working. I'm almost positive it has to do with the WordPress container and the PHP container needing to share the same directory, so PHP can process the files in that directory. I have been working on this for a week and a half and I'm breaking down, asking for help.
I can get most of this working in different combinations, but not this particular combination.
What I'm trying to do is have separate containers for: MySQL (sharing the database), nginx-proxy, WordPress using Nginx (each site with its own WordPress container), and PHP 7.
I've gotten this working with WordPress using Apache, but that's not what I want.
I have done a lot of reading and a lot of testing, and did find that I was originally missing VIRTUAL_PROTO=fastcgi. I see the configs that populate in the nginx-proxy container... they seem right, but I think my confusion has to do with the paths and the virtual environments.
I created the network with docker network create nginx-proxy.
These are the files and directories I have:
/home/tj/db/docker-compose.yml
/home/tj/mysite.com
/home/tj/mysite.com/.env
/home/tj/nginx-proxy/docker-compose.yml
/home/tj/db/docker-compose.yml
version: "3"
services:
  db:
    image: mysql:5.7
    volumes:
      - ../_shared/db:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
    container_name: db
    networks:
      - nginx-proxy
networks:
  nginx-proxy:
    external:
      name: nginx-proxy
/home/tj/mysite.com/.env
MYSQL_SERVER_CONTAINER=db
VIRTUAL_HOST=mysite.com
DBIP="$(docker inspect ${MYSQL_SERVER_CONTAINER} | grep -i 'ipaddress' | grep -oE '((1?[0-9][0-9]?|2[0-4][0-9]|25[0-5])\.){3}(1?[0-9][0-9]?|2[0-4][0-9]|25[0-5])')"
EMAIL_ADDRESS=tj@mysite.com
WORDPRESS_DB_NAME=wordpress
WORDPRESS_DB_USER=wordpress
/home/tj/mysite.com/docker-compose.yml
version: "3"
services:
  wordpress:
    image: wordpress:fpm
    expose:
      - 80
    restart: always
    environment:
      VIRTUAL_HOST: ${VIRTUAL_HOST}
      LETSENCRYPT_HOST: ${VIRTUAL_HOST}
      LETSENCRYPT_EMAIL: ${EMAIL_ADDRESS}
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: ${WORDPRESS_DB_USER}
      WORDPRESS_DB_PASSWORD: ${WORDPRESS_DB_USER}
      WORDPRESS_DB_NAME: ${WORDPRESS_DB_NAME}
      VIRTUAL_PROTO: fastcgi
      VIRTUAL_PORT: 3030
      VIRTUAL_ROOT: /usr/share/nginx/html
    container_name: ${VIRTUAL_HOST}
    volumes:
      - ../nginx-proxy/html:/usr/share/nginx/html:rw
networks:
  default:
    external:
      name: nginx-proxy
/home/tj/nginx-proxy/docker-compose.yml
version: '3'
services:
  nginx:
    image: nginx:1.17.7
    container_name: nginx-proxy
    ports:
      - 80:80
      - 443:443
    volumes:
      - conf:/etc/nginx/conf.d:ro
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - certs:/etc/nginx/certs
    labels:
      - com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy=true
    restart: always
  dockergen:
    image: jwilder/docker-gen:0.7.3
    container_name: nginx-proxy-gen
    depends_on:
      - nginx
    command: -notify-sighup nginx-proxy -watch -wait 5s:30s /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf
    volumes:
      - conf:/etc/nginx/conf.d
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - certs:/etc/nginx/certs
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./nginx.tmpl:/etc/docker-gen/templates/nginx.tmpl:ro
    restart: always
  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: nginx-proxy-le
    depends_on:
      - nginx
      - dockergen
    environment:
      NGINX_PROXY_CONTAINER: nginx-proxy
      NGINX_DOCKER_GEN_CONTAINER: nginx-proxy-gen
    volumes:
      - conf:/etc/nginx/conf.d
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - certs:/etc/nginx/certs
      - /var/run/docker.sock:/var/run/docker.sock:ro
    restart: always
  php-fpm:
    image: php:7-fpm
    container_name: php
    environment:
      - VIRTUAL_HOST=docker.nevistechnology.com
      - VIRTUAL_ROOT=/usr/share/nginx/html
      - VIRTUAL_PORT=9000
      - VIRTUAL_PROTO=fastcgi
    restart: always
    ports:
      - 9000
    volumes:
      - ./html:/usr/share/nginx/html
volumes:
  conf:
  vhost:
  html:
  certs:
networks:
  default:
    external:
      name: nginx-proxy
Now, what I was able to get working is using "wordpress:latest" instead of "wordpress:fpm", but I don't want to run both Nginx and Apache; Apache uses a lot of memory, and I have all of my old configs and notes in Nginx, so I'd like to get this working.
I have some Dockerfile things I'm trying to figure out too - like
running commands, but let me see if you all can help me with this
first.
Another thing - this is more of a generic Linux issue, but over the years I've never been able to figure it out and I just default to using root, which I know is bad practice. So, I have my user "tj" which I created like:
sudo useradd tj
sudo usermod -aG sudo tj
sudo usermod -aG docker tj
sudo usermod -aG www-data tj
sudo chmod -R g+w /home/tj
For Docker, I started working out of my /home/tj directory. When I try to edit a file or upload, I get a permission issue. But if I change directories and files from www-data:www-data to tj:www-data or tj:tj, it works for me in SFTP or the terminal, but then there are web issues, like when I try to upload - www-data has permission issues on the WordPress side.
So, I know I'm late to the party here, but I might have some answers, so here goes nothing:
I eventually got that running and more, all in a swarm, but I had to tweak the proxy quite a bit:
https://github.com/PiTiLeZarD/nginx-proxy
Something I had to wrap my mind around was that the fpm image runs PHP only! Any assets or files have to be bound as a volume in the nginx-proxy and configured so that nginx serves the files, not fpm. In my tweaked nginx-proxy I added something about this in the templates:
{{ if (exists (printf "/etc/nginx/static_files/%s" $host)) }}
root {{ printf "/etc/nginx/static_files/%s" $host }};
{{ end }}
vhost.d/default: I added a section:
location / {
    location ~ \.php$ {
        try_files /dev/null @upstream;
    }
    try_files /assets/$uri $uri @upstream;
}
I tweaked everything to have a LOCATION_PATH=@upstream environment variable (I have many services, so some still use the default "/").
vhost.d/default_location, I added the fastcgi config there:
index index.php;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
There's a lot to configure and keep track of, but hang on, it's possible to make it work.
Regarding the root/user issues, I run my php-fpm images as www-data:www-data (remapped to 1000:1000); I also make sure that 1000:1000 maps to the admin user on the host, so I don't end up with permission issues all the time.
fpm's www.conf has a user/group section where you can specify www-data/www-data, and I build my images with a trick for users:
# add a NOPASSWD to sudo for www-data
RUN printf 'www-data ALL=(ALL:ALL) NOPASSWD: ALL' | tee /etc/sudoers.d/www-data
# bind www-data user and group from 33:33 to 1000:1000
RUN rmdir /var/www/html \
&& userdel -f www-data \
&& if getent group www-data ; then groupdel www-data; fi \
&& groupadd -g 1000 www-data \
&& useradd -l -u 1000 -g www-data -G sudo www-data \
&& install -d -m 0755 -o www-data -g www-data /home/www-data \
&& find / -group 33 -user 33 2>/dev/null || echo "/var/www" | xargs chown -R 1000:1000
This step takes care of switching everything from root:root to www-data:www-data. I also install sudo in my Docker images; I used not to, but I had issues that were really hard to fix without it.
Not sure if any of this helps; it's a little disjointed, but then again, running this requires a lot of moving pieces to fit perfectly together ;)

Docker - how do i restart nginx to apply custom config?

I am trying to configure a LEMP dev environment with Docker and am having trouble with nginx because I can't seem to restart nginx once it has its new configuration.
docker-compose.yml:
version: '3'
services:
  nginx:
    image: nginx
    ports:
      - '8080:80'
    volumes:
      - ./nginx/log:/var/log/nginx
      - ./nginx/config/default:/etc/nginx/sites-available/default
      - ../wordpress:/var/www/wordpress
  php:
    image: php:fpm
    ports:
      - 9000:9000
  mysql:
    image: mysql
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: secret
    volumes:
      - ./mysql/data:/var/lib/mysql
I have a custom nginx config that replaces /etc/nginx/sites-available/default, and in a normal Ubuntu environment I would run service nginx restart to pull in the new config.
However, if I try to do that in this Docker environment, the nginx container exits with code 1.
docker-compose exec nginx sh
service nginx restart
-exit with code 1-
How would I be able use nginx with a custom /etc/nginx/sites-available/default file?
Basically you can reload nginx configuration by invoking this command:
docker exec <nginx-container-name-or-id> nginx -s reload
To reload nginx with docker-compose specifically (rather than restart the whole container, causing downtime):
docker-compose exec nginx nginx -s reload
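One safeguard worth adding: nginx keeps serving with the old configuration if the new one fails to parse, so it helps to validate before reloading. A small sketch (the service name nginx is taken from the compose file above):

```shell
# Validate the config inside the container first; only signal a reload if it parses.
# On a bad config, nginx -t fails and the running workers keep the old config.
reload_nginx() {
  docker-compose exec nginx nginx -t \
    && docker-compose exec nginx nginx -s reload
}
```

Call it as plain `reload_nginx` after editing the mounted config file.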
Docker containers should run a single application in the foreground. When the process launched as PID 1 inside the container exits, so does the container (similar to how killing PID 1 on a Linux server shuts down that machine). That process isn't managed by the OS service command.
The normal way to load a new configuration into a container is to restart the container. Since you're using docker-compose, that would be docker-compose restart nginx. Note that if this config were part of your image, you would need to rebuild and redeploy a new container, but since you're using a volume, that isn't necessary.

Docker Compose: Mock external services

I have the following situation:
My application consists of a single web service that calls an
external API (say, some SaaS service, ElasticSearch or so). For non-unit-testing purposes we want to control the external service and later also inject faults. The application and the "mocked" API are dockerized and
now I want to use docker-compose to spin all containers up.
Because the application has several addresses hardcoded (e.g. the hostnames of external services), I cannot change them and need to work around that.
The service container makes a call to http://external-service.com/getsomestuff.
My idea was to use some feature provided by Docker to reroute all outgoing traffic to http://external-service.com/getsomestuff to the mock container without changing the URL.
My docker-compose.yaml looks like:
version: '2'
services:
  service:
    build: ./service
    container_name: my-service1
    ports:
      - "5000:5000"
    command: /bin/sh -c "python3 app.py"
  api:
    build: ./api-mock
    container_name: my-api-mock
    ports:
      - "5001:5000"
    command: /bin/sh -c "python3 app.py"
Finally, I have a driver that just does the following:
curl -XGET localhost:5000/
curl -XPUT localhost:5001/configure?delay=10
curl -XGET localhost:5000/
where the second curl just sets the delay in the mock to 10 seconds.
There are several options I have considered:
Using iptables-fu (would require modifying Dockerfiles to install it)
Using docker networks (this is really unclear to me)
Is there any simple option to achieve what I want?
Edit:
For clarity, here is the relevant part of the service code:
import requests

@app.route('/')
def do_stuff():
    r = requests.get('http://external-service.com/getsomestuff')
    return process_api_response(r.text)
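For reference, the api-mock container itself can stay tiny. A sketch using only the Python standard library, with a PUT /configure?delay=N endpoint matching the driver's second curl; the JSON body returned by GET is made up for illustration:

```python
import http.server
import time
import urllib.parse

DELAY = {"seconds": 0}  # mutable so the handler can update it between requests

class MockAPI(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Simulate a slow upstream by sleeping before answering.
        time.sleep(DELAY["seconds"])
        body = b'{"stuff": "stubbed"}'  # hypothetical payload
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def do_PUT(self):
        # PUT /configure?delay=10 updates the injected latency.
        query = urllib.parse.urlparse(self.path).query
        params = urllib.parse.parse_qs(query)
        DELAY["seconds"] = int(params.get("delay", ["0"])[0])
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):
        # Silence per-request logging.
        pass

# In the container this would run on the port the compose file exposes:
# http.server.HTTPServer(("", 5000), MockAPI).serve_forever()
```

Fault injection (errors, timeouts) can be added the same way: extra PUT parameters that flip flags the GET handler consults.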
Docker runs an internal DNS server for user-defined networks. Any unknown host lookups are forwarded to your normal DNS servers.
Version 2+ compose files automatically create a network for compose to use, so there are a number of ways to control the hostnames it resolves.
The simplest way is to name your container with the hostname:
version: "2"
services:
  external-service.com:
    image: busybox
    command: sleep 100
  ping:
    image: busybox
    command: ping external-service.com
    depends_on:
      - external-service.com
If you want to keep the container names, you can use links:
version: "2"
services:
  api:
    image: busybox
    command: sleep 100
  ping:
    image: busybox
    links:
      - api:external-service.com
    command: ping external-service.com
    depends_on:
      - api
Or network aliases
version: "2"
services:
  api:
    image: busybox
    command: sleep 100
    networks:
      pingnet:
        aliases:
          - external-service.com
  ping:
    image: busybox
    command: ping external-service.com
    depends_on:
      - api
    networks:
      - pingnet
networks:
  pingnet:
I'm not entirely clear what the problem is you're trying to solve, but if you're trying to make external-service.com inside the container direct traffic to your "mock" service, I think you should be able to do that using the extra_hosts directive in your docker-compose.yml file. For example, if I have this:
version: "2"
services:
  example:
    image: myimage
    extra_hosts:
      - google.com:172.23.254.1
That will result in /etc/hosts in the container containing:
172.23.254.1 google.com
And attempts to access http://google.com will hit my web server at 172.23.254.1.
I was able to solve this with links. Is there a way to do it with networks in docker-compose?
version: '3'
services:
  MOCK:
    image: api-mock:latest
    container_name: api-mock-container
    ports:
      - "8081:80"
  api:
    image: my-service1:latest
    links:
      - MOCK:external-service.com

Docker Nginx Reverse Proxy

I need to run multiple WordPress containers linked all to a single MySQL container + Nginx Reverse Proxy to easy handle VIRTUAL_HOSTS.
Here is what I'm trying to do (with only one WP for now):
Wordpress (hub.docker.com/_/wordpress/)
Mysql (hub.docker.com/_/mysql/)
Nginx Reverse Proxy (github.com/jwilder/nginx-proxy)
I'm working on OSX and this is what I run on terminal:
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
docker run --name some-mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=root -d mysql:latest
docker run -e VIRTUAL_HOST=wordpress.mylocal.com --name wordpress --link some-mysql:mysql -p 8080:80 -d wordpress
My Docker host is running on 192.168.99.100, and that brings me to a 503 nginx/1.9.12 error, of course.
Then 192.168.99.100:8080 brings me to WordPress as expected.
But http://wordpress.mylocal.com is not working; it's not redirecting to 192.168.99.100:8080 and I don't understand what I'm doing wrong.
Any suggestions? Thanks!
First of all, I recommend you start using docker-compose; running your containers and finding errors will become much easier.
As for your case, it seems that you should be using VIRTUAL_PORT to direct traffic to your container on 8080.
Secondly, you cannot have two containers (the nginx-proxy + wordpress) mapped to the same port on the host.
Good luck!
One:
Use docker-compose.
vi docker-compose.yaml
Two:
Paste this into the file:
version: '3'
services:
  nginx-proxy:
    image: budry/jwilder-nginx-proxy-arm:0.6.0
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - certs:/etc/nginx/certs:ro
      - confd:/etc/nginx/conf.d
      - vhostd:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
    labels:
      - com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy
    environment:
      - DEFAULT_HOST=example2.com
    networks:
      - frontend
  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion:stable
    restart: always
    volumes:
      - certs:/etc/nginx/certs:rw
      - confd:/etc/nginx/conf.d
      - vhostd:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      # - LETSENCRYPT_SINGLE_DOMAIN_CERTS=true
      # - LETSENCRYPT_RESTART_CONTAINER=true
      - DEFAULT_EMAIL=example@mail.com
    networks:
      - frontend
    depends_on:
      - nginx-proxy
  #########################################################
  # ..The rest of the containers go here..
  #########################################################
networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge
volumes:
  certs:
  html:
  vhostd:
  confd:
  dbdata:
  maildata:
  mailstate:
  maillogs:
Three:
Configure as many containers as you need and set them up to your liking. Here are some examples:
mysql (MariaDB):
mysql:
  image: jsurf/rpi-mariadb:latest # MariaDB 10 #82eec62cce90
  restart: always
  environment:
    MYSQL_DATABASE: nameExample
    MYSQL_USER: user
    MYSQL_PASSWORD: password
    MYSQL_RANDOM_ROOT_PASSWORD: passwordRoot
    MYSQL_ROOT_HOST: '%'
  ports:
    - "3306:3306"
  networks:
    - backend
  command: --init-file /data/application/init.sql
  volumes:
    - /path_where_it_will_be_saved_on_your_machine/init.sql:/data/application/init.sql
    - /physical_route/data:/var/lib/mysql
nginx-php7.4:
nginx_php:
  image: tobi312/php:7.4-fpm-nginx-alpine-arm
  hostname: example1.com
  restart: always
  expose:
    - "80"
  volumes:
    - /physical_route:/var/www/html:rw
  environment:
    - VIRTUAL_HOST=example1.com
    - LETSENCRYPT_HOST=example1.com
    - LETSENCRYPT_EMAIL=example1@mail.com
    - ENABLE_NGINX_REMOTEIP=1
    - PHP_ERRORS=1
  depends_on:
    - nginx-proxy
    - letsencrypt
    - mysql
  networks:
    - frontend
    - backend
WordPress:
wordpress:
  image: wordpress
  restart: always
  ports:
    - 8080:80
  environment:
    - WORDPRESS_DB_HOST=db
    - WORDPRESS_DB_USER=exampleuser
    - WORDPRESS_DB_PASSWORD=examplepass
    - WORDPRESS_DB_NAME=exampledb
    - VIRTUAL_HOST=example2.com
    - LETSENCRYPT_HOST=example2.com
    - LETSENCRYPT_EMAIL=example2@mail.com
  volumes:
    - wordpress:/var/www/html # This must be added under the volumes key of step two
You can find many examples and documentation here.
Be careful: some of my examples use images built for the Raspberry Pi, and they will very likely cause problems on amd64 and 32-bit Intel systems. You should search for and select the images that suit your CPU and operating system.
Four:
Run this command to launch all the containers:
docker-compose up -d --remove-orphans
(--remove-orphans removes containers that are no longer in your docker-compose file.)
Five:
When you have the above steps done, come back and ask for whatever you want; we will be happy to read your docker-compose file, which is much easier than dying while trying to read a long list of commands.
For your case, I think the best solution is an nginx reverse proxy that listens on the Docker socket and can pass requests to the different virtual hosts.
For example, let's say you have 3 WPs:
WP1 -> port binding 81:80
WP2 -> port binding 82:80
WP3 -> port binding 83:80
For each one of them you should set a Docker environment variable with the virtual host name you want to use:
WP1 -> foo.bar1
WP2 -> foo.bar2
WP3 -> foo.bar3
After doing so you should have 3 different WPs with ports exposed on 81, 82 and 83.
Now download and start this nginx Docker container (reverse proxy) here.
It listens on the Docker socket and picks up all traffic coming to your machine on port 80; from the environment variable you provided when starting each WP container, it can detect which request should go to which WP instance.
This is an example of how you should run one of your WP Docker images:
> docker run -e VIRTUAL_HOST=foo.bar1.com -p 81:80 -d wordpress:tag
In this case, the virtual host it matches against is the Host coming from the HTTP request.
