How to configure an nginx proxy inside docker-compose?

I want to configure client_max_body_size of a dockerized nginx proxy directly inside the docker-compose.yml.
I found this resource: https://github.com/nginx-proxy/nginx-proxy/issues/690, but it seems none of the solutions fit my case.
Here is my docker-compose.yml (version: '3.7'):
web-proxy:
  container_name: test-proxy
  image: nginxproxy/nginx-proxy
  depends_on:
    - web
  volumes:
    - /nginx/conf:/etc/nginx/conf.d
    - /nginx/vhost:/etc/nginx/vhost.d
    - /nginx/html:/usr/share/nginx/html
    - /nginx/dhparam:/etc/nginx/dhparam
    - /nginx/certs:/etc/nginx/certs:ro
    - /var/run/docker.sock:/tmp/docker.sock:ro
  ports:
    - 80:80
    - 443:443
An alternative would be to add an additional configuration file in a volume that is already mounted, but I would prefer to configure it entirely inside the docker-compose file.
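For reference, that alternative would look something like this on the host (the file name my_proxy.conf is just an example; the /nginx/conf bind mount above maps it to /etc/nginx/conf.d inside the container):

echo 'client_max_body_size 100m;' > /nginx/conf/my_proxy.conf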

This is from the image docs, under Custom Nginx Configuration:
Proxy-wide
To add settings on a proxy-wide basis, add your configuration file under /etc/nginx/conf.d using a name ending in .conf.
This can be done in a derived image by creating the file in a RUN command or by COPYing the file into conf.d:
FROM nginxproxy/nginx-proxy
RUN { \
echo 'server_tokens off;'; \
echo 'client_max_body_size 100m;'; \
} > /etc/nginx/conf.d/my_proxy.conf
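If you want everything driven from the compose file, one option (a sketch, assuming the Dockerfile above is saved in a ./proxy directory next to docker-compose.yml) is to let compose build that derived image:

web-proxy:
  container_name: test-proxy
  build: ./proxy   # directory containing the derived Dockerfile shown above (path is illustrative)
  depends_on:
    - web
  # volumes and ports stay exactly as in the original service definition

docker-compose up -d --build will then rebuild the proxy image whenever the config snippet changes.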

Related

Nginx not routing browser request to WSGI (Python server running)

I am running my Flask project with uWSGI behind nginx, but nginx is not routing the request to uWSGI when I hit localhost:80/.
My nginx.conf looks like this:
server {
    listen 80;
    # if running locally this would be localhost, but I was running on WSL so I put the machine IP
    server_name <your machine ip/domain>;
    location / {
        include uwsgi_params;
        # you may see suggestions to use .sock files or to prefix http:// or unix:, but none of those
        # worked for me; plain and simple, use your Python server's service name from docker-compose
        uwsgi_pass web_app:5000;
    }
}
My docker-compose.yml looks like this:
version: '3.7'
services:
  web_app:
    build: .
    container_name: kpi-dashboard
    ports:
      - 5000:5000
    depends_on:
      - db
  nginx:
    build: ./nginx
    container_name: nginx
    restart: always
    ports:
      - "80:80"
    depends_on:
      - web_app
  db:
    image: postgres:13-alpine
    container_name: postgresql
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=postgres
    ports:
      - 5432:5432
volumes:
  postgres_data:
The nginx Dockerfile:
FROM nginx
# it is important to remove the default conf; otherwise nginx won't pick up your custom conf no matter where you copy it
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d/
# there are answers online suggesting other places to copy it to, but only this worked for me
EXPOSE 80
The web app Dockerfile:
FROM python:3.8.16-slim-buster
RUN apt-get update
RUN apt-get install gcc -y && apt-get install python3-dev -y && apt-get install libpq-dev -y
ENV PYTHONPATH=${PYTHONPATH}:${PWD}
RUN pip install poetry
WORKDIR /app
COPY pyproject.toml /app/
COPY . /app/
RUN poetry config virtualenvs.create false
RUN poetry install --no-dev
EXPOSE 5000
CMD ["uwsgi", "--ini", "wsgi.ini"]
The wsgi.ini file:
[uwsgi]
# use "module = app" when your project entrypoint is app.py; if it is wsgi.py this becomes "module = wsgi:app"
module = app
socket = 0.0.0.0:5000
# this is important: uWSGI by default looks for a callable named "application";
# either expose that name in your main file or set this option
callable = app
processes = 1
threads = 1
master = true
vacuum = true
die-on-term = true
Editing the question, as the 404 issue was solved, but nginx was still not routing to uWSGI.
The solution:
changed the location the nginx.conf file is copied to in the nginx Dockerfile:
COPY nginx.conf /etc/nginx/nginx.config
Editing the question again, as the nginx-to-uWSGI routing issue was also resolved.
The solution:
updated the files as shown above.
Yes, so this worked for me. There are countless configurations available online and almost all of them are the same, yet a slight difference causes the issue.
I am updating my question with the content that worked. Hope it helps someone.
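For a quick sanity check of the final setup (service names taken from the compose file above):

docker-compose up -d --build
docker-compose exec nginx nginx -t   # validate the nginx configuration inside the container
curl -i http://localhost/            # the request should now be proxied to uWSGI on web_app:5000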

Docker with Symfony 4 - Unable to see the file changes

I'm working on a Docker image for a dev environment for a Symfony 4 application. I'm building it on Alpine, PHP-FPM and nginx.
I have the application configured, but the performance was not great (~700ms) even for a simple hello-world application, so I thought I could make it faster somehow.
First of all, I went for the file-sharing consistency settings and configured the volumes to use the cached flag. Then I moved vendor to a separate volume, as it caused most of the performance issues.
Next, I wanted to use docker-sync, as the benchmarks looked amazing. I configured it and everything ran smoothly, but then I realized that Docker was not reacting to changes in the code.
At first I thought it had something to do with the Symfony 4 cache, so I connected to the php container and ran php bin/console cache:clear. The cache was cleared, but Docker still did not react to anything. I double-checked the files on both the web and php containers and the files are changed there. I'm wondering whether there is something more I need to configure, or why Symfony is not reacting to changes.
UPDATE
Symfony/the container does not react to changes even after a complete image rebuild and removal of the consistency settings and docker-sync. So, basically, it's plain Docker with a hello-world Symfony 4 application and it does not react to changes. The changes are not even synced into the container.
Configuration:
# docker-compose-dev.yml
version: '3'
volumes:
  symfony-sync:
    external: true
services:
  php:
    build: build/php
    expose:
      - 9000
    volumes:
      - symfony-sync:/var/www/html/symfony
      - ./vendor:/var/www/html/vendor
  web:
    build: build/nginx
    restart: always
    expose:
      - 80
      - 443
    ports:
      - 8080:80
      - 8081:443
    depends_on:
      - php
    volumes:
      - symfony-sync:/var/www/html/symfony
      - ./vendor:/var/www/html/vendor
networks:
  default:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.4.0.0/16
# docker-sync.yml
version: "2"
options:
  verbose: true
syncs:
  symfony-sync:
    src: './symfony'
    sync_excludes:
      - '.git'
      - 'composer.lock'
The Makefile I use for running the app:
start:
	docker-sync stop
	docker-sync clean
	cd symfony
	docker volume create --name=symfony-sync
	cd ..
	docker-compose -f docker-compose-dev.yml down
	docker-compose -f docker-compose-dev.yml up -d
	docker-sync start

stop:
	docker-compose stop
	docker-sync stop
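For reference, a quick check to see whether file changes reach the php container at all (paths taken from the compose file above; the file name is just an example):

touch symfony/sync-test.txt
docker-compose -f docker-compose-dev.yml exec php ls -l /var/www/html/symfony/sync-test.txt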
I recommend using dinghy instead of Docker for Mac: https://github.com/codekitchen/dinghy
Also have a try with this repo as an example: https://github.com/jorge07/symfony-4-es-cqrs-boilerplate
If this doesn't work, the problem will be in your host or Dockerfile. Be sure you don't enable OPcache for development.
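To rule docker-sync out entirely, a minimal sketch (service name and paths taken from the compose file above) is to bind-mount the source directly with the cached flag and check whether changes show up:

php:
  build: build/php
  expose:
    - 9000
  volumes:
    - ./symfony:/var/www/html/symfony:cached   # plain bind mount instead of the symfony-sync volume
    - ./vendor:/var/www/html/vendor

If changes appear in the container with the plain bind mount, the problem is in the docker-sync setup rather than in Symfony.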

Docker - how do I restart nginx to apply a custom config?

I am trying to configure a LEMP dev environment with Docker and am having trouble with nginx, because I can't seem to restart nginx once it has its new configuration.
docker-compose.yml:
version: '3'
services:
  nginx:
    image: nginx
    ports:
      - '8080:80'
    volumes:
      - ./nginx/log:/var/log/nginx
      - ./nginx/config/default:/etc/nginx/sites-available/default
      - ../wordpress:/var/www/wordpress
  php:
    image: php:fpm
    ports:
      - 9000:9000
  mysql:
    image: mysql
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: secret
    volumes:
      - ./mysql/data:/var/lib/mysql
I have a custom nginx config that replaces /etc/nginx/sites-available/default, and in a normal Ubuntu environment, I would run service nginx restart to pull in the new config.
However, if I try to do that in this Docker environment, the nginx container exits with code 1.
docker-compose exec nginx sh
service nginx restart
-exit with code 1-
How would I be able to use nginx with a custom /etc/nginx/sites-available/default file?
Basically you can reload nginx configuration by invoking this command:
docker exec <nginx-container-name-or-id> nginx -s reload
To reload nginx with docker-compose specifically (rather than restart the whole container, causing downtime):
docker-compose exec nginx nginx -s reload
Docker containers should run a single application in the foreground. When the process launched as PID 1 inside the container exits, so does the container (similar to how killing PID 1 on a Linux server shuts the machine down). That process isn't managed by the OS service command, which is why service nginx restart takes the container down.
The normal way to reload a configuration in a container is to restart the container. Since you're using docker-compose, that would be docker-compose restart nginx. Note that if this config was part of your image, you would need to rebuild and redeploy a new container, but since you're using a volume, that isn't necessary.
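Also worth noting: the official nginx image does not read /etc/nginx/sites-available by default; its nginx.conf only includes /etc/nginx/conf.d/*.conf. A sketch of the volume mount that gets the custom file picked up without any extra include wiring (paths kept from the compose file above):

nginx:
  image: nginx
  ports:
    - '8080:80'
  volumes:
    - ./nginx/log:/var/log/nginx
    - ./nginx/config/default:/etc/nginx/conf.d/default.conf:ro
    - ../wordpress:/var/www/wordpress

After editing the file on the host, docker-compose exec nginx nginx -s reload (or docker-compose restart nginx) applies it.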

Symfony app deployment with docker

I come here because I'm developing an app with Symfony 3 and I have some questions about deploying it.
Currently I use docker-compose:
version: '2'
services:
  nginx:
    build: ./docker/nginx/
    ports:
      - 8081:80
    volumes:
      - .:/home/docker:ro
      - ./docker/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
      - ./docker/nginx/nginx.conf:/etc/nginx/nginx.conf:ro
    networks:
      - default
  php:
    build: ./docker/php/
    volumes:
      - .:/home/docker:rw
      - ./docker/php/php.ini:/usr/local/etc/php/conf.d/custom.ini:ro
    working_dir: /home/docker
    networks:
      - default
    dns_search:
      - php
  db:
    image: mariadb:latest
    ports:
      - 3307:3306
    environment:
      - MYSQL_ROOT_PASSWORD=collectionManager
      - MYSQL_USER=collectionManager
      - MYSQL_PASSWORD=collectionManager
      - MYSQL_DATABASE=collectionManager
    volumes:
      - mariadb_data:/var/lib/mysql
    networks:
      - default
    dns_search:
      - db
  search:
    build: ./docker/search/
    ports:
      - 9200:9200
      - 9300:9300
    volumes:
      - elasticsearch_data:/usr/share/elasticsearch/data
    networks:
      - default
    dns_search:
      - search
volumes:
  mariadb_data:
    driver: local
  elasticsearch_data:
    driver: local
networks:
  default:
nginx is self-explanatory, php is PHP-FPM with some extensions and Composer, db is MariaDB, and search is Elasticsearch with some plugins.
Before Docker, I used Magallanes or Deployer when I wanted to deploy the web app.
With Docker I can use the docker-compose file and recreate images and containers on the server; I can also save my containers as images or as tar archives and load them on the server. That's fine for nginx and php-fpm, but what about Elasticsearch and the db? I need to keep their data for future updates of the code. When I deploy the code I also need to run a Doctrine migration and maybe some other commands, and Deployer does that perfectly, along with some other interesting things. So how do I deploy the code with Docker? Can we use both: Deployer for the code and Docker for the services?
Thanks a lot for your help.
First of all, please try using user-defined networks; they have additional features compared to legacy linking, such as an embedded DNS server. That means your applications can reach other containers on the same network by their names. Containers on one user-defined network are isolated from containers on another user-defined network.
To create a user defined network:
docker network create --driver bridge <networkname>
docker-compose example using the user-defined network (in a version 2/3 compose file the network also needs to be declared at the top level with external: true, since it was created outside compose):
search:
  restart: unless-stopped
  build: ./docker/search/
  ports:
    - "9200:9200"
    - "9300:9300"
  networks:
    - <networkname>
Second: I noticed you didn't use data volumes for your DB and Elasticsearch. You need to mount volumes at certain points to keep your persistent data.
Third: when you export your containers, the export won't contain mounted volumes. You need to back up the volume data and migrate it manually.
To back up volume data:
docker run --rm --volumes-from db -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata
The command above creates a transient container, mounts the volumes from the db container, mounts the current directory into the container as /backup, and uses the ubuntu image and tar to create a backup of /dbdata in the container (change this to your actual db data directory) inside the /backup directory that is mounted from your Docker host. When the operation completes, the transient ubuntu container is removed thanks to the --rm switch.
To restore:
Copy the tar archive to the remote host and create your container with an empty mounted volume, then extract the tar archive into that volume using the following command:
docker run --rm --volumes-from dbstore2 -v $(pwd):/backup ubuntu bash -c "cd /dbdata && tar xvf /backup/backup.tar --strip 1"
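A sketch of the step that creates the empty target container, reusing the dbstore2 name from the command above; /dbdata is the same directory that was backed up:

docker create -v /dbdata --name dbstore2 ubuntu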

Use environment vars of container in command key of docker-compose

I have two services in my docker-compose.yml: docker-gen and nginx. Docker-gen is linked to nginx. For docker-gen to work, I must pass the actual name or hash of the nginx container so that docker-gen can restart nginx on change.
When I link docker-gen to nginx, a set of environment variables appears in the docker-gen container; the most interesting to me is NGINX_NAME, which is the name of the nginx container.
So it should be straightforward to put $NGINX_NAME in the command field of the service and get it to work. But $NGINX_NAME doesn't expand when I start the services. Looking through the docker-gen logs I see the lines:
2015/04/24 12:54:27 Sending container '$NGINX_NAME' signal '1'
2015/04/24 12:54:27 Error sending signal to container: No such container: $NGINX_NAME
My docker-compose.yml is as follows:
nginx:
  image: nginx:latest
  ports:
    - '80:80'
  volumes:
    - /tmp/nginx:/etc/nginx/conf.d
dockergen:
  image: jwilder/docker-gen:latest
  links:
    - nginx
  volumes_from:
    - nginx
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock
    - ./extra:/etc/docker-gen/templates
    - /etc/nginx/certs
  tty: true
  command: >
    -watch
    -only-exposed
    -notify-sighup "$NGINX_NAME"
    /etc/docker-gen/templates/nginx.tmpl
    /etc/nginx/conf.d/default.conf
Is there a way to put an environment variable placeholder in command so that it expands to the actual value when the container is up?
I've added an entrypoint setting to the dockergen service and changed command a bit:
dockergen:
  image: jwilder/docker-gen:latest
  links:
    - nginx
  volumes_from:
    - nginx
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock
    - ./extra:/etc/docker-gen/templates
    - /etc/nginx/certs
  tty: true
  entrypoint: ["/bin/sh", "-c"]
  command: >
    "
    docker-gen
    -watch
    -only-exposed
    -notify-sighup $(echo $NGINX_NAME | tail -c +2)
    /etc/docker-gen/templates/nginx.tmpl
    /etc/nginx/conf.d/default.conf
    "
Container names injected by Docker linking start with '/', and when I send SIGHUP to a container name with the leading slash, the signal doesn't arrive:
$ docker kill -s SIGHUP /myproject_dockergen_1/nginx
If I strip it, though, nginx restarts as it should. So the $(echo $NGINX_NAME | tail -c +2) part is there to remove the first character from $NGINX_NAME.
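An alternative that avoids the expansion problem entirely (sketched here with an illustrative fixed name) is to give the nginx container an explicit container_name and pass that to -notify-sighup directly:

nginx:
  image: nginx:latest
  container_name: myproject_nginx   # illustrative fixed name
  ports:
    - '80:80'
  volumes:
    - /tmp/nginx:/etc/nginx/conf.d
dockergen:
  image: jwilder/docker-gen:latest
  volumes_from:
    - nginx
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock
    - ./extra:/etc/docker-gen/templates
    - /etc/nginx/certs
  command: >
    -watch
    -only-exposed
    -notify-sighup myproject_nginx
    /etc/docker-gen/templates/nginx.tmpl
    /etc/nginx/conf.d/default.conf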
