Docker compose: error copying nginx.conf from nginx image - nginx

I am using a docker-compose.yml in which I describe two services, one for the frontend app and one for the backend:
version: '2'
services:
  cdl-front:
    build: cdl-web/.
    ports:
      - "80:80"
    depends_on:
      - cdl-rest
  cdl-rest:
    build: cdl-rest/.
    ports:
      - "7777:7777"
When I try to launch my docker-compose configuration using IntelliJ IDEA with the Docker plugin, I get this error:
ERROR: Service 'cdl-front' failed to build: COPY failed: stat
/var/lib/docker/tmp/docker-builder939775883/nginx.conf: no such file
or directory Failed to deploy 'Compose: docker-compose.yml':
docker-compose process finished with exit code 1
Below are my two Dockerfiles describing the two services:
cdl-front contains this Dockerfile:
FROM nginx
WORKDIR .
COPY nginx.conf /etc/nginx/nginx.conf
COPY cdl-frontend/cdl /usr/share/nginx/html
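For context on the error above: COPY paths are resolved relative to the build context (cdl-web/. in the compose file), not the directory docker-compose is run from, so nginx.conf has to sit inside cdl-web/ next to the Dockerfile. A quick check, assuming that layout:
# run from the project root; the file must live inside the build context
ls cdl-web/nginx.conf
# building the context directly should then get past the COPY step
docker build -t cdl-front cdl-web/.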
cdl-rest contains this Dockerfile:
# Start with a base image containing Java runtime
FROM openjdk:8-jdk-alpine
# Add Maintainer Info
LABEL maintainer="ghassen1khalil@gmail.com"
# Add a volume pointing to /tmp
VOLUME /tmp
# Make port 9091 available to the world outside this container
EXPOSE 9091
# The application's jar file
ARG JAR_FILE=target/cdl-rest-1.0-SNAPSHOT.jar
# Add the application's jar to the container
ADD ${JAR_FILE} cdl-rest.jar
# Run the jar file
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-Dfile.encoding=UTF-8","-jar","/cdl-rest.jar"]
CMD ["--env=prod"]
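For reference, exec-form CMD arguments are appended to an exec-form ENTRYPOINT, so the container's pid 1 ends up being:
java -Djava.security.egd=file:/dev/./urandom -Dfile.encoding=UTF-8 -jar /cdl-rest.jar --env=prod
and the CMD part can be overridden at run time, e.g. docker run cdl-rest --env=dev.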

Related

Nginx not routing browser request to wsgi (Python server running)

I am running my Flask project through uWSGI behind nginx, but nginx is not routing requests to uWSGI when I hit localhost:80/.
My nginx.conf looks like this:
server {
    listen 80;
    # use your machine's IP or domain; on a purely local setup this would
    # be localhost, but I was running on WSL so I put the IP here
    server_name <your machine ip/domain>;
    location / {
        include uwsgi_params;
        # you might see suggestions to use .sock files or to prefix http://
        # or unix:, but none of those worked for me; plain and simple, use
        # your Python server's service name from docker-compose
        uwsgi_pass web_app:5000;
    }
}
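A note on the uwsgi_pass line: it speaks the binary uwsgi protocol, which matches the socket = directive in the wsgi.ini shown below. If uWSGI were instead started with http = 0.0.0.0:5000, the nginx side would use proxy_pass; a minimal sketch of that alternative:
location / {
    # only if uWSGI serves plain HTTP (http = 0.0.0.0:5000) rather than
    # the uwsgi protocol (socket = 0.0.0.0:5000)
    proxy_pass http://web_app:5000;
}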
docker-compose looks like this:
version: '3.7'
services:
  web_app:
    build: .
    container_name: kpi-dashboard
    ports:
      - 5000:5000
    depends_on:
      - db
  nginx:
    build: ./nginx
    container_name: nginx
    restart: always
    ports:
      - "80:80"
    depends_on:
      - web_app
  db:
    image: postgres:13-alpine
    container_name: postgresql
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=postgres
    ports:
      - 5432:5432
volumes:
  postgres_data:
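As a usage note, assuming the file above is saved as docker-compose.yml in the project root, the stack can be rebuilt and exercised with:
# rebuild the images and start the stack in the background
docker-compose up -d --build
# follow both ends of the nginx -> uwsgi hop
docker-compose logs -f nginx web_app
# a request through nginx should now reach uWSGI
curl -i http://localhost:80/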
nginx Dockerfile:
FROM nginx
# it is important to remove the default conf; otherwise nginx will not pick
# up your custom conf no matter where you copy it
RUN rm /etc/nginx/conf.d/default.conf
# there are answers online saying to copy it to other places, but only this
# location worked for me
COPY nginx.conf /etc/nginx/conf.d/
EXPOSE 80
web app Dockerfile:
FROM python:3.8.16-slim-buster
RUN apt-get update && \
    apt-get install -y gcc python3-dev libpq-dev
ENV PYTHONPATH=${PYTHONPATH}:${PWD}
RUN pip install poetry
WORKDIR /app
COPY pyproject.toml /app/
COPY . /app/
RUN poetry config virtualenvs.create false
RUN poetry install --no-dev
EXPOSE 5000
CMD ["uwsgi", "--ini", "wsgi.ini"]
wsgi.ini file:
[uwsgi]
; use "module = app" when your project entrypoint is app.py; if it is
; wsgi.py, this becomes "module = wsgi:app"
module = app
socket = 0.0.0.0:5000
; important: uWSGI by default expects your app instance to be called
; "application"; either handle that in your main file or set this
callable = app
processes = 1
threads = 1
master = true
vacuum = true
die-on-term = true
Editing the question, as the 404 issue was solved. But nginx is still not routing to wsgi.
The solution
Changed the location the nginx.conf file is copied to in the nginx Dockerfile:
COPY nginx.conf /etc/nginx/nginx.config
Editing the question again, as the nginx-to-wsgi routing issue is also resolved.
The solution
Updated the files as mentioned above.
Yes, so this worked for me. There are countless configurations available online and almost all are the same, yet a slight difference causes the issue.
I am updating my question so the files show the content that worked. Hope it helps someone.

How can my Nginx Docker container, created through GitLab CI/CD, use the HTML files inside my repository?

To understand more about this topic, I have set up multiple Docker containers on my Raspberry Pi 4 with the goal of creating a functioning workflow.
Setup
Firstly, I have a working GitLab Community Edition with this image (due to compatibility for ARM).
Secondly, there is also the GitLab Runner I use, which is connected to the GitLab as well.
Lastly, I have created a docker-compose file with which an nginx container is created from this image. Creating the nginx container without CI/CD works perfectly fine.
Problem
Now to the problem itself:
The CI/CD is enabled and the Runner is assigned to the pipeline. Inside the repository is the index.html (in the folder "html"), a .gitlab-ci.yml file and the docker-compose.yml. Here are the contents of the two .yml files:
.gitlab-ci.yml:
image: docker:dind
variables:
  DOCKER_TLS_CERTDIR: "/certs"
services:
  - docker:dind
build:
  stage: build
  script:
    - apk add --no-cache docker-compose
    - docker-compose up -d
docker-compose.yml:
version: '3'
services:
  nginx:
    image: nginx:latest
    container_name: nginx
    restart: always
    volumes:
      - /builds/Dennis/first-project/html:/usr/share/nginx/html
    ports:
      - "20080:80"
      - "20022:22"
      - "20443:443"
    privileged: true
The pipeline installs docker-compose and creates the container. I can even access the nginx container through the IP and port, but receive the error message "403 Forbidden". A look into the logs of this container shows the following error:
directory index of "/usr/share/nginx/html/" is forbidden
I took a look inside the directory of this container while it was running; however, there is no content inside "/usr/share/nginx/html/", which led me to believe that the pipeline or docker-compose doesn't have access to the files inside the repository, or that the path is configured incorrectly (most likely the latter). I tried to tinker a bit with the path in docker-compose.yml (the first part of "volumes"), but to no avail.
In which way do I have to edit my configuration, maybe only my path in docker-compose.yml, so that the creation of the Nginx container takes the files from the repository?
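One detail worth spelling out: with docker:dind, bind-mount paths are resolved on the host where the Docker daemon actually runs (the dind service container), not in the job container that holds the repository checkout, which would explain the empty directory. A common workaround is to bake the files into an image instead of bind-mounting; a sketch, assuming a hypothetical Dockerfile placed at the repository root next to the html folder:
# hypothetical Dockerfile: copy the checked-out HTML into the image so it
# exists regardless of where the daemon runs
FROM nginx:latest
COPY html /usr/share/nginx/html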

Permission denied when executing Symfony Demo app through Docker

In my first attempt at running a more complex application through Docker, I selected the Symfony Demo app and assembled a docker build structure to accommodate it.
The first image is httpd: it runs as root (dropping to www-data afterwards) and talks through the 'server' custom network.
The second image is php (fpm): it runs as root (dropping to www-data afterwards) and also talks through the 'server' custom network.
The third image is composer: it runs as UID and GID 1000. Its entrypoint command is composer create-project symfony/symfony-demo symfony-demo
All containers share the same bind mount, where the symfony-demo app is located.
Then I go to localhost:8080 in the browser just to end up with a Symfony error:
The stream or file "/usr/local/apache2/htdocs/symfony-demo/var/log/dev.log" could not be opened: failed to open stream: Permission denied
The thing is... the file mentioned doesn't even exist in the project's var/log/. That folder is empty.
All files in the bind mount have permissions 1000:1000 (my user UID/GID) and are configured like this: -rw-r--r--.
I've tried running httpd and php as: UID 33 (www-data) and GID 33; UID 0 (root) and GID 33 (and vice-versa); and also as 1000:1000 or 1000:33, but all these combinations (when they successfully get httpd/php to start up) result in the same error.
docker-compose.yml:
version: "3"
services:
httpd:
build: "./httpd/"
container_name: "webserver"
depends_on:
- php
ports:
- "8080:80"
networks:
- server
volumes:
- ../app:/usr/local/apache2/htdocs/
php:
build: "./php/"
depends_on:
- composer
container_name: "php"
networks:
- server
volumes:
- ../app:/usr/local/apache2/htdocs/
composer:
build: "./composer/"
container_name: "composer"
user: "1000:1000"
volumes:
- ../app:/usr/local/apache2/htdocs/
networks:
server:
driver: bridge
composer Dockerfile:
FROM composer:1.8
WORKDIR /usr/local/apache2/htdocs/
CMD ["composer", "create-project", "symfony/symfony-demo", "symfony-demo"]
httpd Dockerfile:
FROM httpd:2.4
COPY ./config/httpd.conf /usr/local/apache2/conf/httpd.conf
COPY ./config/httpd-vhosts.conf /usr/local/apache2/conf/extra/httpd-vhosts.conf
COPY ./config/php-fpm.conf /usr/local/apache2/conf/extra/php-fpm.conf
WORKDIR /usr/local/apache2/htdocs
php Dockerfile:
FROM php:7.3-fpm
RUN cp "$PHP_INI_DIR/php.ini-development" "$PHP_INI_DIR/php.ini"
COPY ./config/timezone.ini $PHP_INI_DIR/conf.d/
COPY ./config/www.conf /usr/local/etc/php-fpm.d/www.conf
RUN apt-get update && \
apt-get install -y libicu-dev
RUN docker-php-ext-install intl
WORKDIR /usr/local/apache2/htdocs
Just give write permission:
chmod -R 777 /usr/local/apache2/htdocs/symfony-demo/var/log/dev.log
Here is the Symfony doc for file permissions: https://symfony.com/doc/current/setup/file_permissions.html
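The linked Symfony docs favor ACLs over blanket 777 permissions; a sketch of that approach, assuming www-data is the web server user and the project's var/ directory is what needs to be writable:
# from the project root: grant the web server user and your own user
# read/write on var/, both for existing files (-R) and as the default
# for files created later (-dR)
setfacl -dR -m u:www-data:rwX -m u:"$(whoami)":rwX var
setfacl -R -m u:www-data:rwX -m u:"$(whoami)":rwX var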
On second thoughts: my previous solution (as is) doesn't work in RHEL/Fedora/CentOS, because www-data does not exist there by default, causing Docker to fail to start.
My new solution - distro agnostic
For simplicity, I've decided to simply have composer's entrypoint script set -rw-rw---- permissions on /app. That way, I can run composer as user 1000 and the same group PHP runs as (a new user and group were created just for that). Now PHP can write to the SQLite3 database files inside the project, and composer writes as user 1000, so I can edit the files.
It's basically what @habibun said, but I only need to give group write permissions, not full write permissions.
Be aware that SELinux will deny composer write access to your bind mount. You must configure SELinux to allow this operation.
This is my repository where this project is stored, if you're looking for a reference: https://github.com/o-alquimista/symfony-demo-docker/
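A minimal sketch of what such an entrypoint could look like (illustrative, not taken verbatim from the repository): create the project, then make files group-writable and directories group-traversable:
#!/bin/sh
set -e
# create the project, then open it up to the group php-fpm runs as
composer create-project symfony/symfony-demo symfony-demo
find symfony-demo -type d -exec chmod 770 {} +
find symfony-demo -type f -exec chmod 660 {} +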
User namespace solution - works fine for Debian/Ubuntu hosts
Composer should write to /app as user 33 (www-data), and so should php and httpd after they drop privileges. I was able to keep the present permission settings (only the owner can write) by making use of user namespaces. The user www-data is now remapped into a subordinate range starting at 967, with the result that container user 33 maps to host user 1000 (me).
Now all containers can write where they need to, and I can edit the project files as an unprivileged user.
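For reference, enabling user namespaces is a daemon-level setting on the Docker host; a sketch (the actual subordinate ranges live in /etc/subuid and /etc/subgid):
# /etc/docker/daemon.json -- restart the Docker daemon after editing
{
    "userns-remap": "default"
}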

Docker - how do I restart nginx to apply a custom config?

I am trying to configure a LEMP dev environment with Docker and am having trouble with nginx, because I can't seem to restart nginx once it has its new configuration.
docker-compose.yml:
version: '3'
services:
  nginx:
    image: nginx
    ports:
      - '8080:80'
    volumes:
      - ./nginx/log:/var/log/nginx
      - ./nginx/config/default:/etc/nginx/sites-available/default
      - ../wordpress:/var/www/wordpress
  php:
    image: php:fpm
    ports:
      - 9000:9000
  mysql:
    image: mysql
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: secret
    volumes:
      - ./mysql/data:/var/lib/mysql
I have a custom nginx config that replaces /etc/nginx/sites-available/default, and in a normal Ubuntu environment, I would run service nginx restart to pull in the new config.
However, if I try to do that in this Docker environment, the nginx container exits with code 1.
docker-compose exec nginx sh
service nginx restart
-exit with code 1-
How would I be able to use nginx with a custom /etc/nginx/sites-available/default file?
Basically, you can reload the nginx configuration by invoking this command:
docker exec <nginx-container-name-or-id> nginx -s reload
To reload nginx with docker-compose specifically (rather than restarting the whole container, which causes downtime):
docker-compose exec nginx nginx -s reload
Docker containers should run a single application in the foreground. When the process it launches as pid 1 inside the container exits, so does the container (similar to how killing pid 1 on a Linux server shuts down that machine). This process isn't managed by the OS service command.
The normal way to reload a configuration in a container is to restart the container. Since you're using docker-compose, that would be docker-compose restart nginx. Note that if this config was part of your image, you would need to rebuild and redeploy a new container, but since you're using a volume, that isn't necessary.
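As a usage note (assuming the compose service is named nginx as above), it's worth validating an edited config inside the container before reloading or restarting:
# syntax-check the configuration the container actually sees
docker-compose exec nginx nginx -t
# then reload without downtime
docker-compose exec nginx nginx -s reload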

Symfony app deployment with docker

I come here because I'm developing an app with Symfony 3, and I have some questions about deploying it.
Currently I use docker-compose:
version: '2'
services:
  nginx:
    build: ./docker/nginx/
    ports:
      - 8081:80
    volumes:
      - .:/home/docker:ro
      - ./docker/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
      - ./docker/nginx/nginx.conf:/etc/nginx/nginx.conf:ro
    networks:
      - default
  php:
    build: ./docker/php/
    volumes:
      - .:/home/docker:rw
      - ./docker/php/php.ini:/usr/local/etc/php/conf.d/custom.ini:ro
    working_dir: /home/docker
    networks:
      - default
    dns_search:
      - php
  db:
    image: mariadb:latest
    ports:
      - 3307:3306
    environment:
      - MYSQL_ROOT_PASSWORD=collectionManager
      - MYSQL_USER=collectionManager
      - MYSQL_PASSWORD=collectionManager
      - MYSQL_DATABASE=collectionManager
    volumes:
      - mariadb_data:/var/lib/mysql
    networks:
      - default
    dns_search:
      - db
  search:
    build: ./docker/search/
    ports:
      - 9200:9200
      - 9300:9300
    volumes:
      - elasticsearch_data:/usr/share/elasticsearch/data
    networks:
      - default
    dns_search:
      - search
volumes:
  mariadb_data:
    driver: local
  elasticsearch_data:
    driver: local
networks:
  default:
nginx is clear, the engine is PHP-FPM with some extensions and Composer, db is MariaDB, and search is Elasticsearch with some plugins.
Before Docker, I used Magallanes or Deployer when I wanted to deploy the web app.
With Docker I can use the docker-compose file and recreate the images and containers on the server; I can also save my containers as images, export them to a tar archive, and load that on the server. That's fine for nginx and php-fpm, but what about Elasticsearch and the db? I need to keep their data across future code updates. Then, when I deploy the code, I need to execute a Doctrine migration and maybe some commands, and Deployer does this perfectly, along with some other interesting things. So how do I deploy the code with Docker? Can we use both: Deployer for the code and Docker for the services?
Thanks a lot for your help.
First of all, please try using user-defined networks; they have additional features compared to legacy linking, such as embedded DNS, meaning you can reach other containers on the same network by their names from your applications. Containers on one user-defined network are isolated from containers on another user-defined network.
To create a user-defined network:
docker network create --driver bridge <networkname>
Example compose service using the user-defined network:
search:
  restart: unless-stopped
  build: ./docker/search/
  ports:
    - "9200:9200"
    - "9300:9300"
  networks:
    - <networkname>
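Since the network was created outside compose here, the compose file also needs to declare it as external; a sketch, with <networkname> as in the command above:
networks:
  <networkname>:
    external: true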
Second: I noticed you didn't use data volumes for your DB and Elasticsearch. You need to mount volumes at certain points to keep your persistent data.
Third: when you export your containers, the export won't contain mounted volumes. You need to back up the volume data and migrate it manually.
To back up volume data:
docker run --rm --volumes-from db -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata
The above command creates a container, mounts the volumes from the db container, mounts the current directory into the new container as /backup, and uses the ubuntu image and the tar command to create a backup of the container's /dbdata (consider changing this to your actual db directory) inside /backup, which is mounted from your Docker host. After the operation completes, the transient ubuntu container is removed (the --rm switch).
To restore:
Copy the tar archive to the remote location and create your container with an empty mounted volume, then extract the tar archive into that volume using the following command:
docker run --rm --volumes-from dbstore2 -v $(pwd):/backup ubuntu bash -c "cd /dbdata && tar xvf /backup/backup.tar --strip 1"
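To verify the restore worked, a quick check reusing the same pattern:
docker run --rm --volumes-from dbstore2 ubuntu ls -l /dbdata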
