Docker - how do I restart nginx to apply custom config?

I am trying to configure a LEMP dev environment with Docker and am having trouble with nginx because I can't seem to restart nginx once it has its new configuration.
docker-compose.yml:
version: '3'
services:
  nginx:
    image: nginx
    ports:
      - '8080:80'
    volumes:
      - ./nginx/log:/var/log/nginx
      - ./nginx/config/default:/etc/nginx/sites-available/default
      - ../wordpress:/var/www/wordpress
  php:
    image: php:fpm
    ports:
      - 9000:9000
  mysql:
    image: mysql
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: secret
    volumes:
      - ./mysql/data:/var/lib/mysql
I have a custom nginx config that replaces /etc/nginx/sites-available/default, and in a normal Ubuntu environment, I would run service nginx restart to pull in the new config.
However, if I try to do that in this Docker environment, the nginx container exits with code 1.
docker-compose exec nginx sh
service nginx restart
(the container exits with code 1)
How would I be able to use nginx with a custom /etc/nginx/sites-available/default file?

Basically, you can reload the nginx configuration by invoking this command:
docker exec <nginx-container-name-or-id> nginx -s reload

To reload nginx with docker-compose specifically (rather than restart the whole container, causing downtime):
docker-compose exec nginx nginx -s reload
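If you want to be safe, you can first validate the new configuration with nginx's standard -t flag before reloading:
docker-compose exec nginx nginx -t          # test the configuration files
docker-compose exec nginx nginx -s reload   # then reload the workers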

Docker containers should run a single application in the foreground. When the process launched as pid 1 inside the container exits, so does the container (similar to how killing pid 1 on a Linux server shuts the machine down). This process isn't managed by the OS service command.
The normal way to reload a configuration in a container is to restart the container. Since you're using docker-compose, that would be docker-compose restart nginx. Note that if this config was part of your image, you would need to rebuild and redeploy a new container, but since you're using a volume, that isn't necessary.
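One more note on the compose file above: the official nginx image loads additional configs from /etc/nginx/conf.d/*.conf and does not ship a sites-available directory, so a mount along these lines (a sketch; adjust the paths to your layout) is more likely to be picked up:
volumes:
  # mount the custom config where the stock nginx image actually reads it
  - ./nginx/config/default.conf:/etc/nginx/conf.d/default.conf:ro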

Related

Nginx not routing browser request to wsgi (Python server running)

I am running my Flask project through uwsgi on nginx, but nginx is not routing the request to uwsgi when I hit localhost:80/.
My nginx.conf looks like this
server {
    listen 80;
    # server_name: localhost if running locally; I was on WSL, so I used the machine's IP
    server_name <your machine ip/domain>;
    location / {
        include uwsgi_params;
        # You might see suggestions to use .sock files or to prefix http:// or unix:,
        # but none of those worked for me. Plain and simple: use your Python
        # server's service name as defined in docker-compose.
        uwsgi_pass web_app:5000;
    }
}
docker-compose looks like this
version: '3.7'
services:
  web_app:
    build: .
    container_name: kpi-dashboard
    ports:
      - 5000:5000
    depends_on:
      - db
  nginx:
    build: ./nginx
    container_name: nginx
    restart: always
    ports:
      - "80:80"
    depends_on:
      - web_app
  db:
    image: postgres:13-alpine
    container_name: postgresql
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=postgres
    ports:
      - 5432:5432
volumes:
  postgres_data:
nginx Dockerfile
FROM nginx
# It is important to remove the default conf; otherwise nginx will not pick up
# your custom conf no matter where you copy it.
RUN rm /etc/nginx/conf.d/default.conf
# There are answers online saying to copy it to other places, but only this worked for me.
COPY nginx.conf /etc/nginx/conf.d/
EXPOSE 80
web app Dockerfile
FROM python:3.8.16-slim-buster
RUN apt-get update
RUN apt-get install gcc -y && apt-get install python3-dev -y && apt-get install libpq-dev -y
ENV PYTHONPATH=${PYTHONPATH}:${PWD}
RUN pip install poetry
WORKDIR /app
COPY pyproject.toml /app/
COPY . /app/
RUN poetry config virtualenvs.create false
RUN poetry install --no-dev
EXPOSE 5000
CMD ["uwsgi", "--ini", "wsgi.ini"]
wsgi.ini file
[uwsgi]
; this assumes your project entrypoint is in app.py;
; if it is in wsgi.py instead, this becomes: module = wsgi:app
module = app
socket = 0.0.0.0:5000
; important: uwsgi by default looks for a callable named "application",
; so either handle that in your main file or add this setting
callable = app
processes = 1
threads = 1
master = true
vacuum = true
die-on-term = true
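For reference, a minimal entrypoint matching module = app and callable = app could look like this (a hypothetical app.py; it assumes a plain Flask app):
# app.py -- uwsgi imports this module and uses the "app" attribute
from flask import Flask

app = Flask(__name__)  # found by uwsgi via "callable = app"

@app.route("/")
def index():
    return "Hello from uwsgi behind nginx!"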
(screenshot of the nginx container output omitted)
Editing the question, as the 404 issue was solved. But nginx was still not routing to uwsgi.
The solution: changed the location the nginx.conf file is copied to in the nginx Dockerfile:
COPY nginx.conf /etc/nginx/nginx.conf
Editing the question again, as the nginx-to-uwsgi routing issue is also resolved.
The solution: updated the files as mentioned above.
This worked for me. There are countless configurations available online and almost all are the same, yet a slight difference causes the issue. I have updated my question so the files contain the content that worked. Hope it helps someone.

How can my Nginx Docker container, created through GitLab CI/CD, use the HTML files inside my repository?

To understand more about this topic, I have set up multiple Docker containers on my Raspberry Pi 4 with the goal of creating a functioning workflow.
Setup
Firstly, I have a working GitLab Community Edition with this image (chosen for ARM compatibility).
Secondly, there is also the GitLab Runner I use, which is connected to GitLab as well.
Lastly, I have created a docker-compose file with which an Nginx container is created from this image. Creating the Nginx container without CI/CD works perfectly fine.
Problem
Now to the problem itself:
The CI/CD is enabled and the Runner is assigned to the pipeline. Inside the repository is the index.html (in the folder "html"), a .gitlab-ci.yml file and the docker-compose.yml. Here are the contents of the two .yml files:
.gitlab-ci.yml:
image: docker:dind

variables:
  DOCKER_TLS_CERTDIR: "/certs"

services:
  - docker:dind

build:
  stage: build
  script:
    - apk add --no-cache docker-compose
    - docker-compose up -d
docker-compose.yml:
version: '3'
services:
  nginx:
    image: nginx:latest
    container_name: nginx
    restart: always
    volumes:
      - /builds/Dennis/first-project/html:/usr/share/nginx/html
    ports:
      - "20080:80"
      - "20022:22"
      - "20443:443"
    privileged: true
The pipeline installs docker-compose and creates the container. I can even access the Nginx container through its IP and port, but I receive the error message "403 Forbidden". A look into the logs of this container outputs the following error:
directory index of "/usr/share/nginx/html/" is forbidden
I took a look inside the directory of this container while it was running; however, there is no content inside /usr/share/nginx/html/, which led me to believe that either the pipeline or docker-compose doesn't have access to the files inside the repository, or the path is configured incorrectly (most likely the latter). I tried to tinker a bit with the path in the docker-compose.yml (the first part of "volumes"), but to no avail.
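For reference, one quick way to verify what actually ended up inside the running container (using the container name nginx from the compose file above):
docker exec nginx ls -la /usr/share/nginx/html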
How do I have to edit my configuration, maybe only the path in docker-compose.yml, so that the Nginx container takes the files from the repository?

Local WordPress env with Docker Compose - cURL error 7: Failed to connect to localhost port 8080: Connection refused

I am trying to set up a local WordPress environment using Docker Compose for the first time. I am currently able to access my WordPress instance on localhost:8080 and have the files mapped locally.
I purchased a theme, added it to wp-content/themes, and was then prompted to install some required plugins for it. When I click Install, this is the error I receive:
Download failed. cURL error 7: Failed to connect to localhost port 8080: Connection refused
Here is my configuration file:
version: "2"
services:
my-wpdb:
image: mariadb
ports:
- "8081:3306"
environment:
MYSQL_ROOT_PASSWORD: password
my-wp:
image: wordpress:latest
volumes:
- ./:/var/www/html
ports:
- "8080:80"
links:
- my-wpdb:mysql
environment:
WORDPRESS_DB_PASSWORD: password
Probably a simple fix, but I can't seem to figure it out. Thanks!
Following on from papey's answer: curl is trying to connect on the outside port (8080 in your case) while it is running inside the container, where Apache listens on port 80.
After much googling, the only solution people offered was changing the port mapping to 80:80. That is not feasible if you are running another service on port 80.
My solution was to modify the Apache2 conf inside the container so that Apache also responds on the outside port from within. There may be better ways, but this is working.
/etc/apache2/ports.conf
Listen 80
Listen 8080
/etc/apache2/sites-available/000-default.conf
<VirtualHost *:*>
According to your docker-compose:
- "8080:80"
8080 is OUTSIDE the container
80 is INSIDE the container
I agree with PaulH's solution. Execute the following commands inside the running WordPress Docker container, then restart the container.
echo -e "\nListen 8080\n" >> /etc/apache2/ports.conf
echo -e "\n<VirtualHost *:*>\n</VirtualHost>\n" >> /etc/apache2/sites-available/000-default.conf
cat /etc/apache2/ports.conf && cat /etc/apache2/sites-available/000-default.conf
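As a sketch, the same edits can also be applied non-interactively via docker-compose, assuming the service name my-wp from the compose file above:
docker-compose exec my-wp bash -c 'echo -e "\nListen 8080\n" >> /etc/apache2/ports.conf'
docker-compose exec my-wp bash -c 'echo -e "\n<VirtualHost *:*>\n</VirtualHost>\n" >> /etc/apache2/sites-available/000-default.conf'
docker-compose restart my-wp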

how to reach another container from a dockerised nginx

I have nginx in a docker container, and a nodejs webapp in another docker container.
The nodejs server is reachable from the host server on port 8080.
The nginx docker container is listening on port 80 (I will add the certificate later; first this base must be working).
And now I want a subdomain to be forwarded to this nodejs app on port 8080, let's say app1.example.com.
From outside I can reach the app by the server IP (or hostname) and port 8080, but not on app1.example.com. It does work on app1.example.com:8080 (I have opened up port 8080 on the host server).
I get an nginx bad gateway message when approaching app1.example.com. So the request does reach the nginx container, but how do I get back to the host server to proxy-pass it to port 8080 of the host (and not port 8080 of the nginx container)? I am looking for the reverse of the EXPOSE syntax.
The main problem is, of course, that if I use 127.0.0.1:8080, nginx will try port 8080 on its own container...
So how do I let the nginx container route back to the host's 127.0.0.1:8080?
I have tried 0.0.0.0 and defining an upstream; I have been googling a lot and tried many configurations, but have not yet found a working one.
Edit
Just found out this docker command might help:
sudo docker network inspect bridge
This shows the IP address used inside the containers (in my case 172.17.0.2), but I am not sure this address stays the same every time Docker restarts (e.g. after a server reboot).
Edit
Following alkaline's answer, I now have this (but it is still not working):
my docker-compose.yml file:
version: "2"
services:
nginx:
container_name: nginx
image: nginx_img
build: ../docker-nginx-1/
ports:
- "80:80"
networks:
- backbone
nodejs:
container_name: nodejs
image: merites/docker-simple-node-server
build: ../docker-simple-node-server/
networks:
- backbone
expose:
- 8080
networks:
backbone:
driver: bridge
and my nginx config (I skipped the include of the conf.d folder for simplicity):
worker_processes 1;

events { worker_connections 1024; }

http {
    sendfile on;

    upstream upsrv {
        server nodejs:8080;
    }

    server {
        listen 80;
        server_name app1.example.com;

        location / {
            proxy_pass http://upsrv;
        }
    }
}
Edit 31-08-2016
This might be the problem: the network name is not backbone, but is named after the folder the service was started from.
sudo docker network ls
outputs:
NETWORK ID          NAME                       DRIVER    SCOPE
1167c2b0ec31        bridge                     bridge    local
d06ffaf26fe2        dockerservices1_backbone   bridge    local
5e4ec13d790a        host                       host      local
7d1f8c32f259        none                       null      local
Edit 01-09-2016
Might it be caused by the way I have set up my nginx docker container?
This is the Dockerfile I used:
############################################################
# Dockerfile to build Nginx Installed Containers
# Based on Ubuntu
############################################################
# Set the base image to Ubuntu
FROM ubuntu
# File Author / Maintainer
MAINTAINER Maintaner Name
# Install Nginx
# Add application repository URL to the default sources
# RUN echo "deb http://archive.ubuntu.com/ubuntu/ raring main universe" >> /etc/apt/sources.list
# Update the repository
RUN apt-get update
# Install necessary tools
RUN apt-get install -y nano wget dialog net-tools
# Download and Install Nginx
RUN apt-get install -y nginx
# Remove the default Nginx configuration file
RUN rm -v /etc/nginx/nginx.conf
# Copy a configuration file from the current directory
ADD nginx.conf /etc/nginx/
# Append "daemon off;" to the beginning of the configuration
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
# Expose ports
EXPOSE 80
# Set the default command to execute
# when creating a new container
CMD service nginx start
My final solution, 1st Sept. 2016
I used this compose file now:
version: "2"
services:
nginx:
image: nginx
container_name: nginx
volumes:
- ./nginx-configs:/etc/nginx/conf.d
ports:
- "80:80"
networks:
- backbone
nodejs:
container_name: nodejs
image: merites/docker-simple-node-server
build: ../docker-simple-node-server/
networks:
- backbone
expose:
- 8080
networks:
backbone:
driver: bridge
In the project folder from which you run docker-compose up -d, I added a folder named nginx-configs. This folder 'overrides' all the files in the container's /etc/nginx/conf.d directory.
Therefore, before adding this volume mount, I copied the default.conf from the nginx container, using the command:
docker exec -t -i container_name /bin/bash
and then cat /etc/nginx/conf.d/default.conf
and added the same default.conf to the nginx-configs project folder.
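Alternatively, docker cp can copy the file straight out of the container (using the container name nginx from the compose file):
docker cp nginx:/etc/nginx/conf.d/default.conf ./nginx-configs/default.conf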
Besides the default I added app1.conf with this content:
upstream upsrv1 {
    server nodejs:8080;
}

server {
    listen 80;
    server_name app1.example.com;

    location / {
        proxy_pass http://upsrv1;
    }
}
This way, I can easily add a second app, a third, and so on.
So the basics are working now.
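For example, a second app could be wired up with an app2.conf along the same lines (nodejs2 and app2.example.com are hypothetical names here):
upstream upsrv2 {
    server nodejs2:8080;
}

server {
    listen 80;
    server_name app2.example.com;

    location / {
        proxy_pass http://upsrv2;
    }
}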
Here's a best practice. Only expose port 80 outside of the host. The nodejs app can be in a private network only accessible through nginx.
version: "2"
services:
nginx:
...
ports:
- "80:80"
networks:
- backbone
nodejs:
...
networks:
- backbone
expose:
- 8080
networks:
backbone:
driver: bridge
In your nginx.conf file, the upstream servers can be listed as nodejs:8080. The Docker daemon will resolve it to the correct internal IP.

Setting up Nginx Proxy in Docker using Ansible

I am attempting to set up an nginx container that serves as a proxy to another container I have set up. I would like to automate this setup, as I need to deploy a similar setup across several servers. For this I am using Ansible.
Here is my nginx.conf:
events {
    worker_connections 1024;
}

http {
    server {
        listen 8080;

        location / {
            proxy_pass http://192.168.1.14:9000;
        }
    }
}
Here is the relevant part of my Ansible YAML file:
- name: Install Nginx
  docker:
    name: nginx
    image: nginx
    detach: True
    ports:
      - 8080:8080
    volumes:
      - /etc/docker/nginx/nginx.conf:/etc/nginx/nginx.conf:ro
When I first run my playbook, nginx is running but is not bound to 8080 as seen here:
6a4f610e86d nginx "nginx -g 'daemon off" 35 minutes ago Up Less than a second 80/tcp, 443/tcp nginx
However, if I run the nginx container directly with:
docker run -d -v /etc/docker/nginx/nginx.conf:/etc/nginx/nginx.conf:ro -p 8080:8080 nginx
nginx and my proxy run as expected, listening on 8080:
c3a46421045c nginx "nginx -g 'daemon off" 2 seconds ago Up 1 seconds 80/tcp, 443/tcp, 0.0.0.0:8080->8080/tcp determined_swanson
Any idea why it works one way but not the other?
Update
Per the guidance given in the selected answer, I updated my YAML file thusly:
- name: Install Nginx
  docker:
    name: nginx
    image: nginx
    detach: True
    ports:
      - 8080:8080
    expose:
      - 8080
    volumes:
      - /etc/docker/nginx/nginx.conf:/etc/nginx/nginx.conf:ro
First, you need to make sure your nginx image EXPOSEs port 8080, and you can specify that directly in your Ansible YAML file. From the docker module documentation:
expose (added in 1.5): List of additional container ports to expose for port mappings or links. If the port is already exposed using EXPOSE in a Dockerfile, you don't need to expose it again.
Then, the only other difference I see when comparing with the Ansible docker module documentation is that the ports are inside double quotes:
ports:
  - "8080:9000"
Also, if you want to proxy-pass to another container in the same docker daemon, you might want to use a link instead of a fixed IP address:
links:
  - "myredis:aliasedredis"
That way, your nginx.conf includes a fixed rule:
proxy_pass http://aliasedredis:9000;
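Putting it together, a sketch of the adjusted nginx.conf under that assumption (aliasedredis comes from the example link above; substitute your own alias and port):
events {
    worker_connections 1024;
}

http {
    server {
        listen 8080;

        location / {
            # the link alias resolves inside the container,
            # so no hard-coded host IP is needed
            proxy_pass http://aliasedredis:9000;
        }
    }
}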
