We are using Nginx as a reverse proxy for Docker Cloud services. A script updates the Nginx config file whenever a new service is deployed on Docker Cloud or an existing service gets a new URL.
Nginx and the script each run in their own Docker container.
The Nginx config file is mounted from the host (ECS). After the script updates the config file, Nginx needs to be reloaded for the changes to apply.
First, is this the best way of updating the Nginx config file, and what is the best way to reload Nginx without any downtime?
Should I recreate the Nginx container after each update? If so, how?
Or is it fine to monitor the config file for changes from the host (using a script) and reload Nginx with the command below?
docker exec NginxcontainerID nginx -s reload
Should I recreate the Nginx container after each update? If so, how?
No, you just need to reload the nginx service most of the time.
You can use:
docker exec nginxcontainername/id nginx -s reload
or
docker kill -s HUP nginxcontainername/id
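To avoid reloading into a broken configuration, you can validate it first; a minimal sketch, assuming your container is named nginx:
docker exec nginx nginx -t && docker exec nginx nginx -s reload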
Another option would be using a custom image that checks the checksum of the nginx config and reloads nginx whenever it changes. Example script:
nginx "$#"
oldcksum=`cksum /etc/nginx/conf.d/default.conf`
inotifywait -e modify,move,create,delete -mr --timefmt '%d/%m/%y %H:%M' --format '%T' \
/etc/nginx/conf.d/ | while read date time; do
newcksum=`cksum /etc/nginx/conf.d/default.conf`
if [ "$newcksum" != "$oldcksum" ]; then
echo "At ${time} on ${date}, config file update detected."
oldcksum=$newcksum
nginx -s reload
fi
done
You need to install the inotify-tools package, which provides inotifywait.
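To wire this into a custom image, something like the following sketch could work; the script name watch-and-reload.sh is hypothetical, and inotify-tools is assumed to be installable from the base image's package repositories:
FROM nginx:1.13.2
# inotify-tools provides the inotifywait binary used by the script
RUN apt-get update \
    && apt-get install -y inotify-tools \
    && rm -rf /var/lib/apt/lists/*
COPY watch-and-reload.sh /usr/local/bin/watch-and-reload.sh
RUN chmod +x /usr/local/bin/watch-and-reload.sh
# nginx daemonizes itself inside the script; the inotifywait loop keeps the container in the foreground
CMD ["/usr/local/bin/watch-and-reload.sh"]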
I already have a Dockerfile for a customized nginx image, and it works fine.
FROM library/nginx:1.13.2
LABEL maintainer="san@test.com"
# Remove the default Nginx configuration file
RUN rm -v /etc/nginx/nginx.conf
# Copy a configuration file from the current directory
ADD nginx.conf /etc/nginx/
# Forward request and error logs to the Docker log collector
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
    && ln -sf /dev/stderr /var/log/nginx/error.log \
    # Make the PageSpeed cache writable
    && mkdir -p /var/cache/ngx_pagespeed && \
    chmod -R o+wr /var/cache/ngx_pagespeed
ADD server.crt /etc/nginx/ssl/
ADD server.key /etc/nginx/ssl/
ADD conf.d/ /etc/nginx/conf.d/
ADD proxy.d/ /etc/nginx/proxy.d/
CMD ["nginx", "-g", "daemon off;"]
I am also trying to install the AWS CLI so that I can copy some S3 files and dynamically change the nginx configuration, which I will do via CMD once the AWS CLI is available within the container.
I have tried and read many links from Google, but the documentation is not helping, especially on how to pass credentials.
I am creating the image in two ways. The first is via a Jenkins pipeline (snippet below):
stages {
  stage('Build Docker image') {
    steps {
      script {
        docker.withRegistry("http://xyz-1.amazonaws.com", "ecr:eu-central-1:aws-credentials") {
          def customImage = docker.build("web-proxy:${CY_RELEASE_VERSION}", ".")
          customImage.push()
        }
      }
    }
  }
}
The other way is local: I manually build the image like the following:
docker build -t web-proxy-dev_san_1:1.11 .
What I am not sure about is how to install the AWS CLI in the Dockerfile and have the image pick up credentials automatically, both locally and in Jenkins. I think it may work for Jenkins if I manage to get the AWS CLI installed, since I am using the aws-credentials specified in the pipeline, but I haven't reached that stage yet.
You can use the AWS plugin to interact with AWS in your pipeline; check the following example: link
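A couple of hedged sketches of how the pieces could fit together. In the Dockerfile, the AWS CLI could be installed from the base image's package repositories (the package name may vary by distribution):
RUN apt-get update \
    && apt-get install -y awscli \
    && rm -rf /var/lib/apt/lists/*
In the Jenkins pipeline, the Pipeline: AWS Steps plugin provides a withAWS block that injects the stored credentials for the enclosed steps; the bucket and file names below are placeholders:
withAWS(region: 'eu-central-1', credentials: 'aws-credentials') {
    // hypothetical example: fetch an extra nginx config from S3
    s3Download(file: 'conf.d/extra.conf', bucket: 'my-config-bucket', path: 'conf.d/extra.conf')
}
For local runs, the usual approach is to pass credentials as environment variables at run time rather than baking them into the image:
docker run -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY web-proxy-dev_san_1:1.11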
The idea is simple: I need to send a signal from one container to another in order to restart nginx.
Is connecting to the nginx container from the first one over SSH a good solution?
Do you have other recommended ways to do this?
I don't recommend installing SSH; Docker containers are not virtual machines, and you should respect the microservices architecture to benefit from the many advantages it provides.
To send a signal from one container to another, you can use the Docker API.
First, you need to share /var/run/docker.sock with the container that will send the signal:
docker run -d --name control -v /var/run/docker.sock:/var/run/docker.sock <Control Container>
To send a signal to a container named nginx, you can do the following:
echo -e "POST /containers/nginx/kill?signal=HUP HTTP/1.0\r\n" | \
nc -U /var/run/docker.sock
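If curl is available in the control container, the same API call can be made over the socket (curl 7.40+ supports --unix-socket):
curl --unix-socket /var/run/docker.sock -X POST "http://localhost/containers/nginx/kill?signal=HUP"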
Another option is using a custom image with a custom script that checks the nginx config files and, if the checksum has changed, sends a reload signal. This way nginx reloads automatically each time you change the config, or you can reload it manually using the commands above. These kinds of scripts are common among Kubernetes users. The following is an example:
nginx "$#"
oldcksum=`cksum /etc/nginx/conf.d/default.conf`
inotifywait -e modify,move,create,delete -mr --timefmt '%d/%m/%y %H:%M' --format '%T' \
/etc/nginx/conf.d/ | while read date time; do
newcksum=`cksum /etc/nginx/conf.d/default.conf`
if [ "$newcksum" != "$oldcksum" ]; then
echo "At ${time} on ${date}, config file update detected."
oldcksum=$newcksum
nginx -s reload
fi
done
Don't forget to install the inotify-tools package, which provides inotifywait.
I have a frontend-only web application hosted in Docker. The backend already exists, but it has a "custom IP" address, so I had to update my local /etc/hosts file to access it. From my local machine I can therefore access the backend API without a problem.
The problem is that Docker somehow cannot resolve this "custom IP", even when the host is written into the container's (image's?) /etc/hosts file.
When the Docker container starts up, I see this error:
$ docker run media-saturn:dev
2016/05/11 07:26:46 [emerg] 1#1: host not found in upstream "my-server-address.com" in /etc/nginx/sites/ms.dev.my-company.com:36
nginx: [emerg] host not found in upstream "my-server-address.com" in /etc/nginx/sites/ms.dev.my-company.com:36
I update the /etc/hosts file via a command in the Dockerfile, like this:
# install wget
RUN apt-get update \
&& apt-get install -y wget \
&& rm -rf /var/lib/apt/lists/*
# The trick is to add the hostname on the same line where you use it; otherwise the
# hosts file gets reset, since every RUN command starts a new intermediate container.
# It has to be https, otherwise authentication is required.
RUN echo "123.45.123.45 my-server-address.com" >> /etc/hosts && wget https://my-server-address.com
When I SSH into the machine to check the current content of /etc/hosts, the line "123.45.123.45 my-server-address.com" is indeed there.
Can anyone help me out with this? I am a Docker newbie.
I have solved this. There are two things at play: how it works locally and how it works in Docker Cloud.
Local workflow
cd into the root of the project, where the Dockerfile is located
build the image: docker build -t media-saturn:dev .
run the built image: docker run -it --add-host="my-server-address.com:123.45.123.45" -p 80:80 media-saturn:dev
Docker Cloud workflow
Add the extra_hosts directive to your Stackfile, like this, and then click Redeploy in Docker Cloud so that the changes take effect:
extra_hosts:
  - 'my-server-address.com:123.45.123.45'
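For context, a minimal sketch of where the directive sits in a Stackfile; the service name, image, and ports are placeholders:
web:
  image: 'media-saturn:dev'
  ports:
    - '80:80'
  extra_hosts:
    - 'my-server-address.com:123.45.123.45'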
Optimization tip
ignore as many folders as possible to speed up the process of sending data to the Docker daemon
add a .dockerignore file
typically you want to add folders like node_modules, bower_components, and tmp (see the sketch below)
in my case the tmp folder contained about 1.3GB of small files, so ignoring it sped up the process significantly
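For example, a minimal .dockerignore sketch (the entries are typical placeholders; adjust them to your project):
node_modules
bower_components
tmp
.git
*.log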
I installed nginx via passenger-install-nginx-module and start it with /opt/nginx/sbin/nginx, but I don't know how to stop or restart nginx after I update my nginx conf.
I know I can use ps aux | grep and kill, but is there a way like service nginx restart?
nginx is built to reload its configuration without the need for a restart. The common way to restart nginx is to first run
nginx -t
which will analyse the configuration file and tell you if there are any problems (this is very convenient, since syntax errors in the configuration file mean downtime), and then
nginx -s reload
which will reload the configuration and restart the nginx workers one by one with the new config. This simply finds the master nginx process and sends it the right signal (not much different from your ps aux | grep and kill; it just uses a different signal).
There are several other useful command line options for configuration and logging. Being aware of them lets you run nginx practically without any downtime.
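For reference, a sketch of the usual zero-downtime cycle for a source-installed nginx; the /opt/nginx paths follow the Passenger install above, and the pid file location is an assumption (check the pid directive in your nginx.conf):
/opt/nginx/sbin/nginx -t          # check the config first
/opt/nginx/sbin/nginx -s reload   # reload workers with the new config
/opt/nginx/sbin/nginx -s quit     # graceful stop, should you ever need one
# equivalent, by signalling the master process directly
kill -HUP $(cat /opt/nginx/logs/nginx.pid)    # reload
kill -QUIT $(cat /opt/nginx/logs/nginx.pid)   # graceful stop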
I want to be able to cap:deploy without having to type any passwords. I have set up all the private keys so I can reach the remote servers fine, and am now using svn over ssh, so no passwords there.
I have one last problem: I need to be able to restart nginx. Right now I have sudo /etc/init.d/nginx reload. That is a problem because it uses the Capistrano password, the one I just removed because I am using keys. Any ideas on how to restart nginx without a password?
I just spent a good hour looking at sudoers wildcards and the like trying to solve this exact problem. In truth, all you really need is a root-executable script that restarts nginx.
Add this to the /etc/sudoers file:
username hostname = NOPASSWD: /path/to/script
Write the script as root:
#!/bin/bash
# Reload nginx by sending SIGHUP to the master process
/bin/kill -HUP `cat /var/run/nginx.pid`
Make the script executable (e.g. chmod 700 /path/to/script).
Test it:
sudo /path/to/script
There is a better answer on Stack Overflow that does not involve writing a custom script:
The best practice is to use /etc/sudoers.d/myusername
The /etc/sudoers.d/ folder can contain multiple files that allow users to call stuff using sudo without being root.
The file usually contains a user and a list of commands that the user can run without having to specify a password.
Instructions:
In all commands, replace myusername with the name of the user that you want to use to restart nginx without a sudo password.
Open the sudoers file for your user:
$ sudo visudo -f /etc/sudoers.d/myusername
The editor will open. Paste the following line; it will allow that user to run nginx start, restart, and stop:
myusername ALL=(ALL) NOPASSWD: /usr/sbin/service nginx start,/usr/sbin/service nginx stop,/usr/sbin/service nginx restart
Save by hitting Ctrl+O. It will ask where you want to save; simply press Enter to confirm the default. Then exit the editor with Ctrl+X.
Now you can restart (and start and stop) nginx without a password. Let's try it.
Open a new session (otherwise you might simply not be asked for your sudo password, because it has not timed out):
$ ssh myusername@myserver
Stop nginx
$ sudo /usr/sbin/service nginx stop
Confirm that nginx has stopped by checking your website or running ps aux | grep nginx
Start nginx
$ sudo /usr/sbin/service nginx start
Confirm that nginx has started by checking your website or running ps aux | grep nginx
PS: Make sure to use sudo /usr/sbin/service nginx start|restart|stop, and not sudo service nginx start|restart|stop, since the sudoers entry matches the full path of the command.
Run sudo visudo
Append the lines below (in this example, you can add multiple scripts and services separated by commas):
# Run scripts without asking for pass
<your-user> ALL=(root) NOPASSWD: /opt/fixdns.sh,/usr/sbin/service nginx *,/usr/sbin/service docker *
Save and exit with :wq
Create a rake task at Rails_App/lib/capistrano/tasks/nginx.rake and paste in the code below:
namespace :nginx do
  %w(start stop restart reload).each do |command|
    desc "#{command.capitalize} Nginx"
    task command do
      on roles(:app) do
        execute :sudo, "service nginx #{command}"
      end
    end
  end
end
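With that file loaded (a Capistrano 3 Capfile typically imports lib/capistrano/tasks/*.rake), the tasks can be invoked per stage, for example:
cap production nginx:reload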
Then SSH to your remote server and open the file:
sudo vi /etc/sudoers
and then paste this line (after the line %sudo ALL=(ALL:ALL) ALL):
deploy ALL=(ALL:ALL) NOPASSWD: /usr/sbin/service nginx *
Or, as in your case,
deploy ALL=(ALL:ALL) NOPASSWD: /etc/init.d/nginx *
Here I am assuming your deployment user is deploy.
You can also add other commands here for which you don't want to enter a password. For example:
deploy ALL=(ALL:ALL) NOPASSWD: /usr/sbin/service nginx *, /etc/init.d/mysqld, /etc/init.d/apache2