Stop an Nginx Docker container

I am trying to stop a Docker container running Nginx only after there has been no activity in the access.log of that Nginx instance for a period of time.
Is it possible to stop a Docker container from inside the container? The other solution I can think of is to have a cron job running on the host OS that checks /var/lib/docker/aufs/mnt/[container id]/, but I am planning on starting lots of containers and would prefer not to have to keep a list of IDs.

The Docker container stops when the main process in the container stops.
I set up a little Dockerfile and a start script to show how this could work in your case:
Dockerfile
FROM nginx
COPY start.sh /
CMD ["/start.sh"]
start.sh
#!/bin/bash
nginx &
sleep 20
# replace sleep 20 with your test of inactivity
nginx -s stop
Build container, run and test
$ docker build -t ng .
$ docker run -d ng
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3a373e721da7 ng:latest "/start.sh" 4 seconds ago Up 3 seconds 443/tcp, 80/tcp distracted_colden
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3a373e721da7 ng:latest "/start.sh" 16 seconds ago Up 16 seconds 80/tcp, 443/tcp distracted_colden
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
$
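For the inactivity test itself, one possibility is to poll the modification time of the access log and shut nginx down once it has been idle long enough. This is only a sketch: it assumes access_log writes to a regular file at /var/log/nginx/access.log (the official nginx image symlinks that path to /dev/stdout, so you would need to point access_log at a real file in your nginx config) and uses an assumed 5-minute threshold.
#!/bin/bash
# start.sh: stop nginx after the access log has been idle for IDLE_LIMIT seconds
nginx &

LOG=/var/log/nginx/access.log   # assumed: access_log writes to a regular file here
IDLE_LIMIT=300                  # assumed threshold: 5 minutes without requests

while true; do
    sleep 30
    last=$(stat -c %Y "$LOG" 2>/dev/null || echo 0)   # mtime of the log
    now=$(date +%s)
    if [ $((now - last)) -ge "$IDLE_LIMIT" ]; then
        break
    fi
done

nginx -s stop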

You could share your Docker socket with that Docker image and then do any operations necessary.
To share the Docker socket and client with the image, do something like this:
docker run -v /var/run/docker.sock:/run/docker.sock -v $(which docker):/bin/docker YOUR_IMAGE
Inside the container, the environment gives you the container ID; for example, run echo $HOSTNAME within the container.
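With the socket and client mounted like that, the container can stop itself when your inactivity check fires. A minimal sketch (the socket sits at /run/docker.sock inside the container, as mounted above, and $HOSTNAME holds the container's own ID):
# run inside the container once the inactivity check has triggered
docker -H unix:///run/docker.sock stop "$HOSTNAME"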

I ran an nginx container and then wasn't able to fire it up again:
nginx: [emerg] bind() to unix:/var/run/nchan.sock failed (98: Address already in use)
The easiest fix was to just "prune":
docker system prune

Docker can run a command in your running container using the exec command:
docker exec [-d|--detach[=false]] [--help] [-i|--interactive[=false]] [-t|--tty[=false]] CONTAINER COMMAND [ARG...]
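So, if you prefer to drive the inactivity check from the host, you could, for example, tell nginx in a running container to shut down gracefully (a sketch; docker-nginx is an assumed container name):
docker exec docker-nginx nginx -s quit
If nginx is the container's main process, as in the official image, the container stops as soon as nginx exits.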

Related

Docker and Rancher - Run multiple workers

I need to run 3 commands to run my application:
$ celery -A name worker
$ daphne name.asgi:channel_layer -b 0.0.0.0 -p 8000
$ python manage.py runworker
I need to do this with the same image; I do not know if it is viable to create a container for each command. What should I do?
Thanks for your help.
I realized that they are all services, so there should be a container for each one.
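A sketch of that one-container-per-service approach, assuming the shared image is tagged myapp and that daphne's port 8000 should be published on the host:
docker run -d --name worker   myapp celery -A name worker
docker run -d --name daphne   -p 8000:8000 myapp daphne name.asgi:channel_layer -b 0.0.0.0 -p 8000
docker run -d --name channels myapp python manage.py runworker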

How to properly start nginx in Docker

I want nginx in a Docker container to host a simple static hello world html website. I want to simply start it with "docker run imagename". In order to do that I added the run parameters to the Dockerfile. The reason I want to do that is that I would like to host the application on Cloud Foundry in a next step. Unfortunately I get the following error when doing it like this.
Dockerfile
FROM nginx:alpine
COPY . /usr/share/nginx/html
EXPOSE 5000
CMD ["nginx -d -p 5000:5000"]
Error
Error starting userland proxy: Bind for 0.0.0.0:5000: unexpected error Permission denied.
From:
https://docs.docker.com/engine/reference/builder/#expose
EXPOSE does not make the ports of the container accessible to the host. To do that, you must use either the -p flag to publish a range of ports or the -P flag to publish all of the exposed ports. You can expose one port number and publish it externally under another number
CMD ["nginx -d -p 5000:5000"]
Your Dockerfile is based on
FROM nginx:alpine
which already starts nginx, so that CMD is not needed.
After you build from your Dockerfile, run it with:
docker run -d -p 5000:5000 <your_image>
Edit:
If you want to map container port 80 to host port 5000:
docker run -d -p 5000:80 <your_image>
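Putting it together, a sketch assuming the image is tagged hello-nginx and the Dockerfile keeps the default CMD from nginx:alpine (which listens on port 80):
docker build -t hello-nginx .
docker run -d -p 5000:80 hello-nginx
curl http://localhost:5000    # should return the static hello world page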

Docker version 1.13.1, Docker Swarm, jwilder/nginx-proxy will not start as a docker service

I'm trying to setup an Elasticsearch cluster on Docker following this guide: https://sematext.com/blog/2016/12/12/docker-elasticsearch-swarm/
But I'm consistently getting an error about /tmp/docker.sock after creating the jwilder/nginx-proxy service. The below console snip is from a freshly installed and updated CentOS7. I installed docker via yum following the instructions here: https://docs.docker.com/engine/installation/linux/centos/
[root@centos7]# docker -v
Docker version 1.13.1, build 092cba3
[root@centos7]#
[root@centos7]# docker service create --mode global \
> --name proxy -p 80:80 \
> --network elasticsearch-frontend \
> --network elasticsearch-backend \
> --mount type=bind,bind-propagation=rshared,src=/var/run/docker.sock,target=/tmp/docker.sock:ro \
> jwilder/nginx-proxy
xbhj4rzjyuu0k8maf1ha5fmgs
[root@centos7]# docker service ls
ID NAME MODE REPLICAS IMAGE
xbhj4rzjyuu0 proxy global 0/1 jwilder/nginx-proxy:latest
[root@centos7]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7ba303e0f8b6 jwilder/nginx-proxy@sha256:9a2d63aad9068f817c705965f41f2f32fa0bbef6b217ae5c9b2340ef23e3dcba "/app/docker-entry..." 2 seconds ago Created proxy.kifcc5gbdcxz5ixsbx7sl1cv8.zuizhtt7q94nluuudlgjgy1yi
2fe655a93aa4 jwilder/nginx-proxy@sha256:9a2d63aad9068f817c705965f41f2f32fa0bbef6b217ae5c9b2340ef23e3dcba "/app/docker-entry..." 10 seconds ago Exited (1) 3 seconds ago proxy.kifcc5gbdcxz5ixsbx7sl1cv8.baqn1204spbw5v6qxx6qjx327
7894fd0e1dee jwilder/nginx-proxy@sha256:9a2d63aad9068f817c705965f41f2f32fa0bbef6b217ae5c9b2340ef23e3dcba "/app/docker-entry..." 18 seconds ago Exited (1) 11 seconds ago proxy.kifcc5gbdcxz5ixsbx7sl1cv8.6s9u0q0y1kjelebszheius2es
51840cca0d32 jwilder/nginx-proxy@sha256:9a2d63aad9068f817c705965f41f2f32fa0bbef6b217ae5c9b2340ef23e3dcba "/app/docker-entry..." 26 seconds ago Exited (1) 19 seconds ago proxy.kifcc5gbdcxz5ixsbx7sl1cv8.wlwy723ts9kw00sgyu3s5f985
d52fd18567a9 jwilder/nginx-proxy@sha256:9a2d63aad9068f817c705965f41f2f32fa0bbef6b217ae5c9b2340ef23e3dcba "/app/docker-entry..." 34 seconds ago Exited (1) 27 seconds ago proxy.kifcc5gbdcxz5ixsbx7sl1cv8.wa5jk9xnly1tdxpbvonnjmoty
[root@centos7]# docker logs 2fe655a93aa4
ERROR: you need to share your Docker host socket with a volume at /tmp/docker.sock
Typically you should run your jwilder/nginx-proxy with: `-v /var/run/docker.sock:/tmp/docker.sock:ro`
See the documentation at http://git.io/vZaGJ
[root@centos7]#
The jwilder/nginx-proxy container works when launched as a single container using the -v option to mount docker.sock.
I've scoured google (the Docker docs, the jwilder/nginx-proxy git) looking for what would cause this and I've come up with nothing. Does anyone see something wrong? I'm new to docker, so maybe I'm missing something easy.
Thanks in advance! :-)
Instead of making a read-only mount of /var/run/docker.sock at /tmp/docker.sock, you are mounting /var/run/docker.sock at a target literally named /tmp/docker.sock:ro, hence the application complains.
To rectify this, make a slight modification. Replace...
--mount type=bind,bind-propagation=rshared,src=/var/run/docker.sock,target=/tmp/docker.sock:ro
...with:
--mount type=bind,bind-propagation=rshared,src=/var/run/docker.sock,target=/tmp/docker.sock,ro=1
From the documentation:
readonly or ro: The Engine mounts binds and volumes read-write unless readonly option is given when mounting the bind or volume. When true or 1 or no value the bind or volume is mounted read-only. When false or 0 the bind or volume is mounted read-write.
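Applied to the original command, the service definition would then look something like this (only the mount option changes):
docker service create --mode global \
  --name proxy -p 80:80 \
  --network elasticsearch-frontend \
  --network elasticsearch-backend \
  --mount type=bind,bind-propagation=rshared,src=/var/run/docker.sock,target=/tmp/docker.sock,ro=1 \
  jwilder/nginx-proxy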

My docker container isn't starting on localhost (0.0.0.0) on Docker for Windows (Native using Hyper-V)

I'm following Digital Ocean's tutorial on how to start an nginx docker container (currently on Step 4). This is their output:
$ docker run --name docker-nginx -p 80:80 -d nginx
d3ccb73a91985651ec61231bca9f9c716f0dec807e354a29eeef2144f883a01c
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b91f3ce26553 nginx "nginx -g 'daemon off" About a minute ago Up About a minute 0.0.0.0:80->80/tcp, 443/tcp docker-nginx
But when I run it, this is my output (notice the different IP of the container):
C:\>docker run --name docker-nginx -p 80:80 -d nginx
d3ccb73a91985651ec61231bca9f9c716f0dec807e354a29eeef2144f883a01c
C:\>docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d3ccb73a9198 nginx "nginx -g 'daemon off" 14 hours ago Up 2 seconds 10.0.75.2:80->80/tcp, 443/tcp docker-nginx
Why does this happen? And how can I get the same results as Digital Ocean's? (Getting the server to start on localhost)
Edit: I'm using Docker for Windows (recently released), which apparently runs natively using Hyper-V. My output for docker-machine ls is this:
C:\>docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
C:\>
But when I run it, this is my output (notice the different IP of the container)
Since this is a Windows machine, I assume that you're using Docker for Windows. 10.0.75.2 is the IP of the virtual machine that Docker runs in.
If you are using Windows or Mac OS, you will need some form of virtualization in order to run Docker, and the IP you just saw is the IP of that lightweight virtual machine.
And how can I get the same results as Digital Ocean's? (Getting the server to start on localhost)
Use a Linux distribution! Alternatively, you can enable "Expose container ports on localhost" in the Docker for Windows settings.
Although you created the containers on your local machine, they are actually running on a different machine (a virtual machine).
First, check the IP of your docker machine (the virtual machine):
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM
default * virtualbox Running tcp://192.168.99.100
Then run a curl command (or open a browser) to view the default web site served by your nginx web server inside the container:
curl http://192.168.99.100:80
If you are using a virtual machine on Windows:
docker-machine ip default
https://docs.docker.com/machine/concepts/
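For example, to hit the nginx container without hard-coding the VM address (a sketch, assuming the machine is named default):
curl http://$(docker-machine ip default):80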
When I ran this command for the first time: docker run -d -p 80:80 --name docker-tutorial docker101tutorial
I got this error:
docker: Error response from daemon: Conflict. The container name
"/docker-tutorial" is already in use by container "LONG_CONTAINER_ID".
You have to remove (or rename) that container to be able to reuse that
name.
So, I tried to remove this container using: docker rm -f LONG_CONTAINER_ID
Then I did: docker run -d -p 3080:80 --name docker-tutorial docker101tutorial
Note 3080:80 instead of 80:80. Had I run this from Docker Desktop, I would have seen a default option for this.

Dockerized nginx is not starting

I have tried following some tutorials and documentation on dockerizing my web server, but I am having trouble getting the service to run via the docker run command.
This is my Dockerfile:
FROM ubuntu:trusty
#Update and install stuff
RUN apt-get update
RUN apt-get install -y python-software-properties aptitude screen htop nano nmap nginx
#Add files
ADD src/main/resources/ /usr/share/nginx/html
EXPOSE 80
CMD service nginx start
I create my image:
docker build -t myImage .
And when I run it:
docker run -p 81:80 myImage
it seems to just stop:
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
90e54a254efa pms-gui:latest /bin/sh -c service n 3 seconds ago Exit 0 prickly_bohr
I would expect this to be running with port 81->80 but it is not. Running
docker start 90e
does not seem to do anything.
I also tried entering it directly
docker run -t -i -p 81:80 myImage /bin/bash
and from here I can start the service
service nginx start
and from another tab I can see it is working as intended (also in my browser):
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
408237a5e10b myImage:latest /bin/bash 12 seconds ago Up 11 seconds 0.0.0.0:81->80/tcp mad_turing
So I assume it is something I am doing wrong with my Dockerfile? Could anyone help me out with this? I am quite new to Docker. Thank you!
SOLUTION: Based on the answer from Ivant I found another way to start nginx in the foreground. My Dockerfile CMD now looks like:
CMD /usr/sbin/nginx -g "daemon off;"
As of now, the official nginx image uses this to run nginx (see the Dockerfile):
CMD ["nginx", "-g", "daemon off;"]
In my case, this was enough to get it to start properly. There are tutorials online suggesting more awkward ways of accomplishing this but the above seems quite clean.
A Docker container runs as long as the command you specify with CMD, ENTRYPOINT, or through the command line is running. In your case the service command finishes right away and the whole container is shut down.
One way to fix this is to start nginx directly from the command line (make sure you don't run it as a daemon).
Another option is to create a small script which starts the service and then sleeps forever. Something like:
#!/bin/bash
service nginx start
while true; do sleep 1d; done
and run this instead of directly running the service command.
A third option would be to use something like runit or a similar program instead of the normal service command.
Using docker-compose:
To follow the recommended solution, add to docker-compose.yml:
command: nginx -g "daemon off;"
I also found I could simply add to nginx.conf:
daemon off;
...and continue to use in docker-compose.yml:
command: service nginx start
...although it would make the config file less portable outside docker.
Docker has a very nice index of official and user images. When you want to do something, chances are someone already did it ;)
Just search for 'nginx' on index.docker.io and you will see that there is an official nginx image: https://registry.hub.docker.com/_/nginx/
There you have a full guide to help you start your webserver.
Feel free to take a look at other users' nginx images to see variants :)
The idea is to start nginx in foreground mode.
If you run "service nginx start", it is a parent process which will start a child process of nginx. If you run "service nginx start" as CMD in a container, the Process ID 1 for the container will be "service nginx start" or ServiceManager (SystemD), while actual nginx would be running as a child process.
If you run "service nginx start", and then "ps -ef", you will get output as below. I have run it my host OS.
root@ip-172-31-85-74:/home/ubuntu# service nginx start
root@ip-172-31-85-74:/home/ubuntu#
root@ip-172-31-85-74:/home/ubuntu# ps -ef | grep nginx
root 18593 1 0 12:27 ? 00:00:00 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
www-data 18595 18593 0 12:27 ? 00:00:00 nginx: worker process
root 18599 17918 0 12:27 pts/0 00:00:00 grep --color=auto nginx
So, here the process ID 18593 is the child process which has parent process 1.
A container exits when its PID 1 process exits. In the case of CMD "service nginx start", PID 1 is the process manager (possibly systemd); it starts nginx as a child process and then exits itself, hence the container exits.
Similarly, if you run a shell script (e.g. start.sh) in CMD, the container will exit as soon as the script ends, even if the script started some services (e.g. nginx) during its execution, because PID 1 belongs to the shell script. The parent process will be "./start.sh", and the services started by the script will be child processes. If you want to use a shell script in CMD and want the container to run indefinitely, you need a command at the end of the script that doesn't let it end. Something like this:
#!/bin/bash
service nginx start
while true; do sleep 1d; done
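If nginx is the only thing the container needs to run, an alternative to the sleep loop is to hand PID 1 over to nginx itself at the end of the script, so nginx runs in the foreground and keeps the container alive (a sketch):
#!/bin/bash
# one-off setup could go here, then replace the shell with nginx in the foreground
exec nginx -g "daemon off;"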
