I have tried following some tutorials and documentation on dockerizing my web server, but I am having trouble getting the service to run via the docker run command.
This is my Dockerfile:
FROM ubuntu:trusty
#Update and install stuff
RUN apt-get update
RUN apt-get install -y python-software-properties aptitude screen htop nano nmap nginx
#Add files
ADD src/main/resources/ /usr/share/nginx/html
EXPOSE 80
CMD service nginx start
I create my image:
docker build -t myImage .
And when I run it:
docker run -p 81:80 myImage
it seems to just stop:
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
90e54a254efa pms-gui:latest /bin/sh -c service n 3 seconds ago Exit 0 prickly_bohr
I would expect this to be running with port 81->80 but it is not. Running
docker start 90e
does not seem to do anything.
I also tried entering it directly
docker run -t -i -p 81:80 myImage /bin/bash
and from here I can start the service
service nginx start
and from another tab I can see it is working as intended (also in my browser):
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
408237a5e10b myImage:latest /bin/bash 12 seconds ago Up 11 seconds 0.0.0.0:81->80/tcp mad_turing
So I assume it is something I am doing wrong in my Dockerfile? Could anyone help me out with this? I am quite new to Docker. Thank you!
SOLUTION: Based on the answer from Ivant I found another way to start nginx in the foreground. My Dockerfile CMD now looks like:
CMD /usr/sbin/nginx -g "daemon off;"
As of now, the official nginx image uses this to run nginx (see the Dockerfile):
CMD ["nginx", "-g", "daemon off;"]
In my case, this was enough to get it to start properly. There are tutorials online suggesting more awkward ways of accomplishing this but the above seems quite clean.
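For reference, here is the original Dockerfile with only the CMD changed (a minimal sketch; the diagnostic packages from the original install line are omitted):
FROM ubuntu:trusty
# Update and install nginx
RUN apt-get update && apt-get install -y nginx
# Add the static files
ADD src/main/resources/ /usr/share/nginx/html
EXPOSE 80
# Run nginx in the foreground so it stays PID 1 and the container keeps running
CMD ["nginx", "-g", "daemon off;"]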
A Docker container runs only as long as the command you specify with CMD, ENTRYPOINT or on the command line is running. In your case the service command finishes right away and the whole container is shut down.
One way to fix this is to start nginx directly from the command line (make sure you don't run it as a daemon).
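For example, you can test this without rebuilding by overriding the image's CMD at run time:
docker run -p 81:80 myImage nginx -g "daemon off;"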
Another option is to create a small script which starts the service and then sleeps forever. Something like:
#!/bin/bash
# start nginx (the service command returns immediately), then keep PID 1 alive
service nginx start
while true; do sleep 1d; done
and run this instead of directly running the service command.
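To wire that up in the image (a sketch; it assumes the script is saved as start.sh next to the Dockerfile):
ADD start.sh /start.sh
RUN chmod +x /start.sh
CMD ["/start.sh"]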
A third option would be to use something like runit or a similar program instead of the normal service command.
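A rough sketch of that route, assuming runit is installed in the image and you create an executable run script for nginx:
RUN apt-get install -y runit
# /etc/service/nginx/run should contain:
#   #!/bin/sh
#   exec /usr/sbin/nginx -g "daemon off;"
CMD ["runsvdir", "-P", "/etc/service"]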
Using docker-compose:
To follow the recommended solution, add to docker-compose.yml:
command: nginx -g "daemon off;"
I also found I could simply add to nginx.conf:
daemon off;
...and continue to use in docker-compose.yml:
command: service nginx start
...although it would make the config file less portable outside Docker.
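Putting that together, a minimal compose service might look like this (the image name and port mapping are placeholders):
web:
  image: myimage
  ports:
    - "81:80"
  command: nginx -g "daemon off;"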
Docker has a very nice index of official and user images. When you want to do something, chances are someone has already done it ;)
Just search for 'nginx' on index.docker.io and you will see that there is an official nginx image: https://registry.hub.docker.com/_/nginx/
There you have a full guide to help you start your webserver.
Feel free to take a look at other users' nginx images to see variants :)
The idea is to start nginx in foreground mode.
If you run "service nginx start", it is a parent process which will start a child process of nginx. If you run "service nginx start" as CMD in a container, the Process ID 1 for the container will be "service nginx start" or ServiceManager (SystemD), while actual nginx would be running as a child process.
If you run "service nginx start", and then "ps -ef", you will get output as below. I have run it my host OS.
root@ip-172-31-85-74:/home/ubuntu# service nginx start
root@ip-172-31-85-74:/home/ubuntu#
root@ip-172-31-85-74:/home/ubuntu# ps -ef | grep nginx
root 18593 1 0 12:27 ? 00:00:00 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
www-data 18595 18593 0 12:27 ? 00:00:00 nginx: worker process
root 18599 17918 0 12:27 pts/0 00:00:00 grep --color=auto nginx
So here, the nginx master process 18593 has parent process ID 1: the service command that launched it has already exited.
A container exits when its PID 1 exits. With CMD "service nginx start", PID 1 is the process manager (possibly systemd); it starts nginx as a child process and then exits itself, hence the container exits.
Similarly, if you run a shell script (e.g. start.sh) in CMD, the container will exit as soon as the script ends, even if the script started services (e.g. nginx) during its execution, because PID 1 belongs to the script: the parent process is ./start.sh, and the services it starts are child processes. If you want to use a shell script in CMD and have the container run indefinitely, the script must end with a command that never finishes. Something as shown below:
#!/bin/bash
service nginx start
while true; do sleep 1d; done
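An alternative to the sleep loop (a sketch) is to end the script with exec, so nginx itself takes over as PID 1 and the container lives exactly as long as nginx does:
#!/bin/bash
# any setup work goes here, then replace the shell with nginx in the foreground
exec nginx -g "daemon off;"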
Related
I have a docker container which stops unexpectedly.
The important part of my docker image looks like this:
...
ENTRYPOINT ["./start.sh"]
CMD ["nginx", "-g", "daemon off;"]
It's all executed during the build. Then I start the container with docker run -p 80:8080 myimage:latest
I see something like this when I perform docker ps
"./start.sh nginx -g "
But a few seconds later the container stops (instead of keeping nginx running).
docker logs shows me the output of my start.sh.
The last command in that .sh is an echo of "fine" and I see that.
What I want to obtain is that the container executes the entrypoint script and after that runs the nginx server.
Using ENTRYPOINT and CMD together does not run them consecutively; the CMD arguments are appended to the entrypoint, which is exactly what your docker ps output shows. This is a decent explanation.
You need to make your start.sh handle the CMD arguments, have it call nginx itself, or rework it altogether.
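The usual pattern for the first option is to end the entrypoint script with exec "$@", so the CMD arguments are executed once the setup work is done. A sketch (the echo stands in for whatever your actual start.sh does):
#!/bin/bash
# start.sh: run setup steps, then hand off to whatever CMD was passed
set -e
echo "fine"   # setup/logging steps go here
exec "$@"     # becomes: nginx -g "daemon off;" and takes over as PID 1
With ENTRYPOINT ["./start.sh"] and CMD ["nginx", "-g", "daemon off;"], the container runs ./start.sh nginx -g "daemon off;" and the final exec keeps nginx running in the foreground.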
I have a frontend-only web application hosted in Docker. The backend already exists but it has "custom IP" address, so I had to update my local /etc/hosts file to access it. So, from my local machine I am able to access the backend API without problem.
But the problem is that Docker somehow cannot resolve this "custom IP", even when the host is written in the container's (image's?) /etc/hosts file.
When the Docker container starts up I see this error
$ docker run media-saturn:dev
2016/05/11 07:26:46 [emerg] 1#1: host not found in upstream "my-server-address.com" in /etc/nginx/sites/ms.dev.my-company.com:36
nginx: [emerg] host not found in upstream "my-server-address.com" in /etc/nginx/sites/ms.dev.my-company.com:36
I update the /etc/hosts file via a command in the Dockerfile, like this:
# install wget
RUN apt-get update \
&& apt-get install -y wget \
&& rm -rf /var/lib/apt/lists/*
# The trick is to add the hostname on the same line as you use it, otherwise the hosts file will get reset, since every RUN command starts a new intermediate container
# it has to be https, otherwise authentication is required
RUN echo "123.45.123.45 my-server-address.com" >> /etc/hosts && wget https://my-server-address.com
When I ssh into the machine to check the current content of /etc/hosts, the line "123.45.123.45 my-server-address.com" is indeed there.
Can anyone help me out with this? I am a Docker newbie.
I have solved this. There are two things at play.
One is how it works locally and the other is how it works in Docker Cloud.
Local workflow
cd into the root of the project, where the Dockerfile is located
build the image: docker build -t media-saturn:dev .
run the built image: docker run -it --add-host="my-server-address.com:123.45.123.45" -p 80:80 media-saturn:dev
Docker Cloud workflow
Add an extra_hosts directive to your Stackfile, as shown below, and then click Redeploy in Docker Cloud so that the changes take effect:
extra_hosts:
  - 'my-server-address.com:123.45.123.45'
Optimization tip
ignore as many folders as possible, to speed up sending the build context to the Docker daemon
add a .dockerignore file
typically you want to ignore folders like node_modules, bower_components and tmp
in my case the tmp folder contained about 1.3GB of small files, so ignoring it sped up the build significantly
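For example, a .dockerignore along these lines (the folder names are the typical ones; adjust to your project):
node_modules
bower_components
tmp
.git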
I've recently pulled an nginx image:
docker pull nginx
I can run it successfully and go to http://server_name and see the "Welcome to Nginx" page:
docker run -d -p 80:80 nginx
But then when I try to check logs:
docker exec 6c79549e3eb4f6e5fc06f049b67814ac4560ce2cdd7cc6ae84b44b5ae09a9a05 cat /var/log/nginx/access.log
It just hangs and outputs nothing. Same with the error log. Now if I create a test.txt file in that same folder and use docker exec to cat that file, it executes without hanging or any issues.
Even if I try to run it in interactive mode, it just hangs:
docker run -i -t -p 80:80 nginx
Once again the terminal hangs on the next line doing nothing, but it seems to work because I can access the nginx welcome page.
Really confused about what is going on. I've tried to search for this problem but have not found any solution so far. Without being able to view the logs it is going to be pretty hard to debug :) Also, shouldn't the access logs be sent to stdout in the nginx container, since by convention Docker containers log to stdout?
If you go inside the container (docker exec -it <container-id> /bin/bash) and check the log location (ls -la /var/log/nginx/), you will see the following output:
lrwxrwxrwx 1 root root 11 Apr 30 23:05 access.log -> /dev/stdout
lrwxrwxrwx 1 root root 11 Apr 30 23:05 error.log -> /dev/stderr
Clearly, the logs are written to stdout. You can also try doing cat access.log inside the container, and it still doesn't show anything.
Now, the right way to get your logs is to go outside the container and run docker logs <container-id>.
Then, you would see your logs.
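For example (standard docker CLI flags):
docker logs -f --tail 100 <container-id>   # follow the last 100 lines of the container's stdout/stderr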
Hope this helps!
My stack is nginx running python web.py FastCGI scripts via spawn-fcgi. I am using runit to keep the process alive as a daemon, and Unix sockets for the spawned FastCGI processes.
Below is my runit service, called myserver, in /etc/sv/myserver, with the run file at /etc/sv/myserver/run:
exec spawn-fcgi -n -d /home/ubuntu/Servers/rtbTest/ -s /tmp/nginx9002.socket -u www-data -f /home/ubuntu/Servers/rtbTest/index.py >> /var/log/mylog.sys.log 2>&1
I need to push changes to the scripts to the production servers. I use paramiko to ssh into the box and update the index.py script.
My question is this: how do I gracefully reload index.py, following best practice, so it picks up the new code?
Do I use:
sudo /etc/init.d/nginx reload
Do I restart the runit script:
sudo sv start myserver
Or do I use both:
sudo /etc/init.d/nginx reload
sudo sv start myserver
Or none of the above?
Basically you have to restart the process that loaded your Python script. This is spawn-fcgi, not nginx itself. nginx only communicates with spawn-fcgi via the Unix socket and will happily reconnect if the connection is lost due to a restart of the spawn-fcgi process.
Therefore I'd suggest a simple sudo sv restart myserver. No need to re-start/re-load nginx itself.
I am using supervisor to launch and manage an nginx process. So far this works perfectly. The problem I am having is shutting down the instance.
I have tried using "supervisorctl -c shutdown [all]"; this shuts down the daemon, and in the supervisorctl interactive console it says nginx is stopped. However, if I run ps -A | grep nginx, nginx still appears in the list.
My config for the nginx instance is as follows:
[program:nginx]
command=./bin/nginx
-p /home/me/sites/project.domain.com/
-c project/etc/nginx.conf
directory=/home/me/sites/project.domain.com
autostart=true
autorestart=true
redirect_stderr=true
exitcodes=0
stopsignal=TERM
Any suggestions as to why nginx might not be shutting down?
Have you made sure you are not starting nginx in daemonized mode? It is important that you start all of supervisor's child processes in non-daemonized (foreground) mode. I currently don't have the nginx boot options at hand, but this might give you a start in the right direction.
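Building on that: nginx daemonizes by default, so supervisord ends up tracking a short-lived parent process while the real nginx master escapes its control, which matches the behaviour you describe. A sketch of the program section with the foreground flag added (same paths as above):
[program:nginx]
command=./bin/nginx
  -p /home/me/sites/project.domain.com/
  -c project/etc/nginx.conf
  -g "daemon off;"
directory=/home/me/sites/project.domain.com
autostart=true
autorestart=true
redirect_stderr=true
stopsignal=TERM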