How would you restart a service, say for example 'nginx', when a config file changes? For example, I've got Puppet creating some nginx config files and placing them on a volume which is mounted into my nginx container. At the moment I am using docker-gen, but are there any other methods?
Docker containers are meant to be ephemeral. Also, Docker containers "containerize" whatever process you are running by making that process PID 1 inside your container. That means there is no traditional init system. In fact, no init system at all. And as you know, when the process inside your container exits, the container dies. So if we approach the problem from the standpoint of ephemeral containers, you don't restart your service. You create a new container using your modified configuration. And as mentioned in the comments by thaJeztah, you can docker restart your container to refresh the configuration.
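For example, assuming the container is simply named nginx (adjust to your container name), restarting or reloading it from the host looks like this:

# Recreate the container from the new configuration, or just restart it:
docker restart nginx

# Or, if you only want nginx to re-read its config without a container restart,
# send the nginx master process a reload signal (nginx reloads on SIGHUP):
docker kill --signal=HUP nginx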
Now, there are a couple of ways to hammer this square peg into a round hole. You are better than that... However, you've already noticed that docker-gen will get you nearly there. Likewise, if you take a dive into how the jwilder/nginx-proxy image works, you'll get a better idea of how docker-gen works in practice. But you've probably already seen that, since you're already using docker-gen.
The other option is to shoehorn in something like supervisord. There is plenty of information about doing that online; tons of people have done this in the past. For anyone who may not understand why that solves the problem: supervisord becomes your container's PID 1, and allows you to restart the child nginx processes "like normal", but without killing your container.
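For illustration only, a minimal sketch of that setup might look like this (the paths and config layout are my assumptions, not something from the question):

; /etc/supervisor/supervisord.conf -- supervisord runs as PID 1 in the container
[supervisord]
nodaemon=true

; keep nginx in the foreground so supervisord can manage and restart it
[program:nginx]
command=nginx -g "daemon off;"
autorestart=true

The Dockerfile would then end with something like CMD ["supervisord", "-n", "-c", "/etc/supervisor/supervisord.conf"], and supervisorctl restart nginx restarts only the nginx processes, never the container itself.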
I'm new to Docker and was wondering if it was possible to set the following up:
I have my personal computer on which I'm working on my WordPress site via a Dockerfile. All is well and the data is persistent.
What I'd like to do is be able to save that work to Docker Hub possibly, or GitHub (I assume the updated images would be backed up on my Docker Hub account), and work on a totally different computer, picking up where I left off.
Is that possible?
Generally you should be able to set up your Docker containers such that there is no persistent state inside the container at all; you can freely delete and recreate the container without losing data. The best and easiest case of this is a container that just depends on some external database, in which case you don’t need to do anything.
If you have something like a WordPress installation with local customizations, or something that stores persistent data in the filesystem, you should use the docker run -v option or the Docker Compose volumes: option to inject parts of the host filesystem into the container. Those volumes then need to be backed up (and, for all that the Docker documentation endorses named volumes, if you use host directories your normal backup solution will work fine).
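For example, a bind mount for a WordPress container could be sketched like this (the host path and container name are placeholders):

# bind-mount a host directory into the container; the host copy is what you back up
docker run -d --name wordpress \
  -v /home/me/wp-content:/var/www/html/wp-content \
  wordpress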
In short, I’d recommend:
Build a custom image for your application, and check the Dockerfile and any supporting artifacts into source control. They don’t need to be separately backed up; even if you lose your image you can docker build again.
Inject customizations using bind mounts, and check those customizations into source control. They don’t need to be separately backed up.
Store mutable data using volumes or bind mounts, and back these up normally.
Containers are disposable. You don’t need to back up a container per se, you should always be able to recreate it from the artifacts above.
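Putting those points together, a docker-compose.yml for this kind of setup could be sketched roughly like this (service names, paths and ports are placeholders, not taken from your project):

version: "3"
services:
  app:
    build: .                       # rebuilt any time from the Dockerfile in source control
    ports:
      - "8080:80"
    volumes:
      - ./wp-content:/var/www/html/wp-content   # bind-mounted customizations, in source control
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql     # named volume with mutable data; back this up
volumes:
  db_data: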
I want to control php processes (symfony commands) with supervisor.
The Symfony commands run in a php-fpm docker container.
Is it possible to run a separate docker container with supervisor to control the processes running in the php-fpm container?
Funny, I was researching various approaches to this just yesterday, needing a RabbitMQ consumer command to run side by side with my Symfony-based app.
My first thought was to separate the containers, as they really seem independent; after all, they would target the same DBMS server. But I didn't like the idea of needing a complete copy of my app in a second container while using only a small portion of it, so I turned to having only one container.
The general idea is to change the Docker startup CMD so that it does not run php-fpm, but supervisor instead. Then one of supervisor's programs should be the original docker startup command, and another one could be your command. I am not sure if there are drawbacks to this, but one that comes to mind is that if php crashes, you rely on supervisor to bring it back; if that fails, you are stuck believing that everything is in order when in fact it is not.
The idea from above is very well described here: http://www.inanzzz.com/index.php/post/6tik/using-supervisor-withing-docker-containers
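As a rough sketch of what that supervisor configuration could look like (program names and paths below are placeholders of mine, not from that article):

; supervisord.conf, used as the container's CMD, e.g. supervisord -n -c /etc/supervisord.conf
[supervisord]
nodaemon=true

; the original container workload: php-fpm kept in the foreground
[program:php-fpm]
command=php-fpm --nodaemonize
autorestart=true

; the long-running Symfony command, e.g. a RabbitMQ consumer
[program:rabbitmq-consumer]
command=php /var/www/app/bin/console app:consume-messages
autorestart=true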
Hope this helps...
We are using
container_id=`docker ps -q --no-trunc --filter label="com.amazonaws.ecs.container-name=php" | head -n 1`; docker exec $container_id php /var/www/application/bin/console app:cronjob
to access the PHP container.
If you are not using ECS you might have to change the filter value.
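For example, outside of ECS you could filter on the container name instead (the name php here is just an example):

# find the first container whose name matches "php" and run the console command in it
container_id=$(docker ps -q --filter name=php | head -n 1)
docker exec "$container_id" php /var/www/application/bin/console app:cronjob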
I'm getting started with running Docker on MacOS and I just was able to install a WordPress container and get that running locally.
But where the heck are the actual WordPress files?
Do I need to SSH into the container so I can view/edit them there? If so, how would one go about that?
WordPress files are kept inside the container; for example, you can find wp-content at:
/var/www/html/wp-content
But, to get "inside" your running container you will have to do something like docker container exec -it <your_container_name> bash. More here: How to get into a docker container?
Containers are considered ephemeral, which means that a good practice is to work in a way that lets you easily stop/remove a container and spin up a new one without losing your stuff. To persist your data you have the option to use volumes.
You might also want to take a look at this, which worked for me: Volume mount when setting up Wordpress with docker. If your case is developing WordPress itself inside docker containers, then that's a different story.
If you did not set up a volume binding when running the docker image for the first time, you can still do the following.
docker volume ls
will list all of the volumes used by your local docker.
What you can do is the following:
docker volume inspect "VOLUME NAME"
e.g. docker volume inspect "181f5c9916a29e9f654317988f49237ea9067157bc94041176ab6ae5f9a57954"
you will find the Mountpoint of the docker volume. There could be more than one volume; each of them will have a mount point.
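You can also print just the mount point (using the volume name from the inspect example above):

docker volume inspect --format '{{ .Mountpoint }}' "181f5c9916a29e9f654317988f49237ea9067157bc94041176ab6ae5f9a57954"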
We use docker during development and everything works well. Our software is written in PHP and dockerized with MySQL, Apache and a lot of frameworks and libraries.
For some of our customers we want to ship docker images in order to let them test, evaluate and use it. Using docker images they just need to run the container and get a fully installed and configured system - very easy!
But: How can we avoid customers seeing our code by simply attaching to docker or making some execs inside the containers?
Are there techniques to completely lock down every kind of access to the filesystem inside a container? We would just like to get access to our software via ssh.
It is possible to override almost everything about the construction of an image at runtime using the docker run command. So they wouldn't even need to use exec; they could just override the CMD or ENTRYPOINT with bash or whatever. Any time a customer has your code (even compiled / encrypted / etc...) they have your code. If this is really a big deal, think about a SaaS model.
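To illustrate why a lock-down is not realistic: anyone who can run your image can simply start it with a different entrypoint and read everything baked into it (image name below is a placeholder):

# bypass whatever CMD/ENTRYPOINT the image defines and drop into a shell
docker run --rm -it --entrypoint /bin/sh your-company/your-app
# from that shell the customer can read any file in the image, e.g. your PHP sources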
I'm figuring out how to set up my production server the best way, but I'm a little bit stuck on how to do it correctly:
Currently, all my web applications are dockerized; I have:
One nginx front container, that routes requests to several backend containers:
One Symfony App
Two WordPress blogs
One NodeJS App
One MySql container for DB storage
One MongoDB container too
ALL this infrastructure is started using docker-compose.
This works fine, but it sounds too "monolithic" to me:
I cannot stop one container without restarting all the others.
I cannot add other web applications without restarting everything
I have no way to restart container automatically after a crash...
This is the first time I'm doing this; do you know of any best practices or software that can help me improve my production server?
Thanks a lot!
I cannot stop one container without restarting all the others.
What prevents you from using the docker stop command instead of the docker-compose stop command when you want to stop only one container?
I cannot add other web applications without restarting everything
I would suggest the use of the excellent jwilder/nginx-proxy nginx docker image to act as a reverse proxy in front of your other containers. This reverse proxy will adapt to the running/stopped containers. You can add a container later on and this reverse proxy will automatically route traffic to it depending on the domain name.
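A minimal sketch of that setup in a docker-compose.yml (host name and service names are placeholders):

version: "2"
services:
  proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro   # lets nginx-proxy discover containers
  blog:
    image: wordpress
    environment:
      - VIRTUAL_HOST=blog.example.com              # nginx-proxy routes this hostname to the container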
I have no way to restart container automatically after a crash...
Take a look at the restart: directive for the docker-compose.yml file.
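For example, each service in the sketch above could declare a restart policy (unless-stopped is one of the accepted values, along with no, always and on-failure):

services:
  blog:
    image: wordpress
    restart: unless-stopped   # docker restarts the container automatically after a crash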
The "monolithic" view of docker-compose is indeed made to allow you to manage your application stack in one way. But one needs to know that docker-compose is a "layer" on top of docker which (docker) you can still use.
As @thomasleveil says, you can still manipulate docker-compose created containers individually with docker.
$ docker exec project_web_1 ls -l /
$ docker stop project_db_1
$ docker start project_nginx_1
$ ...
On the other hand, I suggest relying more on docker-compose, which also allows you to act on individual containers, separate your different applications or environments, and is aware of the dependencies between containers (not being exhaustive).
$ docker-compose exec web ls -l /
$ docker-compose stop db
$ docker-compose up -d nginx
$ ...
Booting up a new service is also very easy with docker-compose, since it detects what has changed based on your yml config and does not stop anything unless needed.
$ docker-compose up -d
project_web_1 is up-to-date
project_db_1 is up-to-date
Creating project_newservice_1
I also find a reverse proxy very useful for production installations. However, I would rather suggest the brand new Traefik, which brings nice features like hot-reloading, service discovery, automated SSL certificates with Let's Encrypt and renewal (not being exhaustive).
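As a rough sketch only (the label syntax below is for Traefik v2; older releases use a different rule syntax, and the domain and service names are placeholders):

version: "3"
services:
  traefik:
    image: traefik:v2.10
    command:
      - --providers.docker=true                   # discover containers through the Docker API
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
  blog:
    image: wordpress
    labels:
      - traefik.enable=true
      - traefik.http.routers.blog.rule=Host(`blog.example.com`)
      - traefik.http.routers.blog.entrypoints=web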