I want to control PHP processes (Symfony commands) with supervisor.
The Symfony commands run in a php-fpm docker container.
Is it possible to run a separate docker container with supervisor to control the processes running in that container? With PHP?
Funny, I was researching various approaches to this just yesterday, needing a RabbitMQ consumer command to run side by side with my Symfony-based app.
My first thought was to use separate containers, as the processes really do seem independent; after all, they would be targeting the same DBMS server. But I disliked the idea of needing a complete copy of my app in a second container while using only a small portion of it, so I settled on having only one container.
The general idea is to change the docker startup CMD so that it does not run php-fpm, but supervisor instead. Then one of supervisor's programs should be the original docker startup script, and another one could be your command. I am not sure if there are drawbacks to this, but one that comes to mind is that if PHP crashes, you rely on supervisor to bring it back. If that fails, you are left believing that everything is in order, when in fact it is not.
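To make that concrete, here is a minimal sketch of the idea (the app path, the consumer command name and the config location are placeholders for your own setup). The image's CMD is changed to something like CMD ["supervisord", "-n", "-c", "/etc/supervisord.conf"], and that config defines both programs:

[supervisord]
nodaemon=true

[program:php-fpm]
command=php-fpm -F                                 ; -F keeps php-fpm in the foreground
autorestart=true

[program:rabbitmq-consumer]
command=php /var/www/app/bin/console app:consume   ; hypothetical Symfony consumer command
autorestart=true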
The idea from above is very well described here: http://www.inanzzz.com/index.php/post/6tik/using-supervisor-withing-docker-containers
Hope this helps...
We are using
container_id=`docker ps -q --no-trunc --filter label="com.amazonaws.ecs.container-name=php" | head -n 1`; docker exec $container_id php /var/www/application/bin/console app:cronjob
to access the PHP container.
If you are not using ECS (that label is set by the ECS agent), you might have to change the filter value.
I'm getting started with running Docker on macOS, and I was just able to install a WordPress container and get it running locally.
But where the heck are the actual WordPress files?
Do I need to SSH into the container so I can view/edit them there? If so, how would one go about that?
WordPress files are kept inside the container; for example, you can find wp-content at:
/var/www/html/wp-content
But, to get "inside" your running container you will have to do something like docker container exec -it <your_container_name> bash. More here: How to get into a docker container?
Containers are considered ephemeral, which means that a good practice is to work in a way that lets you easily stop/remove a container and spin up a new one without losing your stuff. To persist your data you have the option to use volumes.
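For example, a bind mount like this (a sketch; the host path, port and container name are my own choices) keeps wp-content on your machine, where you can view and edit the files directly:

docker run -d --name my-wordpress -p 8080:80 -v "$PWD/wp-content":/var/www/html/wp-content wordpress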
You might also want to take a look at this, which worked for me: Volume mount when setting up Wordpress with docker. If your goal is to develop WordPress inside docker containers, then... that's a different case.
If you did not set up a bind mount when running the docker image for the first time, you can still do the following.
docker volume ls
will list all of the volumes used by your local docker installation.
What you can do is the following:
docker volume inspect "VOLUME NAME"
e.g. docker volume inspect "181f5c9916a29e9f654317988f49237ea9067157bc94041176ab6ae5f9a57954"
you will find the Mountpoint of the docker volume. There could be more than one volume; each of them will have its own mount point.
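The output looks roughly like this (names and paths will differ on your machine); the Mountpoint is where the files live on the docker host. Note that on Docker for Mac this path is inside the Docker virtual machine, not directly on your Mac's filesystem.

[
    {
        "Driver": "local",
        "Mountpoint": "/var/lib/docker/volumes/181f5c9916a2.../_data",
        "Name": "181f5c9916a2...",
        "Scope": "local"
    }
]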
I have created a new Dockerfile based on the official WordPress image, and I have been trying to troubleshoot why I cannot remove the default themes. I discovered that the reason is that, at the time the command is executed, the files do not actually exist yet.
Here are the relevant lines from my Dockerfile:
FROM wordpress
RUN rm -rf /var/www/html/wp-content/themes/twenty*
The delete command works as expected if I run it manually after the container is running.
As a side note, I have also discovered that when I copy additional custom themes to the /var/www/html/wp-content/themes directory from the Dockerfile, it does work, but not quite as I would expect, because any files in the official docker image will overwrite my custom versions of the same file. I would have imagined it working the other way around, in case I want to supply my own config file.
So I actually have two questions:
Is this behavior Docker-related? Or is it in the WordPress-specific image?
How can I resolve this? It feels like a hack, but is there a way to asynchronously run a delayed command from the Dockerfile?
What's up, Ben!
Your issue is related to a Docker concept called the entrypoint. It's typically a script that is executed when the container is run and that contains actions which need to happen at runtime, not at build time. It runs right after you start the image, and it is what makes containers behave like services. The parameters set with the CMD directive are, by default, the ones passed to the entrypoint, and they can be overridden.
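A tiny illustration of how the two combine (a throwaway image, nothing WordPress-specific):

FROM debian
ENTRYPOINT ["/bin/echo", "entrypoint received:"]
CMD ["default-args"]

Running docker run <image> prints "entrypoint received: default-args", while docker run <image> foo bar prints "entrypoint received: foo bar", because the extra arguments replace CMD but the entrypoint stays.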
You can find the Debian template of the Dockerfile of the image you are pulling here. As you can see, it calls an entrypoint named docker-entrypoint.sh. Without diving into it too much: basically, it performs the installation of your application.
Since you are inheriting from the WordPress image, its entrypoint is the one being executed. Overriding it so that it is no longer executed is not a good idea either, since that would render your image useless.
A simple hack that would work in this case would be the following:
FROM wordpress
# Patch the image's entrypoint (/usr/local/bin/docker-entrypoint.sh in the official image)
# so that the default themes are removed right before the final exec:
RUN sed -i 's#exec "\$@"#rm -rf /var/www/html/wp-content/themes/twenty*\n&#' /usr/local/bin/docker-entrypoint.sh
That rewrites the entrypoint so that, right before its final exec clause, it removes those files and then runs whatever service it was going to run (typically Apache, but I don't know which is the case in this container).
I hope that helps! :)
We use docker during development and everything works well. Our software is written in PHP and dockerized with MySQL, Apache and a lot of frameworks and libraries.
For some of our customers we want to ship docker images in order to let them test, evaluate and use it. With docker images they just need to run the container and they get a fully installed and configured system - very easy!
But: how can we prevent customers from seeing our code by simply attaching to docker or running execs inside the containers?
Are there techniques to completely lock down every kind of access to the filesystem inside a container? We would only like to grant access to our software via SSH.
It is possible to override almost everything about the construction of an image at runtime using the docker run command. So they wouldn't even need to use exec; they could just override the CMD or ENTRYPOINT with bash or whatever. Any time a customer has your code (even compiled / encrypted / etc.) they have your code. If this is really a big deal, think about a SaaS model.
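For example (the image name is a placeholder), nothing stops a customer from doing:

docker run --rm -it --entrypoint bash yourcompany/app:latest
docker run --rm --entrypoint cat yourcompany/app:latest /var/www/html/index.php
docker save -o app.tar yourcompany/app:latest

The last command does not even start a container; the tar archive contains every filesystem layer of the image, code included.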
How would you restart a service, say for example 'nginx', when a config file changes? For example, I've got Puppet creating some nginx config files and placing them on a volume which is mounted into my nginx container. At the moment I am using docker-gen, but are there any other methods?
Docker containers are meant to be ephemeral. Also, Docker containers "containerize" whatever process you are running by making that process PID 1 inside your container. That means there is no traditional init system; in fact, no init system at all. And as you know, when the process inside your container exits, the container dies. So if we approach the problem from the standpoint of keeping containers ephemeral, you don't restart your service: you create a new container using your modified configuration. And as mentioned in the comments by thaJeztah, you can simply docker restart your nginx container to refresh the configuration.
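If recreating the container feels too heavy-handed, nginx also re-reads its configuration on SIGHUP, so you can reload it in place (the container name is a placeholder):

docker kill -s HUP my-nginx

or, equivalently:

docker exec my-nginx nginx -s reload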
Now, there are a couple of ways to hammer this square peg into a round hole. You are better than that... However, you've already noticed that docker-gen will get you nearly there. Likewise, if you take a dive into how the jwilder/nginx-proxy image works, you'll get a better idea of how docker-gen works in practice. But you've probably already seen that, since you're already using docker-gen.
The other option is to shoehorn in something like supervisord. There is plenty of information about doing that online. Tons of people have done this in the past. And so for other people that may not understand why that solves the problem, supervisord becomes your container's PID 1, and allows you to restart the child nginx processes "like normal", but without killing your container.
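As a rough sketch of that setup (program and container names are placeholders), nginx is kept in the foreground so supervisord can manage it:

[program:nginx]
command=nginx -g "daemon off;"
autorestart=true

Then, when Puppet drops a new config file, the child process can be restarted without touching the container:

docker exec my-nginx supervisorctl restart nginx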
I'm figuring out the best way to set up my production server, but I'm a little bit stuck on how to do it correctly:
Currently, all my web applications are dockerized. I have:
One nginx front container that routes requests to several backend containers:
One Symfony App
Two WordPress blogs
One NodeJS App
One MySQL container for DB storage
One MongoDB container too
All this infrastructure is started using docker-compose.
This works fine, but it feels too "monolithic" to me:
I cannot stop one container without restarting all the others.
I cannot add other web applications without restarting everything
I have no way to restart a container automatically after a crash...
This is the first time I'm doing this. Do you know of any best practices or software that could help me improve my production server?
Thanks a lot!
I cannot stop one container without restarting all the others.
What prevents you from using the docker stop command instead of the docker-compose stop command when you want to stop only one container?
I cannot add other web applications without restarting everything
I would suggest using the excellent jwilder/nginx-proxy docker image to act as a reverse proxy in front of your other containers. This reverse proxy adapts to containers being started and stopped. You can add a container later on, and the reverse proxy will automatically route traffic to it based on the domain name.
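The basic usage looks like this (the domain name is a placeholder): the proxy watches the docker socket and picks up any container started with a VIRTUAL_HOST environment variable.

docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
docker run -d -e VIRTUAL_HOST=blog.example.com wordpress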
I have no way to restart a container automatically after a crash...
Take a look at the restart: directive for the docker-compose.yml file.
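A minimal sketch of what that looks like in docker-compose.yml (service and image names are placeholders):

version: "2"
services:
  symfony:
    image: my-symfony-app
    restart: unless-stopped   # or "always" / "on-failure"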
The "monolithic" view of docker-compose is indeed made to allow you to manage your application stack in one way. But one needs to know that docker-compose is a "layer" on top of docker which (docker) you can still use.
As #thomasleveil says, you still can manipulate docker-compose created containers individually with docker.
$ docker exec project_web_1 ls -l /
$ docker stop project_db_1
$ docker start project_nginx_1
$ ...
On the other hand, I suggest relying more on docker-compose, which also lets you act on individual containers, separates your different applications or environments, and is aware of the dependencies between containers (to name just a few things):
$ docker-compose exec web ls -l /
$ docker-compose stop db
$ docker-compose up -d nginx
$ ...
Booting up a new service is also very easy with docker-compose, since it detects what has changed based on your yml config and does not stop anything unless needed:
$ docker-compose up -d
project_web_1 is up-to-date
project_db_1 is up-to-date
Creating project_newservice_1
I also find a reverse proxy very useful for production installations. However, I would rather suggest the brand new Traefik, which brings nice features like hot reloading, service discovery, and automated SSL certificates with Let's Encrypt, including renewal (to name just a few).
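As a rough idea of what that looks like with the Traefik 1.x series (the domain, image tag and service names are my own choices), the docker provider discovers containers through labels:

version: "2"
services:
  traefik:
    image: traefik:1.7
    command: --docker
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  blog:
    image: wordpress
    labels:
      - "traefik.frontend.rule=Host:blog.example.com"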