I'm figuring out the best way to set up my production server, but I'm a little stuck on how to do it correctly.
Currently, all my web applications are dockerized. I have:
One nginx front container that routes requests to several backend containers:
One Symfony App
Two WordPress blogs
One NodeJS App
One MySQL container for DB storage
One MongoDB container too
All of this infrastructure is started using docker-compose.
This works fine, but it feels too "monolithic" to me:
I cannot stop one container without restarting all the others.
I cannot add other web applications without restarting everything
I have no way to restart a container automatically after a crash...
This is the first time I'm doing this. Do you know of any best practices or software that could help me improve my production server?
Thanks a lot!
I cannot stop one container without restarting all the others.
What prevents you from using the docker stop command instead of the docker-compose stop command when you want to stop only one container?
I cannot add other web applications without restarting everything
I would suggest using the excellent jwilder/nginx-proxy Docker image as a reverse proxy in front of your other containers. This reverse proxy adapts automatically to containers being started and stopped. You can add a container later on, and the proxy will route traffic to it based on the domain name.
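As a rough sketch of how that looks in a docker-compose file (the service names and domain here are placeholders; the proxy watches the Docker socket and routes requests to any container that sets a VIRTUAL_HOST environment variable):
proxy:
  image: jwilder/nginx-proxy
  ports:
    - "80:80"
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro
blog:
  image: wordpress
  environment:
    - VIRTUAL_HOST=blog.example.com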
I have no way to restart a container automatically after a crash...
Take a look at the restart: directive for the docker-compose.yml file.
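For instance, a minimal sketch (the service name and image are illustrative):
web:
  image: my-app
  restart: unless-stopped   # or "always" / "on-failure" to bring the container back after a crash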
The "monolithic" view of docker-compose is indeed meant to let you manage your application stack as one unit. But keep in mind that docker-compose is a "layer" on top of docker, and you can still use docker directly.
As @thomasleveil says, you can still manipulate the containers created by docker-compose individually with docker:
$ docker exec project_web_1 ls -l /
$ docker stop project_db_1
$ docker start project_nginx_1
$ ...
On the other hand, I suggest relying more on docker-compose, which also lets you act on individual containers, separate your different applications or environments, and is aware of the dependencies between containers (among other things):
$ docker-compose exec web ls -l /
$ docker-compose stop db
$ docker-compose up -d nginx
$ ...
Booting up a new service is also very easy with docker-compose, since it detects what has changed from your YAML config and does not stop anything unless needed:
$ docker-compose up -d
project_web_1 is up-to-date
project_db_1 is up-to-date
Creating project_newservice_1
I also find a reverse proxy very useful for production installations. However, I would rather suggest the newer Traefik, which brings nice features such as hot-reloading, service discovery, and automated SSL certificate issuance and renewal with Let's Encrypt (among other things).
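A minimal sketch of what that can look like in docker-compose (assuming current Traefik v2 syntax; the router name and domain are placeholders):
traefik:
  image: traefik:v2.10
  command:
    - --providers.docker=true
    - --entrypoints.web.address=:80
  ports:
    - "80:80"
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock:ro
blog:
  image: wordpress
  labels:
    - traefik.http.routers.blog.rule=Host(`blog.example.com`)
Traefik discovers the blog container through the Docker provider and hot-reloads its routing configuration whenever containers are started or stopped.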
In my docker-compose application I have 2 containers: 1 nginx and 1 Python cron script that updates some files in the nginx/html folder.
With docker-compose, when I declare
volumes:
  - shared-volume:/usr/share/nginx/html/assets/xxx:ro
the initial files in the nginx image are copied to the shared volume.
Now I'm trying to move the application to k8s, but when I use a shared volume I see that the initial files in nginx/html are missing.
So the question is: is it possible to copy the initial files from my nginx image to the shared volume? How?
EDIT
To clarify, I'm new to k8s. With VMs we usually run a script that updates an nginx assets folder. With docker-compose I use something like this:
version: '3.7'
services:
  site-web:
    build: .
    image: "site-home:1.0.0"
    ports:
      - "80:80"
    volumes:
      - v_site-home:/usr/share/nginx/html/assets/:ro
  site-cron:
    build: ./cronScript
    image: "site-home-cron:1.0.0"
    volumes:
      - v_site-home:/app/my-assets
volumes:
  v_site-home:
    name: v_site-home
Now I'm starting to write a Deployment (with a persistent volume? Because, as I understand it, even with a persistent volume a StatefulSet is not useful in this case) to convert my docker-compose setup to k8s. We cannot use any public cloud because of security policy (data must stay in our country, and for now no big provider offers that option), so the idea is to run vanilla k8s on multiple bare-metal servers and start the migration with a very simple application like this one. I tried with the two containers, replicas: 1 and an empty volume in a single pod. In that case I see that the nginx folder is initially empty, and I have to wait for the cron job to run before I see my files. So this is the first problem.
Now that I've read your answer I obviously have other doubts. Is it better to split the pod, i.e. one pod per container? Is a Deployment with a persistent volume the way to go? In that case I still have the old problem: how do I get the initial nginx assets files? Thank you so much for the help!
This generally requires an initContainer which runs cp. It's not a great solution, but it gets the job done.
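A rough sketch of what that can look like, based on the compose file above (assuming the assets are baked into the site-home image under /usr/share/nginx/html/assets; names and paths are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: site-web
spec:
  volumes:
    - name: assets
      emptyDir: {}
  initContainers:
    - name: copy-assets
      image: site-home:1.0.0
      # seed the shared volume with the files shipped in the image
      command: ["sh", "-c", "cp -a /usr/share/nginx/html/assets/. /assets/"]
      volumeMounts:
        - name: assets
          mountPath: /assets
  containers:
    - name: nginx
      image: site-home:1.0.0
      volumeMounts:
        - name: assets
          mountPath: /usr/share/nginx/html/assets
          readOnly: true
A site-cron container in the same pod can mount the same emptyDir volume read-write to keep updating the assets, as in the compose setup.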
Kubernetes doesn't have the Docker feature that copies content into volumes when the container is first started.
The two straightforward answers to this are:
Build a custom Nginx image that contains the static assets. You can use the Dockerfile COPY --from=other/image:tag construct to copy them from your application container into the proxy container.
Store the assets somewhere outside container space altogether. If you're deploying this to AWS, you can publish them to S3, and even directly serve them from a public S3 bucket. Or if you have something like an NFS mount accessible to your cluster, have your overall build process copy the static assets there.
The Docker feature has many corner cases that are frequently ignored, most notably that content is only copied when the container is first started. If you're expecting the volume to contain static assets connected to your application, and you update the application container, the named volume will not update. As such you need some other solution to manage the shared content anyway, and I wouldn't rely on that Docker feature as a solution to this problem.
In Kubernetes you have the additional problem that you typically will want to scale HTTP proxies and application backends separately, which means putting them in different Deployments. Once you have three copies of your application, which one provides "the" static assets? You need to use something like a persistent volume to share contents, but most of the persistent volume types that are easy to get access to don't support multiple mounts.
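For reference, sharing one set of files between several pods would need a volume that allows ReadWriteMany access, roughly like the claim below (a sketch; whether it can actually be bound depends on the storage classes available in your cluster):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-assets
spec:
  accessModes:
    - ReadWriteMany   # not supported by many common storage backends
  resources:
    requests:
      storage: 1Gi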
I want to control PHP processes (Symfony commands) with Supervisor.
The Symfony commands run in a php-fpm Docker container.
Is it possible to run a separate Docker container with Supervisor to control processes running in that container? With PHP?
Funny, I was researching various approaches to this just yesterday, since I needed a RabbitMQ consumer command to run side by side with my Symfony-based app.
My first thought was to use separate containers, as they really seem independent; after all, they would target the same DBMS server. But I was bothered by the idea of needing a complete copy of my app in another container while using only a small portion of it, so I settled on having only one container.
The general idea is to change the Docker startup CMD so that it does not run php-fpm, but supervisord instead. Then one of Supervisor's programs should be the original Docker startup script, and another one could be your command. I am not sure if there are any drawbacks to this, but one that comes to mind is that if PHP crashes, you rely on Supervisor to bring it back; if that fails, you are stuck believing that everything is in order, when in fact it is not.
The idea from above is very well described here: http://www.inanzzz.com/index.php/post/6tik/using-supervisor-withing-docker-containers
Hope this helps...
We are using
container_id=`docker ps -q --no-trunc --filter label="com.amazonaws.ecs.container-name=php" | head -n 1`; docker exec $container_id php /var/www/application/bin/console app:cronjob
to access the PHP container.
If you are not using EC2 you might have to change the filter value.
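If the containers are started with plain docker-compose instead of ECS, a similar pattern can filter on the labels Compose sets on its containers (a sketch; the service name php and the console command are placeholders):
container_id=`docker ps -q --no-trunc --filter label="com.docker.compose.service=php" | head -n 1`; docker exec $container_id php /var/www/application/bin/console app:cronjob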
I'm getting started with running Docker on macOS, and I was just able to install a WordPress container and get it running locally.
But where the heck are the actual WordPress files?
Do I need to SSH into the container so I can view/edit them there? If so, how would one go about that?
WordPress files are kept inside the container; for example, you can find wp-content at:
/var/www/html/wp-content
But, to get "inside" your running container you will have to do something like docker container exec -it <your_container_name> bash. More here: How to get into a docker container?
Containers are considered ephemeral, which means that a good practice is to work in a way that lets you easily stop/remove a container and spin up a new one without losing your stuff. To persist your data you have the option to use volumes.
You might also want to take a look at this, which worked for me: Volume mount when setting up Wordpress with docker. If your goal is to develop WordPress in Docker containers, then... that's a different case.
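For illustration, a bind mount like the one below makes the WordPress files visible and editable on the host (a sketch assuming the official wordpress image; the host path and port are placeholders):
wordpress:
  image: wordpress
  ports:
    - "8080:80"
  volumes:
    - ./wp-content:/var/www/html/wp-content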
If you did not set up a bind mount when running the Docker image for the first time, you can still do the following.
docker volume ls
will list all of the volumes used by your local Docker.
What you can do is the following:
docker volume inspect "VOLUME NAME"
e.g. docker volume inspect "181f5c9916a29e9f654317988f49237ea9067157bc94041176ab6ae5f9a57954"
You will find the Mountpoint of each Docker volume. There can be more than one volume, and every one of them will have a mount point.
I want my dokku host to run the main nginx for my domain (let's say cooldok.ku).
On cooldok.ku, for various reasons, I have other virtual machines serving content. I want to expose this content on a subdomain (say vm.cooldok.ku, which runs in a VM at 10.0.0.7 on the cooldok.ku host).
I figured out that the technique involved is called reverse proxying.
In an optimal world, there would be a dokku-only way to register and 'link'/proxy the subdomains. As an added bonus, the cooldok.ku host would handle the SSL side of HTTPS itself (like ssltunnel), so that I could leverage existing certificates and/or use the awesome Let's Encrypt on the same machine and secure applications in the VM that were not meant to be served via HTTPS.
How can this scenario be realised with dokku? How difficult would it be to write a plugin that does this?
Update
So, basically, dokku (0.8) comes equipped with everything it needs. The question is how much of what dokku wants to achieve (firing up those yummy Docker containers) gets in the way. To hack together a setup which does what I want, the following can be done:
# create folder where we want it
dokku apps:create vm
Now, these files have to be created/present (vanilla 0.8 dokku installation):
#/home/dokku/vm/DOCKER_OPTIONS_DEPLOY
--restart=on-failure:10
#/home/dokku/vm/IP.web.1
10.0.0.7
#/home/dokku/vm/PORT.web.1
80
#/home/dokku/vm/URLS
# THIS FILE IS GENERATED BY DOKKU - DO NOT EDIT, YOUR CHANGES WILL BE OVERWRITTEN - I did it nonetheless
http://vm.cooldok.ku
#/home/dokku/vm/VHOST
vm.cooldok.ku
#/home/dokku/vm/nginx.conf
# Just listing changes from another default app
[...]
proxy_pass http://vm-host;
[...]
upstream vm-host {
server 10.0.0.7:80;
}
Afterwards, nginx needs a manual restart (or ... dokku can do something for us here)
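For example (assuming nginx on the host is managed by systemd; the dokku command is only an option if your dokku version can rebuild the proxy config):
sudo systemctl reload nginx
# or, if supported by your dokku version: dokku proxy:build-config vm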
I am pretty sure that some of the (redundant) information can be left out, as dokku should, for example, be able to piece the nginx.conf together itself. I am not sure if this setup survives a reboot/nginx restart. Also, in my tests, letsencrypt would not let me install the certificates/rebuild the nginx configuration because it sees the app vm as not being deployed.
Update2
To overcome the "app not deployed" issue, it suffices to touch /home/dokku/vm/CONTAINER, but this gets messier and messier ...
I bundled the information from the updates of my post into a dirty script at https://github.com/econya/scripts/blob/master/scripts/virt-helpers/fake-dokku-app.sh.
I guess the cleanest solution as-is, with upward compatibility, would be to create a Dockerfile that launches a reverse proxy itself (configured via env/config:set variables) - but I am happy to learn about a smarter and nicer solution, or to get paid to write a proper plugin ;)
A second approach would be to use a "null" Docker image together with a custom nginx template, I guess.
Update 2021
According to the release notes it works now (look for "Routing to non-Dokku managed apps"):
https://dokku.github.io/release/dokku-0.25.0
I still use an older dokku and the solution written above, though.
How would you restart a service, say for example 'nginx', when a config file changes? For example, I've got Puppet creating some nginx config files and placing them on a volume which is mounted into my nginx container. At the moment I am using docker-gen, but are there any other methods?
Docker containers are meant to be ephemeral. Also, Docker "containerizes" whatever process you are running by making that process PID 1 inside your container. That means there is no traditional init system; in fact, no init system at all. And as you know, when the process inside your container exits, the container dies. So if we approach the problem from the standpoint of implementing ephemeral containers, you don't restart your service: you create a new container using your modified configuration. And as mentioned in the comments by thaJeztah, you can docker restart your nginx container to refresh the configuration.
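As a small illustration (assuming a container named nginx; the reload variant avoids recreating the container but only works for changes nginx can apply at runtime):
# recreate the process with the new configuration
docker restart nginx
# or ask the running nginx to re-read its config in place
docker exec nginx nginx -s reload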
Now, there are a couple of ways to hammer this square peg into a round hole. You are better than that... However, you've already noticed that docker-gen will get you nearly there. Likewise, if you take a dive into how the jwilder/nginx-proxy image works, you'll get a better idea of how docker-gen works in practice. But you've probably already seen that, since you're already using docker-gen.
The other option is to shoehorn in something like supervisord. There is plenty of information about doing that online. Tons of people have done this in the past. And so for other people that may not understand why that solves the problem, supervisord becomes your container's PID 1, and allows you to restart the child nginx processes "like normal", but without killing your container.