kubernetes share non-empty volume - nginx

In my docker-compose application I have two containers: one nginx container and one Python crontab script that updates some files in the nginx html folder.
With docker-compose, when I declare
volumes:
  - shared-volume:/usr/share/nginx/html/assets/xxx:ro
the initial files in the nginx image are copied to the shared volume.
Now I'm trying to move the application to k8s, but when I use a shared volume I see that the initial files in nginx/html are missing.
So the question is: is it possible to copy the initial files from my nginx image to the shared volume? How?
EDIT:
To clarify: I'm new to k8s. With VMs we usually run a script that updates an nginx assets folder. With docker-compose I use something like this:
version: '3.7'
services:
  site-web:
    build: .
    image: "site-home:1.0.0"
    ports:
      - "80:80"
    volumes:
      - v_site-home:/usr/share/nginx/html/assets/:ro
  site-cron:
    build: ./cronScript
    image: "site-home-cron:1.0.0"
    volumes:
      - v_site-home:/app/my-assets
volumes:
  v_site-home:
    name: v_site-home
Now I'm starting to write a Deployment (with a persistent volume? Because, as I understand it, even with a persistent volume a StatefulSet is not useful in this case) to convert my docker-compose setup to k8s. We cannot use any public cloud because of security policy (data must stay in our country, and for now no big provider offers that option), so the idea is to run vanilla k8s on multiple bare-metal servers and start the migration with a very simple application like this one. I tried with the two containers, replicas: 1 and an empty volume in a single pod. In this case I see that initially the application has an empty nginx folder, and I need to wait for the crontab update to see my results. So this is the first problem.
Now that I've read your answer I obviously have other doubts. Is it better to split the pod, so one pod per container? Is a Deployment with a persistent volume the way to go? In that case I have the old problem again: how do I get the initial nginx asset files? Thank you so much for the help!

This generally requires an initContainer which runs cp. It's not a great solution, but it gets the job done.
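As a rough sketch, reusing the image names, paths, and volume layout from the question (adjust them for your setup), an init container can copy the assets baked into the nginx image into an emptyDir volume that both containers then mount:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: site-web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: site-web
  template:
    metadata:
      labels:
        app: site-web
    spec:
      volumes:
        - name: assets
          emptyDir: {}
      initContainers:
        # runs before the other containers and seeds the shared volume
        - name: seed-assets
          image: site-home:1.0.0
          command: ["sh", "-c", "cp -a /usr/share/nginx/html/assets/. /seed/"]
          volumeMounts:
            - name: assets
              mountPath: /seed
      containers:
        - name: nginx
          image: site-home:1.0.0
          ports:
            - containerPort: 80
          volumeMounts:
            - name: assets
              mountPath: /usr/share/nginx/html/assets
              readOnly: true
        - name: cron
          image: site-home-cron:1.0.0
          volumeMounts:
            - name: assets
              mountPath: /app/my-assets

The emptyDir volume lives as long as the pod does, so the init container re-seeds it every time the pod is recreated.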

Kubernetes doesn't have the Docker feature that copies content into volumes when the container is first started.
The two straightforward answers to this are:
Build a custom Nginx image that contains the static assets. You can use the Dockerfile COPY --from=other/image:tag construct to copy them from your application image into the proxy image (see the Dockerfile sketch after this list).
Store the assets somewhere outside container space altogether. If you're deploying this to AWS, you can publish them to S3, and even directly serve them from a public S3 bucket. Or if you have something like an NFS mount accessible to your cluster, have your overall build process copy the static assets there.
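For the first option, a minimal Dockerfile sketch could look like the following, assuming the site-home:1.0.0 image and asset path from the question:

# proxy image that bakes the application's static assets into nginx
FROM nginx:stable
COPY --from=site-home:1.0.0 /usr/share/nginx/html/assets/ /usr/share/nginx/html/assets/

COPY --from= accepts any image reference, not only an earlier build stage, so the assets travel with the proxy image and no shared volume is needed.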
The Docker feature has many corner cases that are frequently ignored, most notably that content is only copied when the container is first started. If you're expecting the volume to contain static assets connected to your application, and you update the application container, the named volume will not update. As such you need some other solution to manage the shared content anyway, and I wouldn't rely on that Docker feature as a solution to this problem.
In Kubernetes you have the additional problem that you typically will want to scale HTTP proxies and application backends separately, which means putting them in different Deployments. Once you have three copies of your application, which one provides "the" static assets? You need to use something like a persistent volume to share contents, but most of the persistent volume types that are easy to get access to don't support multiple mounts.
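If you do go the shared-volume route, the PersistentVolumeClaim has to request the ReadWriteMany access mode so that several pods can mount it, and only some volume types (NFS, CephFS, and similar) support that. A hypothetical claim, assuming an NFS-backed storage class named nfs-client exists in the cluster:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: site-assets
spec:
  accessModes:
    - ReadWriteMany   # required when more than one pod mounts the volume
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs-client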

Related

Recommended Approach To Customization Of Alfresco Community Edition Docker Compose Installation

I have been given the assignment of customizing an Alfresco Community Edition 7.0 installation from docker-compose. I have looked at the resources and am looking for the best approach. I also see a GitHub repository for acs-packaging, but that appears to be related to the enterprise version. I could create images off the existing images and build my own docker-compose file that loads my images. This seems to be a bit of overkill for changes to the alfresco-global.properties file.
For example, I am moving the DB and file share to Docker volumes and mapping them to host directories. I can add the volume for Postgres easily to the docker-compose file. The file share information appears to be less straightforward. I see there is a global property that specifies the directory in alfresco-global.properties (dir.root=/alfresco/data). It is a little less clear how many of the Docker components need the volumes mapped.
You should externalize these directories to set up persistent data storage for the content store, Solr, etc. in your custom Docker image:
# Alfresco repository (Tomcat)
volumes:
  - alfdata:/usr/local/tomcat/alf_data
# PostgreSQL
volumes:
  - pgdata:/var/lib/postgresql/data
# Alfresco Search Services (Solr)
volumes:
  - solrdata:/opt/alfresco-search-services/data
# ActiveMQ
volumes:
  - amqdata:/opt/activemq/data
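Each of these goes under the corresponding service, and the named volumes also have to be declared at the top level of the docker-compose file, for example (a minimal sketch; add driver options if you want the data in specific host directories):

volumes:
  alfdata:
  pgdata:
  solrdata:
  amqdata: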
Please refer to the link for more information.
-Arjun M
Consider going through this discussion, and potentially using the community template:
https://github.com/Alfresco/acs-community-packaging/pull/201
https://github.com/keensoft/docker-alfresco

Is it possible to back up a Docker container with all the volumes / data / state?

I'm new to Docker and was wondering if it was possible to set the following up:
I have my personal computer on which I'm working on my WordPress site via a Dockerfile. All is well and the data is persistent.
What I'd like to do is be able to save that work on Docker Hub possibly, or GitHub (I assume the updated images would be backed up on my Docker Hub), and work on a totally different computer, picking up where I left off.
Is that possible?
Generally you should be able to set up your Docker containers such that there is no persistent state inside the container at all; you can freely delete and recreate the container without losing data. The best and easiest case of this is a container that just depends on some external database, in which case you don’t need to do anything.
If you have something like a WordPress installation with local customizations, or something that stores persistent data in the filesystem, you should use the docker run -v option or the Docker Compose volumes: option to inject parts of the host filesystem into the container. Then those volumes need to be backed up (and, for all that the Docker documentation endorses named volumes, if you use host directories your normal backup solution will work fine).
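For instance, a sketch of a Compose file that keeps all mutable state in host directories (the ./wp-content and ./db-data paths are placeholders; the database connection environment variables are omitted for brevity):

services:
  wordpress:
    image: wordpress:latest
    ports:
      - "8080:80"
    volumes:
      # themes, plugins and uploads live on the host and are backed up normally
      - ./wp-content:/var/www/html/wp-content
  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      # database files live on the host as well
      - ./db-data:/var/lib/mysql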
In short, I’d recommend:
Build a custom image for your application, and check the Dockerfile and any supporting artifacts into source control. They don’t need to be separately backed up; even if you lose your image you can docker build again.
Inject customizations using bind mounts, and check those customizations into source control. They don’t need to be separately backed up.
Store mutable data using volumes or bind mounts, and back these up normally.
Containers are disposable. You don’t need to back up a container per se, you should always be able to recreate it from the artifacts above.

Configuration of custom Wordpress-Docker build process

So, what I often find is that our Wordpress builds start from almost the same base - with the same plugins etc (e.g., WooCommerce when we're building a shop etc). What we're looking at is using Docker for local development and deploying to production.
However, the issue we're having is building from our base and then being able to locate the mapped development directory on our local machines with the added plugin directories. Essentially, we will maintain the plugins we want and ensure that they are good with the latest stable release of Wordpress and we will pull down the latest Wordpress docker image so we don't have to maintain that side of things too closely...
Dockerfile:
FROM wordpress:php7.1-apache
COPY wordpress-docker-build/wordpress-plugins /var/www/html/wp-content/plugins
docker-compose.yml (something like):
services:
  wp:
    build: .
    ports:
      - "8000:80"
    environment:
      WORDPRESS_DB_PASSWORD: qwerty
    volumes:
      - /Users/username/Developer/repos/my-wordpress-site:/var/www/html
  mysql:
    image: "mysql:5.7"
    environment:
      MYSQL_ROOT_PASSWORD: qwerty
Essentially, what we find is that when we remove volumes from docker-compose.yml, we have exactly the plugins we want. When we add the volume mapping to the wordpress service, only the base Wordpress image is installed and mapped across... no plugins.
We've tried all manner of tutorials, documentation, trial and error, etc., but a lot of head-scratching has ensued...
Volumes don't work like that. When you mount something into the container at /var/www/html, it replaces that directory and everything in it.
If you don't have /Users/username/Developer/repos/my-wordpress-site/wp-content/plugins being mapped from your host, it won't exist in the container after the mount. The mount isn't additive, it totally replaces what existed in the container with what you have on the host.
However, the issue we're having is building from our base and then being able to locate the mapped development directory on our local machines with the added plugin directories.
Bind mounted volumes are a one-way street in this regard, from the host to the container, with the implications discussed above. You can't use volumes to retrieve files from a container and edit them on the host. The closest thing to that is the docker cp command, but that's not helpful in this case.
The easiest way to accomplish what you want is using a bind mount to put the plugins from your host to the running container, either by placing them in /Users/username/Developer/repos/my-wordpress-site/wp-content/plugins on the host, or adding a second bind mount that only targets the plugins directory (/some/other/dir:/var/www/html/wp-content/plugins) if that's more convenient.
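Roughly, adapting the compose file from the question (the host-side plugins path here reuses the wordpress-docker-build/wordpress-plugins directory from the Dockerfile, but any host directory works):

services:
  wp:
    build: .
    ports:
      - "8000:80"
    environment:
      WORDPRESS_DB_PASSWORD: qwerty
    volumes:
      - /Users/username/Developer/repos/my-wordpress-site:/var/www/html
      # the more specific mount takes precedence for the plugins directory
      - ./wordpress-docker-build/wordpress-plugins:/var/www/html/wp-content/plugins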
Also, if the COPY is only in your Dockerfile to support each developer's own effort developing plugins, and not to build an image to pass around or deploy to some other environment, you can probably just remove the line. It's overridden by the bind mount now, and would be in the future if your intention is to be able to edit the plugins in a live container using a bind mount.
edit: misunderstood OP's dilemma

How to migrate containers from a local docker-compose to another host

Good day!
I'm trying to migrate my local wordpress/mariadb containers made from docker-compose to another host, probably a production server.
Here's what I did:
I created a docker-compose for the wordpress and mariadb containers locally. I then started to populate wordpress content to them.
Use Case:
I want to export and import the containers made through docker-compose, along with their data, to another server.
Please guide me on my problem.
Many thanks.. :-)
Ideally you wouldn't be storing data in the containers. You want to be able to destroy and recreate them at will. So if that's what you have I'd probably recommend figuring out how to copy the data out of the containers, then deploy them remotely from images. When you redeploy them you want to mount the data directories to an external drive which will never be destroyed and repopulate the data there.
If you really want to deploy the containers with the data, then I'd say you want to look at docker commit, which you can use to create images from your existing containers, which you can then deploy.
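A rough sketch of that flow (the container, image, and registry names are placeholders; note that data stored in volumes is not captured by docker commit):

# snapshot the running containers into images
docker commit my_wordpress_1 myrepo/wordpress-snapshot:1.0
docker commit my_mariadb_1 myrepo/mariadb-snapshot:1.0
# move the images to the other host via a registry...
docker push myrepo/wordpress-snapshot:1.0
# ...or as a tarball
docker save myrepo/wordpress-snapshot:1.0 | gzip > wordpress-snapshot.tar.gz
# then on the target host: gunzip -c wordpress-snapshot.tar.gz | docker load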
This is solved! :-)
I defined volumes for the mariadb and wordpress services in my Compose file, which created the data directories that I need. I will then tar the docker-compose directory and recreate the docker-compose setup on my remote server. Thanks for the awesome answer. Heads up for you #lecstor.

Docker, and small production server infrastructure advice

I'm figuring out how to set up my production server the best way, but I'm a little bit stuck on how to do it correctly:
Currently, all my web applications are dockerified, i have:
One nginx front container that routes requests to several backend containers:
One Symfony App
Two Wordpress blog
One NodeJS App
One MySql container for DB storage
One MongoDB container too
ALL this infrastructure is started using docker-compose.
This works fine, but it sounds too "monolithic" to me:
I cannot stop one container without restarting all the others.
I cannot add other web applications without restarting everything
I have no way to restart container automatically after a crash...
This is the first time I'm doing this. Do you know of any best practices or software that can help me improve my production server?
Thanks a lot!
I cannot stop one container without restarting all the others.
What prevents you from using the docker stop command instead of the docker-compose stop command when you want to stop only one container?
I cannot add other web applications without restarting everything
I would suggest the use of the excellent jwilder/nginx-proxy nginx docker image to act as a reverse proxy in front of your other containers. This reverse proxy will adapt to the running/stopped containers. You can add a container later on and this reverse proxy will automatically route traffic to it depending on the domain name.
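For illustration, a minimal sketch (my-symfony-app and the domain are placeholders; nginx-proxy watches the Docker socket and generates its configuration from the VIRTUAL_HOST variables of running containers):

services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      # lets the proxy discover containers as they start and stop
      - /var/run/docker.sock:/tmp/docker.sock:ro
  symfony-app:
    image: my-symfony-app:latest
    environment:
      # requests for this host name are routed to this container
      VIRTUAL_HOST: symfony.example.com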
I have no way to restart container automatically after a crash...
Take a look at the restart: directive for the docker-compose.yml file.
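For example, a sketch using one of the accepted values ("no", always, on-failure, unless-stopped), with the service name kept as a placeholder:

services:
  symfony-app:
    image: my-symfony-app:latest
    # restart the container automatically if it crashes
    restart: unless-stopped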
The "monolithic" view of docker-compose is indeed made to allow you to manage your application stack in one way. But one needs to know that docker-compose is a "layer" on top of docker which (docker) you can still use.
As #thomasleveil says, you can still manipulate docker-compose-created containers individually with docker:
$ docker exec project_web_1 ls -l /
$ docker stop project_db_1
$ docker start project_nginx_1
$ ...
On the other hand, I suggest relying more on docker-compose, which also allows you to act on individual containers, separates your different applications or environments, and is aware of the dependencies between containers (among other things):
$ docker-compose exec web ls -l /
$ docker-compose stop db
$ docker-compose up -d nginx
$ ...
Booting up a new service is also very easy with docker-compose, since it detects changes based on your yml config and doesn't stop or recreate anything that doesn't need it:
$ docker-compose up -d
project_web_1 is up-to-date
project_db_1 is up-to-date
Creating project_newservice_1
I also find a reverse proxy very useful for production installations. However, I would rather suggest the brand-new Traefik, which brings nice features like hot reloading, service discovery, and automated SSL certificates with Let's Encrypt, including renewal (among other things).
