So, what I often find is that our WordPress builds start from almost the same base, with the same plugins (e.g., WooCommerce when we're building a shop). What we're looking at is using Docker for local development and deploying to production.
However, the issue we're having is building from our base image and then being able to locate the mapped development directory on our local machines with the added plugin directories. Essentially, we will maintain the plugins we want, ensure they work with the latest stable release of WordPress, and pull down the latest WordPress Docker image so we don't have to maintain that side of things too closely...
Dockerfile:
FROM wordpress:php7.1-apache
COPY wordpress-docker-build/wordpress-plugins /var/www/html/wp-content/plugins
docker-compose.yml (something like):
services:
  wp:
    build: .
    ports:
      - "8000:80"
    environment:
      WORDPRESS_DB_PASSWORD: qwerty
    volumes:
      - /Users/username/Developer/repos/my-wordpress-site:/var/www/html
  mysql:
    image: "mysql:5.7"
    environment:
      MYSQL_ROOT_PASSWORD: qwerty
Essentially, what we find is that when we remove volumes from docker-compose.yml, we have exactly the plugins we want. When we add the volume mapping to the wordpress service, only the base WordPress image is installed and mapped across... no plugins.
We've tried all manner of tutorials, documentation, trial and error, etc., but a lot of head-scratching has ensued...
Volumes don't work like that. When you mount something into the container at /var/www/html, it replaces that directory and everything in it.
If you don't have /Users/username/Developer/repos/my-wordpress-site/wp-content/plugins being mapped from your host, it won't exist in the container after the mount. The mount isn't additive, it totally replaces what existed in the container with what you have on the host.
However, the issue we're having is building from our base and then being able to locate the mapped development directory on our local machines with the added plugin directories.
Bind mounted volumes are a one-way street in this regard, from the host to the container, with the implications discussed above. You can't use volumes to retrieve files from a container and edit them on the host. The closest thing to that is the docker cp command, but that's not helpful in this case.
The easiest way to accomplish what you want is using a bind mount to put the plugins from your host to the running container, either by placing them in /Users/username/Developer/repos/my-wordpress-site/wp-content/plugins on the host, or adding a second bind mount that only targets the plugins directory (/some/other/dir:/var/www/html/wp-content/plugins) if that's more convenient.
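For the second option, the compose service could look something like this (a sketch; the plugins path on the host is illustrative, not taken from your setup). Docker applies the more specific mount path on top of the broader one, so the plugins directory from the second mount shadows whatever the first mount has at that path:

```yaml
services:
  wp:
    build: .
    ports:
      - "8000:80"
    environment:
      WORDPRESS_DB_PASSWORD: qwerty
    volumes:
      # whole-site bind mount from the host
      - /Users/username/Developer/repos/my-wordpress-site:/var/www/html
      # deeper mount wins for the plugins directory only
      - /Users/username/Developer/repos/wordpress-plugins:/var/www/html/wp-content/plugins
```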
Also, if the COPY is only in your Dockerfile to support each developer's own plugin development, and not to build an image to pass around or deploy to some other environment, you can probably just remove that line. It's overridden by the bind mount now, and it would continue to be if your intention is to edit the plugins in a live container via a bind mount.
edit: misunderstood OP's dilemma
I have been given the assignment of customizing an Alfresco Community Edition 7.0 installation from docker-compose. I have looked at the resources and am looking for the best approach. I also see a GitHub repository for acs-packaging, but that appears to be related to the enterprise version. I could create images off the existing images and build my own docker-compose file that loads my images. This seems to be a bit of overkill for changes to the alfresco-global.properties file.
For example, I am moving the DB and file share to Docker volumes and mapping them to host directories. I can add the volume for Postgres easily to the docker-compose file. The file share information appears to be less straightforward. I see there is a global property that specifies the directory in alfresco-global.properties (dir.root=/alfresco/data). It is less clear how many of the Docker components need the volumes mapped.
You should externalize these directories to set up persistent data storage for the content store, Solr, etc. in your custom Docker image:
volumes:
  - alfdata:/usr/local/tomcat/alf_data          # repository (content store)
volumes:
  - pgdata:/var/lib/postgresql/data             # PostgreSQL
volumes:
  - solrdata:/opt/alfresco-search-services/data # Solr
volumes:
  - amqdata:/opt/activemq/data                  # ActiveMQ
Please refer to the link for more information.
-Arjun M
Consider going through this discussion, and potentially using the community template:
https://github.com/Alfresco/acs-community-packaging/pull/201
https://github.com/keensoft/docker-alfresco
I have Wordpress deployed in Azure AppService with containers (Azure Container registry is used)
the image used is from the docker hub -> wordpress:latest
I also have --settings WEBSITES_ENABLE_APP_SERVICE_STORAGE=TRUE enabled so my files are persisted in the VM
I have noticed that images are not displayed
I see a 502 error - https://{website}.azurewebsites.net/wp-includes/images/spinner-2x.gif
I have checked with KUDU and the image is there
Could anyone point me in the right direction to fix this issue?
I have followed steps from this tutorial: https://learn.microsoft.com/en-us/azure/app-service/tutorial-multi-container-app
I opened a support ticket with Azure who said this is a known issue. The current workaround is to disable the following apache settings in the apache2.conf file:
EnableMMAP Off
EnableSendfile Off
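If you'd rather bake the workaround into your own image than wait for Azure's patched base image, one way to do it (a sketch; it relies on the standard Debian Apache `conf-available`/`a2enconf` mechanism used by the official wordpress image) is:

```dockerfile
FROM wordpress:latest

# Disable memory-mapping and sendfile, which misbehave on App Service's
# network-backed persistent storage and cause truncated/502 static files
RUN printf 'EnableMMAP Off\nEnableSendfile Off\n' \
      > /etc/apache2/conf-available/azure.conf \
    && a2enconf azure
```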
If you're using Azure's base PHP image from mcr.microsoft.com/appsvc/php, they've built in these apache settings in their 7.4-apache_20210422.1 version (and presumably any later versions). See https://mcr.microsoft.com/v2/appsvc/php/tags/list to list image versions.
Setting WEBSITES_ENABLE_APP_SERVICE_STORAGE=TRUE enables persistent shared storage. You then need to use the WEBAPP_STORAGE_HOME environment variable, which points to /home, in your folder paths and volumes.
${WEBAPP_STORAGE_HOME}/site/wwwroot:/var/www/html
Documentation
In my application with docker-compose I have two containers: one nginx and one Python crontab script that updates some files in the nginx/html folder.
With docker-compose when I declare
volumes:
- shared-volume:/usr/share/nginx/html/assets/xxx:ro
the initial files in the nginx image are copied to the shared volume.
Now I'm trying to move the application to k8s, but when I use a shared volume I see that the initial files in nginx/html are missing.
So the question is: is it possible to copy the initial files from my nginx image to the shared volume? How?
EDIT:
To clarify, I'm new to k8s, With VM we usually run script that update an nginx assets folder. With docker-compose I use something like this:
version: '3.7'
services:
  site-web:
    build: .
    image: "site-home:1.0.0"
    ports:
      - "80:80"
    volumes:
      - v_site-home:/usr/share/nginx/html/assets/:ro
  site-cron:
    build: ./cronScript
    image: "site-home-cron:1.0.0"
    volumes:
      - v_site-home:/app/my-assets
volumes:
  v_site-home:
    name: v_site-home
Now I'm starting to write a Deployment (with a persistent volume? Because, as I understand it, even where a persistent volume is involved a StatefulSet is not useful in this case) to convert my docker-compose setup to k8s. We cannot use any public cloud for security policy reasons (data must stay in our country, and for now no big provider offers that option), so the idea is to run vanilla k8s on multiple bare-metal servers and start the migration with a very simple application like this one. I tried with the two containers, replicas: 1, and an empty volume in a single pod. In this case I see that the application initially has an empty nginx folder, and I need to wait for the crontab update to see my results. So this is the first problem.
Now I've read your answer and obviously I have other doubts. Is it better to split the pod, i.e. one pod per container? Is a Deployment with a persistent volume the way to go? In that case I have the old problem: how do I get the initial nginx asset files? Thank you so much for the help!
This generally requires an initContainer that runs cp. It's not a great solution, but it gets the job done.
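A sketch of that pattern (names and paths are illustrative, based on the compose file above): an emptyDir volume shared between an initContainer that seeds it from the image and the nginx container that serves it:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: site-web
spec:
  replicas: 1
  selector:
    matchLabels: {app: site-web}
  template:
    metadata:
      labels: {app: site-web}
    spec:
      volumes:
        - name: assets
          emptyDir: {}
      initContainers:
        # copy the assets baked into the image into the shared volume
        - name: seed-assets
          image: site-home:1.0.0
          command: ["sh", "-c", "cp -a /usr/share/nginx/html/assets/. /seed/"]
          volumeMounts:
            - name: assets
              mountPath: /seed
      containers:
        - name: nginx
          image: site-home:1.0.0
          volumeMounts:
            - name: assets
              mountPath: /usr/share/nginx/html/assets
              readOnly: true
```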
Kubernetes doesn't have the Docker feature that copies content into volumes when the container is first started.
The two straightforward answers to this are:
Build a custom Nginx image that contains the static assets. You can use the Dockerfile COPY --from=other/image:tag construct to copy them from your application container into the proxy container.
Store the assets somewhere outside container space altogether. If you're deploying this to AWS, you can publish them to S3, and even directly serve them from a public S3 bucket. Or if you have something like an NFS mount accessible to your cluster, have your overall build process copy the static assets there.
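The first option can be a very small multi-stage-style Dockerfile (image names here are placeholders for your application and proxy images):

```dockerfile
# Pull the prebuilt static assets out of the application image
FROM site-home:1.0.0 AS app

FROM nginx:1.25
# Copy only the assets the proxy needs to serve; rebuilding this image
# whenever the app image changes keeps the two in sync
COPY --from=app /usr/share/nginx/html/assets /usr/share/nginx/html/assets
```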
The Docker feature has many corner cases that are frequently ignored, most notably that content is only copied when the container is first started. If you're expecting the volume to contain static assets connected to your application, and you update the application container, the named volume will not update. As such you need some other solution to manage the shared content anyways, and I wouldn't rely on that Docker feature as a solution to this problem.
In Kubernetes you have the additional problem that you typically will want to scale HTTP proxies and application backends separately, which means putting them in different Deployments. Once you have three copies of your application, which one provides "the" static assets? You need to use something like a persistent volume to share contents, but most of the persistent volume types that are easy to get access to don't support multiple mounts.
I installed WordPress with docker-compose, now I've finished developing the website, how can I turn this container into a permanent image so that I'm able to update this website even if I remove the current container?
The procedure I went through is the same as this tutorial.
Now I got the WordPress container as below
$ docker-compose images
Container Repository Tag Image Id Size
-------------------------------------------------------------------------
wordpress_db_1 mysql 5.7 e47e309f72c8 355 MB
wordpress_wordpress_1 wordpress 5.1.0-apache 523eaf9f0ced 402 MB
If that wordpress image is well made, you should only need to backup your volumes. However if you changed files on the container filesystem (as opposed to in volumes), you will also need to commit your container to produce a new docker image. Such an image could then be used to create new containers.
In order to figure out if files were modified/added on the container filesystem, run the docker diff command:
docker diff wordpress_wordpress_1
In my tests, after going through the WordPress setup, and even after updating WordPress, plugins and themes, the result of the docker diff command gives me:
C /run
C /run/apache2
A /run/apache2/apache2.pid
Which means that only 2 files/directories were Changed and 1 file Added.
As such, there is no point going through the trouble of using the docker commit command to produce a new Docker image. Such an image would only contain those 3 modifications.
This also means that this Wordpress docker image is well designed because all valuable data is persisted in docker volumes. (The same applies for the MySQL image)
How to deal with a lost container?
As we verified earlier, all valuable data lies in Docker volumes. So it does not matter if you lose your containers. All that matters is not losing your volumes. The question of how to back up a Docker volume is already answered multiple times on Stack Overflow.
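For reference, one common approach (a sketch; the volume name depends on your compose project name, here assumed to be `wordpress`) is to tar the volume's contents from a throwaway container:

```shell
# Back up the named MySQL volume to a tarball in the current directory.
# The volume is mounted read-only; alpine is used only as a tar runner.
docker run --rm \
  -v wordpress_db_data:/volume:ro \
  -v "$PWD":/backup \
  alpine tar czf /backup/db_data.tar.gz -C /volume .
```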
Now be aware that a few docker and docker-compose commands do delete volumes! For instance if you run docker rm -v <my container>, the -v option is to tell docker to also delete associated volumes while deleting the container. Or if you run docker-compose down -v, volumes would also be deleted.
How to back up WordPress running in a docker-compose project?
Well, the best way is to back up your WordPress data with a WordPress plugin that is well known for doing this correctly. Running WordPress in Docker containers doesn't mean that WordPress best practices stop applying.
If you need to restore your website, start new containers/volumes with your docker-compose.yml file, go through the minimal WordPress setup, install your backup plugin, and use it to restore your data.
I am just setting up docker on my local machine for web-dev.
I have seen lots of tutorials for docker with rails etc...
I am curious how Docker works in terms of editing a project's source code.
I am trying to wrap my head around this -v flag.
In many of the tutorials I have seen, users store their Dockerfile in the project's base directory and then build from there. Do you just edit the code in that directory, refresh the browser, and leave Docker running?
Just trying to wrap my head around it all, sorry if this is a basic question.
I usually differentiate two use cases of Docker:
in one case I want a Dockerfile that helps end users get started easily
in another case I want a Dockerfile to help code contributors to have a testing environment up and running easily
For end users, you want your Dockerfile to
install dependencies
check out the latest stable code (from GitHub or elsewhere)
setup some kind of default configuration
For contributors, you want your Dockerfile to
install dependencies
document how to run a Docker container with a volume set up to share the source code between the host development environment and the container.
To sum up, for end users the Docker image should embed the application code while for contributors the docker image will just have the dependencies.
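For the contributor case, the documented command is usually just a bind mount of the checkout (image name, port, and container path are illustrative):

```shell
# Build the dependencies-only development image once
docker build -t myapp-dev .

# Run it with the source tree mounted; edits made on the host
# are visible inside the container immediately, so you just
# save a file and refresh the browser
docker run --rm -it \
  -p 3000:3000 \
  -v "$PWD":/app \
  myapp-dev
```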