I installed WordPress with docker-compose, now I've finished developing the website, how can I turn this container into a permanent image so that I'm able to update this website even if I remove the current container?
The procedure I followed is the same as in this tutorial.
I now have the WordPress containers as below:
$ docker-compose images
Container Repository Tag Image Id Size
-------------------------------------------------------------------------
wordpress_db_1 mysql 5.7 e47e309f72c8 355 MB
wordpress_wordpress_1 wordpress 5.1.0-apache 523eaf9f0ced 402 MB
If that wordpress image is well made, you should only need to backup your volumes. However if you changed files on the container filesystem (as opposed to in volumes), you will also need to commit your container to produce a new docker image. Such an image could then be used to create new containers.
In order to figure out if files were modified/added on the container filesystem, run the docker diff command:
docker diff wordpress_wordpress_1
In my tests, after going through the WordPress setup, and even after updating WordPress, plugins and themes, the result of the docker diff command gives me:
C /run
C /run/apache2
A /run/apache2/apache2.pid
This means that only two directories were changed (C) and one file was added (A).
As such, there is no point going through the trouble of using the docker commit command to produce a new docker image. Such an image would only contain those three modifications.
This also means that this Wordpress docker image is well designed because all valuable data is persisted in docker volumes. (The same applies for the MySQL image)
How to deal with a lost container?
As we verified earlier, all valuable data lives in docker volumes. So it does not matter if you lose your containers; all that matters is not to lose your volumes. The question of how to back up a docker volume has already been answered multiple times on Stack Overflow.
Now be aware that a few docker and docker-compose commands do delete volumes! For instance if you run docker rm -v <my container>, the -v option is to tell docker to also delete associated volumes while deleting the container. Or if you run docker-compose down -v, volumes would also be deleted.
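As a concrete sketch (the volume and file names here are hypothetical), a named volume can be archived by mounting it read-only into a throwaway container next to a host directory:

```shell
# back up the hypothetical named volume "wordpress_db_data" into ./backups
docker run --rm \
  -v wordpress_db_data:/volume:ro \
  -v "$(pwd)/backups":/backup \
  alpine tar czf /backup/db_data.tar.gz -C /volume .

# restore it later into a fresh, empty volume of the same name
docker run --rm \
  -v wordpress_db_data:/volume \
  -v "$(pwd)/backups":/backup \
  alpine tar xzf /backup/db_data.tar.gz -C /volume
```

Run the backup while the containers are stopped so the database files are in a consistent state.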
How to backup Wordpress running in a docker-compose project?
Well, the best way is to back up your WordPress data with a WordPress plugin that is well known for doing so correctly. Running WordPress in docker containers does not mean that WordPress good practices no longer apply.
In the case you need to restore your website, start new containers/volumes with your docker-compose.yml file, go through the minimal Wordpress setup, install your backup plugin and use it to restore your data.
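A minimal docker-compose.yml along these lines (the image tags are the ones from the question; service names, the port and the password are illustrative) keeps all mutable state in named volumes, so the containers themselves stay disposable:

```yaml
version: '3'

services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example   # illustrative only
    volumes:
      - db_data:/var/lib/mysql

  wordpress:
    image: wordpress:5.1.0-apache
    ports:
      - "8080:80"
    volumes:
      - wp_data:/var/www/html

volumes:
  db_data:
  wp_data:
```

With this layout, docker-compose down (without -v) followed by docker-compose up -d recreates the containers against the same volumes.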
Related
I'm new to Docker and was wondering if it was possible to set the following up:
I have my personal computer on which I'm working on my WordPress site via a Dockerfile. All is well and the data is persistent.
What I'd like to do is be able to save that work on Docker hub possibly or Github (I assume the updated images would be backed up on my Docker hub) and work on a totally different computer picking up where I left off.
Is that possible ?
Generally you should be able to set up your Docker containers such that there is no persistent state inside the container at all; you can freely delete and recreate the container without losing data. The best and easiest case of this is a container that just depends on some external database, in which case you don’t need to do anything.
If you have something like a WordPress installation with local customizations, or something that stores persistent data in the filesystem, you should use the docker run -v option or the Docker Compose volumes: option to inject parts of the host filesystem into the container. Those volumes then need to be backed up (and while the Docker documentation endorses named volumes, if you use host directories your normal backup solution will work fine).
In short, I’d recommend:
Build a custom image for your application, and check the Dockerfile and any supporting artifacts into source control. They don’t need to be separately backed up; even if you lose your image you can docker build again.
Inject customizations using bind mounts, and check those customizations into source control. They don’t need to be separately backed up.
Store mutable data using volumes or bind mounts, and back these up normally.
Containers are disposable. You don’t need to back up a container per se, you should always be able to recreate it from the artifacts above.
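For the bind-mount case, the backup really is just your normal tools. A sketch with hypothetical directory names:

```shell
# a hypothetical host directory that would be bind-mounted into the container
mkdir -p wp-content backups
echo '<?php // demo file' > wp-content/index.php

# ordinary tar is enough to back it up, and to verify the archive afterwards
tar czf backups/wp-content.tar.gz wp-content
tar tzf backups/wp-content.tar.gz
```

The same tarball can be restored on any machine and bind-mounted back into a fresh container.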
I'm getting started with running Docker on MacOS and I just was able to install a WordPress container and get that running locally.
But where the heck are the actual WordPress files?
Do I need to SSH into the container so I can view/edit them there? If so, how would one go about that?
WordPress files are kept inside the container; for example, you can find wp-content at:
/var/www/html/wp-content
But, to get "inside" your running container you will have to do something like docker container exec -it <your_container_name> bash. More here: How to get into a docker container?
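For example, with the container name taken from `docker ps` (the name below is hypothetical), you can either open a shell inside the container or copy the files out:

```shell
# open an interactive shell inside the running container
docker container exec -it wordpress_wordpress_1 bash

# or copy wp-content out to the host without entering the container
docker cp wordpress_wordpress_1:/var/www/html/wp-content ./wp-content
```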
Containers are considered ephemeral, which means that a good practice is to work in a way that lets you easily stop/remove a container and spin up a new one without losing your stuff. To persist your data you have the option to use volumes.
You might also want to take a look at this, which worked for me: Volume mount when setting up Wordpress with docker. If your goal is to develop WordPress itself inside docker containers, that's a different case.
If you did not set up a bind mount when running the docker image for the first time, you can still do the following.
docker volume ls
will list all of the volumes used by your local Docker daemon.
What you can do is the following :
docker volume inspect "VOLUME NAME"
e.g. docker volume inspect "181f5c9916a29e9f654317988f49237ea9067157bc94041176ab6ae5f9a57954"
you will find the Mountpoint of each docker volume. There may be more than one; each of them has its own mount point. (You can also print the path directly with docker volume inspect -f '{{ .Mountpoint }}' VOLUME_NAME.)
Good day!
I'm trying to migrate my local wordpress/mariadb containers made from docker-compose to another host probably to a production server.
Here's what I did:
I created a docker-compose file for the wordpress and mariadb containers locally, then started populating WordPress content into them.
Use Case:
I want to export and import the containers made through docker-compose along with its data to another server.
Please guide me on my problem.
Many thanks.. :-)
Ideally you wouldn't be storing data in the containers. You want to be able to destroy and recreate them at will. So if that's what you have I'd probably recommend figuring out how to copy the data out of the containers, then deploy them remotely from images. When you redeploy them you want to mount the data directories to an external drive which will never be destroyed and repopulate the data there.
If you really want to deploy the containers with the data then I'd say you want to look at Docker Commit which you can use to create images from your existing containers which you can then deploy.
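A sketch of that route (all names are illustrative; note that docker commit captures the container filesystem but NOT the data stored in volumes, which has to be moved separately):

```shell
# freeze the running containers into images
docker commit my_wordpress_1 mysite/wordpress:snapshot
docker commit my_mariadb_1 mysite/mariadb:snapshot

# bundle the images into a tarball and move it to the other host
docker save -o snapshot.tar mysite/wordpress:snapshot mysite/mariadb:snapshot

# on the production server
docker load -i snapshot.tar
```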
This is solved! :-)
I defined volumes for the mariadb and wordpress services in my Compose file, which created the data directories that I need. I will then tar the docker-compose directory and recreate the project on my remote server. Thanks for the awesome answer. Heads up for you #lecstor.
I have created a new Dockerfile based on the official WordPress image, and I have been trying to troubleshoot why I cannot remove the default themes. I discovered that the reason is that, at the time the command is executed, the files do not actually exist yet.
Here are the relevant lines from my Dockerfile:
FROM wordpress
RUN rm -rf /var/www/html/wp-content/themes/twenty*
The delete command works as expected if I run it manually after the container is running.
As a side note, I have also discovered that when I copy additional custom themes to the /var/www/html/wp-content/themes directory from the Dockerfile, it does work, but not quite as I would expect: any files in the official docker image will overwrite my custom versions of the same file. I would have expected it to work the other way around, in case I want to supply my own config file.
So I actually have two questions:
Is this behavior Docker-related? Or is it in the WordPress-specific image?
How can I resolve this? It feels like a hack, but is there a way to asynchronously run a delayed command from the Dockerfile?
What's up, Ben!
Your issue is related to a Docker concept named the entrypoint. It is typically a script that is executed when the container is run, and it contains actions that need to be run at runtime, not build time. That script is run right after you run the image. Entrypoints are used to make containers behave like services. The parameters set with the CMD directive are, by default, the ones passed directly to the entrypoint, and they can be overwritten.
You can find the debian template of the Dockerfile of the image you are pulling here. As you can see, it calls an entrypoint named docker-entrypoint.sh. Without diving into it too much: basically, it performs the installation of your application.
Since you are inheriting from the wordpress image, its entrypoint is being executed. Overwriting it so that it is no longer executed is not a good idea either, since that would render your image useless.
A simple hack that would work in this case would be the following:
FROM wordpress
RUN sed -i 's|exec "\$@"|rm -rf /var/www/html/wp-content/themes/twenty* \&\& exec "\$@"|' /usr/local/bin/docker-entrypoint.sh
That rewrites the entrypoint in place so that its final exec clause first removes those files and then runs whatever service the image was going to run (typically apache, but I don't know which could be the case in this container).
I hope that helps! :)
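An arguably cleaner alternative, assuming the official image keeps its pristine WordPress copy under /usr/src/wordpress (which its entrypoint copies into /var/www/html on first run), is to delete the themes from that source tree at build time instead:

```dockerfile
FROM wordpress
# the entrypoint copies /usr/src/wordpress into /var/www/html at runtime,
# so anything removed here never reaches new containers
RUN rm -rf /usr/src/wordpress/wp-content/themes/twenty*
```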
I am just setting up docker on my local machine for web-dev.
I have seen lots of tutorials for docker with rails etc...
I am curious how Docker works in terms of editing the project's source code.
I am trying to wrap my head around this -v tag.
In many of the tutorials I have seen, users store their Dockerfile in the project's base directory and build from there. Do you just edit the code in that directory and refresh the browser, leaving Docker running?
Just trying to wrap my head around it all, sorry if basic question.
I usually differentiate two use cases of Docker:
in one case I want a Dockerfile that helps end users get started easily
in another case I want a Dockerfile to help code contributors to have a testing environment up and running easily
For end users, you want your Dockerfile to
install dependencies
check out the latest stable code (from github or elsewhere)
setup some kind of default configuration
for contributors, you want your Dockerfile to
install dependencies
document how to run a Docker container setting up a volume to share the source code between their development environment and the docker container.
To sum up, for end users the Docker image should embed the application code while for contributors the docker image will just have the dependencies.
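As a sketch of the two flavours (the base image, paths and image name are illustrative): the end-user image bakes the code in with COPY, while a contributor runs the same image with a volume so their working tree shadows the baked-in copy:

```dockerfile
# end-user image: dependencies plus a frozen copy of the application code
FROM php:7.4-apache
COPY . /var/www/html

# contributors reuse the image but mount their checkout over the code:
#   docker run -p 8080:80 -v "$PWD":/var/www/html myapp
```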