Good day!
I'm trying to migrate my local WordPress/MariaDB containers, created with docker-compose, to another host, probably a production server.
Here's what I did:
I created a docker-compose file for the WordPress and MariaDB containers locally, and then started populating WordPress content into them.
Use Case:
I want to export the containers created through docker-compose, along with their data, and import them on another server.
Please guide me on my problem.
Many thanks.. :-)
Ideally you wouldn't be storing data in the containers; you want to be able to destroy and recreate them at will. So if that's what you have, I'd probably recommend figuring out how to copy the data out of the containers, then deploying them remotely from images. When you redeploy them, mount the data directories from storage outside the containers (which will never be destroyed) and repopulate the data there.
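For example, a minimal Compose sketch of that layout, using bind mounts into directories next to the compose file (the ./wp-data and ./db-data paths and the database settings are placeholders for your own values):

version: '3.7'
services:
  wordpress:
    image: wordpress:latest
    ports:
      - "80:80"
    volumes:
      # WordPress core, themes, plugins and uploads live here
      - ./wp-data:/var/www/html
    # WORDPRESS_DB_* connection settings omitted for brevity
  mariadb:
    image: mariadb:latest
    environment:
      MARIADB_ROOT_PASSWORD: change-me   # placeholder; use your own secret
    volumes:
      # MariaDB data directory
      - ./db-data:/var/lib/mysql

Because the data sits in plain host directories, you can stop the stack, tar the whole compose directory, copy it to the production server, and run docker-compose up there.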
If you really want to deploy the containers with the data inside them, then I'd say you want to look at docker commit, which you can use to create images from your existing containers; you can then move those images to the other host (for example by pushing them to a registry, or with docker save and docker load) and deploy them there.
This is solved! :-)
I defined volumes for the mariadb and wordpress services in my Compose file, which created the data directories that I need. I will then tar the docker-compose directory and recreate the setup on my remote server. Thanks for the awesome answer; hats off to you, #lecstor.
I'm currently migrating a WordPress installation to Azure App Service with containers. First I did a normal installation with everything inside the container for testing purposes. The performance was good and things worked without problems.
Then I wanted to move the wp-content folder to persistent storage, so I created a file share and added it under Path mappings. This worked without problems, and after the restart WordPress could access the files.
But now every page load takes about 1-2 minutes and the page as a whole is unusable in this state. I double-checked the file share settings and everything else. The share is optimized for transactions, and as soon as I remove the volume the container is lightning fast again.
Does anyone have the same problem? Any ideas how to fix this? This is a deal breaker for me tbh.
Thanks!
Not answering your question directly, but an alternative is to use App Service persistent storage, which stores data in the /home folder of the VM where your app is running. It should be a lot faster than using a file share in a storage account. ${WEBAPP_STORAGE_HOME} maps to the /home folder.
You need to enable it by setting WEBSITES_ENABLE_APP_SERVICE_STORAGE to true in the application settings, or by using the CLI:
az webapp config appsettings set --resource-group <group-name> --name <app-name> --settings WEBSITES_ENABLE_APP_SERVICE_STORAGE=TRUE
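For reference, if you deploy this as a multi-container (Docker Compose) app on App Service, the same idea is expressed in the compose file itself. A rough sketch along the lines of Azure's multi-container WordPress example; the image tag, port, and paths here are assumptions to adapt to your setup:

version: '3.3'
services:
  wordpress:
    image: wordpress:latest
    ports:
      - "8000:80"
    volumes:
      # ${WEBAPP_STORAGE_HOME} resolves to the persistent /home folder of the App Service plan
      - ${WEBAPP_STORAGE_HOME}/site/wwwroot:/var/www/html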
I'm trying to get a WordPress installation running in Kubernetes, and I also want the option of running the same configuration locally in Minikube. I want to use the standard WordPress Docker image: https://hub.docker.com/_/wordpress/.
I'm having trouble making sure that the plugins and templates stay in sync, though. The Docker container exposes a volume at /var/www/html; the WordPress installation, as well as my plugins, will live there.
Assuming I do the development in Minikube, including the installation of plugins etc., how do I handle moving persistent volumes between my local cluster and the target cluster? Should I just reinstall WordPress every time the Pod is scaled?
You can follow the Writing Portable Configuration guide (https://kubernetes.io/docs/concepts/storage/persistent-volumes/#writing-portable-configuration) for persistent volumes if you are planning to migrate to a different cluster.
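In practice that mostly means your manifests declare only a PersistentVolumeClaim (no PersistentVolume and no hard-coded storage class), so each cluster, whether Minikube or production, can satisfy it with its own default storage. A minimal sketch, with the claim name and size as assumptions:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wordpress-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  # storageClassName intentionally omitted so each cluster uses its default StorageClass

In the Deployment's pod template you then reference the claim under volumes: (persistentVolumeClaim: claimName: wordpress-data) and mount it at /var/www/html.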
In a real production scenario you would want to use a standard tool to back up and migrate persistent volumes between clusters. Velero is such a tool and enables you to achieve exactly that.
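You would normally drive it from the Velero CLI (for example velero backup create on the source cluster and velero restore create on the target), but a backup can also be declared as a custom resource. A rough sketch, assuming Velero is installed in the velero namespace, a backup storage location is configured, and your WordPress objects live in a wordpress namespace:

apiVersion: velero.io/v1
kind: Backup
metadata:
  name: wordpress-backup
  namespace: velero
spec:
  includedNamespaces:
    - wordpress
  # capturing PV contents needs a volume snapshot provider or Velero's file-system backup
  snapshotVolumes: true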
In my docker-compose application I have 2 containers: one nginx, and one Python crontab script that updates some files in the nginx html folder.
With docker-compose, when I declare
volumes:
  - shared-volume:/usr/share/nginx/html/assets/xxx:ro
the initial files from the nginx image are copied to the shared volume.
Now I'm trying to move the application to k8s, but when I use a shared volume I see that the initial files in nginx/html are missing.
So the question is: is it possible to copy the initial files from my nginx image to the shared volume? How?
EDIT:
To clarify, I'm new to k8s. With VMs we usually run a script that updates an nginx assets folder. With docker-compose I use something like this:
version: '3.7'
services:
  site-web:
    build: .
    image: "site-home:1.0.0"
    ports:
      - "80:80"
    volumes:
      - v_site-home:/usr/share/nginx/html/assets/:ro
  site-cron:
    build: ./cronScript
    image: "site-home-cron:1.0.0"
    volumes:
      - v_site-home:/app/my-assets
volumes:
  v_site-home:
    name: v_site-home
Now I'm starting to write a Deployment (with a persistent volume? Because, as I understand it, even if there is a persistent volume, a StatefulSet is not useful in this case) to convert my docker-compose setup to k8s. We cannot use any public cloud because of security policy (data must stay in our country, and for now no big provider offers that option), so the idea is to run vanilla k8s on multiple bare-metal servers and start the migration with a very simple application like this. I tried with the two containers, replicas: 1, and an empty volume in a single pod. In this case the application initially has an empty nginx folder, and I need to wait for the crontab update to see my results. So this is the first problem.
Now that I've read your answer I obviously have other doubts. Is it better to split the pod, i.e. one pod per container? Is a Deployment with a persistent volume the way to go? In that case I have the old problem again: how do I get the initial nginx assets files? Thank you so much for the help!
This generally requires an initContainer which runs cp. It's not a great solution, but it gets the job done.
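A rough sketch of that pattern using the images and paths from your Compose file; the emptyDir volume, the volume name, and the exact cp command line are assumptions to adapt:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: site-web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: site-web
  template:
    metadata:
      labels:
        app: site-web
    spec:
      volumes:
        - name: assets
          emptyDir: {}
      initContainers:
        # seed the shared volume with the assets baked into the nginx image
        - name: copy-assets
          image: site-home:1.0.0
          command: ["sh", "-c", "cp -a /usr/share/nginx/html/assets/. /work/"]
          volumeMounts:
            - name: assets
              mountPath: /work
      containers:
        - name: site-web
          image: site-home:1.0.0
          ports:
            - containerPort: 80
          volumeMounts:
            - name: assets
              mountPath: /usr/share/nginx/html/assets
              readOnly: true
        - name: site-cron
          image: site-home-cron:1.0.0
          volumeMounts:
            - name: assets
              mountPath: /app/my-assets

With an emptyDir the assets are reseeded from the image every time the pod is recreated; if the crontab's updates need to survive independently of pod restarts, swap the emptyDir for a PersistentVolumeClaim.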
Kubernetes doesn't have the Docker feature that copies content into volumes when the container is first started.
The two straightforward answers to this are:
Build a custom Nginx image that contains the static assets. You can use the Dockerfile COPY --from=other/image:tag construct to copy them from your application container into the proxy container.
Store the assets somewhere outside container space altogether. If you're deploying this to AWS, you can publish them to S3, and even directly serve them from a public S3 bucket. Or if you have something like an NFS mount accessible to your cluster, have your overall build process copy the static assets there.
The Docker feature has many corner cases that are frequently ignored, most notably that content is only copied when the container is first started. If you're expecting the volume to contain static assets connected to your application, and you update the application container, the named volume will not update. As such you need some other solution to manage the shared content anyways, and I wouldn't rely on that Docker feature as a solution to this problem.
In Kubernetes you have the additional problem that you typically will want to scale HTTP proxies and application backends separately, which means putting them in different Deployments. Once you have three copies of your application, which one provides "the" static assets? You need to use something like a persistent volume to share contents, but most of the persistent volume types that are easy to get access to don't support multiple mounts.
I'm new to Docker and was wondering if it was possible to set the following up:
I have my personal computer, on which I'm working on my WordPress site via a Dockerfile. All is well and the data is persistent.
What I'd like to do is save that work, possibly to Docker Hub or GitHub (I assume the updated images would be backed up on my Docker Hub account), and work on a totally different computer, picking up where I left off.
Is that possible?
Generally you should be able to set up your Docker containers such that there is no persistent state inside the container at all; you can freely delete and recreate the container without losing data. The best and easiest case of this is a container that just depends on some external database, in which case you don’t need to do anything.
If you have something like a WordPress installation with local customizations, or something that stores persistent data in the filesystem, you should use the docker run -v option or the Docker Compose volumes: option to inject parts of the host filesystem into the container. Then those volumes need to be backed up (and, for all that the Docker documentation endorses named volumes, if you use host directories your normal backup solution will work fine).
In short, I’d recommend:
Build a custom image for your application, and check the Dockerfile and any supporting artifacts into source control. They don’t need to be separately backed up; even if you lose your image you can docker build again.
Inject customizations using bind mounts, and check those customizations into source control. They don’t need to be separately backed up.
Store mutable data using volumes or bind mounts, and back these up normally.
Containers are disposable. You don’t need to back up a container per se, you should always be able to recreate it from the artifacts above.
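Concretely, those recommendations might look something like this in Compose terms; a sketch where the image name, build context, and paths are placeholders for your own project:

version: '3.7'
services:
  wordpress:
    build: .                          # Dockerfile kept in source control
    image: my-wordpress:1.0
    ports:
      - "80:80"
    volumes:
      # customizations tracked in source control, injected as bind mounts
      - ./themes/my-theme:/var/www/html/wp-content/themes/my-theme
      # mutable data (uploads) lives on the host and is covered by normal backups
      - ./uploads:/var/www/html/wp-content/uploads

Check the Dockerfile, the compose file, and ./themes into Git; back up ./uploads (plus the database) like any other data. Nothing in the container itself needs to be saved, so on the second computer you just clone the repository, restore the data, and run docker-compose up.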
I'm getting started with running Docker on MacOS and I just was able to install a WordPress container and get that running locally.
But where the heck are the actual WordPress files?
Do I need to SSH into the container so I can view/edit them there? If so, how would one go about that?
WordPress files are kept inside the container; for example, you can find wp-content at:
/var/www/html/wp-content
But, to get "inside" your running container you will have to do something like docker container exec -it <your_container_name> bash. More here: How to get into a docker container?
Containers are considered ephemeral, which means that a good practice is to work in a way that lets you easily stop/remove a container and spin up a new one without losing your stuff. To persist your data you have the option to use volumes.
You might also want to take a look at this, which worked for me: Volume mount when setting up Wordpress with docker. If your goal is to develop WordPress inside Docker containers, that's a somewhat different case.
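For the development case, a common pattern is to bind-mount just wp-content from your project directory so you can edit themes and plugins with your normal tools. A sketch, assuming the official image and a ./wp-content directory of your own (the database service and WORDPRESS_DB_* settings are omitted for brevity):

version: '3.7'
services:
  wordpress:
    image: wordpress:latest
    ports:
      - "8080:80"
    volumes:
      # themes/plugins are edited on the host and show up live in the container
      - ./wp-content:/var/www/html/wp-content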
If you did not set up a bind mount when running the Docker image for the first time, you can still do the following.
docker volume ls
will list all of the volumes used by your local Docker.
What you can do is the following:
docker volume inspect "VOLUME NAME"
e.g. docker volume inspect "181f5c9916a29e9f654317988f49237ea9067157bc94041176ab6ae5f9a57954"
you will find the Mountpoint of each Docker volume. There could be more than one volume, and each of them will have a mount point.