I'm currently working on a project using the WordPress API.
This is my docker-compose.yml file:
version: '3.1'
services:
  wordpress:
    image: wordpress
    volumes:
      - ./volumes/wordpress/:/var/www/html
    ports:
      - 8080:80
    environment:
      WORDPRESS_DB_PASSWORD: root
    depends_on:
      - mysql
  mysql:
    image: mysql:5.7
    volumes:
      - ./volumes/mysql/:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: root
  web:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - ./src/:/src
    ports:
      - 8081:8081
    depends_on:
      - wordpress
In order to use the WordPress API, I need to configure the WordPress container manually by going to http://localhost:8080/wp-admin and changing some settings.
The thing is, I need to make these settings changes automatic, because every time I remove the volume folder to reset the WordPress content, it also removes the settings.
Any idea how I can achieve this?
I guess that all settings configured via the wp-admin section are stored in the database.
If that's the case, then you can do this:
1. Set up a first WordPress instance by running your docker-compose stack and completing the setup steps.
2. Stop the compose stack. At this point the mysql volume folder contains a database structure with a configured WordPress in it.
3. Store the contents of that folder somewhere.
4. Now, if you want to create another WordPress instance, edit the docker-compose.yml file to adjust the volume binding and make sure the initial content of the mysql volume contains the data you saved in step 3.
5. When you start the new docker-compose stack, it will start from a populated database and you should have a preconfigured WordPress instance.
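The save/restore part of those steps can be sketched as a small script. This is only a sketch: the ./volumes/mysql path matches the compose file above, but the tarball name is an assumption, and the stack must be stopped before snapshotting or restoring.

```shell
#!/bin/sh
set -e

VOLUME_DIR=./volumes/mysql   # the mysql volume folder from the compose file
SNAPSHOT=./mysql-seed.tar.gz # hypothetical name for the saved snapshot

mkdir -p "$VOLUME_DIR"

# step 3: after stopping the stack, store the folder's contents
tar -czf "$SNAPSHOT" -C "$VOLUME_DIR" .

# later, to seed a fresh instance: reset the volume folder and restore
rm -rf "$VOLUME_DIR"
mkdir -p "$VOLUME_DIR"
tar -xzf "$SNAPSHOT" -C "$VOLUME_DIR"
```

Restoring the snapshot before docker-compose up means mysql starts against the already-configured database instead of initializing an empty one.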
You need to locate the file/folder that contains the settings you are changing.
Start the container, make the changes, and back up the settings file to your host machine using:
docker cp <container-name>:<path-to-settings> .
You can then create a custom image that replaces the default settings with the backed-up settings you copied to the host:
FROM wordpress
COPY <settings-from-host> <settings-path-in-container>
This is my first time trying Vultr with CentOS.
I was able to successfully develop a local WordPress website with a custom theme; now I'm trying to deploy it to a CentOS server on Vultr. My docker-compose.yml looks like this:
version: '3.3'
services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
  wordpress:
    depends_on:
      - db
    image: wordpress:5.2.2-php7.1-apache
    ports:
      - "80:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress
    working_dir: /var/www/html
    volumes:
      - ./wp-content:/var/www/html/wp-content
      - ./uploads.ini:/usr/local/etc/php/conf.d/uploads.ini
volumes:
  db_data: {}
How should I configure the images?
Should I create three images for wordpress, mysql, and wp-content & uploads.ini referencing them in the docker-compose? Or can I make just one image of everything?
First, it is generally recommended to separate areas of concern by using one service per container. So for wordpress, mysql, etc., it is better to use multiple services.
But whether these services use one image or multiple images depends entirely on your scenario.
In fact, you can put everything in a single image of your own and specify a different command for each docker-compose service. E.g.:
services:
  db:
    image: your_own_solo_image
    command: the command to start db
  wordpress:
    image: your_own_solo_image
    command: the command to start wordpress
    depends_on:
      - db
Disadvantages of using one image:
One container may only need a small base image, e.g. alpine, while another needs ubuntu. With a unified image (say ubuntu), both containers will run ubuntu, which can waste memory, since ubuntu consumes more resources than alpine.
You may encounter library conflicts, e.g. container1 (service1) needs lib.so.1 while container2 (service2) needs lib.so.2, and you may have to manage LD_LIBRARY_PATH yourself. If you separate images, there is no such issue.
Advantages of using one image:
Sometimes you may want to separate services (commands) into different containers, but the two commands depend heavily on the same source code of one project and run in an identical environment; then there is no need to use different images for the different containers (different services in the compose file). One example is a django project: you may start wsgi in one service but also want to start a celery worker in another service, while still using the same django project code.
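The django example might look like this in compose form. This is only a sketch: the image name, the project/module names, and the exact commands are assumptions.

```yaml
services:
  web:
    image: mydjangoproject          # one shared image containing the full source
    command: gunicorn myproject.wsgi
  worker:
    image: mydjangoproject          # same image, different command
    command: celery -A myproject worker
    depends_on:
      - web
```

Both containers run from the same image, so a code change only requires rebuilding once.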
Error:
The uploaded file could not be moved to wp-content/uploads/.../....
Environment:
The WordPress Docker image is created from a base WordPress image, and the files are mapped in and out for development:
version: '3'
services:
  wordpress:
    restart: always
    environment:
      WORDPRESS_DB_NAME: ...
      WORDPRESS_DB_HOST: ...
      WORDPRESS_DB_USER: ...
      WORDPRESS_DB_PASSWORD: ...
    image: wordpress:latest
    ports:
      - 38991:80
    volumes:
      - ./:/var/www/html
We talk to a dev database hosted external to the Docker container.
The image is built and sent up to the server. Then a CMS user attempts to upload an image, and the WordPress build complains that the uploaded file could not be moved to wp-content/uploads/.../.... We don't get this error on localhost.
Could some devops experts kindly point us in the right direction on what needs to be done for this to work on the server?
The permissions are incorrect on the wp-content/uploads directory. I had the same error; in my case the upload folder's permissions and user/group were set wrong, and some folders inside were owned by root. But that's probably because I imported a backup.
To fix the upload you can add the following two commands to your deploy pipeline/script or use docker exec -it <container-name> bash to perform it manually on the container.
Set the correct user/group on the uploads folder: $ chown -R www-data:www-data uploads/*
Set the correct permissions: $ chmod 755 uploads/*
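The same pattern, sketched on a scratch directory so it can run anywhere; this is a slightly stricter variant using find, so files get 644 rather than 755. The paths are made up for illustration; inside the container you would prefix each command with docker exec <container-name> (which normally runs as root, so the chown from above works there too).

```shell
#!/bin/sh
set -e

# scratch stand-in for wp-content/uploads
mkdir -p /tmp/uploads-demo/2019/07
touch /tmp/uploads-demo/2019/07/image.jpg

# directories need the execute bit (755); plain files do not (644)
find /tmp/uploads-demo -type d -exec chmod 755 {} \;
find /tmp/uploads-demo -type f -exec chmod 644 {} \;
```

The chown -R www-data:www-data step is left out of the local sketch because it requires root on the host.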
I have everything set up on my local machine for virtual machine shared folders. I have the following code in my Docker compose file for the WordPress service, but I'm not sure how the volumes work here. Can you please explain?
version: '2'
services:
  database:
    image: mysql:5.6
    volumes:
      - ./mysql-data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: password
    restart: unless-stopped
  wordpress:
    image: wordpress:4.9.6
    ports:
      - 49160:80
    links:
      - database:mysql
    volumes:
      - ./wordpress:/var/www/html/wp-content
    environment:
      WORDPRESS_DB_PASSWORD: password
    restart: unless-stopped
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    links:
      - database:db
    ports:
      - 8080:80
Does the above volumes line mean I need to create a wordpress folder next to the docker-compose.yml file that I am currently running?
Or is it somehow related to my shared folders in the virtual machine?
Basically, volumes are Docker's instrument for retaining data. Docker containers are generally designed to be stateless, but if you need to retain state/information between runs, that's where volumes come in.
You can create an unnamed volume in the following way:
volumes:
  - /var/www/html/wp-content
This will retain your wp-content folder in the internal volumes storage without a particular name.
A second way would be to give it a name, making it a named volume:
volumes:
  - mywp:/var/www/html/wp-content
And the final type, which is also what you are doing, is called a Volume Bind. This basically binds/mounts the content of a folder on your host machine in the container. So if you change a file in either place, it will be saved on the other.
volumes:
  - ./wordpress:/var/www/html/wp-content
In order to use your Volume Bind, you will need to create the folder "wordpress" in the folder where you're running the docker-compose.yaml (usually your root folder). Afterwards, when your installation changes within the container, it will also change on the bind and vice-versa.
EDIT: In your particular case the following should work:
version: '3.2'
services:
  database:
    image: mysql:5.6
    volumes:
      - ./mysql-data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: password
    restart: unless-stopped
  wordpress:
    image: wordpress:4.9.6
    ports:
      - 49160:80
    links:
      - database:mysql
    volumes:
      - type: bind
        source: ./wordpress
        target: /var/www/html/wp-content
    environment:
      WORDPRESS_DB_PASSWORD: password
    restart: unless-stopped
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    links:
      - database:db
    ports:
      - 8080:80
Adding a volume to your docker-compose.yml file will enable you to 'mount' content from your local file system into the running container.
So, about the following line here:
volumes:
  - ./wordpress:/var/www/html/wp-content
This means that whatever's in your local wordpress directory will be placed in the /var/www/html/wp-content directory inside your container. This is useful because it allows you to develop themes and plugins locally and automatically inject them into the running container.
To avoid confusion, I'd recommend renaming wordpress to something else, so it's clear that you're mounting only your WordPress content, and not core files themselves.
I have a similar setup here, in case you need another reference:
https://github.com/alexmacarthur/wp-skateboard
I am new to docker, and trying to run a Wordpress application using this tutum/wordpress image: https://hub.docker.com/r/tutum/wordpress/
I simply follow this step: docker run -d -p 80:80 tutum/wordpress
But when I turn off the computer and run it again, all of the database and application content is gone, and I need to restart from scratch.
How do I persist the database and application?
That image is deprecated, so you should be using the official wordpress image:
version: '3.1'
services:
  wordpress:
    image: wordpress
    ports:
      - 8080:80
    environment:
      WORDPRESS_DB_PASSWORD: example
  mysql:
    image: mysql:5.7
    volumes:
      - ./data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: example
Then use docker-compose up to bring WordPress up. The wordpress image has its code located at /usr/src/wordpress, so if you need to persist the plugins directory you need to map it with a volume, like I did for mysql.
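For example, to also persist plugins and themes, a bind mount on wp-content could be added to the wordpress service. This is a sketch: the host path ./wp-content is an assumption, and plugins/themes live under wp-content once the container has populated /var/www/html.

```yaml
services:
  wordpress:
    image: wordpress
    ports:
      - 8080:80
    environment:
      WORDPRESS_DB_PASSWORD: example
    volumes:
      - ./wp-content:/var/www/html/wp-content
```

With both the database and wp-content mounted from the host, a docker-compose down/up cycle no longer loses content.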
I'm trying to restore my online Wordpress site to my localhost.
Install
This little docker-compose.yml successfully downloads & runs WordPress in a container on my machine:
version: '2'
services:
  db:
    image: mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: lkj
    volumes:
      - ./mysql:/var/lib/mysql
    ports:
      - 60001:3306
  wordpress:
    image: wordpress:latest
    restart: always
    depends_on:
      - db
    links:
      - db
    ports:
      - 60000:80
    environment:
      WORDPRESS_DB_PASSWORD: lkj
      WORDPRESS_DB_HOST: db
    working_dir: /var/www/html
    volumes:
      - ./data:/var/www/html
I can browse & install the default site on 0.0.0.0:60000 and explore the MySql database on 0.0.0.0:60001.
Restore files
Then I overwrite all my WP files in my wp-content folder with files from my site backup. Everything still works. (The wp-config isn't changed).
Restore db
Then I delete the wordpress database and create a new one, and run my online site's backup script. All tables are successfully created.
But now when I browse to 0.0.0.0:60000 I get the message "This site can’t be reached. 0.0.0.0 refused to connect."
Why is it broken?
Why is this? What settings do I need to check in the database? I tried looking in wp_options and changing the home and site_url settings but that didn't help.
Update -------
I ran this on my db: update wordpress.wp_options set option_value='http://0.0.0.0:60000' where option_name in ('siteurl', 'home') (http://www.wpbeginner.com/wp-tutorials/how-to-fix-the-error-establishing-a-database-connection-in-wordpress/ suggested it might help).
I can now log in to wp-admin but the main site error hasn't changed.
Change the wp-config.php file, in which you have to set localhost's:
DB_NAME
DB_USER
DB_PASSWORD
DB_HOST
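In wp-config.php those constants look like the following. The values are hypothetical, chosen to match the compose file above (password lkj, database host db); note that inside the compose network the database host is the db service name, not literally localhost.

```php
define( 'DB_NAME', 'wordpress' );
define( 'DB_USER', 'root' );
define( 'DB_PASSWORD', 'lkj' );
define( 'DB_HOST', 'db' ); // the compose service name, matching WORDPRESS_DB_HOST
```

If instead you connect from the host through the published port, DB_HOST would be 127.0.0.1:60001.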
Run the following query against your MySql database:
update wordpress.wp_options set option_value='http://0.0.0.0:60000' where option_name in ('siteurl', 'home')
Then close your browser, reopen it, and open incognito mode and try browsing to 0.0.0.0:60000. If that fails, reopen and try browsing to 127.0.0.1:60000, or finally localhost:60000.
As well as altering your database's wp_options table, you have to be careful with how your Docker network is set up (especially if you are on a VM already using either a NAT or bridged connection), and remember that most browsers won't clear their cache and retry the site even once it works.