WordPress with docker-compose.yml on CentOS

This is my first time trying Vultr with CentOS.
I was able to successfully develop a local WordPress website with a custom theme, and now I'm trying to deploy it to a CentOS server on Vultr. My docker-compose.yml looks like this:
version: '3.3'
services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
  wordpress:
    depends_on:
      - db
    image: wordpress:5.2.2-php7.1-apache
    ports:
      - "80:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress
    working_dir: /var/www/html
    volumes:
      - ./wp-content:/var/www/html/wp-content
      - ./uploads.ini:/usr/local/etc/php/conf.d/uploads.ini
volumes:
  db_data: {}
How should I configure the images?
Should I create three images (wordpress, mysql, and one for wp-content and uploads.ini) and reference them in the docker-compose file, or can I build just one image with everything in it?

First, it is generally recommended to separate areas of concern by using one service per container. So for wordpress, mysql, etc., it is better to use multiple services.
Whether those services use one image or multiple images depends entirely on your scenario.
In fact, you can put everything into a single image of your own and specify a different command for each docker-compose service. E.g.:
services:
  db:
    image: your_own_solo_image
    command: the command to start db
  wordpress:
    image: your_own_solo_image
    command: the command to start wordpress
    depends_on:
      - db
Disadvantages of using one image:
One container may only need a small base image, e.g. alpine, while another needs ubuntu. With a single unified image (say ubuntu), both containers end up running ubuntu, which wastes some memory because ubuntu consumes more resources than alpine.
You may run into library conflicts, e.g. container1 (service1) needs lib.so.1 while container2 (service2) needs lib.so.2, and you may have to manage LD_LIBRARY_PATH yourself. With separate images there is no such issue.
Advantages of using one image:
Sometimes you want to split services (commands) across different containers, but the commands depend on the same source code of one project and run in an identical environment; in that case there is no need to use different images for the different containers (different services in the compose file). One example is a Django project: you may start WSGI in one service but also want to start a Celery worker in another service while still using the same Django code (see the sketch below).
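A rough sketch of that Django case (the image name, commands and service names here are made up for illustration, not taken from any real project; a Celery worker would also need a broker service such as redis, omitted here for brevity):
services:
  web:
    image: myproject:latest          # hypothetical image containing the Django code
    command: gunicorn myproject.wsgi:application --bind 0.0.0.0:8000
    ports:
      - "8000:8000"
  worker:
    image: myproject:latest          # same image, different command
    command: celery -A myproject worker --loglevel=info
    depends_on:
      - web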

Related

Wordpress on Docker (Synology NAS) with different networks

First, sorry for my bad English; I hope you can understand me.
I have the following task.
I want to run one (or maybe more) WordPress installation on my Synology NAS. For that, I installed Docker and run Portainer to create the pieces.
My main idea is to create the following:
Create separate containers with independent WordPress installations
Create a mysql container hosting the different WordPress databases, one for each WordPress app
for the WordPress containers there is a dedicated network called "app_network" (bridge, attachable)
for the mysql container there is another network called "db_backend" (bridge, attachable)
So far, so good. At the moment I have created one WP container, the mysql container and the two networks. Everything seems to be fine.
the wordpress container is created with docker-compose (stack in Portainer)
the mysql container is created with docker-compose (stack in Portainer)
I created a database for WordPress in the mysql container manually; local login on the container works perfectly.
the mysql container is in the network db_backend
the wordpress container is in the network app_network and is additionally connected to the db_backend network (the assigned IPs look correct)
But... when I open the WordPress page I get "Error establishing a database connection".
My yaml looks like this:
#mysql.yaml
version: '3.9'
services:
  db:
    image: mysql:latest
    restart: on-failure:3
    volumes:
      - /volume1/docker/databases:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: mysuperstrongpassword
    container_name: db_mysql
    networks:
      - db_backend
networks:
  db_backend:
    driver: bridge
    external: true
#wordpress.yaml
version: '3.9'
services:
  #frontend
  wp_app:
    image: wordpress:latest
    restart: on-failure:3
    ports:
      - '49200:80'
      - '49201:443'
    volumes:
      - /volume1/docker/wp_app/wp_t:/var/www/html
    environment:
      WORDPRESS_DB_HOST: db_mysql:3306  # wrong entry? tried hostname, ip, service
      WORDPRESS_DB_NAME: mydb
      WORDPRESS_DB_USER: myuser
      WORDPRESS_DB_PASSWORD: mypassword
    networks:
      - db_backend
      - app_network
networks:
  #172.168.29.1/24
  db_backend:
    driver: bridge
    external: true
  #172.168.30.1/24
  app_network:
    driver: bridge
    external: true
After everything I was able to read about Docker, Docker networking and Docker Compose, I thought my solution should work; everything deploys without errors, except for the database connection error :( ...
Is the way I connect the containers across the networks correct?
Should I edit wp-config.php with the connection details and add it to the WordPress container?
Can anyone help?
Replace WORDPRESS_DB_HOST: db_mysql:3306 with WORDPRESS_DB_HOST: db. Compose adds the service name as a DNS alias on every network the service joins, so the WordPress container can reach the database as db over the shared db_backend network.
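With that change, the wordpress service's environment would look roughly like this (a sketch based on the compose files above):
services:
  wp_app:
    image: wordpress:latest
    environment:
      WORDPRESS_DB_HOST: db:3306   # compose service name of the mysql container
      WORDPRESS_DB_NAME: mydb
      WORDPRESS_DB_USER: myuser
      WORDPRESS_DB_PASSWORD: mypassword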

Is it possible to have dockerized wordpress with same performance as a wordpress server install?

I am experimenting with a wordpress docker install.
I used the basic install with mounted volumes.
docker-compose.yml
version: '3.3'
services:
  db:
    image: mysql:5.7
    volumes:
      - ./mysql:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    ports:
      - "8001:80"
    restart: always
    volumes:
      - ./html:/var/www/html
      - ./uploads.ini:/usr/local/etc/php/conf.d/uploads.ini
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress
But after adding several plugins and a theme, wp-admin gets terribly slow. Approx 5-7 seconds TTFB. Using elementor becomes basically impossible.
Throwing hardware (it's an AWS EC2) at the server did NOT change the performance.
Is it possible to have wordpress in a performant docker setup?
First: You should probably use the mysql:latest tag. 5.7 is an older release by now; latest is currently 8.0.23 (community server).
Second: You should pin the WordPress version and PHP version and keep them updated along the way. I use image: wordpress:5.6-php7.4-apache, which gives me PHP 7.4 for better performance.
Once you make changes to your docker-compose.yml, run docker-compose up --build to make sure you get clean versions of everything.
Your docker-compose file version could be upgraded from 3.3 to 3.8 (this has nothing to do with performance, though).
Make sure to upgrade your Docker installation to the latest release (19.03+ at the moment).
Compare your docker-compose to mine, which is running great with plugins:
version: "3.8"
services:
db:
image: mysql:latest
command: "--default-authentication-plugin=mysql_native_password"
volumes:
- db_data:/var/lib/mysql
restart: always
environment:
MYSQL_ROOT_PASSWORD: somewordpress
MYSQL_DATABASE: wordpress
MYSQL_USER: wordpress
MYSQL_PASSWORD: wordpress
wordpress:
depends_on:
- db
image: wordpress:5.6-php7.4-apache
ports:
- "8000:80"
restart: always
environment:
WORDPRESS_DB_HOST: db:3306
WORDPRESS_DB_USER: wordpress
WORDPRESS_DB_PASSWORD: wordpress
WORDPRESS_DB_NAME: wordpress
WORDPRESS_CONFIG_EXTRA: |
define('WP_DEBUG', true);
error_reporting(E_ALL);
ini_set('display_errors', 1);
working_dir: /var/www/html
volumes:
- ./wp-content:/var/www/html/wp-content
- ./uploads.ini:/usr/local/etc/php/conf.d/uploads.ini
volumes:
db_data: {}
Note that I use working_dir so that the working directory of the container is set correctly. By adding wp-content to your volumes you mount wp-content into your container so that it persists. Plugins live in wp-content, and this may improve your performance situation.
There is almost no overhead to running Docker. The biggest difference is at the networking layer, but it can be reduced with host networking. The cost is not as big as you might think.
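For example, to rule the NAT layer out you could run the WordPress service with host networking (a sketch, Linux only; published ports are ignored in this mode because the container uses the host's ports directly):
services:
  wordpress:
    image: wordpress:5.6-php7.4-apache
    network_mode: host   # bypasses the Docker bridge/NAT and userland proxy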
What is Docker
Simplifying a lot, Docker is nothing more than process and resource isolation. All processes run on the host machine without any virtualization. Linux kernel features are responsible for isolating resources and processes. Examples:
cgroups and the memory resource controller in the kernel: https://www.kernel.org/doc/Documentation/cgroup-v1/memory.txt
Docker uses these to limit CPU and memory usage for containers. More info here: https://docs.docker.com/config/containers/resource_constraints/
Linux namespaces: https://man7.org/linux/man-pages/man7/namespaces.7.html
This is another important kernel feature used by Docker.
iptables: iptables is commonly used to implement Docker's networking layer. This is probably the biggest bottleneck for Docker.
IBM investigation in 2014
IBM investigated this topic a few years ago: https://dominoweb.draco.res.ibm.com/reports/rc25482.pdf
The report includes network latency figures for Docker NAT networking.
Another graph in the report shows latency for Redis: with network=host, Docker is almost as fast as the native host.
Debugging
We cannot say what is wrong with your deployment, because the picture is big and you have only shown a small part of it.
However, you can start debugging it yourself:
Create another EC2 instance.
Install Prometheus on the new instance.
On the WordPress instance, install the node exporter. This exports metrics for Prometheus.
Configure Prometheus to collect metrics from your WordPress instance (a minimal scrape config sketch follows this list).
Optionally install Grafana on the Prometheus server.
Wait a day to collect data and analyze where you are hitting the ceiling.
To install Prometheus, use the Prometheus Getting Started docs.
To install node_exporter and set up the Prometheus scraper, use these docs: https://prometheus.io/docs/guides/node-exporter/
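A minimal prometheus.yml for scraping the node exporter could look like this (the target address is a placeholder; node_exporter listens on port 9100 by default):
# prometheus.yml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'wordpress-host'
    static_configs:
      - targets: ['<wordpress-instance-ip>:9100']   # placeholder address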
Summary
So the answer to your question is: it depends on how your application is deployed to Docker. A few important things that can affect your performance:
CPU limit for the container (a Compose sketch for setting limits follows this list)
Memory limit for the container
Networking type
Missing capabilities
Number of applications deployed on the same host
Other limits like max open files, virtual memory limit, number of processes inside the container, etc.
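A sketch of setting such limits explicitly in Compose (the values are examples for illustration, not recommendations; older docker-compose versions may need the --compatibility flag for the deploy section to take effect outside swarm):
services:
  wordpress:
    image: wordpress:5.6-php7.4-apache
    deploy:
      resources:
        limits:
          cpus: "1.0"    # example value
          memory: 1g     # example value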

How to store the wordpress wp-content folder in local machine when running through docker compose

I have everything set up on my local machine for virtual machine shared folders. I have the following code in my Docker Compose file for the WordPress service, but I am not sure how the volumes work here. Can you please explain?
version: '2'
services:
  database:
    image: mysql:5.6
    volumes:
      - ./mysql-data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: password
    restart: unless-stopped
  wordpress:
    image: wordpress:4.9.6
    ports:
      - 49160:80
    links:
      - database:mysql
    volumes:
      - ./wordpress:/var/www/html/wp-content
    environment:
      WORDPRESS_DB_PASSWORD: password
    restart: unless-stopped
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    links:
      - database:db
    ports:
      - 8080:80
Does the above volumes line mean that I need to create a wordpress folder next to the docker-compose.yml file that I am currently running?
Or is it somehow related to my shared folders in the virtual machine?
Basically, volumes are the mechanism Docker uses to retain data. Docker containers are generally designed to be stateless, but if you need to retain state/information between runs, that's where volumes come in.
You can create an unnamed volume in the following way:
volumes:
  - /var/www/html/wp-content
This will retain your wp-content folder in the internal volumes storage without a particular name.
A second way would be to give it a name, making it a named volume:
volumes:
  - mywp:/var/www/html/wp-content
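Note that a named volume like mywp also has to be declared under a top-level volumes: key, the same way db_data is declared in the compose files above:
volumes:
  mywp: {}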
And the final type, which is also what you are doing, is called a bind mount. This basically binds/mounts the content of a folder on your host machine into the container, so if you change a file in either place, the change shows up in the other.
volumes:
  - ./wordpress:/var/www/html/wp-content
In order to use your bind mount, you will need to create the folder "wordpress" in the directory where you run docker-compose (next to the docker-compose.yaml). Afterwards, when your installation changes within the container, it will also change on the host and vice versa.
EDIT: In your particular case the following should work:
version: '3.2'
services:
  database:
    image: mysql:5.6
    volumes:
      - ./mysql-data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: password
    restart: unless-stopped
  wordpress:
    image: wordpress:4.9.6
    ports:
      - 49160:80
    links:
      - database:mysql
    volumes:
      - type: bind
        source: ./wordpress
        target: /var/www/html/wp-content
    environment:
      WORDPRESS_DB_PASSWORD: password
    restart: unless-stopped
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    links:
      - database:db
    ports:
      - 8080:80
Adding a volume to your docker-compose.yml file will enable you to 'mount' content from your local file system into the running container.
So, about the following line here:
volumes:
  - ./wordpress:/var/www/html/wp-content
This means that whatever's in your local wordpress directory will be placed in the /var/www/html/wp-content directory inside your container. This is useful because it allows you to develop themes and plugins locally and automatically inject them into the running container.
To avoid confusion, I'd recommend renaming wordpress to something else, so it's clear that you're mounting only your WordPress content, and not core files themselves.
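For example, renaming the host folder to wp-content (just one naming suggestion; the container-side target path stays the same) would make the line read:
volumes:
  - ./wp-content:/var/www/html/wp-content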
I have a similar setup here, in case you need another reference:
https://github.com/alexmacarthur/wp-skateboard

How should I set up a development environment with Docker for WordPress themes?

Little disclaimer before I start: I am a Docker newbie.
My question is mostly stated as above, with a little bit more to my requirements:
I want to have a "full" development experience. That is, I want to be able to use VS Code, WebStorm, etc. to do development with full-featured guesser and code intelligence. This is what my current setup is lacking.
I want to have a docker-compose.yml that I can commit into my source repository and not worry about "how it will run" on multiple platforms. I think what I have below accomplishes that, but I am very open to criticism.
docker-compose.yml:
version: '3.1'
services:
  wp:
    image: wordpress
    restart: always
    ports:
      - 48080:80
    links:
      - db:mysql
    volumes:
      - themes:/var/www/html/wp-content/themes
    environment:
      WORDPRESS_DB_PASSWORD: example
  db:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: example
volumes:
  themes:
Any tips for moving forward?
The theme volume did not work on my machine. If I run docker-compose up I get the following error:
This worked for me on Ubuntu 18.04:
version: '3.1'
services:
  wp:
    image: wordpress
    restart: always
    ports:
      - 48080:80
    links:
      - db:mysql
    volumes:
      - ./themes:/var/www/html/wp-content/themes
    environment:
      WORDPRESS_DB_PASSWORD: example
  db:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: example
Everything else looks quite good; the only improvement for now would be hosting your WordPress code on your local file system (a sketch of that follows below).
That would give you the possibility to debug WordPress easily, and you would have full control over the stack.
Downside: user permissions can be a problem with Docker. Normally every process inside the container runs as root, so if WordPress writes a file, it has root permissions even on your local file system.
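A sketch of what that local-file-system setup could look like, based on the compose file above (the ./wordpress host path is just an example; it bind-mounts the whole WordPress tree instead of only the themes folder):
services:
  wp:
    image: wordpress
    ports:
      - 48080:80
    volumes:
      - ./wordpress:/var/www/html   # whole WordPress installation lives on the host
    environment:
      WORDPRESS_DB_PASSWORD: example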

Why does closing tutum/wordpress automatically delete all the app and data?

I am new to docker, and trying to run a Wordpress application using this tutum/wordpress image: https://hub.docker.com/r/tutum/wordpress/
I simply follow this step: docker run -d -p 80:80 tutum/wordpress
But when I turn the computer off and run it again, the database and the application are all gone, and I need to start from scratch.
How do I persist the database and application?
That image is deprecated, so you should use the official wordpress image instead:
version: '3.1'
services:
  wordpress:
    image: wordpress
    ports:
      - 8080:80
    environment:
      WORDPRESS_DB_PASSWORD: example
  mysql:
    image: mysql:5.7
    volumes:
      - ./data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: example
Then use docker-compose up to bring WordPress up. The wordpress image ships its code in /usr/src/wordpress, so if you need to persist the plugins directory you need to map it with a volume, like I did for mysql (a sketch follows below).
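A sketch of what that mapping could look like (the image's entrypoint copies the code from /usr/src/wordpress into /var/www/html on first start, so the bind mount targets the running copy; the ./plugins host path is just an example):
services:
  wordpress:
    image: wordpress
    volumes:
      - ./plugins:/var/www/html/wp-content/plugins   # persists installed plugins on the host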
