Execute a plugin-installed WP-CLI command with Docker Compose - wordpress

Intro:
I am trying to run a few WP-CLI commands for maintenance as part of my release process on my production sites. I can successfully execute the following commands against the docker-compose file below.
docker-compose run wp-cli_collinmbarrett-com core update
docker-compose run wp-cli_collinmbarrett-com plugin update --all
docker-compose run wp-cli_collinmbarrett-com theme update --all
docker-compose run wp-cli_collinmbarrett-com db optimize
I have a plugin (WP-Sweep) installed on the site that adds its own WP-CLI command. When I try to run this command, it fails.
docker-compose run wp-cli_collinmbarrett-com sweep --all
/usr/local/bin/docker-entrypoint.sh: exec: line 15: sweep: not found
In a non-dockerized setup, I have verified that the WP-Sweep command for WP-CLI works successfully.
Question:
How can I run plugin-installed WP-CLI commands when running in a containerized environment with Docker Compose? Do I need to somehow make the WP-CLI container aware of the installed plugins other than having a shared volume?
My docker-compose.yml:
version: "3.7"
services:
wp_collinmbarrett-com:
image: wordpress:fpm-alpine
restart: always
networks:
- reverse-proxy
- collinmbarrett-com
depends_on:
- mariadb_collinmbarrett-com
volumes:
- collinmbarrett-com_files:/var/www/html
mariadb_collinmbarrett-com:
image: mariadb:latest
restart: always
networks:
- collinmbarrett-com
volumes:
- collinmbarrett-com_data:/var/lib/mysql
wp-cli_collinmbarrett-com:
image: wordpress:cli
networks:
- collinmbarrett-com
volumes:
- collinmbarrett-com_files:/var/www/html
networks:
reverse-proxy:
external:
name: wp-host_reverse-proxy
collinmbarrett-com:
volumes:
collinmbarrett-com_files:
collinmbarrett-com_data:
Full config on GitHub.

This doesn't directly answer your command issue (I haven't tried it yet), but I wanted to share the configuration I'm using, in the hope that it helps you.
My docker-compose.yml has:
services:
  ...
  # Mysql container
  db:
    ...
  # Wordpress container
  wp:
    ...
  wpcli:
    image: wordpress:cli
    user: "33:33"
    volumes:
      # necessary to write to the filesys
      - ./php-config/phar.ini:/usr/local/etc/php/conf.d/phar.ini
      - wp_app:/var/www/html
      - /tmp/wp-temp:/tmp/wp-temp
    environment:
      HOME: /tmp/wp-temp
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: $WORDPRESS_DB_USER
      WORDPRESS_DB_PASSWORD: $WORDPRESS_DB_PASSWORD
      WORDPRESS_DB_NAME: $WORDPRESS_DB_NAME
    depends_on:
      - db
      - wp
volumes:
  wp_app: {}
...
Please note that, as mentioned in the "Running as an arbitrary user" section at https://hub.docker.com/_/wordpress:
When running WP-CLI via the cli variants of this image, it is important to note that they're based on Alpine, and have a default USER of Alpine's www-data, whose UID is 82 (compared to the Debian-based WordPress variants whose default effective UID is 33), so when running wordpress:cli against an existing Debian-based WordPress install, something like --user 33:33 is likely going to be necessary (possibly also something like -e HOME=/tmp depending on the wp command invoked and whether it tries to use ~/.wp-cli)
You will need to run WP-CLI as the www-data user with user id and group id 33, which is why I defined user: "33:33". Also, the command might need to download temporary content, so I defined a HOME environment variable. Please also note that the HOME directory mapped on your host must be owned by user and group id 33, otherwise WP-CLI can't write to the filesystem.
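For example, preparing the host-side HOME directory might look like this (a sketch, assuming the /tmp/wp-temp path from the compose file above):
# Create the temporary HOME directory on the host and hand it to
# UID/GID 33 so WP-CLI running as 33:33 can write its caches there
mkdir -p /tmp/wp-temp
sudo chown -R 33:33 /tmp/wp-temp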
Also, php.ini in the wordpress:cli image has the setting phar.readonly set to On, so you need to override it. I've added a specific ./php-config/phar.ini file with that override:
phar.readonly = Off
To install a plugin I run, from the folder containing my docker-compose.yml, the following command:
docker-compose run --rm wpcli plugin install wp-mail-smtp --force --allow-root
Please note that --force and --allow-root are optional.
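The same pattern works for other WP-CLI invocations against this container; for example (illustrative commands, not from the original answer):
# List installed plugins
docker-compose run --rm wpcli plugin list
# Update all plugins
docker-compose run --rm wpcli plugin update --all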

Related

Setting up Xdebug in a VSCode instance attached to a docker container

I want to set up a development environment for WordPress which supports IDE debugging through Xdebug and is contained in a docker container.
I'm working from WSL2 / Ubuntu 18.04
I'm using a MySQL official image
I'm using a WordPress image built on top of the WordPress official image with Xdebug installed through PECL
I'm copying configuration files for php.ini and xdebug.ini from a local folder
After launching my container I'm attaching a VSCode instance to the container by using ms Remote Containers extensions
I'm installing the official PHP debug VSCode extension from the Xdebug team in the attached instance of VSCode
I'm automatically generating a launch.json file for PHP
I'm setting a breakpoint and launching the debugger by pressing F5
What happens is that the debugger starts, offering to open a browser at the port I set for Xdebug (9003), but the page doesn't load, the breakpoint isn't reached, and the debugger tab in VSCode shows several errors such as: Failed initializing connection 1: connection closed (on close)
I tried other slightly different methods, such as having the Dockerfile build Xdebug from source, as well as other setups for the ini files from tutorials found on the internet, but the end result is always the same.
If I try to run xdebug_info() through the browser I get normal output; if I try to run the file from the console I get this error:
Xdebug: [Step Debug] Could not connect to debugging client. Tried: host.docker.internal:9003 (through xdebug.client_host/xdebug.client_port) :-(
which is in line with similar errors I get in the docker-compose output.
I tried forwarding the 9003 port with docker-compose, but the behavior didn't change.
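As a sanity check (not something from the original setup), one way to verify that the container can reach the IDE at all is to probe the Xdebug port from inside the container, assuming netcat can be installed in the image:
# Open a shell in the running wordpress container
docker-compose exec wordpress bash
# Inside the container: install netcat and test the IDE's Xdebug port
apt-get update && apt-get -y install netcat-openbsd
nc -zv host.docker.internal 9003
If the connection is refused, the problem is network reachability rather than the Xdebug configuration itself.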
These are my docker and config files:
docker-compose.yml
version: "3.6"
services:
db:
image: mysql:5.7
volumes:
- db_data:/var/lib/mysql
ports:
- "3306:3306"
restart: always
environment:
MYSQL_ROOT_PASSWORD: somewordpress
MYSQL_DATABASE: wordpress
MYSQL_USER: wordpress
MYSQL_PASSWORD: wordpress
wordpress:
depends_on:
- db
build:
context: ./wordpress
dockerfile: Dockerfile
volumes:
- wordpress_data:/var/www/html
ports:
- "8080:80"
# - "9003:9003"
restart: always
environment:
WORDPRESS_DB_HOST: db
WORDPRESS_DB_USER: wordpress
WORDPRESS_DB_PASSWORD: wordpress
WORDPRESS_DB_NAME: wordpress
volumes:
db_data: {}
wordpress_data: {}
wordpress/Dockerfile
FROM wordpress:latest
RUN apt-get update && \
apt-get -y install git
# RUN git config --global url."https://github".insteadOf git://github
RUN pecl install xdebug
COPY ../config/ /
RUN docker-php-ext-enable xdebug
wordpress/config/usr/local/etc/php/php.ini (edited from php.ini-development)
...
[Xdebug]
xdebug.remote_autostart = 1
xdebug.scream = 1
xdebug.remote_enable = 1
xdebug.show_local_vars = 1
xdebug.remote_autostart = 1
xdebug.remote_connect_back = 1
(Since it seemed this file wasn't being picked up, I also tried making a symbolic link from php.ini-development to php.ini from the container console, then adding the variables there.)
wordpress/config/usr/local/etc/php/conf.d/xdebug.ini
zend_extension=xdebug.so
[xdebug]
xdebug.mode=develop,debug,trace,profile,coverage
xdebug.start_with_request = yes
xdebug.discover_client_host = 0
xdebug.remote_connect_back = 1
xdebug.client_port = 9003
xdebug.client_host='host.docker.internal'
xdebug.idekey=VSCODE

Docker container taking 27GB on disk while docker container ls --size only report 500MB

I'm on a Debian VPS on OVH cloud provider, running Docker.
While trying to run an apt update on the instance, I noticed that the 40GB disk was full, which is quite surprising for an instance hosting 2 WordPress blogs.
I tried to run:
sudo du -h /var/lib/docker/containers
One of the containers weighs 27GB!
27G /var/lib/docker/containers/1618df0(...)d6cc61e
However when I run:
docker container ls --size
the same container weighs only 500MB:
1618df0(...) 782c(...) "docker-entrypoint.s…" 10 months ago Up 10 months 80/tcp blog_wordpress_1 2B (virtual 545MB)
The Docker Compose is pretty simple:
wordpress:
  build:
    # call the Dockerfile in ./wordpress
    context: ./wordpress
  restart: always
  environment:
    # Connect WordPress to the database
    WORDPRESS_DB_HOST: db:xxxx
    WORDPRESS_DB_USER: xxxx
    WORDPRESS_DB_PASSWORD: xxxx
    WORDPRESS_DB_NAME: xxxx
  volumes:
    # save the content of WordPress and enable local modifications
    - ./wordpress/data:/var/www/html
  networks:
    - traefik
    - backend
  depends_on:
    - db
    - redis
The Dockerfile:
FROM wordpress
# printf statement mocks answering the prompts from the pecl install
RUN printf "\n \n" | pecl install redis && docker-php-ext-enable redis
RUN /etc/init.d/apache2 restart
Do you know what to investigate to understand this problem?
Thanks
Ok, this was actually the logs... The logs are not counted by:
docker container ls --size
So I just truncated the logs, brutally:
sudo sh -c "truncate -s 0 /var/lib/docker/containers/*/*-json.log"
This solved the problem for a while.
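Incidentally, to see which container logs are taking the space, something like this works with the default json-file logging driver:
# List each container's JSON log file by size, largest last
sudo du -h /var/lib/docker/containers/*/*-json.log | sort -h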
For the long term, I added these lines to the Wordpress container's Docker Compose, then deleted and recreated the containers:
logging:
  options:
    max-size: "10m"
    max-file: "3"
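If you would rather make this the default for every container instead of per service, the equivalent daemon-wide settings (adjust values to taste) go in /etc/docker/daemon.json, followed by a restart of the Docker daemon:
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
Note that, like the per-service option, this only applies to containers created after the change.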

docker-compose wordpress asks for installation whenever I reboot

Everything is fine until I reboot my Ubuntu host.
After reboot, the WordPress page shows the fresh installation page.
There are volumes properly mounted on the host's local directory.
I have only set docker.service to restart the Docker service on reboot.
There must be some mistake I am not aware of.
At least, what shall I do if this thing happens again?
I see all the files mounted on my host show the latest modification time,
so it looks like the data is persistent...
(edited)
I also tried the external volume as @bilal said in the comments, but it didn't make any difference.
So now I am thinking this may be related to what happens while booting up: like, instead of a stop & start, it somehow does a down/up. But I may be wrong.
version: '3.8'
services:
  db:
    container_name: $DB_CONTAINER
    image: mariadb:latest
    restart: always
    volumes:
      - wordpress_db_data:/var/lib/mysql:rw
    environment:
      MYSQL_RANDOM_ROOT_PASSWORD: 1
      MYSQL_DATABASE: $DB_NAME
      MYSQL_USER: $DB_USER
      MYSQL_PASSWORD: $DB_PASSWORD
  wp:
    container_name: $WP_CONTAINER
    image: wordpress:latest
    depends_on:
      - db
      - cp
    restart: always
    volumes:
      - wordpress_wp_data:/var/www/html:rw
    environment:
      WORDPRESS_DB_HOST: $DB_CONTAINER
      WORDPRESS_DB_NAME: $DB_NAME
      WORDPRESS_DB_USER: $DB_USER
      WORDPRESS_DB_PASSWORD: $DB_PASSWORD
      WORDPRESS_TABLE_PREFIX: $WP_TABLE_PREFIX
      VIRTUAL_HOST: $VIRTUAL_HOST
      VIRTUAL_PORT: $VIRTUAL_PORT
      LETSENCRYPT_HOST: $VIRTUAL_HOST
      LETSENCRYPT_EMAIL: $LETSENCRYPT_EMAIL
      #LETSENCRYPT_TEST: 'true'
  cp:
    build: composer
    container_name: ${COMPOSER_CONTAINER}
    volumes:
      - wordpress_wp_data:/app/wp-content:rw
    command: composer install
networks:
  default:
    external:
      name: nginx_proxy
volumes:
  wordpress_wp_data:
    name: wordpress_wp_data
  wordpress_db_data:
    name: wordpress_db_data
Here's my volume list
> docker volume ls
DRIVER VOLUME NAME
local wordpress_db_data
local wordpress_wp_data
Here's my docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket containerd.service
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3
# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
You should use Docker volumes for persistent storage; from what I understand, you mounted a host directory. See the Docker volumes documentation for more information.
So your volumes section should look like this:
volumes:
  - ./wp_data:/var/www/html:rw
  - wp_data:/wp_data # the volume you want to persist
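For context, a minimal sketch of a named volume wired end to end (the names here are illustrative):
services:
  wp:
    image: wordpress:latest
    volumes:
      # named volume: its contents survive docker-compose down/up
      - wp_data:/var/www/html
volumes:
  wp_data:
As long as the named volume itself is never removed (e.g. with docker-compose down -v), the data persists across reboots and container re-creation.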

Why does closing tutum/wordpress automatically delete all the app and data?

I am new to docker, and trying to run a Wordpress application using this tutum/wordpress image: https://hub.docker.com/r/tutum/wordpress/
I simply follow this step: docker run -d -p 80:80 tutum/wordpress
But when I turn off the computer and run it again, all the database + application data is gone, and I need to restart from scratch.
How do I persist the database and application?
That image is deprecated, so you should be using the official wordpress image:
version: '3.1'
services:
  wordpress:
    image: wordpress
    ports:
      - 8080:80
    environment:
      WORDPRESS_DB_PASSWORD: example
  mysql:
    image: mysql:5.7
    volumes:
      - ./data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: example
Then use docker-compose up to bring WordPress up. The wordpress image has its code located at /usr/src/wordpress, so if you need to persist the plugins directory you need to map it with a volume, like I did for mysql.
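For example, a sketch of the wordpress service with a bind mount persisting wp-content (the host path is illustrative):
  wordpress:
    image: wordpress
    ports:
      - 8080:80
    environment:
      WORDPRESS_DB_PASSWORD: example
    volumes:
      # keep themes, plugins, and uploads outside the container
      - ./wp-content:/var/www/html/wp-content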

Symfony app deployment with docker

I come here because I am developing an app with Symfony3, and I have some questions about deploying it.
Currently I use docker-compose:
version: '2'
services:
  nginx:
    build: ./docker/nginx/
    ports:
      - 8081:80
    volumes:
      - .:/home/docker:ro
      - ./docker/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
      - ./docker/nginx/nginx.conf:/etc/nginx/nginx.conf:ro
    networks:
      - default
  php:
    build: ./docker/php/
    volumes:
      - .:/home/docker:rw
      - ./docker/php/php.ini:/usr/local/etc/php/conf.d/custom.ini:ro
    working_dir: /home/docker
    networks:
      - default
    dns_search:
      - php
  db:
    image: mariadb:latest
    ports:
      - 3307:3306
    environment:
      - MYSQL_ROOT_PASSWORD=collectionManager
      - MYSQL_USER=collectionManager
      - MYSQL_PASSWORD=collectionManager
      - MYSQL_DATABASE=collectionManager
    volumes:
      - mariadb_data:/var/lib/mysql
    networks:
      - default
    dns_search:
      - db
  search:
    build: ./docker/search/
    ports:
      - 9200:9200
      - 9300:9300
    volumes:
      - elasticsearch_data:/usr/share/elasticsearch/data
    networks:
      - default
    dns_search:
      - search
volumes:
  mariadb_data:
    driver: local
  elasticsearch_data:
    driver: local
networks:
  default:
nginx is clear, php is PHP-FPM with some extensions and Composer, db is MariaDB, and search is Elasticsearch with some plugins.
Before using Docker, I deployed with Magallanes or Deployer whenever I wanted to deploy the webapp.
With Docker I can use the docker-compose file and recreate the images and containers on the server; I can also save my containers as images, or in a tar archive, and load them on the server. That's fine for nginx and php-fpm, but what about elasticsearch and the db? I need to keep their data for future updates of the code. Also, when I deploy the code I need to execute a Doctrine migration and maybe some other commands, and Deployer does this perfectly, along with some other interesting things. So how do I deploy the code with Docker? Can I use both: Deployer for the code and Docker for the services?
Thanks a lot for your help.
First of all, please try using user-defined networks; they have additional features over legacy linking, such as an embedded DNS server, meaning you can reach other containers on the same network by name from your applications. Containers on one user-defined network are isolated from containers on another user-defined network.
To create a user defined network:
docker network create --driver bridge <networkname>
Example docker-compose service definition using a user-defined network:
search:
  restart: unless-stopped
  build: ./docker/search/
  ports:
    - "9200:9200"
    - "9300:9300"
  networks:
    - <networkname>
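Note that for docker-compose to attach to a network created outside the compose file, the network also has to be declared as external at the top level of the file; a minimal sketch:
networks:
  <networkname>:
    external: true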
Second: I noticed you didn't use data volumes for your DB and Elasticsearch.
You need to mount volumes at certain points to keep your persistent data.
Third: When you export your containers, the export won't contain mounted volumes. You need to back up the volume data and migrate it manually.
To back up volume data:
docker run --rm --volumes-from db -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata
The above command creates a container from the ubuntu image, mounts the volumes from the db container, mounts the current directory into the container as /backup, and uses tar to create a backup of the container's /dbdata directory (consider changing this to your actual db data directory) inside /backup, which is mounted from your Docker host. After the operation completes, the transient ubuntu container is removed (the --rm switch).
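Adapted to the mariadb service above, a sketch might look like this (/var/lib/mysql is where the mariadb image stores its data; the actual container name depends on your compose project):
docker run --rm --volumes-from db -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /var/lib/mysql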
To restore:
Copy the tar archive to the remote host and create your container with an empty mounted volume, then extract the tar archive into that volume using the following command:
docker run --rm --volumes-from dbstore2 -v $(pwd):/backup ubuntu bash -c "cd /dbdata && tar xvf /backup/backup.tar --strip 1"
