Docker Compose Link Containers for Phpunit with Wordpress, MySQL - wordpress

I would like to use a docker-compose app to run unit tests on a wordpress plugin.
Following (mostly) this tutorial I have created four containers:
my-wpdb:
  image: mariadb
  ports:
    - "8081:3306"
  environment:
    MYSQL_ROOT_PASSWORD: dockerpass
my-wp:
  image: wordpress
  volumes:
    - ./:/var/www/html
  ports:
    - "8080:80"
  links:
    - my-wpdb:mysql
  environment:
    WORDPRESS_DB_PASSWORD: dockerpass
my-wpcli:
  image: tatemz/wp-cli
  volumes_from:
    - my-wp
  links:
    - my-wpdb:mysql
  entrypoint: wp
  command: "--info"
my-phpunit:
  image: phpunit/phpunit
  volumes_from:
    - my-wp
  links:
    - my-wpdb
This tutorial got me as far as creating the phpunit files (xml, tests, bin, .travis), with the exception that I had to install subversion manually:
docker exec wp_my-wp_1 apt-get update
docker exec wp_my-wp_1 apt-get install -y wget git curl zip vim
docker exec wp_my-wp_1 apt-get install -y apache2 subversion libapache2-svn libsvn-perl
And run the last part of bin/install-wp-tests.sh manually in the database container:
docker exec wp_my-wpdb_1 mysqladmin create wordpress_test --user=root --password=dockerpass --host=localhost --protocol=tcp
I can run phpunit: docker-compose run --rm my-wp phpunit --help.
I can specify the config xml file:
docker-compose run --rm my-wp phpunit --configuration /var/www/html/wp-content/plugins/my-plugin/phpunit.xml.dist
However, the test wordpress installation is installed in the my-wp container's /tmp directory: /tmp/wordpress-tests-lib/includes/functions.php
I think I have to link the my-phpunit container's /tmp to the one in my-wp?
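One way that might work (a sketch, not tested against this setup): since my-phpunit already uses volumes_from: my-wp, declaring the test-library path as a volume on my-wp should make Docker share that directory with the phpunit container too, because volumes_from picks up every volume declared on the source container.

```yaml
# Assumption: an anonymous volume for the wp-tests path on my-wp;
# the existing volumes_from in my-phpunit then shares it as well.
my-wp:
  image: wordpress
  volumes:
    - ./:/var/www/html
    - /tmp/wordpress-tests-lib   # shared into my-phpunit via volumes_from
```

Note that an anonymous volume is initialized from whatever the container has at that path, so the test suite install into /tmp/wordpress-tests-lib should land in the shared volume.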

This doesn't answer my question, but as a solution to the problem there is a GitHub repo for a Docker image that provides the wanted features: https://github.com/chriszarate/docker-wordpress, as well as a wrapper you can invoke it through: https://github.com/chriszarate/docker-wordpress-vip
I wonder if, for the initial set-up (noted in the question), it might make more sense to add PHPUnit to the WordPress container rather than making a separate container for it.
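A rough sketch of that combined-container idea. Both the phar URL layout and the PHPUnit major version are assumptions here (pick one compatible with your WP test suite), and it assumes the wordpress base image's package lists are current enough for apt:

```dockerfile
# Hypothetical: extend the official wordpress image with subversion + PHPUnit,
# so the /tmp/wordpress-tests-lib install and the test runner live in one container.
FROM wordpress
RUN apt-get update \
    && apt-get install -y subversion curl \
    && curl -L https://phar.phpunit.de/phpunit-5.7.phar -o /usr/local/bin/phpunit \
    && chmod +x /usr/local/bin/phpunit
```

With this, docker-compose run --rm my-wp phpunit ... would find the test library in the same filesystem, so no cross-container /tmp sharing is needed.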

Related

Docker doesn't copy new images

I have a problem regarding Docker.
When I'm deploying a new version of my app image, the images I have added to the images folder in my wwwroot folder aren't copied.
My Dockerfile looks like this:
FROM microsoft/aspnetcore-build:1.0-projectjson
WORKDIR /app-src
COPY . .
RUN dotnet restore
RUN dotnet publish src/Test -o /app
EXPOSE 5000
WORKDIR /app
ENTRYPOINT ["dotnet", "Test.dll"]
And my docker-compose:
version: '3.8'
services:
  app:
    image: <dockeruser>/<imagename>:<tag>
    links:
      - db
    environment:
      ConnectionStrings__Dataconnection: "Host=db;Username=Username;Password=Password;Database=db"
    ports:
      - "5000:5000"
    volumes:
      - ~/data/images:/app/wwwroot/images
  db:
    image: postgres:9.5
    ports:
      - "31337:5432"
    volumes:
      - ~/data/db:/var/lib/postgresql/data/pgdata
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
      POSTGRES_DB: db
      PGDATA: /var/lib/postgresql/data/pgdata
My current versions are:
Docker version 18.09.7, build 2d0083d, and docker-compose version 1.26.2, build eefe0d31.
The exact same files (except that the compose file version was set to 2 in docker-compose.yml) worked previously on Docker version 17.03.0-ce, build 60ccb22 and docker-compose version 1.9.0, build 2585387.
I store my new images in my repo's wwwroot/images folder and push them to the repo, and Docker Hub automatically builds an image from the new commit. On the server I then pull the new image and run docker-compose down -v followed by docker-compose up -d, but the images are not available in the app afterwards.
Disclaimer: This is a project I have overtaken and I'm aware of some of the very old software versions.
Your images may be in your container image, but since you are doing a bind mount, whatever is in your server's ~/data/images directory will basically override/replace what's in your image when the container is created.
Try removing the volume from the app service, basically remove this:
volumes:
  - ~/data/images:/app/wwwroot/images
The other thing you can try is to manually copy the images to the ~/data/images directory on the server.
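If the images really are baked into the container image at build time, another option (a sketch, relying on documented Docker behavior) is to use a named volume instead of the host bind mount: unlike a bind mount, a named volume that is empty on first mount gets seeded with the image's content at that path.

```yaml
# Named volume instead of a host bind mount. Docker copies /app/wwwroot/images
# from the image into the volume the first time it is mounted while empty.
# Caveat: the volume is NOT re-seeded on later deploys, so files added in newer
# image versions still need a manual copy step (or removing the volume first).
services:
  app:
    image: <dockeruser>/<imagename>:<tag>
    volumes:
      - appimages:/app/wwwroot/images
volumes:
  appimages:
```

This trades host-side visibility of the files for having the image's content appear on first deploy.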

docker-compose not behaving in compliance with dockerfiles

I'm a level 0 Docker user, so bear with me on this one:
I'm trying to create a shared container environment with docker-compose. The docker-compose.yaml looks like this:
# docker-compose.yml
# ubuntu(16.04) + python(3)
version: '3'
services:
  ubuntu:
    image: 434c03a615a2
    build:
      context: .
      dockerfile: dockerfileBase
    volumes:
      - "./data/data_vol/:/app"
    tty: true
  # tensorflow
  tensorflow:
    image: tensorflow/tensorflow
    build:
      context: .
      dockerfile: dockerfileTensorflow
    ports:
      - "8888:8888"
    tty: true
  # rstudio
  rstudio:
    image: rocker/rstudio
    build:
      context: .
      dockerfile: dockerfileRstudio1
    environment:
      - PASSWORD=supersecret
    ports:
      - "8787:8787"
    tty: true
As far as I can tell everything is working, but the Dockerfile with which I build Rstudio doesn't seem to get executed the same way from the .yaml as it does on its own. What I mean is that this Rstudio Dockerfile:
#pull rstudio
FROM rocker/rstudio:3.4.3
LABEL maintainer="Landsense"
#set Env variables
ENV http_proxy=http://##.###.###.##:####
ENV https_proxy=http://##.###.###.##:####
ENV ftp_proxy=http://##.###.###.##:####
ENV TZ="Europe/Rome"
RUN apt-get update && \
    apt-get install -y \
        libgdal-dev \
        libproj-dev \
        libv8-dev \
        ssh && \
    apt-get clean all
RUN Rscript -e "install.packages('raster')"
installs packages when it's built on its own, but fails to do so when run from the docker-compose.yaml. Can someone comment on this type of behavior? RSPKT!
When you have both image and build in a docker-compose service, precedence is given to image. In your scenario, since you have image: rocker/rstudio in your compose file, it will pull the rocker/rstudio:latest image from Docker Hub. But what you want is an image built on top of rocker/rstudio (in your Dockerfile it is used as the base image).
It is not good practice to tag your image with a tag that already exists on Docker Hub (you may face difficulties, as the wrong image gets cached in your local Docker images, which is what you experienced here). First decide whether you really want to name your image at all (otherwise Compose will tag it for you, with a tag that includes your service name so you can easily identify it). If you do want to name it, use a prefix in the image tag, as below. The same goes for the other two services.
image: localhost/rocker/rstudio
build:
  context: .
  dockerfile: dockerfileRstudio1

Docker "Invalid mount path app/symfony" must be absolute

I'm trying to set up Webpack to run with Docker. I'm looking to put it in its own container, build the files, and then have nginx serve the produced code from its container.
My docker-compose.yml file looks like:
nginx:
  build: ./nginx/
  ports:
    - 80:80
  links:
    - php
  volumes_from:
    - app
php:
  build: ./php/
  expose:
    - 9000
  links:
    - mysql
  volumes_from:
    - app
app:
  image: php:7.0-fpm
  volumes:
    - ./app/symfony:/var/www/html
  command: "true"
web:
  build: ./webpack
  volumes_from:
    - app
mysql:
  image: mysql:latest
  volumes_from:
    - data
  environment:
    MYSQL_ROOT_PASSWORD: secret
    MYSQL_DATABASE: project
    MYSQL_USER: project
    MYSQL_PASSWORD: project
data:
  image: mysql:latest
  volumes:
    - /var/lib/mysql
  command: "true"
My code is stored in the app/symfony directory. The Dockerfile for the webpack container is currently:
FROM node:wheezy
WORKDIR /app
RUN apt-get update
RUN apt-get install curl -y
RUN curl -sL https://deb.nodesource.com/setup_6.x | bash - && apt-get install nodejs -y
RUN npm install webpack -g
RUN npm install
CMD webpack --watch --watch-polling
I am getting the error:
ERROR: for web Cannot create container for service web: invalid bind mount spec "a60f89607640b36a468b471378a6b7079dfa5890db994a1228e7809b93b8b709:app/symfony:rw": invalid volume specification: 'a60f89607640b36a468b471378a6b7079dfa5890db994a1228e7809b93b8b709:app/symfony:rw': invalid mount config for type "volume": invalid mount path: 'app/symfony' mount path must be absolute
ERROR: Encountered errors while bringing up the project.
I want webpack to take the code in app/symfony, and build any assets, and then the nginx container will serve those.
I had a similar issue.
my docker-compose.yml looked like this
version: '3.1'
services:
  nginx:
    build:
      context: ./server
    ports:
      - "8080:80"
    volumes:
      - ./fw:opt/www/app/
and got the error: invalid mount path: 'opt/www/app' mount path must be absolute
I resolved it by adding a slash in front of the container path:
volumes:
  - ./fw:/opt/www/app/
SOLUTION:
If you have this docker-compose.yaml in the root of your project, make sure you have a '/' before app.
As per Docker Documentation:
docker run -dp 3000:3000 \
-w /app -v "$(pwd):/app" \
node:12-alpine \
sh -c "yarn install && yarn run dev"
This works perfectly.

Docker permissions development environment using a host mounted volume

I'm using docker-compose to set up a portable development environment for a bunch of symfony2 applications (though nothing I want to do is specific to symfony). I've decided to have the source files on the local machine exposed as a data volume with all the other dependencies in docker. This way developers can edit on the local file-system.
Everything works great, except that after running the app my cache and log files and the files created by composer in the /vendor directory are now owned by root.
I've read about this problem and some possible approaches here:
Changing permissions of added file to a Docker volume
But I can't quite tease out what changes I have to make in my docker-compose.yml file so that when my symfony container starts with docker-compose up, any files that are created have the permissions of the user on the host machine.
I'm posting the file for reference; worker is where php, etc. live:
source:
  image: symfony/worker-dev
  volumes:
    - $PWD:/var/www/app
mongodb:
  image: mongo:2.4
  ports:
    - "27017:27017"
  volumes_from:
    - source
worker:
  image: symfony/worker-dev
  ports:
    - "80:80"
  links:
    - mongodb
  volumes_from:
    - source
  volumes:
    - "tmp/:/var/log/nginx"
One of the solutions is to execute the commands inside your container. I've tried multiple workarounds for the same issue in the past; I find executing the command inside the container the most user-friendly.
Example command: docker-compose run CONTAINER_NAME php bin/console cache:clear. You may use make, ant or any modern tool to keep the commands short.
Example with Makefile:
all: | build run test

build: | docker-compose-build

run: | composer-install clear-cache

############## docker compose
docker-compose-build:
	docker-compose build

############## composer
composer-install:
	docker-compose run app composer install

composer-update:
	docker-compose run app composer update

############## cache
clear-cache:
	docker-compose run app php bin/console cache:clear

docker-set-permissions:
	docker-compose run app chown -R www-data:www-data var/logs
	docker-compose run app chown -R www-data:www-data var/cache

############## test
test:
	docker-compose run app php bin/phpunit
Alternatively, you may introduce a .env file which contains environment variables and then use one of those variables to run the usermod command in the Docker container.
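For example, instead of chown-ing after the fact, the container can be run as the host user so new files get the right owner from the start. A sketch only: the user: key assumes a Compose file version that supports it, and UID/GID must actually be present in the environment (e.g. export UID GID in the shell, or entries in .env), since bash does not export UID by default.

```yaml
# Hypothetical fragment: run the worker as the host user's uid:gid so that
# cache/log/vendor files created in the bind-mounted source tree are owned
# by the host user instead of root.
worker:
  image: symfony/worker-dev
  user: "${UID}:${GID}"
  volumes:
    - $PWD:/var/www/app
```

The trade-off is that the container user may lack permissions the image's default root user relied on (e.g. writing outside the mounted tree).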

Dockerize wordpress

Trying to dockerize WordPress, I figured out this scenario:
2 data volume containers, one for the database (bbdd) and another for wordpress files (wordpress):
sudo docker create -v /var/lib/mysql --name bbdd ubuntu:trusty /bin/true
sudo docker create -v /var/www/html --name wordpress ubuntu:trusty /bin/true
Then I need a container for mysql so I use the official mysql image from docker hub and also the volume /var/lib/mysql from the first data container:
docker run --volumes-from bbdd --name mysql -e MYSQL_ROOT_PASSWORD="xxxx" -d mysql:5.6
Then I need a container for apache/php so I use the official wordpress image from Docker Hub and also the volume /var/www/html from the second data container:
docker run --volumes-from wordpress --name apache --link mysql:mysql -d -p 8080:80 wordpress:4.1.2-apache
What I understand from docker docs is that if I don't remove the data containers, I'll have persistance.
However, if I stop and delete the running containers (apache and mysql) and recreate them with the same commands, the data gets lost:
docker run --volumes-from bbdd --name mysql -e MYSQL_ROOT_PASSWORD="xxxx" -d mysql:5.6
docker run --volumes-from wordpress --name apache --link mysql:mysql -d -p 8080:80 wordpress:4.1.2-apache
However if I create the containers without data containers, it seems to work as I expected:
docker run -v /home/juanda/project/mysql:/var/lib/mysql --name mysql -e MYSQL_ROOT_PASSWORD="juanda" -d mysql:5.6
docker run -v /home/juanda/project/wordpress:/var/www/html --name apache --link mysql:mysql -d -p 8080:80 wordpress:4.1.2-apache
You need to run the data container once to make it persistent:
sudo docker run -v /var/lib/mysql --name bbdd ubuntu:trusty /bin/true
sudo docker run -v /var/www/html --name wordpress ubuntu:trusty /bin/true
This is an old bug of Docker described here. You may be affected if your Docker version is old.
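As an aside: on newer Docker/Compose versions the data-container pattern has been superseded by named volumes, which give the same persistence without --volumes-from. A rough, untested equivalent of the setup above:

```yaml
# Sketch: named volumes replace the bbdd/wordpress data containers.
version: '2'
services:
  mysql:
    image: mysql:5.6
    environment:
      MYSQL_ROOT_PASSWORD: "xxxx"
    volumes:
      - bbdd:/var/lib/mysql
  apache:
    image: wordpress:4.1.2-apache
    links:
      - mysql:mysql
    ports:
      - "8080:80"
    volumes:
      - wordpress:/var/www/html
volumes:
  bbdd:
  wordpress:
```

The named volumes survive docker-compose down and container recreation; only docker-compose down -v (or docker volume rm) deletes them.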
In a very simplified test case this appears to work as advertised and documented in Creating and mounting a Data Volume Container:
prologic#daisy
Thu Apr 30 08:18:45
~
$ docker create -v /test --name data busybox /bin/true
Unable to find image 'busybox:latest' locally
latest: Pulling from busybox
cf2616975b4a: Pull complete
6ce2e90b0bc7: Pull complete
8c2e06607696: Already exists
busybox:latest: The image you are pulling has been verified. Important: image verification is a tech preview feature and should not be relied on to provide security.
Digest: sha256:38a203e1986cf79639cfb9b2e1d6e773de84002feea2d4eb006b52004ee8502d
Status: Downloaded newer image for busybox:latest
6f5fc1d2e33654867cff8ffdb60c5765ced4b7128441ae2c6be24b68fb6454ef
prologic#daisy
Thu Apr 30 08:20:53
~
$ docker run -i -t --rm --volumes-from data crux /bin/bash
bash-4.3# cd /test
bash-4.3# ls
bash-4.3# touch foo
bash-4.3# echo "Hello World" >> foo
bash-4.3# cat foo
Hello World
bash-4.3# exit
prologic#daisy
Thu Apr 30 08:21:20
~
$ docker run -i -t --rm --volumes-from data crux /bin/bash
bash-4.3# cd /test
bash-4.3# ls
foo
bash-4.3# cat foo
Hello World
bash-4.3# exit
Note that I deleted the attached container to make sure the persistent data volume container's data was left intact.
The data volume container and its data would only disappear if you ran the following:
docker rm -v data
Note the -v option, which is what actually removes the volumes.
See (specifically the -v/--volumes option):
$ docker rm -h
Usage: docker rm [OPTIONS] CONTAINER [CONTAINER...]
Remove one or more containers
-f, --force=false     Force the removal of a running container (uses SIGKILL)
    --help=false      Print usage
-l, --link=false      Remove the specified link
-v, --volumes=false   Remove the volumes associated with the container
For reference I am running:
prologic#daisy
Thu Apr 30 08:24:51
~
$ docker version
Client version: 1.6.0
Client API version: 1.18
Go version (client): go1.3.3
Git commit (client): 47496519da
OS/Arch (client): linux/amd64
Server version: 1.6.0
Server API version: 1.18
Go version (server): go1.3.3
Git commit (server): 47496519da
OS/Arch (server): linux/amd64
Update: For a quick example (which you can use in production) of a Dockerized Wordpress setup with full hosting support see: https://gist.github.com/prologic/b5525a50bb4d867d84a2
You can simply use a docker-compose file like this:
version: '3.3'
services:
  # https://hub.docker.com/_/nginx
  # Doesn't play well with wordpress fpm based images
  # nginx:
  #   image: nginx:latest
  #   container_name: "${PROJECT_NAME}_nginx"
  #   ports:
  #     - "${NGINX_HTTP_PORT}:80"
  #   working_dir: /var/www/html
  #   volumes:
  #     - ./docker/etc/nginx:/etc/nginx/conf.d
  #     - ./logs/nginx:/var/log/nginx
  #     - ./app:/var/www/html
  #   environment:
  #     - NGINX_HOST=${NGINX_HOST}
  #   #command: /bin/sh -c "envsubst '$$NGINX_HOST' < /etc/nginx/conf.d/wordpress.conf > /etc/nginx/conf.d/wordpress.conf && nginx -g 'daemon off;'"
  #   links:
  #     - wordpress
  #   restart: always

  # https://hub.docker.com/r/jwilder/nginx-proxy
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: "${PROJECT_NAME}_nginx-proxy"
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro

  # https://hub.docker.com/_/mysql
  mysql:
    image: mysql:${MYSQL_TAG}
    # For MySQL 8.0
    #image: mysql:8
    #command: '--default-authentication-plugin=mysql_native_password'
    container_name: "${PROJECT_NAME}_mysql"
    ports:
      - "${MYSQL_PORT}:3306"
    volumes:
      - ./data/mysql:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
    restart: always

  # https://hub.docker.com/_/wordpress
  wordpress:
    # -fpm or apache, unfortunately fpm doesn't work properly with nginx-proxy
    image: wordpress:${WP_VERSION}-php${PHP_VERSION}-apache
    container_name: "${PROJECT_NAME}_wordpress"
    environment:
      - VIRTUAL_HOST=${WP_HTTP_HOST}
      - WORDPRESS_DB_HOST=mysql:3306
      - WORDPRESS_DB_NAME=${MYSQL_DATABASE}
      - WORDPRESS_DB_USER=${MYSQL_USER}
      - WORDPRESS_DB_PASSWORD=${MYSQL_ROOT_PASSWORD}
    working_dir: /var/www/html
    volumes:
      - ./app:/var/www/html
      #- ./app/wp-content:/var/www/html/wp-content
      - ./docker/etc/php-fpm/custom.ini:/usr/local/etc/php/conf.d/999-custom.ini
    #depends_on:
    #  - mysql
    ports:
      - "${WP_HTTP_PORT}:80"
    expose:
      - ${WP_HTTP_PORT}
    links:
      - mysql
    restart: always

  # https://hub.docker.com/r/phpmyadmin/phpmyadmin
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    container_name: "${PROJECT_NAME}_phpmyadmin"
    ports:
      - "${PMA_PORT}:80"
    expose:
      - ${PMA_PORT}
    environment:
      VIRTUAL_HOST: ${PMA_HTTP_HOST}
      PMA_HOST: mysql
    depends_on:
      - mysql

  # #todo services
  # jwilder/nginx-proxy
  # https / letsencrypt
  # composer
  # mailhog
  # redis
  # phpredisadmin
  # blackfire

networks:
  default:
    external:
      name: nginx-proxy
SOURCE: https://github.com/MagePsycho/wordpress-dockerized
