I am writing a WordPress plugin and want to create a PHPUnit test environment for it. For that I've created a Docker container from a php:7.2-apache base image and installed PHPUnit on the image via its phar archive. After that I set some environment variables and used the following bash script, which is similar to the one created by "wp scaffold plugin-tests", as the entrypoint.
# INSTALL WP-CORE
wp core download --path="${WPPATH}" --allow-root
waitforit -t 60 database:3306 -- wp config create --dbuser="${WPDBUSER}" --dbpass="${WPDBPASS}" --dbname="${WPDBNAME}" --dbhost="${WPDBHOST}" --path="${WPPATH}" --allow-root
wp db create --path="${WPPATH}" --allow-root
wp core install --url="${WPURL}" --title="SpitzeDev" --admin_user="${ADMINUSER}" --admin_password="${ADMINPASS}" --admin_email="${ADMINMAIL}" --path="${WPPATH}" --allow-root
chown www-data:www-data "/var/www/html" -R
# Install WP-Testsuite for PHPUnit
if [ ! -d $WP_TESTS_DIR ]; then
mkdir -p $WP_TESTS_DIR
svn co --quiet https://develop.svn.wordpress.org/tags/$(wp core version --allow-root --path=${WPPATH})/tests/phpunit/includes/ $WP_TESTS_DIR/includes
svn co --quiet https://develop.svn.wordpress.org/tags/$(wp core version --allow-root --path=${WPPATH})/tests/phpunit/data/ $WP_TESTS_DIR/data
fi
# Configure WP-Testsuite for PHPUnit
if [ ! -f wp-tests-config.php ]; then
download https://develop.svn.wordpress.org/${WP_TESTS_TAG}/wp-tests-config-sample.php "$WP_TESTS_DIR"/wp-tests-config.php
# strip any trailing forward slashes
WP_CORE_DIR=$(echo ${WPPATH} | sed "s:/\+$::")
sed -i "s:dirname( __FILE__ ) . '/src/':'${WP_CORE_DIR}/':" "$WP_TESTS_DIR"/wp-tests-config.php
sed -i "s/youremptytestdbnamehere/$WPDBNAME/" "$WP_TESTS_DIR"/wp-tests-config.php
sed -i "s/yourusernamehere/$WPDBUSER/" "$WP_TESTS_DIR"/wp-tests-config.php
sed -i "s/yourpasswordhere/$WPDBPASS/" "$WP_TESTS_DIR"/wp-tests-config.php
sed -i "s|localhost|${WPDBHOST}|" "$WP_TESTS_DIR"/wp-tests-config.php
fi
phpunit
The script works fine until phpunit is called, but then the following exception is thrown:
"Fatal error: Class PHPUnit_Util_Test may not inherit from final class (PHPUnit\Util\Test) in /tmp/wordpress-tests-lib/includes/phpunit6-compat.php on line 18"
I don't really understand how this error can occur. When I run my plugin's PHPUnit tests on my local machine, they work just fine.
I had this problem too, when trying to run tests against the WordPress core test framework with PHPUnit 7.0.3 and PHP 7.1.6.
The error comes from the test suite's phpunit6-compat.php shim: it declares PHPUnit_Util_Test as a subclass of PHPUnit\Util\Test, and that class is final in newer PHPUnit releases, so PHP refuses to load the shim. I solved it by switching to PHPUnit 6.1.0 and PHP 7.0.20.
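If you want to keep PHP 7.2 in your image, it should be enough to pin the PHPUnit phar to a release the WordPress test suite still supports instead of pulling the latest one. A minimal sketch for the image build, assuming the phar comes from phar.phpunit.de (the 6.5 series is an assumption; use whichever release your copy of the test suite accepts):
curl -L -o /usr/local/bin/phpunit https://phar.phpunit.de/phpunit-6.5.phar
chmod +x /usr/local/bin/phpunit
phpunit --version   # should now report a 6.x release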
Related
I'm building a website using docker-compose and 3 Docker containers (mariadb/nginx/wordpress), and I want to install WordPress from a script, so my wordpress Dockerfile looks like this:
FROM debian:buster
RUN apt-get update -y; \
    apt-get install -y curl mariadb-client \
    php php7.3 php7.3-fpm php7.3-mysql php-common php7.3-cli \
    php7.3-common php7.3-json php7.3-opcache php7.3-readline \
    php-curl php-gd php-intl php-mbstring php-soap php-xml php-xmlrpc \
    php-zip
RUN mkdir -p /var/www/html; \
    cd /var/www/html
ADD conf/php-fpm.conf /etc/php/7.3/fpm/pool.d/www.conf
ADD conf/init_wp.sh /tmp
RUN mkdir -p /var/run /run/php
RUN curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar
RUN chmod +x wp-cli.phar
RUN mv wp-cli.phar /usr/local/bin/wp
WORKDIR /var/www/html
RUN wp core download --allow-root
RUN chown -R www-data:www-data /var/www/html
CMD [ "sh", "/tmp/init_wp.sh" ]
and my entrypoint script looks like this
FILE=/var/www/html/.exist
if [ ! -f "$FILE" ]
then
echo "Setting up wordpress"
rm -rf /var/www/html/wp-config.php
wp config create --dbname=$DB_NAME --dbuser=$WP_USER --dbpass=$WP_PASSWORD --dbhost="mariadb" --path="/var/www/html/" --allow-root --skip-check
wp core install --url="localhost" --title="inception" --admin_user=$ADMIN_USER --admin_password=$ADMIN_PASSWORD --admin_email=$ADMIN_EMAIL --path="/var/www/html/" --allow-root
wp user create testuser testuser@student.42.fr --role=author --user_pass="abc123" --allow-root
touch /var/www/html/.exist
fi
echo "Wordpress setup done"
exec php-fpm7.3 -F -R
but when I load the website for the first time I still get redirected to the WordPress installation page. Shouldn't wp config create and wp core install take care of this for me?
What did I do wrong?
mariadb works fine after the WordPress installation, so the problem doesn't seem to come from there. Am I missing a step in the install script?
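One way to narrow this down, purely as a debugging sketch (the container name is a placeholder), is to check from inside the WordPress container whether the install really succeeded and which URL it thinks it is installed under:
docker exec -it <wordpress-container> wp core is-installed --path=/var/www/html --allow-root && echo "core is installed"
docker exec -it <wordpress-container> wp option get siteurl --path=/var/www/html --allow-root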
I need to install Nginx on a target machine that has no internet connection. How can I install Nginx with all its dependencies in offline mode? Thanks in advance for your answers.
I recently went through this procedure, and this is what worked for me on CentOS 7:
You need an online Linux server to download the dependencies. You can use a virtual machine or anything else.
On your online server, create a .sh file and copy the script below into it (I named it download_dependencies.sh).
#!/bin/bash
# This script is used to fetch external packages that are not available in standard Linux distribution
# Example: ./fetch-external-dependencies ubuntu18.04
# Script will create nms-dependencies-ubuntu18.04.tar.gz in local directory which can be copied
# into target machine and packages inside can be installed manually
set -eo pipefail
# current dir
PACKAGE_PATH="."
mkdir -p $PACKAGE_PATH
declare -A CLICKHOUSE_REPO
CLICKHOUSE_REPO['ubuntu18.04']="https://repo.clickhouse.tech/deb/lts/main"
CLICKHOUSE_REPO['ubuntu20.04']="https://repo.clickhouse.tech/deb/lts/main"
CLICKHOUSE_REPO['centos7']="https://repo.clickhouse.tech/rpm/lts/x86_64"
CLICKHOUSE_REPO['centos8']="https://repo.clickhouse.tech/rpm/lts/x86_64"
CLICKHOUSE_REPO['rhel7']="https://repo.clickhouse.tech/rpm/lts/x86_64"
CLICKHOUSE_REPO['rhel8']="https://repo.clickhouse.tech/rpm/lts/x86_64"
declare -A NGINX_REPO
NGINX_REPO['ubuntu18.04']="https://nginx.org/packages/mainline/ubuntu/pool/nginx/n/nginx/"
NGINX_REPO['ubuntu20.04']="https://nginx.org/packages/mainline/ubuntu/pool/nginx/n/nginx/"
NGINX_REPO['centos7']="https://nginx.org/packages/mainline/centos/7/x86_64/RPMS/"
NGINX_REPO['centos8']="https://nginx.org/packages/mainline/centos/8/x86_64/RPMS/"
NGINX_REPO['rhel7']="https://nginx.org/packages/mainline/rhel/7/x86_64/RPMS/"
NGINX_REPO['rhel8']="https://nginx.org/packages/mainline/rhel/8/x86_64/RPMS/"
CLICKHOUSE_KEY="https://repo.clickhouse.com/CLICKHOUSE-KEY.GPG"
NGINX_KEY="https://nginx.org/keys/nginx_signing.key"
declare -A CLICKHOUSE_PACKAGES
# for Clickhouse package names are static between distributions
# we use ubuntu/centos entries as placeholders
CLICKHOUSE_PACKAGES['ubuntu']="
clickhouse-server_21.3.10.1_all.deb
clickhouse-common-static_21.3.10.1_amd64.deb"
CLICKHOUSE_PACKAGES['centos']="
clickhouse-server-21.3.10.1-2.noarch.rpm
clickhouse-common-static-21.3.10.1-2.x86_64.rpm"
CLICKHOUSE_PACKAGES['ubuntu18.04']=${CLICKHOUSE_PACKAGES['ubuntu']}
CLICKHOUSE_PACKAGES['ubuntu20.04']=${CLICKHOUSE_PACKAGES['ubuntu']}
CLICKHOUSE_PACKAGES['centos7']=${CLICKHOUSE_PACKAGES['centos']}
CLICKHOUSE_PACKAGES['centos8']=${CLICKHOUSE_PACKAGES['centos']}
CLICKHOUSE_PACKAGES['rhel7']=${CLICKHOUSE_PACKAGES['centos']}
CLICKHOUSE_PACKAGES['rhel8']=${CLICKHOUSE_PACKAGES['centos']}
declare -A NGINX_PACKAGES
NGINX_PACKAGES['ubuntu18.04']="nginx_1.21.3-1~bionic_amd64.deb"
NGINX_PACKAGES['ubuntu20.04']="nginx_1.21.2-1~focal_amd64.deb"
NGINX_PACKAGES['centos7']="nginx-1.21.4-1.el7.ngx.x86_64.rpm"
NGINX_PACKAGES['centos8']="nginx-1.21.4-1.el8.ngx.x86_64.rpm"
NGINX_PACKAGES['rhel7']="nginx-1.21.4-1.el7.ngx.x86_64.rpm"
NGINX_PACKAGES['rhel8']="nginx-1.21.4-1.el8.ngx.x86_64.rpm"
download_packages() {
local target_distribution=$1
if [ -z $target_distribution ]; then
echo "$0 - no target distribution specified"
exit 1
fi
mkdir -p "${PACKAGE_PATH}/${target_distribution}"
# just in case delete all files in target dir
rm -f "${PACKAGE_PATH}/${target_distribution}"/*
readarray -t clickhouse_files <<<"${CLICKHOUSE_PACKAGES[${target_distribution}]}"
readarray -t nginx_files <<<"${NGINX_PACKAGES[${target_distribution}]}"
echo "Downloading Clickhouse signing keys"
curl -fs ${CLICKHOUSE_KEY} --output "${PACKAGE_PATH}/${target_distribution}/clickhouse-key.gpg"
echo "Downloading Nginx signing keys"
curl -fs ${NGINX_KEY} --output "${PACKAGE_PATH}/${target_distribution}/nginx-key.gpg"
for package_file in "${clickhouse_files[#]}"; do
if [ -z $package_file ]; then
continue
fi
file_url="${CLICKHOUSE_REPO[$target_distribution]}/$package_file"
save_file="${PACKAGE_PATH}/${target_distribution}/$package_file"
echo "Fetching $file_url"
curl -fs $file_url --output $save_file
done
for package_file in "${nginx_files[#]}"; do
if [ -z $package_file ]; then
continue
fi
file_url="${NGINX_REPO[$target_distribution]}/$package_file"
save_file="${PACKAGE_PATH}/${target_distribution}/$package_file"
echo "Fetching $file_url"
curl -fs $file_url --output $save_file
done
bundle_file="${PACKAGE_PATH}/nms-dependencies-${target_distribution}.tar.gz"
tar -zcf $bundle_file -C "${PACKAGE_PATH}/${target_distribution}" .
echo "Bundle file saved as $bundle_file"
}
target_distribution=$1
if [ -z $target_distribution ]; then
echo "Usage: $0 target_distribution"
echo "Supported target distributions: ${!CLICKHOUSE_REPO[#]}"
exit 1
fi
# check if target distribution is supported
if [ -z ${CLICKHOUSE_REPO[$target_distribution]} ]; then
echo "Target distribution is not supported."
echo "Supported distributions: ${!CLICKHOUSE_REPO[#]}"
exit 1
fi
download_packages "${target_distribution}"
Then, in the same directory that contains download_dependencies.sh, run the command below:
./download_dependencies.sh <your linux version>
In my case, I ran the command below (run the script without an argument to see the supported options):
./download_dependencies.sh centos7
It should start downloading, and when it has finished you should see nms-dependencies-rhel7.tar.gz in your directory.
Copy that file (.tar.gz) to your offline target.
Now, on your target machine, go to the directory where you copied the file and run the commands below:
tar -zxvf nms-dependencies-rhel7.tar.gz
sudo yum install *.rpm
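If the offline target is one of the Ubuntu releases supported by the script instead, the same kind of bundle can be installed with dpkg rather than yum; a sketch, with the bundle name depending on the distribution you passed to the script:
tar -zxvf nms-dependencies-ubuntu18.04.tar.gz
sudo dpkg -i *.deb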
After installation, you can start Nginx (and the ClickHouse server, if you installed it) using systemctl:
sudo systemctl start clickhouse-server
sudo systemctl start nginx
Your Nginx service should now be running!
You can download the tar file on another system and copy it over.
Did you try this link?
https://gist.github.com/taufiqibrahim/d7f697de6bb8b93ca348a5b94d6adbfc
I have a WordPress docker-compose setup, which contains 3 services:
1 - php, apache
2 - mysql
3 - phpmyadmin
What I want to do is install the WordPress core and plugins at build time,
and the reason is obvious: I don't want to go through all the steps and install plugins and ... all over again every time I restart my containers. So I need a connection to the database, but it seems that at build time I can't access my mysql container.
I read somewhere that I need to specify a network at the build stage, but I couldn't make it work.
and here is my docker-compose file:
version: '3.8'
volumes:
mhndev_systems_mysql_data:
mhndev_systems_wp_uploads:
networks:
mhndev_network:
services:
## --------------------------------------------
## | 1: Wordpress
## --------------------------------------------
mhndev_systems_wp:
build:
context: .
dockerfile: docker/Dockerfile
args:
WP_VERSION: ${WP_VERSION}
MYSQL_HOST: ${MYSQL_HOST}
MYSQL_PORT: ${MYSQL_PORT}
MYSQL_DATABASE_NAME: ${MYSQL_DATABASE_NAME}
MYSQL_USERNAME: ${MYSQL_USERNAME}
MYSQL_PASSWORD: ${MYSQL_PASSWORD}
SITE_URL: ${SITE_URL}
DB_TABLE_PREFIX: ${DB_TABLE_PREFIX}
ENV: ${ENV}
UID: ${UID}
GID: ${GID}
SITE_TITLE: ${SITE_TITLE}
SITE_ADMIN_USERNAME: ${SITE_ADMIN_USERNAME}
SITE_ADMIN_PASSWORD: ${SITE_ADMIN_PASSWORD}
SITE_ADMIN_EMAIL: ${SITE_ADMIN_EMAIL}
network: "mhndev_network"
ports:
- 8191:80
env_file:
- .env
volumes:
- ./themes/dt-the7-child:/var/www/html/wp-content/themes/dt-the7-child
- ./plugins/teamcity:/var/www/html/wp-content/plugins/teamcity
- mhndev_systems_wp_uploads:/var/www/html/wp-content/uploads
depends_on:
- mhndev_systems_mysql
networks:
- mhndev_network
## --------------------------------------------
## | 2: Mysql
## --------------------------------------------
mhndev_systems_mysql:
image: mysql:5.7.21
environment:
MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
MYSQL_DATABASE: ${MYSQL_DATABASE_NAME}
MYSQL_USER: ${MYSQL_USERNAME}
MYSQL_PASSWORD: ${MYSQL_PASSWORD}
volumes:
- mhndev_systems_mysql_data:/var/lib/mysql
networks:
- mhndev_network
## --------------------------------------------
## | 3: PhpMyAdmin
## --------------------------------------------
mhndev_systems_phpmyadmin:
image: phpmyadmin/phpmyadmin:5.0.2
depends_on:
- mhndev_systems_mysql
ports:
- "7191:80"
environment:
PMA_HOST: mhndev_systems_mysql
networks:
- mhndev_network
here is my Dockerfile:
FROM php:7.4-apache
ARG WP_VERSION=5.5.1
RUN apt-get update && apt-get install -y \
sendmail \
libpng-dev \
libjpeg-dev \
libfreetype6-dev \
netcat \
gnupg \
libzip-dev \
zip \
&& docker-php-ext-configure gd --with-freetype --with-jpeg \
&& docker-php-ext-install gd pdo_mysql zip \
&& docker-php-ext-install mysqli && docker-php-ext-enable mysqli
ADD ./docker/apache.conf /etc/apache2/sites-enabled/000-default.conf
RUN \
printf "\nServerName localhost" >> /etc/apache2/apache2.conf &&\
a2enmod rewrite expires
### install wp cli
RUN curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar &&\
chmod +x wp-cli.phar &&\
mv wp-cli.phar /usr/local/bin/wp
WORKDIR /var/www/html
RUN wp core download --allow-root
### copy plugins and themes and php config files to container
COPY ["./plugins/*.zip", "/docker/plugins/"]
COPY ["./themes/*.zip", "/docker/themes/"]
COPY ./themes/dt-the7-child /var/www/html/wp-content/themes/dt-the7-child
COPY ./plugins/teamcity /var/www/html/wp-content/plugins/teamcity
COPY ./docker/php.ini /usr/local/etc/php/php.ini
COPY ./docker/wp-config.php /var/www/html/wp-config.php
COPY ["./plugins/plugins_*.txt", "/docker/plugins/"]
COPY ["./uploads/", "/docker/uploads/"]
COPY ["./docker/commands/*.sh", "/docker/bin/"]
RUN chmod a+x /docker/bin/*.sh
RUN /bin/bash -c "source /docker/bin/setup-theme-plugins.sh"
RUN chown -R www-data:www-data /var/www/html/
CMD apachectl -D FOREGROUND
and here is a bash file (setup-theme-plugins.sh), which is responsible for installing the WordPress core and plugins.
#!/bin/bash
echo '-------------------whoami--------------------'
echo $(whoami)
echo '---------------------------------------------'
printf "\033[0;32m > Waiting for mysql ...\x1b[0m \n"
until nc -z -v -w30 "$MYSQL_HOST" "$MYSQL_PORT"
do
echo "Waiting for database connection..."
# wait 1 second before checking again
sleep 1
done
printf "\033[0;32m >Mysql is ready ...\x1b[0m \n"
echo '---------------------------------------------'
printf "\033[0;32m > Copy wordpress uploads if not exists (usually first time ) \x1b[0m \n"
FILE=/var/www/html/wp-content/uploads/2020/01/dell.jpg
if [ -f "$FILE" ]; then
echo "$FILE exists, so no need to copy uploads"
else
echo "$FILE does not exist, copying ..."
cp -r /docker/uploads/* /var/www/html/wp-content/uploads
fi
echo '---------------------------------------------'
sed -i s/__DB_NAME__/"${MYSQL_DATABASE_NAME}"/g /var/www/html/wp-config.php
sed -i s/__DB_USER__/"${MYSQL_USERNAME}"/g /var/www/html/wp-config.php
sed -i s/__DB_PASSWORD__/"${MYSQL_PASSWORD}"/g /var/www/html/wp-config.php
sed -i s/__DB_HOST__/"${MYSQL_HOST}"/g /var/www/html/wp-config.php
sed -i s/__SITE_URL__/"${SITE_URL}"/g /var/www/html/wp-config.php
if [[ -n "${DB_TABLE_PREFIX}" ]]; then
sed -i s/__DB_TABLE_PREFIX__/"${DB_TABLE_PREFIX}"/g /var/www/html/wp-config.php
else
sed -i s/__DB_TABLE_PREFIX__/"${DB_TABLE_PREFIX}"/g /var/www/html/wp-config.php
fi
### set WP_DEBUG, SCRIPT_DEBUG based on DEV environment variable
if [ "${ENV}" = "dev" ]; then
sed -i s/__WP_DEBUG__/true/g /var/www/html/wp-config.php
sed -i s/__SCRIPT_DEBUG__/true/g /var/www/html/wp-config.php
else
sed -i s/__WP_DEBUG__/false/g /var/www/html/wp-config.php
sed -i s/__SCRIPT_DEBUG__/false/g /var/www/html/wp-config.php
fi
wp option update home "${SITE_URL}" --allow-root
wp option update siteurl "${SITE_URL}" --allow-root
### set WP_DEBUG_LOG to php://stdout to always output logs to stdout so be available for docker logs
old_string='__WP_DEBUG_LOG__'
new_string='php://stdout'
sed -i "s%$old_string%$new_string%g" /var/www/html/wp-config.php
if [[ -n "${SITE_ADMIN_USERNAME}" ]]; then
DASHBOARD_USER_NAME="${SITE_ADMIN_USERNAME}"
else
DASHBOARD_USER_NAME="admin"
fi
if [[ -n "${UID}" ]]; then
usermod -u "${UID}" www-data
groupmod -g "${GID}" www-data
fi
chown -R www-data:www-data /var/www/html/
### install wordpress
printf "\033[0;32m > Checking if wordpress core installed, if not Installing it ...\x1b[0m \n"
wp core is-installed --allow-root
retVal=$?
if [ "$retVal" == "1" ];then
printf "\033[0;32m > Trying to Install wordpress ...\x1b[0m \n"
printf "\033[0;32m > Command to execute is : wp core install --url="${SITE_URL}" --title="${SITE_TITLE}" --admin_user="${DASHBOARD_USER_NAME}" --admin_password="${SITE_ADMIN_PASSWORD}" --admin_email="${SITE_ADMIN_EMAIL}" --allow-root ...\x1b[0m \n"
wp core install --url="${SITE_URL}" --title="${SITE_TITLE}" --admin_user="${DASHBOARD_USER_NAME}" --admin_password="${SITE_ADMIN_PASSWORD}" --admin_email="${SITE_ADMIN_EMAIL}" --allow-root
fi
echo '---------------------------------------------'
### install The7 theme from zip file
# shellcheck disable=SC2059
printf "\033[0;32m > Checking if theme: $FILE installed, if not Installing it ...\x1b[0m \n"
wp theme is-installed The7 --allow-root
is_theme_installed=$?
if [[ is_theme_installed -eq 1 || ! -d /var/www/html/wp-content/themes/dt-the7 ]]; then
rm -rf /var/www/html/wp-content/themes/dt-the7
wp theme install /docker/themes/dt-the7.zip --force --allow-root;
fi
echo '---------------------------------------------'
### install plugins from plugins_dev.txt or plugins_prod.txt based on environment
# shellcheck disable=SC2162
while read line; do
IFS='=' read -r -a array <<< "$line"
printf "\033[0;32m > Checking if plugin:%s is installed else Installing %s:%s ...\x1b[0m \n" "${array[0]}" "${array[0]}" "${array[1]}"
# if wp plugin is-installed "${array[0]}" --allow-root; then
wp plugin install "${array[0]}" --version="${array[1]}" --activate --force --allow-root
# fi
echo '---------------------------------------------'
done < /docker/plugins/plugins_"${ENV}".txt
### install plugins from zip file
for FILE in /docker/plugins/*.zip;
do
# shellcheck disable=SC2059
printf "\033[0;32m > Installing plugin $FILE ...\x1b[0m \n"
wp plugin install "$FILE" --force --allow-root;
echo '---------------------------------------------'
done
printf "\033[0;32m > Checking if hello plugin exist and if so uninstall it ...\x1b[0m \n"
#if ! wp plugin is-installed hello --allow-root; then
wp plugin uninstall hello --allow-root
#fi
echo '---------------------------------------------'
printf "\033[0;32m > Checking if akismet plugin exist and if so uninstall it ...\x1b[0m \n"
#if ! wp plugin is-installed akismet --allow-root; then
wp plugin uninstall akismet --allow-root
#fi
echo '---------------------------------------------'
printf "\033[0;32m > activating the7 child theme ...\x1b[0m \n"
wp theme activate dt-the7-child --allow-root
echo '---------------------------------------------'
printf "\033[0;32m > Uninstalling initial themes ...\x1b[0m \n"
#if ! wp theme is-installed twentynineteen --allow-root; then
wp theme delete twentynineteen --allow-root
#fi
echo '---------------------------------------------'
#if ! wp theme is-installed twentyseventeen --allow-root; then
wp theme delete twentyseventeen --allow-root
#fi
echo '---------------------------------------------'
#if ! wp theme is-installed twentytwenty --allow-root; then
wp theme delete twentytwenty --allow-root
#fi
echo '---------------------------------------------'
As you can see, in my bash file I'm connecting to the mysql container.
How can I achieve this?
There is a simple misunderstanding between build time and runtime: all the containers are available at runtime, not at build time. So there is no way to access your MySQL container at build time.
My suggestion is to remove all the steps that need MySQL from your Dockerfile and move them into your entrypoint. Then set a boolean ENV in your Dockerfile and check that value at the beginning of your entrypoint, so the entrypoint only runs your setup commands if that ENV is set to true; whenever you need the setup to run (i.e. setting up your WP & MySQL), simply pass that value at build time:
docker build --build-arg var_name=${VARIABLE_NAME} .
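A minimal sketch of that pattern, reusing the setup script from the question; the RUN_WP_SETUP name and the entrypoint.sh file are assumptions, not something that already exists in the project. In the Dockerfile, replace the RUN line that calls setup-theme-plugins.sh and the CMD with:
ARG RUN_WP_SETUP=false
ENV RUN_WP_SETUP=${RUN_WP_SETUP}
ENTRYPOINT ["/docker/bin/entrypoint.sh"]
and add an entrypoint.sh next to the other scripts in ./docker/commands/, so the existing COPY and chmod lines pick it up:
#!/bin/bash
# run the database-dependent setup only when explicitly requested
if [ "${RUN_WP_SETUP}" = "true" ]; then
    /docker/bin/setup-theme-plugins.sh
fi
# hand control over to Apache as the container's main process
exec apachectl -D FOREGROUND
Building with docker build --build-arg RUN_WP_SETUP=true . (or adding RUN_WP_SETUP to the args: block in the compose file) bakes the flag into the image, and every container started from it will run the setup script before starting Apache.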
You cannot connect to the database from the Dockerfile at all.
Part of this is the basic Docker model of how images work. A Docker image contains only a filesystem and some metadata describing how to start a container from it; it is something you could copy to a different system and run there. If you built an image and tried to update a database as part of it, and then ran the same image on a different system, it wouldn't have the database setup; similarly, you can delete and recreate the local database without rebuilding the image, and you won't have any database content.
Mechanically, the docker build step (or its Compose equivalent) runs in a restricted environment. Most notably here, it is not attached to any Compose network, so there's no network setup for it to resolve hostnames like mhndev_systems_mysql. (Technically it is on the default bridge network, as distinct from the Compose default network or any networks: you specify for the built container.)
In a typical application you'd want to separate "the application code" from "the database setup". You have to run the database-setup part (often, "migrations") when the application starts up, or separately from starting the application; but you can't do it at build time.
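If you prefer to keep the setup out of the image entirely and run it separately from starting the application, the existing script can also be invoked as a one-off command once the stack is up; a sketch using the service name from the compose file above:
docker-compose up -d
docker-compose exec mhndev_systems_wp /docker/bin/setup-theme-plugins.sh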
My guess is that you are unable to connect because the database service has started but is not yet ready. The Docker documentation on depends_on states this (and I have run into the same problem):
depends_on does not wait for db and redis to be “ready” before starting web - only until they have been started. If you need to wait for a service to be ready, see Controlling startup order for more on this problem and strategies for solving it.
(emphasis mine)
In my case I solved it in a very ugly way (a 10-second sleep in my python script), but if that is not an option for you, maybe the official strategy documentation found here might help.
[edit]
What do you use as the host variable name? When using a network in Docker Compose, containers should be able to reach each other by their service names (i.e. mhndev_systems_mysql in your case). Can you try that?
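One common alternative to a hard-coded sleep is a healthcheck on the database service plus a depends_on condition. Note that the condition form was dropped from the version 3 compose file format and only came back with the Compose specification, so verify it against your Compose version; a sketch:
services:
  mhndev_systems_mysql:
    image: mysql:5.7.21
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 5s
      timeout: 3s
      retries: 10
  mhndev_systems_wp:
    depends_on:
      mhndev_systems_mysql:
        condition: service_healthy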
When using WP-CLI in Docker, I need to execute it as root.
I need to add the --allow-root flag directly in .bashrc, and I am trying to figure out why it doesn't work.
FROM webdevops/php-dev:7.3
# configure postfix to use mailhog
RUN postconf -e "relayhost = mail:1025"
# install wp cli
RUN curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar && \
chmod +x wp-cli.phar && \
mv wp-cli.phar /usr/local/bin/wp && \
echo 'wp() {' >> ~/.bashrc && \
echo '/usr/local/bin/wp "$@" --allow-root' >> ~/.bashrc && \
echo '}' >> ~/.bashrc
WORKDIR /var/www/html/
my .bashrc
# ~/.bashrc: executed by bash(1) for non-login shells.
# Note: PS1 and umask are already set in /etc/profile. You should not
# need this unless you want different defaults for root.
# PS1='${debian_chroot:+($debian_chroot)}\h:\w\$ '
# umask 022
# You may uncomment the following lines if you want `ls' to be colorized:
# export LS_OPTIONS='--color=auto'
# eval "`dircolors`"
# alias ls='ls $LS_OPTIONS'
# alias ll='ls $LS_OPTIONS -l'
# alias l='ls $LS_OPTIONS -lA'
#
# Some more alias to avoid making mistakes:
# alias rm='rm -i'
# alias cp='cp -i'
# alias mv='mv -i'
wp() {
/usr/local/bin/wp "$@" --allow-root
}
when I try to execute any wp command I get this error:
Error: YIKES! It looks like you're running this as root. You probably meant to run this as the user that your WordPress installation exists under.
If you REALLY mean to run this as root, we won't stop you, but just bear in mind that any code on this site will then have full control of your server, making it quite DANGEROUS.
If you'd like to continue as root, please run this again, adding this flag: --allow-root
If you'd like to run it as the user that this site is under, you can run the following to become the respective user:
sudo -u USER -i -- wp <command>
It looks like the command doesn't take into account what I put in .bashrc.
Do you have any suggestions on how to fix this problem?
You are struggling with the classic conundrum: What goes in bashrc and what in bash_profile and which one is loaded when?
The extremely short version is:
$HOME/.bash_profile: read by login shells. Should always source $HOME/.bashrc. Should only contain environment variables that can be passed on to other programs.
$HOME/.bashrc: read only by interactive shells that are not login shells (e.g. opening a terminal in X). Should only contain aliases and functions.
How does this help the OP?
The OP executes the following line:
$ sudo -u USER -i -- wp <command>
The -i flag of the sudo command initiates a login shell:
-i, --login: Run the shell specified by the target user's password database entry as a login shell. This means that login-specific resource files such as .profile, .bash_profile or .login will be read by the shell. If a command is specified, it is passed to the shell for execution via the shell's -c option. If no command is specified, an interactive shell is executed.
So the OP initiates a login shell, which only reads .bash_profile. The way to solve the problem is to source the .bashrc file from there, as is strongly recommended:
# .bash_profile
if [ -n "$BASH" ] && [ -r ~/.bashrc ]; then
. ~/.bashrc
fi
more info on dot-files:
http://mywiki.wooledge.org/DotFiles
man bash
What's the difference between .bashrc, .bash_profile, and .environment?
About .bash_profile, .bashrc, and where should alias be written in?
related posts:
Run nvm (bash function) via sudo
Can I run a command loaded from .bashrc with sudo?
I recently had the same problem. In my Dockerfile, I was running:
RUN wp core download && wp plugin install woocommerce --activate --allow-root
Looking at the error message, I realized that --allow-root only applies to the command it is attached to, so the first wp command in the chain (which had no flag at all) was still rejected. I added the flag to the first command as well, and it worked.
RUN wp core download --allow-root && wp plugin install woocommerce --activate --allow-root
The problem is that ~/.bashrc is not being sourced. It will only be sourced in an interactive Bash shell.
You might get better results doing it via executables. Something like this:
# install wp cli
RUN curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar && \
chmod +x wp-cli.phar && \
mv wp-cli.phar /usr/local/bin/wp-cli.phar && \
echo '#!/bin/sh' >> /usr/local/bin/wp && \
echo 'wp-cli.phar "$@" --allow-root' >> /usr/local/bin/wp && \
chmod +x /usr/local/bin/wp
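With that wrapper in place, a plain wp invocation inside the container picks up the flag automatically, e.g. (the container name is a placeholder):
docker exec -it <container-name> wp plugin list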
I've just started with the Api-Platform framework, and while executing:
php bin/schema generate-types src/ app/config/schema.yml
I get this:
C:\wamp\www\sf2-api>php bin/schema generate-types src/ app/config/schema.yml
dir=$(d=${0%[/\\]*}; cd "$d"; cd "../vendor/api-platform/schema-generator/bin" && pwd)
# See if we are running in Cygwin by checking for cygpath program
if command -v 'cygpath' >/dev/null 2>&1; then
# Cygwin paths start with /cygdrive/ which will break windows PHP,
# so we need to translate the dir path to windows format. However
# we could be using cygwin PHP which does not require this, so we
# test if the path to PHP starts with /cygdrive/ rather than /usr/bin
if [[ $(which php) == /cygdrive/* ]]; then
dir=$(cygpath -m $dir);
fi
fi
dir=$(echo $dir | sed 's/ /\ /g')
"${dir}/schema" "$#"
I am using Symfony 2.7.8 on Windows 7.
I had the same issue on Ubuntu 14.04.
Finally, I replaced the bin directory with the one from blog-api.
Updated:
bin-api-platform is the one generated by api-platform.
bin-blog-api is the one I copied from blog-api. This works fine.
Use:
php vendor/api-platform/schema-generator/bin/schema generate-types src/app/config/schema.yml
instead of:
php bin/schema generate-types src/ app/config/schema.yml
The correct syntax is:
php vendor/api-platform/schema-generator/bin/schema generate-types src/ app/config/schema.yml