docker-compose not behaving in compliance with dockerfiles

I'm a level 0 Docker user, so bear with me on this one:
I'm trying to create a shared container environment with docker-compose. The docker-compose.yaml looks like this:
# docker-compose.yml
# ubuntu(16.04) + python(3)
version: '3'
services:
  ubuntu:
    image: 434c03a615a2
    build:
      context: .
      dockerfile: dockerfileBase
    volumes:
      - "./data/data_vol/:/app"
    tty: true

  # tensorflow
  tensorflow:
    image: tensorflow/tensorflow
    build:
      context: .
      dockerfile: dockerfileTensorflow
    ports:
      - "8888:8888"
    tty: true

  # rstudio
  rstudio:
    image: rocker/rstudio
    build:
      context: .
      dockerfile: dockerfileRstudio1
    environment:
      - PASSWORD=supersecret
    ports:
      - "8787:8787"
    tty: true
As far as I can tell everything is working, but the Dockerfile I use to build the RStudio image doesn't seem to be executed the same way through the .yaml as it is on its own. What I mean is that this RStudio Dockerfile:
# pull rstudio
FROM rocker/rstudio:3.4.3
LABEL maintainer="Landsense"

# set Env variables
ENV http_proxy=http://##.###.###.##:####
ENV https_proxy=http://##.###.###.##:####
ENV ftp_proxy=http://##.###.###.##:####
ENV TZ="Europe/Rome"

RUN apt-get update && \
    apt-get install -y \
        libgdal-dev \
        libproj-dev \
        libv8-dev \
        ssh && \
    apt-get clean all

RUN Rscript -e "install.packages('raster')"
installs the packages when it's built on its own, but fails to do so when run via the docker-compose.yaml. Can someone comment on this behaviour? RSPKT!

When you have both image and build in a docker-compose service, precedence is given to image: since you have image: rocker/rstudio in your compose file, it will pull the rocker/rstudio:latest image from Docker Hub instead of building from your Dockerfile. What you want is an image built on top of rocker/rstudio (in your Dockerfile it is used as the base image).
It is not good practice to tag your image with a name that already exists on Docker Hub (you may run into trouble when the wrong image ends up cached locally, as happened here). First decide whether you really need to name the image at all (otherwise Compose tags it for you, using a name that includes the service name so you can identify it easily). If you do want to name it, use a prefix in the image tag, as below; the same goes for the other two services (a fuller sketch follows the snippet).
image: localhost/rocker/rstudio
build:
  context: .
  dockerfile: dockerfileRstudio1
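Applied to the whole file, it could look something like this sketch (the localhost/... names are only illustrative placeholders, and keys such as volumes, ports, environment and tty are omitted for brevity). After renaming, docker-compose build or docker-compose up --build will build the services from your Dockerfiles rather than pulling the Docker Hub images:
version: '3'
services:
  ubuntu:
    image: localhost/ubuntu-base          # placeholder name instead of the raw image ID 434c03a615a2
    build:
      context: .
      dockerfile: dockerfileBase
  tensorflow:
    image: localhost/tensorflow           # no longer clashes with tensorflow/tensorflow on Docker Hub
    build:
      context: .
      dockerfile: dockerfileTensorflow
  rstudio:
    image: localhost/rocker/rstudio       # no longer clashes with rocker/rstudio on Docker Hub
    build:
      context: .
      dockerfile: dockerfileRstudio1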

Related

Is it possible to run UI tests in Codeception in the background?

I'm new to Codeception and wonder: is it possible to run UI tests in the background, without opening a test web browser every time?
I suspect that I should change something in acceptance.suite.yml, but I'm not sure what.
I would appreciate any help.
You can use a headless browser. This executes the whole test flow almost exactly as it would in regular UI mode, but without opening a visible browser window.
You can learn more about this here and in similar resources.
You can use Docker to run WebDriver and Selenium in containers.
Create two files in the root directory. The Dockerfile builds a container with PHP and Composer to run your Codeception tests in.
Dockerfile
FROM php:8.0-cli-alpine
RUN apk -U upgrade --no-cache
# install composer
COPY --from=composer:2.2 /usr/bin/composer /usr/bin/composer
ENV COMPOSER_ALLOW_SUPERUSER=1
ENV PATH="${PATH}:/root/.composer/vendor/bin"
WORKDIR /app
COPY composer.json composer.lock ./
RUN composer install --no-dev --no-scripts --no-progress \
    && composer clear-cache
COPY . /app
RUN composer dump-autoload --optimize --classmap-authoritative \
    && composer clear-cache
The second file is docker-compose.yml, which uses a preconfigured Selenium image and puts your PHP Codeception tests on the same network, so that the containers can talk to each other over the needed ports (4444 and 7900).
docker-compose.yml
---
version: '3.4'
services:
  php:
    build: .
    depends_on:
      - selenium
    volumes:
      - ./:/usr/src/app:rw,cached
  selenium:
    image: selenium/standalone-chrome:4
    shm_size: 2gb
    container_name: selenium
    ports:
      - "4444:4444"
      - "7900:7900"
    environment:
      - VNC_NO_PASSWORD=1
      - SCREEN_WIDTH=1920
      - SCREEN_HEIGHT=1080
If you set up Docker and your Codeception project correctly, you can run these containers in the background:
docker-compose up -d
and execute your tests:
vendor/bin/codecept run
If you want to see what the test is doing, you can visit http://localhost:7900 to connect to the browser inside the container and watch what the test is executing.
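For completeness, the Codeception suite then has to point WebDriver at the selenium service rather than a local browser. A minimal sketch of acceptance.suite.yml under that assumption (the url value is a placeholder for wherever your app is reachable inside the Docker network):
modules:
  enabled:
    - WebDriver
  config:
    WebDriver:
      url: 'http://myapp.local'   # placeholder: the address of the app under test
      host: selenium              # the Selenium service name from the docker-compose.yml above
      port: 4444
      browser: chrome
      window_size: 1920x1080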
If you are using the WebDriver module to run your tests with Codeception, there is an option to configure your browser in headless mode.
It won't open any windows and the tests will run in the background without bothering you.
Here is an example with Chrome:
modules:
  enabled:
    - WebDriver
  config:
    WebDriver:
      url: 'http://myapp.local'
      browser: chrome
      window_size: 1920x1080
      capabilities:
        chromeOptions:
          args: ["--headless", "--no-sandbox"]

Docker doesn't copy new images

I have a problem regarding Docker.
When I deploy a new version of my app image, the images I have added to the images folder in my wwwroot folder aren't copied.
My Dockerfile looks like this:
FROM microsoft/aspnetcore-build:1.0-projectjson
WORKDIR /app-src
COPY . .
RUN dotnet restore
RUN dotnet publish src/Test -o /app
EXPOSE 5000
WORKDIR /app
ENTRYPOINT ["dotnet", "Test.dll"]
And my docker-compose:
version: '3.8'
services:
  app:
    image: <dockeruser>/<imagename>:<tag>
    links:
      - db
    environment:
      ConnectionStrings__Dataconnection: "Host=db;Username=Username;Password=Password;Database=db"
    ports:
      - "5000:5000"
    volumes:
      - ~/data/images:/app/wwwroot/images
  db:
    image: postgres:9.5
    ports:
      - "31337:5432"
    volumes:
      - ~/data/db:/var/lib/postgresql/data/pgdata
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
      POSTGRES_DB: db
      PGDATA: /var/lib/postgresql/data/pgdata
My current version of Docker is 18.09.7, build 2d0083d, and my docker-compose is version 1.26.2, build eefe0d31.
The exact same files (except that the compose file version was set to 2 in docker-compose.yml) worked previously with Docker 17.03.0-ce, build 60ccb22 and docker-compose 1.9.0, build 2585387.
I store my new images in my repo's wwwroot/images folder and push them to the repo, and Docker Hub automatically builds an image from the new commit. On the server I then pull the new Docker image and run docker-compose down -v followed by docker-compose up -d, but the images are not available in the app afterwards.
Disclaimer: This is a project I have overtaken and I'm aware of some of the very old software versions.
Your images may well be inside the container image, but since you are doing a bind mount, whatever is in the server's ~/data/images directory will effectively override/replace what's in the image when the container is created.
Try removing the volume from the app service, basically remove this:
volumes:
  - ~/data/images:/app/wwwroot/images
The other thing you can try is to manually copy the images to the “~/data/images” directory on the server.
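One way to do that copy (a sketch; the image name below is the placeholder from the compose file, and /app/wwwroot/images is the container path your compose file bind-mounts):
docker create --name tmp-app <dockeruser>/<imagename>:<tag>   # temporary container, never started
docker cp tmp-app:/app/wwwroot/images/. ~/data/images/        # copy the baked-in images into the bind-mounted host dir
docker rm tmp-app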

How to update wordpress on docker

I'm running a php-fpm wordpress container.
The wordpress source files are mounted in a named volume "wordpress" shared with the Nginx container.
Everything runs well except when I need to update WordPress to a new version: the code inside the named volume persists, which is normal for a named volume...
I could manually delete the volume but there must be a better way.
My dockerfile:
FROM wordpress:4.9.5-php5.6-fpm-alpine
My docker-compose.yml
version: '3.1'

services:

  php:
    build: ./docker/php/
    restart: unless-stopped
    volumes:
      - wordpress:/var/www/html
      - ./web/wp-content/:/var/www/html/wp-content/
      - ./web/wp-config.php:/var/www/html/wp-config.php
    environment:
      - DEBUG=${DEBUG:-0}
      - MYSQL_USER=$MYSQL_USER
      - MYSQL_PASSWORD=$MYSQL_PASSWORD
      - MYSQL_DATABASE=$MYSQL_DATABASE

  nginx:
    image: nginx:1-alpine
    restart: unless-stopped
    expose:
      - 80
    volumes:
      - wordpress:/var/www/html
      - ./web/wp-content/:/var/www/html/wp-content/
      - ./docker/nginx/site.conf:/etc/nginx/conf.d/default.conf
      - ./docker/nginx/wordpress.conf:/etc/nginx/wordpress.conf
    environment:
      - VIRTUAL_HOST=localhost

  mysql:
    image: mysql:5.6
    restart: unless-stopped
    environment:
      - MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD
      - MYSQL_USER=$MYSQL_USER
      - MYSQL_PASSWORD=$MYSQL_PASSWORD
      - MYSQL_DATABASE=$MYSQL_DATABASE
    volumes:
      - mysql:/var/lib/mysql

volumes:
  wordpress: {}
  mysql: {}

networks:
  default:
    external:
      name: wordpress
Looking forward to reading your suggestions
Thank you
When the WordPress container comes up, its entrypoint checks for the existence of files at /var/www/html and copies them only if they are not present. So in your case you could update the entrypoint script to compare the version in /var/www/html/wp-includes/version.php with the version shipped in the container and then decide whether to replace the files.
Edit:
According to this, just deleting index.php or wp-includes/version.php should make the entrypoint copy the files from the container again. Alternatively, you could change the entrypoint script to copy the files to /var/www/html every time, but that may cause issues if you choose to scale the WordPress layer.
Thank you for your help.
It worked.
Here is the code I'm using.
I overrode the entrypoint in the Dockerfile:
COPY check-wordpress-version.sh /usr/local/bin/
ENTRYPOINT ["check-wordpress-version.sh"]
Here is the content of check-wordpress-version.sh, which checks the current WordPress version:
#!/bin/sh
VOLUME_VERSION="$(php -r 'require('"'"'/var/www/html/wp-includes/version.php'"'"'); echo $wp_version;')"
echo "Volume version : $VOLUME_VERSION"
echo "WordPress version : $WORDPRESS_VERSION"
if [ "$VOLUME_VERSION" != "$WORDPRESS_VERSION" ]; then
    echo "Forcing WordPress code update..."
    # removing index.php makes the official entrypoint copy the WordPress files again
    rm -f /var/www/html/index.php
fi
docker-entrypoint.sh php-fpm
WordPress seems to have addressed this under this issue.
I notice you are using a custom wp-config.php. Most likely, you can use WORDPRESS_CONFIG_EXTRA for this rather than mounting wp-config.php.
Theoretically (per the link above), updating the image should update the database, but I have not confirmed it.
Based on this, my stack.yml/docker-compose.yml looks like this:
environment:
  WORDPRESS_CONFIG_EXTRA: |
    define( 'AUTOMATIC_UPDATER_DISABLED', true );
volumes:
  - "./themes:/var/www/html/wp-content/themes/"
  - "./plugins:/var/www/html/wp-content/plugins/"
  - "./uploads:/var/www/html/wp-content/uploads/"
There's an easier solution.
Edit the wp-config.php file by adding define('FS_METHOD','direct'); at the end of the file.
Save the file and run the update. From now on, you don't need an FTP server to update your WordPress.
Remember: make a backup before updating :)
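If you are on the official image and let its entrypoint generate wp-config.php (as suggested a couple of answers above) rather than mounting your own, the same line can be injected via WORDPRESS_CONFIG_EXTRA; a sketch:
environment:
  WORDPRESS_CONFIG_EXTRA: |
    define( 'FS_METHOD', 'direct' );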
To expand on #Bigbenny's answer, my Dockerfile looked like the following:
FROM wordpress:latest
WORKDIR /var/www/html
COPY . /var/www/html
COPY check-wordpress-version.sh /usr/local/bin/
RUN chmod 755 /usr/local/bin/check-wordpress-version.sh
ENTRYPOINT ["/usr/local/bin/check-wordpress-version.sh"]
Two things to notice here:
I had to chmod 755 the file or I would get a permission denied error.
I placed the script inside /usr/local/bin because, for some reason, when I just used ENTRYPOINT ["check-wordpress-version.sh"], the file wouldn't be found by the container.
I also slightly tweaked the script to look like this:
#!/bin/sh
VOLUME_VERSION="$(php -r 'require('"'"'/var/www/html/wp-includes/version.php'"'"'); echo $wp_version;')"
echo "Volume version : $VOLUME_VERSION"
echo "WordPress version : $WORDPRESS_VERSION"
if [ "$VOLUME_VERSION" != "$WORDPRESS_VERSION" ]; then
    echo "Forcing WordPress code update..."
    rm -f /var/www/html/index.php
    rm -f /var/www/html/wp-includes/version.php
fi
docker-entrypoint.sh apache2-foreground
For my use-case, I had to use apache2-foreground rather than php-fpm; I also deleted the /var/www/html/wp-includes/version.php file.
Finally, in my docker-compose file, instead of using the image directive, I used build: ./wordpress.
I hope this helps!😁
My answer applies to the official Docker WordPress image, so it's probably off topic, but it might help someone.
If you are using docker-compose, you can pull the latest image with this command:
docker pull wordpress
I believe this will update your core Docker image. Any other local project that you docker-compose up -d with this image setting in its yml will then use the updated image:
services:
  wordpress:
    image: wordpress:latest
If you are currently running the image, you will need to docker-compose down and docker-compose up -d to pick up the update.
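The same flow can also be driven entirely through compose; a compact sketch, assuming the service is named wordpress as in the snippet above:
docker-compose pull wordpress   # fetch the newest wordpress:latest
docker-compose up -d            # recreates containers whose image has changed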

Docker "Invalid mount path app/symfony" must be absolute

I'm trying to set up Webpack to run with Docker. I'm looking to put it in its own container, build the files, and then have Nginx serve the produced code from its own container.
My docker-compose.yml file looks like:
nginx:
  build: ./nginx/
  ports:
    - 80:80
  links:
    - php
  volumes_from:
    - app
php:
  build: ./php/
  expose:
    - 9000
  links:
    - mysql
  volumes_from:
    - app
app:
  image: php:7.0-fpm
  volumes:
    - ./app/symfony:/var/www/html
  command: "true"
web:
  build: ./webpack
  volumes_from:
    - app
mysql:
  image: mysql:latest
  volumes_from:
    - data
  environment:
    MYSQL_ROOT_PASSWORD: secret
    MYSQL_DATABASE: project
    MYSQL_USER: project
    MYSQL_PASSWORD: project
data:
  image: mysql:latest
  volumes:
    - /var/lib/mysql
  command: "true"
My code is stored in the app/symfony directory. The Dockerfile for the webpack container is currently:
FROM node:wheezy
WORKDIR /app
RUN apt-get update
RUN apt-get install curl -y
RUN curl -sL https://deb.nodesource.com/setup_6.x | bash - && apt-get install nodejs -y
RUN npm install webpack -g
RUN npm install
CMD webpack --watch --watch-polling
I am getting the error:
ERROR: for web Cannot create container for service web: invalid bind mount spec "a60f89607640b36a468b471378a6b7079dfa5890db994a1228e7809b93b8b709:app/symfony:rw": invalid volume specification: 'a60f89607640b36a468b471378a6b7079dfa5890db994a1228e7809b93b8b709:app/symfony:rw': invalid mount config for type "volume": invalid mount path: 'app/symfony' mount path must be absolute
ERROR: Encountered errors while bringing up the project.
I want webpack to take the code in app/symfony, and build any assets, and then the nginx container will serve those.
I had a similar issue.
My docker-compose.yml looked like this:
version: '3.1'
services:
  nginx:
    build:
      context: ./server
    ports:
      - "8080:80"
    volumes:
      - ./fw:opt/www/app/
and got the error "invalid mount path: 'opt/www/app' mount path must be absolute".
I resolved it by adding a slash in front of the container path:
volumes:
  - ./fw:/opt/www/app/
SOLUTION:
If you have this docker-compose.yaml in the root of your project, make sure you have a '/' before app (the container path must be absolute).
As per the Docker documentation:
docker run -dp 3000:3000 \
  -w /app -v "$(pwd):/app" \
  node:12-alpine \
  sh -c "yarn install && yarn run dev"
This works perfectly.
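The same rule applies in a compose file: the container side of each volume entry must start with a /. A minimal sketch for the web service from the question above (the /app target is only an assumption, chosen because the webpack Dockerfile sets WORKDIR /app):
web:
  build: ./webpack
  volumes:
    - ./app/symfony:/app   # host path may be relative, but the container path must be absolute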

Docker Compose Link Containers for Phpunit with Wordpress, MySQL

I would like to use a docker-compose app to run unit tests on a WordPress plugin.
Following (mostly) this tutorial, I have created four containers:
my-wpdb:
  image: mariadb
  ports:
    - "8081:3306"
  environment:
    MYSQL_ROOT_PASSWORD: dockerpass
my-wp:
  image: wordpress
  volumes:
    - ./:/var/www/html
  ports:
    - "8080:80"
  links:
    - my-wpdb:mysql
  environment:
    WORDPRESS_DB_PASSWORD: dockerpass
my-wpcli:
  image: tatemz/wp-cli
  volumes_from:
    - my-wp
  links:
    - my-wpdb:mysql
  entrypoint: wp
  command: "--info"
my-phpunit:
  image: phpunit/phpunit
  volumes_from:
    - my-wp
  links:
    - my-wpdb
This tutorial got me as far as creating the phpunit files (xml, tests, bin, .travis), with the exception that I had to install subversion manually:
docker exec wp_my-wp_1 apt-get update
docker exec wp_my-wp_1 apt-get install -y wget git curl zip vim
docker exec wp_my-wp_1 apt-get install -y apache2 subversion libapache2-svn libsvn-perl
And run the last part of bin/install-wp-tests.sh manually in the database container:
docker exec wp_my-wpdb_1 mysqladmin create wordpress_test --user=root --password=dockerpass --host=localhost --protocol=tcp
I can run phpunit: docker-compose run --rm my-wp phpunit --help.
I can specify the config xml file:
docker-compose run --rm my-wp phpunit --configuration /var/www/html/wp-content/plugins/my-plugin/phpunit.xml.dist
However, the test wordpress installation is installed in the my-wp container's /tmp directory: /tmp/wordpress-tests-lib/includes/functions.php
I think I have to link the my-phpunit container's /tmp to the one in my-wp?
This doesn't answer my question, but as a solution to the problem, there is a GitHub repo for a Docker image that provides the wanted features: https://github.com/chriszarate/docker-wordpress as well as a wrapper you can invoke it through: https://github.com/chriszarate/docker-wordpress-vip
I wonder if, for the initial set-up noted in the question, it might make more sense to add PHPUnit to the WordPress container rather than making a separate container for it (the sketch below covers the alternative of sharing the test install between containers instead).
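If you do keep a separate phpunit container, one way to share the test install the question mentions is to declare the /tmp/wordpress-tests-lib path as a volume on my-wp, so that volumes_from also exposes it to my-phpunit. A sketch under that assumption (the ./tmp/wordpress-tests-lib host directory is hypothetical):
my-wp:
  image: wordpress
  volumes:
    - ./:/var/www/html
    - ./tmp/wordpress-tests-lib:/tmp/wordpress-tests-lib   # hypothetical host dir holding the test install
my-phpunit:
  image: phpunit/phpunit
  volumes_from:
    - my-wp          # now also picks up /tmp/wordpress-tests-lib
  links:
    - my-wpdb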
