Docker permissions development environment using a host mounted volume - symfony

I'm using docker-compose to set up a portable development environment for a bunch of Symfony2 applications (though nothing I want to do is specific to Symfony). I've decided to have the source files on the local machine exposed as a data volume, with all the other dependencies in Docker. This way developers can edit on the local filesystem.
Everything works great, except that after running the app, my cache and log files and the files created by Composer in the /vendor directory are owned by root.
I've read about this problem and some possible approaches here:
Changing permissions of added file to a Docker volume
But I can't quite tease out what changes I have to make in my docker-compose.yml file so that when my symfony container starts with docker-compose up, any files that are created have the permissions of the user on the host machine.
I'm posting the file for reference; worker is where PHP etc. live:
source:
  image: symfony/worker-dev
  volumes:
    - $PWD:/var/www/app

mongodb:
  image: mongo:2.4
  ports:
    - "27017:27017"
  volumes_from:
    - source

worker:
  image: symfony/worker-dev
  ports:
    - "80:80"
  links:
    - mongodb
  volumes_from:
    - source
  volumes:
    - "tmp/:/var/log/nginx"

One of the solutions is to execute the commands inside your container. I've tried multiple workarounds for this same issue in the past, and I find executing the command inside the container the most user-friendly.
Example command: docker-compose run CONTAINER_NAME php bin/console cache:clear. You may use make, ant or any modern tool to keep the commands short.
Example with Makefile:
all: | build run test
build: | docker-compose-build
run: | composer-install clear-cache

############## docker compose
docker-compose-build:
	docker-compose build

############## composer
composer-install:
	docker-compose run app composer install
composer-update:
	docker-compose run app composer update

############## cache
clear-cache:
	docker-compose run app php bin/console cache:clear
docker-set-permissions:
	docker-compose run app chown -R www-data:www-data var/logs
	docker-compose run app chown -R www-data:www-data var/cache

############## test
test:
	docker-compose run app php bin/phpunit
Alternatively, you may introduce a .env file which contains environment variables, and then use one of those variables to run the usermod command in the Docker container.
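A minimal sketch of that approach (the HOST_UID variable name is an assumption; adjust the user and the command to whatever your worker image actually runs):

# .env, next to docker-compose.yml (docker-compose reads it automatically)
HOST_UID=1000

# docker-compose.yml, worker service
worker:
  image: symfony/worker-dev
  # remap www-data to the host user's UID before starting the service
  command: sh -c "usermod -u ${HOST_UID} www-data && exec php-fpm"

Set HOST_UID to the output of id -u on the host, so files created by www-data inside the container end up owned by your host user.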

Related

Is it possible to run ui tests in codeception in the background?

I'm new to Codeception and wonder: is it possible to run UI tests in the background, without opening a test web browser every time?
I suspect that I should change something in acceptance.suite.yml, but I'm not sure what.
I would appreciate any help.
You can use a headless browser. This will execute the whole test flow almost exactly as it would in regular UI mode, but without opening a visible browser window.
You can learn more about this in the Codeception documentation and similar resources.
You can use Docker to virtualize the WebDriver and Selenium.
Create two files in the root directory. The Dockerfile will generate a container with PHP and Composer to run your Codeception tests in it.
Dockerfile
FROM php:8.0-cli-alpine
RUN apk -U upgrade --no-cache
# install composer
COPY --from=composer:2.2 /usr/bin/composer /usr/bin/composer
ENV COMPOSER_ALLOW_SUPERUSER=1
ENV PATH="${PATH}:/root/.composer/vendor/bin"
WORKDIR /app
COPY composer.json composer.lock ./
RUN composer install --no-dev --no-scripts --no-progress \
    && composer clear-cache
COPY . /app
RUN composer dump-autoload --optimize --classmap-authoritative \
    && composer clear-cache
The second file is docker-compose.yml, which uses a preconfigured Selenium image and binds your PHP Codeception tests and Selenium to one network, so that the containers can talk to each other over the needed ports (4444 and 7900).
docker-compose.yml
---
version: '3.4'
services:
  php:
    build: .
    depends_on:
      - selenium
    volumes:
      - ./:/usr/src/app:rw,cached
  selenium:
    image: selenium/standalone-chrome:4
    shm_size: 2gb
    container_name: selenium
    ports:
      - "4444:4444"
      - "7900:7900"
    environment:
      - VNC_NO_PASSWORD=1
      - SCREEN_WIDTH=1920
      - SCREEN_HEIGHT=1080
If you set up Docker and your Codeception project correctly, you can run these containers in the background:
docker-compose up -d
and execute your tests:
vendor/bin/codecept run
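If you run the tests from inside the php container, the WebDriver module has to point at the selenium service name; from the host, localhost works because port 4444 is published. A sketch of the relevant acceptance.suite.yml part (the url and browser values are assumptions; host and port match the compose file above):

modules:
  enabled:
    - WebDriver:
        url: 'http://myapp.local'
        host: selenium
        port: 4444
        browser: chrome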
If you want to watch what a test is doing, you can visit http://localhost:7900 to connect to the browser running in the container and see what the test is executing.
If you are using the WebDriver module to run your tests with Codeception, there is an option to configure your browser in headless mode.
It won't open any windows and the tests will run in the background without bothering you.
Here is an example with Chrome:
modules:
  enabled:
    - WebDriver
  config:
    WebDriver:
      url: 'http://myapp.local'
      browser: chrome
      window_size: 1920x1080
      capabilities:
        chromeOptions:
          args: ["--headless", "--no-sandbox"]

Docker doesn't copy new images

I have a problem regarding Docker.
When I'm deploying a new version of my app image, the images I have added to the images folder in my wwwroot folder aren't copied.
My Dockerfile looks like this:
FROM microsoft/aspnetcore-build:1.0-projectjson
WORKDIR /app-src
COPY . .
RUN dotnet restore
RUN dotnet publish src/Test -o /app
EXPOSE 5000
WORKDIR /app
ENTRYPOINT ["dotnet", "Test.dll"]
And my docker-compose:
version: '3.8'
services:
  app:
    image: <dockeruser>/<imagename>:<tag>
    links:
      - db
    environment:
      ConnectionStrings__Dataconnection: "Host=db;Username=Username;Password=Password;Database=db"
    ports:
      - "5000:5000"
    volumes:
      - ~/data/images:/app/wwwroot/images
  db:
    image: postgres:9.5
    ports:
      - "31337:5432"
    volumes:
      - ~/data/db:/var/lib/postgresql/data/pgdata
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
      POSTGRES_DB: db
      PGDATA: /var/lib/postgresql/data/pgdata
My current version of Docker is 18.09.7, build 2d0083d, with docker-compose version 1.26.2, build eefe0d31.
The exact same files (except that the compose file version was set to 2 in docker-compose.yml) worked previously on Docker version 17.03.0-ce, build 60ccb22, with docker-compose version 1.9.0, build 2585387.
I store my new images in my repo's wwwroot/images folder, then push them to the repo, and Docker Hub automatically builds an image from the new commit. On the server I then pull the new image and run docker-compose down -v followed by docker-compose up -d, but the images are not available in the app afterwards.
Disclaimer: This is a project I have taken over, and I'm aware that some of the software versions are very old.
Your images may be in your container image, but since you are doing a bind mount, whatever is in your server's ~/data/images directory will basically override/replace what's in your image when the container is created.
Try removing the volume from the app service, basically remove this:
volumes:
  - ~/data/images:/app/wwwroot/images
The other thing you can try is to manually copy the images to the ~/data/images directory on the server.
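One way to do that is to copy them out of the freshly pulled image (a sketch; the tmp container name is arbitrary, and the paths come from the Dockerfile and compose file above):

# create a stopped container from the image, copy the baked-in images out, clean up
docker create --name tmp <dockeruser>/<imagename>:<tag>
docker cp tmp:/app/wwwroot/images/. ~/data/images/
docker rm tmp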

Docker: How can I have sqlite db changes persist to the db file?

FROM golang:1.8
ADD . /go/src/beginnerapp
RUN go get -u github.com/gorilla/mux
RUN go get github.com/mattn/go-sqlite3
RUN go install beginnerapp/
VOLUME /go/src/beginnerapp/local-db
WORKDIR /go/src/beginnerapp
ENTRYPOINT /go/bin/beginnerapp
EXPOSE 8080
The sqlite db file is in the local-db directory, but I don't seem to be using the VOLUME command correctly. Any ideas how I can have changes to the sqlite db file persisted?
I don't mind if the volume is mounted before or after the build.
I also tried running the following command
user@cardboardlaptop:~/go/src/beginnerapp$ docker run -p 8080:8080 -v ./local-db:/go/src/beginnerapp/local-db beginnerapp
docker: Error response from daemon: create ./local-db: "./local-db" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to pass a host directory, use absolute path.
EDIT: It works when using /absolutepath/local-db instead of the relative path ./local-db.
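For example, from the project directory, letting the shell expand the absolute path:

docker run -p 8080:8080 -v $(pwd)/local-db:/go/src/beginnerapp/local-db beginnerapp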
You don't mount volumes in a Dockerfile.
VOLUME tells Docker that content in those directories can be mounted into other containers via docker run --volumes-from.
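For example (a sketch; busybox is only used here to inspect the volume of a running beginnerapp container named app):

docker run -d --name app -p 8080:8080 beginnerapp
docker run --rm --volumes-from app busybox ls /go/src/beginnerapp/local-db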
You're right. Docker doesn't allow relative paths for volumes on the command line.
Run your container using an absolute path:
docker run -v /host/db/local-db:/go/src/beginnerapp/local-db beginnerapp
Your db will be persisted in the host directory /host/db/local-db.
If you want to use relative paths, you can make it work with docker-compose using the volumes key:
volumes:
  - ./local-db:/go/src/beginnerapp/local-db
You can try this configuration:
Put the Dockerfile in a directory (e.g. /opt/docker/myproject)
Create a docker-compose.yml file in the same path, like this:
version: "2.0"
services:
myproject:
build: .
volumes:
- "./local-db:/go/src/beginnerapp/local-db"
Execute docker-compose up -d myproject in the same path.
Your db should be stored in /opt/docker/myproject/local-db
Just a comment: the content of local-db in the image (if any) will be replaced by the content of the ./local-db path (empty). If the container has any information (an initialized database), it would be a good idea to copy it out with docker cp, or to include any init logic in an entrypoint or command shell script.
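A sketch of that initial copy (the seed container name is arbitrary; the path comes from the Dockerfile above):

# create a stopped container from the image and copy the baked-in db out to the host
docker create --name seed beginnerapp
docker cp seed:/go/src/beginnerapp/local-db/. ./local-db/
docker rm seed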

Updating a Symfony app with Docker-compose without losing data

I have a multi-container Symfony application that uses docker-compose to handle the relationships between the containers. To simplify a little, I have 4 main services:
code:
  image: mycode
web:
  image: mynginx
  volumes_from:
    - code
  ports:
    - "80:80"
  links:
    - php-fpm
php-fpm:
  image: myphpfpm
  volumes_from:
    - code
  links:
    - mongo
mongo:
  image: mongo
The "mycode" image contains the code of my application and is built from the following Dockerfile :
FROM composer/composer
RUN apt-get update && apt-get install -y \
libfreetype6-dev \
libmcrypt-dev \
libxml2-dev \
libicu-dev \
libcurl4-openssl-dev \
libssl-dev \
pkg-config
RUN docker-php-ext-install iconv mcrypt mbstring bcmath json ctype posix intl
RUN pecl install mongo \
&& echo extension=mongo.so >> /usr/local/etc/php/conf.d/mongo.ini
COPY . /code
WORKDIR /code
RUN rm -rf /code/app/cache/* \
&& rm -rf /code/app/logs/* \
&& chown -R root /code/app/cache \
&& chown -R root /code/app/logs \
&& chmod -R 777 /code/app/cache \
&& chmod -R 777 /code/app/logs \
&& composer install \
&& rm -f /code/web/app_dev.php \
&& rm -f /code/web/config.php
VOLUME ["/code", "/code/app/logs", "/code/app/cache"]
At first, deploying this application was easy. I just had to do a simple docker-compose up -d and it created all the containers and ran them without any issue. But then I had to deploy a new version.
This configuration uses volumes to store data:
- The source code is mounted on the /code volume and shared between 3 containers (code, web, php-fpm). It has to be replaced by a new version when deploying.
- The MongoDB data is on another volume, mounted only by the mongo container. I have to keep this data between deployments.
When I deploy an update to my code, I publish the new version of the mycode image and re-create the container. But since the /code volume is still used by the web and php-fpm containers, the old volume can't be replaced by the new one. I have to stop all the running services to delete the old volume, and if I use the docker-compose rm -v command, it will delete the MongoDB data too!
Can't I replace only one volume with a new version, without any downtime?
So I'm kind of stuck here. I'm thinking of having a permanent volume to store the code and updating it through SSH with Capistrano, old style. That would also let me run Doctrine migration scripts after deployment. But I have other issues with it, as Capistrano uses symlinks to handle versions, so I can't just mount the /current folder to /code.
Do you have a solution to handle the deployment of a Docker application without losing data and without downtime?
Should I use manual scripts instead of docker-compose?
the source code is mounted on the /code volume
This is the problem, it is not what you want.
Code never goes into a volume, it should change when the image changes. Volumes are for things that you want to preserve between changes to the image (data, logs, state, etc).
Code is the immutable thing that you want to replace when you change a container. So remove the /code volume from the Dockerfile entirely, and instead do an ADD . /code in the mynginx and myphpfpm Dockerfiles.
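A minimal sketch of that change (the base images are assumptions; the point is that each image carries its own copy of the code):

# mynginx Dockerfile (sketch)
FROM nginx
ADD . /code

# myphpfpm Dockerfile (sketch)
FROM php:fpm
ADD . /code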
With that change, you can deploy with just up -d. It will recreate any container that has changed, and your volumes will be copied over. You don't need an rm anymore.
If you have your Dockerfile for myphpfpm and mynginx in a different directory, you can build using docker build -f path/to/dockerfile .
Using a host volume (as suggested in another answer) is another option; however, that's not usually what you want outside of development. With a host volume you would still remove the /code VOLUME from the Dockerfile.
Do not copy the code via the Dockerfile, just attach volumes to the 'code' container.
A few edits:
code:
  image: mycode
  volumes:
    - .:/code
    - /code
web:
  image: mynginx
  volumes_from:
    - code
  ports:
    - "80:80"
  links:
    - php-fpm
php-fpm:
  image: myphpfpm
  volumes_from:
    - code
  links:
    - mongo
mongo:
  image: mongo
The same thing applies to mongo: mount it to an external volume so the data persists when the container shuts down. There is also another method, mentioned on the image's Docker Hub page: https://hub.docker.com/_/mongo/
Where to Store Data

Important note: There are several ways to store data used by applications that run in Docker containers. We encourage users of the mongo images to familiarize themselves with the options available, including:

- Let Docker manage the storage of your database data by writing the database files to disk on the host system using its own internal volume management. This is the default and is easy and fairly transparent to the user. The downside is that the files may be hard to locate for tools and applications that run directly on the host system, i.e. outside containers.

- Create a data directory on the host system (outside the container) and mount this to a directory visible from inside the container. This places the database files in a known location on the host system, and makes it easy for tools and applications on the host system to access the files. The downside is that the user needs to make sure that the directory exists, and that e.g. directory permissions and other security mechanisms on the host system are set up correctly.
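A sketch of that second option for the mongo service above (the host path ./data/mongo is an assumption; /data/db is where the mongo image keeps its data):

mongo:
  image: mongo
  volumes:
    - ./data/mongo:/data/db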

Why I can't see my files inside a docker container?

I'm a Docker newbie and I'm trying to set up my first project.
To test how to play with it, I just cloned a ready-to-go project and set it up (Project repo).
As the guide claims, if I access a specific URL I reach the homepage; to be more specific, a Symfony start page.
Moreover, with this command
docker run -i -t testdocker_application /bin/bash
I'm able to log in to the container.
My problem is that if I try to go to the application folder through bash, the folder that I shared with my host is empty.
I tried with another project, but the result is the same.
Where am I wrong?
Here is some info about my environment:
Ubuntu 12.04
Docker version 1.8.3, build f4bf5c7
Config:
application:
  build: code
  volumes:
    - ./symfony:/var/www/symfony
    - ./logs/symfony:/var/www/symfony/app/logs
  tty: true
Looks like you have a docker-compose.yml file but are running the image with docker. You don't actually need docker-compose to start a single container. If you just want to start the container your command should look like this:
docker run -ti -v $(pwd)/symfony:/var/www/symfony -v $(pwd)/logs/symfony:/var/www/symfony/app/logs testdocker_application /bin/bash
To use your docker-compose.yml, start your container with docker-compose up. You would also need to add the following to drop into a shell:
stdin_open: true
command: /bin/bash
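With those set, a sketch of getting the shell (the exact container name depends on your project directory; check docker ps):

docker-compose up -d
docker attach testdocker_application_1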
