I have this Dockerfile for php-consumer:
```dockerfile
FROM node:latest as node
FROM php:8.0-fpm
COPY --from=mlocati/php-extension-installer:1.2 /usr/bin/install-php-extensions /usr/local/bin/
COPY --from=node /usr/local/lib/node_modules /usr/local/lib/node_modules
COPY --from=node /usr/local/bin/node /usr/local/bin/node
RUN ln -s /usr/local/lib/node_modules/npm/bin/npm-cli.js /usr/local/bin/npm
RUN apt-get update && apt-get install -y \
libpq-dev \
wget \
zlib1g-dev \
libmcrypt-dev \
libzip-dev \
git \
php7.*-xml \
pkg-config \
libcurl4-openssl-dev \
librabbitmq-dev \
libpng-dev \
libjpeg-dev \
libfreetype6-dev
RUN apt-get update && apt-get install -y zlib1g-dev libicu-dev g++
RUN docker-php-ext-configure intl
RUN docker-php-ext-install intl
ENV CFLAGS="$CFLAGS -D_GNU_SOURCE"
RUN docker-php-ext-configure gd --with-freetype=/usr/include/ --with-jpeg=/usr/include/ && \
docker-php-ext-install mysqli pdo pdo_mysql zip curl sockets pcntl
RUN install-php-extensions \
decimal \
pdo_mysql \
intl \
amqp \
bcmath \
pcntl \
sockets \
xsl
RUN pecl install -o -f redis \
&& rm -rf /tmp/pear \
&& docker-php-ext-enable redis
RUN apt-get update && apt-get install libxslt1-dev -y && docker-php-ext-install xsl
RUN curl -S https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
COPY consume.sh /consumer/consume.sh
WORKDIR /workdir
ENTRYPOINT ["bash", "/consumer/consume.sh"]
```
This consume.sh:
```bash
#!/bin/bash
sleep 10;
/workdir/bin/console messenger:consume >&1;
```
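As an aside, the fixed `sleep 10` can race against slow container startup. A more defensive consume.sh might wait until the backing services actually accept connections before starting the worker. This is only a sketch: the host names `mysql` and `rabbitmq` and the `movies` transport come from the files below, and the `--time-limit` value is an arbitrary example.

```bash
#!/bin/bash
set -e
# Wait until MySQL and RabbitMQ accept TCP connections instead of sleeping blindly.
until php -r 'exit(@fsockopen("mysql", 3306) ? 0 : 1);'; do sleep 1; done
until php -r 'exit(@fsockopen("rabbitmq", 5672) ? 0 : 1);'; do sleep 1; done
# Consume the "movies" transport; --time-limit makes the worker exit periodically
# so the container restart policy can recycle it.
exec /workdir/bin/console messenger:consume movies --time-limit=3600
```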
This messenger.yaml:
```yaml
framework:
    messenger:
        transports:
            movies:
                dsn: "%env(MESSENGER_TRANSPORT_DSN)%"
                options:
                    vhost: "/"
        routing:
            App\Message\Tmdb\MovieRequestedMessage: movies
```
```
DATABASE_URL="mysql://user_name:password@mysql/database_name?serverVersion=5.7"
```
And when I dispatch the message, I get this error in php_consumer:
```
02:46:24 WARNING [messenger] Error thrown while handling message App\Message\Tmdb\MovieRequestedMessage. Sending for retry #1 using 1000 ms delay. Error: "Handling "App\Message\Tmdb\MovieRequestedMessage" failed: An exception occurred in the driver: SQLSTATE[HY000] [2002] No such file or directory" ["class" => "App\Message\Tmdb\MovieRequestedMessage","retryCount" => 1,"delay" => 1000,"error" => "Handling "App\Message\Tmdb\MovieRequestedMessage" failed: An exception occurred in the driver: SQLSTATE[HY000] [2002] No such file or directory","exception" => Symfony\Component\Messenger\Exception\HandlerFailedException { …}]
```
docker-compose.yml:
php-consumer:
  container_name: php_consumer
  build:
    context: docker/php-consumer
  env_file:
    - .env
  volumes:
    - .:/project
  depends_on:
    - mysql
    - rabbitmq
    - php-cli
    - php-fpm
  environment:
    - "MESSENGER_TRANSPORT_DSN=${MESSENGER_TRANSPORT_DSN}"
    - "DATABASE_URL=${DATABASE_URL}"
  networks:
    - network
I tried changing mysql to the MySQL container's IP, but it doesn't work.
I would be very grateful for any clarification or help. Thank you so much!
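For what it's worth, `SQLSTATE[HY000] [2002] No such file or directory` usually means PDO fell back to the local MySQL unix socket, which happens when the host part of the DSN is missing or resolves to localhost. A minimal sketch of what I would expect to work, assuming the MySQL service is named `mysql` in the same compose file and the consumer sits on the same network (values are illustrative, not a guaranteed fix):

```bash
# .env - credentials separated from the host by "@", host = compose service name
DATABASE_URL="mysql://user_name:password@mysql:3306/database_name?serverVersion=5.7"

# Quick connectivity check from inside the consumer (service name "php-consumer"):
docker-compose exec php-consumer php -r 'var_dump(@fsockopen("mysql", 3306, $e, $s, 3));'
```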
Related
I want to connect an individual app within shiny proxy to a docker network.
I have a few apps on ShinyProxy; only one needs to connect to the database.
It is a PostgreSQL DB running on the same machine, in a Docker setup configured to receive connections through the network my-docker-network.
In application.yml, should I use
container-network: my-docker-network
or
container-network-connections: ["my-docker-network"]
?
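As far as I understand the ShinyProxy settings (a sketch, not authoritative): `container-network` is the network the app container is created in, while `container-network-connections` lists additional networks the container gets attached to afterwards. Something like this, with a hypothetical app id:

```yaml
specs:
  - id: my_db_app                                   # hypothetical id
    container-image: my_app_image:latest
    container-network: my-docker-network            # usually enough on its own
    # container-network-connections: ["my-docker-network"]   # extra networks, if needed
```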
Even though I don't need internal networking in ShinyProxy, do I still need to set `internal-networking: true` under `docker:`?
At the moment the container isn't starting, but since the container runs fine by itself using docker run --net my-docker-network --env-file /mypath/.Renviron my_app_image, it seems to be a connection issue. The container also works if I run it with --network="host".
I've tried various options of putting the .Renviron in different places and don't think that is the issue.
Full Dockerfile (other apps deleted and pseudonymised):
FROM rocker/r-ver:3.6.3
RUN apt-get update --allow-releaseinfo-change && apt-get install -y \
lbzip2 \
libfftw3-dev \
libgdal-dev \
libgeos-dev \
libgsl0-dev \
libgl1-mesa-dev \
libglu1-mesa-dev \
libhdf4-alt-dev \
libhdf5-dev \
libjq-dev \
liblwgeom-dev \
libpq-dev \
libproj-dev \
libprotobuf-dev \
libnetcdf-dev \
libsqlite3-dev \
libssl-dev \
libudunits2-dev \
netcdf-bin \
postgis \
protobuf-compiler \
sqlite3 \
tk-dev \
unixodbc-dev \
libssh2-1-dev \
r-cran-v8 \
libv8-dev \
net-tools \
libsqlite3-dev \
libxml2-dev
#for whatever reason it wasn't working
#RUN export ADD=shiny && bash /etc/cont-init.d/add
#install packages
RUN R -e "install.packages(c('somepackages'))"
#copy app script and variables into docker
RUN mkdir /home/app
COPY .Renviron /home/app/
COPY global.R /home/app/
COPY ui.R /home/app/
COPY server.R /home/app/
COPY Rprofile.site /usr/lib/R/etc/
#add run script
CMD ["R", "-e", "shiny::runApp('home/app')"]
Useful parts of the application.yml
At the moment I always get "500 / container doesn't respond/run" on the ShinyProxy side, even though the app runs fine standalone.
proxy:
  title: apps - page
  # logo-url: https://link/to/your/logo.png
  landing-page: /
  favicon-path: favicon.ico
  heartbeat-rate: 10000
  heartbeat-timeout: 60000
  container-wait-time: 40000
  port: 8080
  authentication: simple
  admin-groups: admins
  container-log-path: /etc/shinyproxy/logs
  # Example: 'simple' authentication configuration
  users:
    - name: admin
      password: password
      groups: admins
    - name: user
      password: password
      groups: users
  # Docker configuration
  docker:
    cert-path: /home/none
    url: http://localhost:2375
    port-range-start: 20000
    # internal-networking: true
  specs:
    - id: 06_rshiny_dashboard_r_ver
      display-name: app r_ver container r_app_r_ver
      description: using simple rver set up docker and the r_app_r_ver image
      container-cmd: ["R", "-e", "shinyrunApp('/home/app')"]
      #container-cmd: ["R", "-e", "shiny::runApp('/home/app', shiny.port = 3838, shiny.host = '0.0.0.0')"]
      container-image: asela_r_app_r_ver:latest
      #container-network: my-docker-network
      container-network-connections: [ "my-docker-network" ]
      container-env-file: /home/app/.Renviron
      access-groups: [admins]

logging:
  file:
    name: /etc/shinyproxy/shinyproxy.log
The various commented-out lines show the current setup; I have tried with and without them.
Fixed it by using a Shiny Server version of the Docker image. Not sure why, but this sorted out the connection issue.
Dockerfile:
FROM rocker/r-ver:3.6.3
RUN apt-get update --allow-releaseinfo-change && apt-get install -y \
lbzip2 \
libfftw3-dev \
libgdal-dev \
libgeos-dev \
libgsl0-dev \
libgl1-mesa-dev \
libglu1-mesa-dev \
libhdf4-alt-dev \
libhdf5-dev \
libjq-dev \
liblwgeom-dev \
libpq-dev \
libproj-dev \
libprotobuf-dev \
libnetcdf-dev \
libsqlite3-dev \
libssl-dev \
libudunits2-dev \
netcdf-bin \
postgis \
protobuf-compiler \
sqlite3 \
tk-dev \
unixodbc-dev \
libssh2-1-dev \
r-cran-v8 \
libv8-dev \
net-tools \
libsqlite3-dev \
libxml2-dev \
wget \
gdebi
##No version control
#then install shiny
RUN wget --no-verbose https://download3.rstudio.org/ubuntu-14.04/x86_64/VERSION -O "version.txt" && \
VERSION=$(cat version.txt) && \
wget --no-verbose "https://download3.rstudio.org/ubuntu-14.04/x86_64/shiny-server-$VERSION-amd64.deb" -O ss-latest.deb && \
gdebi -n ss-latest.deb && \
rm -f version.txt ss-latest.deb
#install packages
RUN R -e "install.packages(c('xtable', 'stringr', 'glue', 'data.table', 'pool', 'RPostgres', 'palettetown', 'deckgl', 'sf', 'shinyWidgets', 'shiny', 'stats', 'graphics', 'grDevices', 'datasets', 'utils', 'methods', 'base'))"
##No version control over
##with version control and renv.lock file
##With version control over
#copy shiny server config over
COPY shiny-server.conf /etc/shiny-server/shiny-server.conf
#avoid some errors
#already in there
#RUN echo 'sanitize_errors off;disable_protocols xdr-streaming xhr-streaming iframe-eventsource iframe-htmlfile;' >> /etc/shiny-server/shiny-server.conf
# copy the app to the image
COPY .Renviron /srv/shiny-server/
COPY global.R /srv/shiny-server/
COPY server.R /srv/shiny-server/
COPY ui.R /srv/shiny-server/
# select port
EXPOSE 3838
# Copy further configuration files into the Docker image
COPY shiny-server.sh /usr/bin/shiny-server.sh
RUN ["chmod", "+x", "/usr/bin/shiny-server.sh"]
# run app
CMD ["/usr/bin/shiny-server.sh"]
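The shiny-server.sh that this Dockerfile copies in is not shown in the question; a minimal version along the lines of the rocker/shiny one would be something like this (an assumption, adjust to your setup):

```bash
#!/bin/sh
# Make sure the log directory exists and is owned by the shiny user created by the .deb,
# then hand the process over to Shiny Server.
mkdir -p /var/log/shiny-server
chown shiny.shiny /var/log/shiny-server
exec shiny-server >> /var/log/shiny-server.log 2>&1
```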
application.yml:
proxy:
  title: apps - page
  # logo-url: https://link/to/your/logo.png
  landing-page: /
  favicon-path: favicon.ico
  heartbeat-rate: 10000
  heartbeat-timeout: 60000
  container-wait-time: 40000
  port: 8080
  authentication: simple
  admin-groups: admins
  container-log-path: /etc/shinyproxy/logs
  # Example: 'simple' authentication configuration
  users:
    - name: admin
      password: password
      groups: admins
    - name: user
      password: password
      groups: users
  # Docker configuration
  docker:
    cert-path: /home/none
    url: http://localhost:2375
    port-range-start: 20000
    # internal-networking: true
  specs:
    - id: 10_asela_rshiny_shinyserv
      display-name: ASELA Dash internal shiny server version
      description: container has own shinyserver within it functions on docker network only not on host container-network version
      container-cmd: ["/usr/bin/shiny-server.sh"]
      access-groups: [admins]
      container-image: asela_r_app_shinyserv_ver:latest
      container-network: asela-docker-net

logging:
  file:
    name: /etc/shinyproxy/shinyproxy.log
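If it helps with debugging, after changing application.yml I would restart ShinyProxy and watch its log while launching the app (assuming ShinyProxy is installed as a systemd service; the log path is the one configured above):

```bash
sudo systemctl restart shinyproxy    # assumption: systemd-based install
tail -f /etc/shinyproxy/shinyproxy.log
```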
Documentation for rocker/rstudio docker container.
I am able to get up and running in rstudio using Docker with the following set up in a directory:
Dockerfile:
FROM rocker/tidyverse:latest
docker-compose:
version: "3.5"
services:
ide-rstudio:
build:
context: .
ports:
- 8787:8787
environment:
ROOT: "TRUE"
PASSWORD: test
Now, if I enter this directory in the terminal and run docker-compose build followed by docker-compose up -d, and then navigate to localhost:8787, I see the RStudio login screen. So far so good.
I would like to add shiny to the same container per the documentation (as opposed to using a separate shiny image).
On the documentation I link to at the top it says:
Add shiny server on start up with -e ADD=shiny
docker run -d -p 3838:3838 -p 8787:8787 -e ADD=shiny -e PASSWORD=yourpasswordhere rocker/rstudio
shiny server is now running on localhost:3838 and RStudio on localhost:8787.
Since I'm using docker-compose I updated my docker-compose file to this:
version: "3.5"
services:
ide-rstudio:
build:
context: .
ports:
- 8787:8787
- 3838:3838
environment:
ROOT: "TRUE"
ADD: "shiny"
PASSWORD: test
Now, when I go to the terminal like before and run docker-compose build followed by docker-compose up -d, I again see the RStudio login page at localhost:8787. However, if I go to localhost:3838, I see Firefox's 'connection was reset' page. It looks like nothing is there.
How can I add shiny to my container per the instructions?
It seems the image is missing the shiny installer. If you run the same compose file without -d and using the rocker/rstudio:3.2.0 image, you will see in the logs that shiny is being installed. It failed to install for me (there was a problem with a missing file, /usr/local/lib/R/site-library/littler/examples/install2.r), but I found the script which installs it. For some reason the script does not exist in rocker/tidyverse:latest (I have no idea why, you'd better ask the maintainer), so ADD=shiny has no effect.
I managed to get things working by injecting that script into rocker/tidyverse:latest and here is how you can do it. Save the following as a file named add:
#!/usr/bin/with-contenv bash
ADD=${ADD:=none}
## A script to add shiny to an rstudio-based rocker image.
if [ "$ADD" == "shiny" ]; then
echo "Adding shiny server to container..."
apt-get update && apt-get -y install \
gdebi-core \
libxt-dev && \
wget --no-verbose https://s3.amazonaws.com/rstudio-shiny-server-os-build/ubuntu-12.04/x86_64/VERSION -O "version.txt" && \
VERSION=$(cat version.txt) && \
wget --no-verbose "https://s3.amazonaws.com/rstudio-shiny-server-os-build/ubuntu-12.04/x86_64/shiny-server-$VERSION-amd64.deb" -O ss-latest.deb && \
gdebi -n ss-latest.deb && \
rm -f version.txt ss-latest.deb && \
install2.r -e --skipinstalled shiny rmarkdown && \
cp -R /usr/local/lib/R/site-library/shiny/examples/* /srv/shiny-server/ && \
rm -rf /var/lib/apt/lists/* && \
mkdir -p /var/log/shiny-server && \
chown shiny.shiny /var/log/shiny-server && \
mkdir -p /etc/services.d/shiny-server && \
cd /etc/services.d/shiny-server && \
echo '#!/bin/bash' > run && echo 'exec shiny-server > /var/log/shiny-server.log' >> run && \
chmod +x run && \
adduser rstudio shiny && \
cd /
fi
if [ $"$ADD" == "none" ]; then
echo "Nothing additional to add"
fi
Then either add the following to your Dockerfile:
COPY add /etc/cont-init.d/add
RUN chmod +x /etc/cont-init.d/add
or apply execute permission locally and mount it at runtime. To do this, run the following locally:
chmod +x add
and add this to docker-compose.yml:
services:
  ide-rstudio:
    volumes: # this line and below
      - ./add:/etc/cont-init.d/add
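To check that the mounted `add` script actually ran, something like the following should do (a sketch; the "Adding shiny server" string is the one echoed by the script above, and `ide-rstudio` is the service name from the compose file):

```bash
docker-compose up -d --build
docker-compose logs ide-rstudio | grep -i "Adding shiny server"   # did the init script fire?
curl -I http://localhost:3838                                     # Shiny Server should answer here
```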
I'm trying to compile a .NET Core console application into a native executable (linux-x64) in an Ubuntu 18.04 Docker container, using both CoreRT and Npgsql. I'm currently using docker-compose to set up the DB and application containers.
docker-compose.yml
version: '3'
services:
  database:
    image: postgres:10
    environment:
      - POSTGRES_USER=dbuser
      - POSTGRES_PASSWORD=dbpassword
      - POSTGRES_DB=dbsample
    ports:
      - 5432:5432
    tmpfs:
      - /var/lib/postgresql/data:rw,noexec,nosuid,size=400m
    volumes:
      - ./db-init:/docker-entrypoint-initdb.d
  prototype:
    build: .
    depends_on:
      - database
    links:
      - database:database
Dockerfile
FROM ubuntu:18.04
RUN apt-get update \
&& apt-get install -y \
apt-transport-https \
build-essential \
clang \
cmake \
curl \
git-core \
gpg \
libbz2-dev \
libkrb5-dev \
libncurses5-dev \
libncursesw5-dev \
libreadline-dev \
libsqlite3-dev \
libssl-dev \
llvm \
make \
parallel \
wget \
zlib1g-dev
RUN wget -qO- https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.asc.gpg \
&& mv microsoft.asc.gpg /etc/apt/trusted.gpg.d/ \
&& wget -q https://packages.microsoft.com/config/ubuntu/18.04/prod.list \
&& mv prod.list /etc/apt/sources.list.d/microsoft-prod.list \
&& chown root:root /etc/apt/trusted.gpg.d/microsoft.asc.gpg \
&& chown root:root /etc/apt/sources.list.d/microsoft-prod.list \
&& apt-get update \
&& apt-get install -y dotnet-sdk-2.2
ENV CppCompilerAndLinker=clang-6.0
ENV DOTNET_CLI_TELEMETRY_OPTOUT=true
WORKDIR /home/app
COPY ./HelloWorld.fsproj /home/app
COPY ./nuget.config /home/app
RUN dotnet restore
COPY ./ /home/app
RUN dotnet publish -r linux-x64 -c Release -v detailed -o outside
CMD ./outside/HelloWorld
When it gets to the compile step (dotnet publish -r linux-x64 -c Release -v detailed -o outside), it enters an infinite loop consuming all the memory available to the container, until it shows this error:
Task "Exec"
"/root/.nuget/packages/runtime.linux-x64.microsoft.dotnet.ilcompiler/1.0.0-alpha-27919-02/tools/ilc" #"obj/Release/netcoreapp2.2/linux-x64/native/HelloWorld.ilc.rsp"
Killed
1:7>/root/.nuget/packages/microsoft.dotnet.ilcompiler/1.0.0-alpha-27919-02/build/Microsoft.NETCore.Native.targets(249,5): error MSB3073: The command ""/root/.nuget/packages/runtime.linux-x64.microsoft.dotnet.ilcompiler/1.0.0-alpha-27919-02/tools/ilc" @"obj/Release/netcoreapp2.2/linux-x64/native/HelloWorld.ilc.rsp"" exited with code 137. [/home/app/HelloWorld.fsproj]
Done executing task "Exec" -- FAILED.
1:7>Done building target "IlcCompile" in project "HelloWorld.fsproj" -- FAILED.
1:7>Done Building Project "/home/app/HelloWorld.fsproj" (Publish target(s)) -- FAILED.
It seems to be somehow related to the usage of generics and reflection in F#. I've looked in both the Npgsql and CoreRT repos and couldn't find anyone who came close to getting them both working. Has anyone faced this problem, or managed to use Npgsql with CoreRT?
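This does not address the Npgsql/CoreRT side, but exit code 137 means the ilc process was killed, almost always by the kernel OOM killer, which matches the runaway memory you describe. A sketch of how to confirm that and give the compiler more headroom (commands run on the Docker host; the swap size is an arbitrary example):

```bash
# Confirm that the kernel killed ilc during the build:
dmesg | grep -iE "out of memory|killed process" | tail

# If you build through Docker Desktop, raise the VM memory limit in its settings.
# On a plain Linux host, adding swap can be enough for ilc to finish:
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
```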
I'm trying to create a Dockerfile which runs as a non-root user.
When I build it, everything works fine, but nginx cannot write its log file because it doesn't have enough permissions. Can I, when building the image, give root permissions only to nginx?
I've tried chmod and chown on the blocked directories; it doesn't work.
FROM php:7.1-fpm-alpine
RUN apk add --no-cache shadow
RUN apk add --no-cache --virtual .ext-deps \
openssl \
unzip \
libjpeg-turbo-dev \
libwebp-dev \
libpng-dev \
freetype-dev \
libmcrypt-dev \
imagemagick-dev \
nodejs-npm \
nginx \
git \
inkscape
# imagick
RUN apk add --update --no-cache autoconf g++ imagemagick-dev libtool make pcre-dev \
&& pecl install imagick \
&& docker-php-ext-enable imagick \
&& apk del autoconf g++ libtool make pcre-dev
# Install Blackfire
RUN version=$(php -r "echo PHP_MAJOR_VERSION.PHP_MINOR_VERSION;") \
&& curl -A "Docker" -o /tmp/blackfire-probe.tar.gz -D - -L -s https://blackfire.io/api/v1/releases/probe/php/linux/amd64/$version \
&& tar zxpf /tmp/blackfire-probe.tar.gz -C /tmp \
&& mv /tmp/blackfire-*.so $(php -r "echo ini_get('extension_dir');")/blackfire.so \
&& printf "extension=blackfire.so\nblackfire.agent_socket=tcp://blackfire:8707\n" > $PHP_INI_DIR/conf.d/blackfire.ini
RUN apk add -y icu-dev \
&& docker-php-ext-configure intl \
&& docker-php-ext-install intl
RUN docker-php-ext-configure pdo_mysql && \
docker-php-ext-configure opcache && \
docker-php-ext-configure exif && \
docker-php-ext-configure pdo && \
docker-php-ext-configure zip && \
docker-php-ext-configure gd \
--with-jpeg-dir=/usr/include --with-png-dir=/usr/include --with-webp-dir=/usr/include --with-freetype-dir=/usr/include && \
docker-php-ext-configure sockets && \
docker-php-ext-configure mcrypt
RUN docker-php-ext-install pdo zip pdo_mysql opcache exif gd sockets mcrypt && \
docker-php-source delete
RUN ln -s /usr/bin/php7 /usr/bin/php && \
curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer && \
mkdir -p /run/nginx
COPY ./init.sh /
COPY ./default.conf /etc/nginx/conf.d/default.conf
COPY ./.env /
RUN chmod +x /init.sh
EXPOSE 80
RUN addgroup -g 1001 node \
&& adduser -u 1001 -G node -s /bin/sh -D node
ARG UID=1001
ARG GID=1001
ENV UID=${UID}
ENV GID=${GID}
RUN usermod -u $UID node \
&& groupmod -g $GID node
RUN chown 1001:1001 /var/lib/nginx -R
RUN mkdir -p /var/tmp/nginx
RUN chown 1001:1001 /var/tmp/nginx -R
USER node
ENTRYPOINT [ "/init.sh" ]
There are quite a few unknowns in your question, for example, the contents of your default.conf file. By default the nginx logs are stored in /var/log/nginx, but I'll assume you're overriding that in the configuration.
The next thing is that the master process of nginx needs to run as root if you want it to be able to bind to system ports (0-1023), so if you are using nginx as a web server and intend to use ports 80 and 443, you should stick with running the nginx process as root.
In case you plan to use other ports and are set on the idea of running the master process as non-root, then you can check this answer for suggestions on how to do that - https://stackoverflow.com/a/42329561/5359953
I am using the term master process a lot here because nginx spawns worker processes to handle the actual requests, and those can run as a different user (defined in the nginx configuration file).
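To make the fully non-root route from that linked answer concrete, here is a rough nginx.conf sketch: listen on an unprivileged port and keep every path nginx writes to under a directory the runtime user owns. All paths and the port are assumptions, not taken from the question.

```nginx
worker_processes 1;
pid       /var/tmp/nginx/nginx.pid;      # writable by the non-root user
error_log /var/tmp/nginx/error.log;

events {
    worker_connections 1024;
}

http {
    access_log            /var/tmp/nginx/access.log;
    client_body_temp_path /var/tmp/nginx/client_body;
    proxy_temp_path       /var/tmp/nginx/proxy;
    fastcgi_temp_path     /var/tmp/nginx/fastcgi;

    server {
        listen 8080;                     # unprivileged port, so no root needed
        root /var/www/html;
        location / {
            try_files $uri $uri/ =404;
        }
    }
}
```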
I found the solution. I just changed RUN chown 1001:1001 /var/lib/nginx -R to RUN chown -R 1001:1001 /var/. That works fine.
RUN chown -R 1001:1001 /var/
is sometimes actually a bad decision. You can try adding permissions more narrowly instead, like this:
RUN chown -R 1001:1001 /var/tmp/nginx
RUN chown -R 1001:1001 /var/lib/nginx
RUN chown -R 1001:1001 /var/log/nginx
RUN chown -R 1001:1001 /run/nginx
I guess RUN chown 1001:1001 /var/lib/nginx -R worked wrong because I set the -R flag too late.
I'm using Docker to work on a Symfony 3 project. Here is the stack:
- a custom PHP 7.1 FPM image; here's the Dockerfile:
FROM php:7.1.0-fpm
MAINTAINER xxxxx xxxxxx <xxxx.xxxxxx@gmail.com>
ENV PHP_APCU_VERSION 5.1.8
ENV PHP_XDEBUG_VERSION 2.5.0
RUN apt-get update \
&& apt-get install -y \
libicu-dev \
zlib1g-dev \
&& docker-php-source extract \
&& curl -L -o /tmp/apcu-$PHP_APCU_VERSION.tgz https://pecl.php.net/get/apcu-$PHP_APCU_VERSION.tgz \
&& curl -L -o /tmp/xdebug-$PHP_XDEBUG_VERSION.tgz http://xdebug.org/files/xdebug-$PHP_XDEBUG_VERSION.tgz \
&& tar xfz /tmp/apcu-$PHP_APCU_VERSION.tgz \
&& tar xfz /tmp/xdebug-$PHP_XDEBUG_VERSION.tgz \
&& rm -r \
/tmp/apcu-$PHP_APCU_VERSION.tgz \
/tmp/xdebug-$PHP_XDEBUG_VERSION.tgz \
&& mv apcu-$PHP_APCU_VERSION /usr/src/php/ext/apcu \
&& mv xdebug-$PHP_XDEBUG_VERSION /usr/src/php/ext/xdebug \
&& docker-php-ext-install \
apcu \
intl \
mbstring \
mysqli \
xdebug \
zip \
&& pecl install apcu_bc-1.0.3 \
&& docker-php-source delete \
&& php -r "readfile('https://getcomposer.org/installer');" | php -- --install-dir=/usr/local/bin --filename=composer \
&& chmod +x /usr/local/bin/composer
- the latest nginx image
- mysql:8.0.0
I use docker-compose to build those 3 containers; here's the docker-compose.yml:
front:
  image: nginx
  ports:
    - "81:80"
  links:
    - "engine:engine"
  volumes:
    - ".:/home/docker:ro"
    - "./docker/front/default.conf:/etc/nginx/conf.d/default.conf:ro"
engine:
  build: ./docker/engine/
  volumes:
    - ".:/home/docker:rw"
    - "./docker/engine/php.ini:/usr/local/etc/php/conf.d/custom.ini:ro"
  links:
    - "db:db"
  working_dir: "/home/docker"
db:
  image: mysql:8.0.0
  ports:
    - "3306:3306"
  environment:
    - MYSQL_ROOT_PASSWORD=pwd
    - MYSQL_USER=myUSer
    - MYSQL_PASSWORD=pwd
    - MYSQL_DATABASE=bddProject
The first load without cache takes 1700 ms; with cache it still takes around 700 ms, and about half of that time is initialization. So what kind of problem could slow down the page rendering of my project?
I'm on the latest Docker version with 2 GB assigned, under Windows Hyper-V.
Thank you for your help.
So I made another image without Xdebug and the result is the same (700 ms with cache). My Dockerfile:
FROM php:7.1.0-fpm
MAINTAINER XXXXX XXXXXX <XXXXXX.XXXXXX@gmail.com>
ENV PHP_APCU_VERSION 5.1.8
RUN apt-get update \
&& apt-get install -y \
libicu-dev \
zlib1g-dev \
&& docker-php-source extract \
&& curl -L -o /tmp/apcu-$PHP_APCU_VERSION.tgz https://pecl.php.net/get/apcu-$PHP_APCU_VERSION.tgz \
&& tar xfz /tmp/apcu-$PHP_APCU_VERSION.tgz \
&& rm -r \
/tmp/apcu-$PHP_APCU_VERSION.tgz \
&& mv apcu-$PHP_APCU_VERSION /usr/src/php/ext/apcu \
&& docker-php-ext-install \
apcu \
intl \
mbstring \
mysqli \
zip \
&& pecl install apcu_bc-1.0.3 \
&& docker-php-source delete \
&& php -r "readfile('https://getcomposer.org/installer');" | php -- --install-dir=/usr/local/bin --filename=composer \
&& chmod +x /usr/local/bin/composer
So it's Windows' handling of Docker volumes that causes this. @Geoffrey Brier, do you know if Microsoft has planned to improve this performance problem? Is there a tool or anything else that improves it?
Thank you for your help.
As far as I can see, there are two things responsible for this performance:
Xdebug
Windows: it's not a troll, but it's a well-known problem that the way your containers' volumes are handled by Docker on Windows is not as efficient as on Linux.
You have three solutions: struggle to find a method that slightly improves performance, use Linux (in a VM, for instance), or deal with it :)
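As a sketch of the "slightly improve the performance" option (my own suggestion, not something from the answer above): keep the project bind-mounted, but move Symfony's hottest I/O paths, vendor/ and var/cache, into container-side named volumes so they never cross the Windows/Linux boundary. Named volumes need compose file format 2 or higher, unlike the v1 file shown in the question.

```yaml
version: "2"
services:
  engine:
    build: ./docker/engine/
    volumes:
      - ".:/home/docker:rw"
      - "engine_vendor:/home/docker/vendor"      # vendor/ stays inside the Linux VM
      - "engine_cache:/home/docker/var/cache"    # Symfony cache too

volumes:
  engine_vendor:
  engine_cache:
```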