Symfony 3 and Docker (nginx, php7.1-fpm, mysql8): low performance on Windows

I'm using Docker to work on a Symfony 3 project. Here is the stack:
- A custom PHP 7.1-FPM image; here is the Dockerfile:
FROM php:7.1.0-fpm
MAINTAINER xxxxx xxxxxx <xxxx.xxxxxx#gmail.com>
ENV PHP_APCU_VERSION 5.1.8
ENV PHP_XDEBUG_VERSION 2.5.0
RUN apt-get update \
&& apt-get install -y \
libicu-dev \
zlib1g-dev \
&& docker-php-source extract \
&& curl -L -o /tmp/apcu-$PHP_APCU_VERSION.tgz https://pecl.php.net/get/apcu-$PHP_APCU_VERSION.tgz \
&& curl -L -o /tmp/xdebug-$PHP_XDEBUG_VERSION.tgz http://xdebug.org/files/xdebug-$PHP_XDEBUG_VERSION.tgz \
&& tar xfz /tmp/apcu-$PHP_APCU_VERSION.tgz \
&& tar xfz /tmp/xdebug-$PHP_XDEBUG_VERSION.tgz \
&& rm -r \
/tmp/apcu-$PHP_APCU_VERSION.tgz \
/tmp/xdebug-$PHP_XDEBUG_VERSION.tgz \
&& mv apcu-$PHP_APCU_VERSION /usr/src/php/ext/apcu \
&& mv xdebug-$PHP_XDEBUG_VERSION /usr/src/php/ext/xdebug \
&& docker-php-ext-install \
apcu \
intl \
mbstring \
mysqli \
xdebug \
zip \
&& pecl install apcu_bc-1.0.3 \
&& docker-php-source delete \
&& php -r "readfile('https://getcomposer.org/installer');" | php -- --install-dir=/usr/local/bin --filename=composer \
&& chmod +x /usr/local/bin/composer
- The latest nginx image
- mysql:8.0.0
I use docker-compose to build these three containers; here is the docker-compose.yml:
front:
  image: nginx
  ports:
    - "81:80"
  links:
    - "engine:engine"
  volumes:
    - ".:/home/docker:ro"
    - "./docker/front/default.conf:/etc/nginx/conf.d/default.conf:ro"
engine:
  build: ./docker/engine/
  volumes:
    - ".:/home/docker:rw"
    - "./docker/engine/php.ini:/usr/local/etc/php/conf.d/custom.ini:ro"
  links:
    - "db:db"
  working_dir: "/home/docker"
db:
  image: mysql:8.0.0
  ports:
    - "3306:3306"
  environment:
    - MYSQL_ROOT_PASSWORD=pwd
    - MYSQL_USER=myUSer
    - MYSQL_PASSWORD=pwd
    - MYSQL_DATABASE=bddProject
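The contents of docker/engine/php.ini mounted above are not shown. For Symfony on a slow bind mount, the OPcache and realpath-cache settings in that file matter a lot; a minimal sketch of what such a custom.ini could contain is below (the values are assumptions, and the opcache extension itself would still need to be added to the image, e.g. with docker-php-ext-install opcache, which the Dockerfile above does not do):
; sketch of a performance-oriented custom.ini (assumed values)
opcache.enable=1
opcache.memory_consumption=256
opcache.max_accelerated_files=20000
; keep timestamp validation on in development so code changes are picked up
opcache.validate_timestamps=1
; a large realpath cache avoids repeated stat() calls on the slow shared volume
realpath_cache_size=4096K
realpath_cache_ttl=600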
The first load without cache takes 1700 ms. With cache it is still slow (around 700 ms), and about half of that time is initialization time.
So what kind of problem could be slowing down the page rendering of my project?
I'm running the latest Docker version with 2 GB allocated to the Windows Hyper-V VM.
Thank you for your help.
So I built another image without Xdebug, and the result is the same (700 ms with cache):
My Dockerfile:
FROM php:7.1.0-fpm
MAINTAINER XXXXX XXXXXX <XXXXXX.XXXXXX#gmail.com>
ENV PHP_APCU_VERSION 5.1.8
RUN apt-get update \
&& apt-get install -y \
libicu-dev \
zlib1g-dev \
&& docker-php-source extract \
&& curl -L -o /tmp/apcu-$PHP_APCU_VERSION.tgz https://pecl.php.net/get/apcu-$PHP_APCU_VERSION.tgz \
&& tar xfz /tmp/apcu-$PHP_APCU_VERSION.tgz \
&& rm -r \
/tmp/apcu-$PHP_APCU_VERSION.tgz \
&& mv apcu-$PHP_APCU_VERSION /usr/src/php/ext/apcu \
&& docker-php-ext-install \
apcu \
intl \
mbstring \
mysqli \
zip \
&& pecl install apcu_bc-1.0.3 \
&& docker-php-source delete \
&& php -r "readfile('https://getcomposer.org/installer');" | php -- --install-dir=/usr/local/bin --filename=composer \
&& chmod +x /usr/local/bin/composer
So it is Windows' handling of Docker volumes that causes this. @Geoffrey Brier, do you know whether Microsoft plans to improve this performance problem? Is there a tool or other workaround to improve it?
Thank you for your help.

As far as I can see, there are two things responsible for this performance:
Xdebug
Windows: no trolling intended, but it is a well-known problem that Docker handles container volumes far less efficiently on Windows than on Linux.
You have three solutions: struggle to find a method that slightly improves performance, use Linux (in a VM, for instance), or just live with it :)
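One concrete mitigation for the Windows volume issue (a sketch, not something from the original answer) is to keep Symfony's most heavily written paths out of the Windows bind mount by overlaying them with container-local volumes, so they live on the Linux VM's filesystem instead. Assuming the standard Symfony 3 layout (var/cache, var/logs, vendor) and the compose file from the question, the engine service could look like this:
engine:
  build: ./docker/engine/
  volumes:
    - ".:/home/docker:rw"
    # anonymous volumes shadow these subpaths of the bind mount,
    # keeping cache, logs and vendor on the fast VM filesystem
    - "/home/docker/var/cache"
    - "/home/docker/var/logs"
    - "/home/docker/vendor"
    - "./docker/engine/php.ini:/usr/local/etc/php/conf.d/custom.ini:ro"
  links:
    - "db:db"
  working_dir: "/home/docker"
The trade-off is that vendor/ then only exists inside the container, so composer install has to be run there rather than on the Windows host.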

Related

How can I fix a MySQL error in a PHP consumer?

I have this Dockerfile for the php-consumer:
`
FROM node:latest as node
FROM php:8.0-fpm
COPY --from=mlocati/php-extension-installer:1.2 /usr/bin/install-php-extensions /usr/local/bin/
COPY --from=node /usr/local/lib/node_modules /usr/local/lib/node_modules
COPY --from=node /usr/local/bin/node /usr/local/bin/node
RUN ln -s /usr/local/lib/node_modules/npm/bin/npm-cli.js /usr/local/bin/npm
RUN apt-get update && apt-get install -y \
libpq-dev \
wget \
zlib1g-dev \
libmcrypt-dev \
libzip-dev \
git \
php7.*-xml \
pkg-config \
libcurl4-openssl-dev \
librabbitmq-dev \
libpng-dev \
libjpeg-dev \
libfreetype6-dev
RUN apt-get update && apt-get install -y zlib1g-dev libicu-dev g++
RUN docker-php-ext-configure intl
RUN docker-php-ext-install intl
ENV CFLAGS="$CFLAGS -D_GNU_SOURCE"
RUN docker-php-ext-configure gd --with-freetype=/usr/include/ --with-jpeg=/usr/include/ && \
docker-php-ext-install mysqli pdo pdo_mysql zip curl sockets pcntl
RUN install-php-extensions \
decimal \
pdo_mysql \
intl \
amqp \
bcmath \
pcntl \
sockets \
xsl
RUN pecl install -o -f redis \
&& rm -rf /tmp/pear \
&& docker-php-ext-enable redis
RUN apt-get update && apt-get install libxslt1-dev -y && docker-php-ext-install xsl
RUN curl -S https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
COPY consume.sh /consumer/consume.sh
WORKDIR /workdir
ENTRYPOINT ["bash", "/consumer/consume.sh"]
`
This is consume.sh:
`
#!/bin/bash
sleep 10;
/workdir/bin/console messenger:consume >&1;
`
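A fixed sleep 10 can still race the database and broker on a slow start. A hedged variant of consume.sh that waits until the services actually accept TCP connections (the mysql and rabbitmq hostnames and ports are assumptions based on the compose file below) could look like:
#!/bin/bash
# wait for MySQL and RabbitMQ using bash's built-in /dev/tcp redirection
until (echo > /dev/tcp/mysql/3306) 2>/dev/null; do sleep 1; done
until (echo > /dev/tcp/rabbitmq/5672) 2>/dev/null; do sleep 1; done

/workdir/bin/console messenger:consume >&1;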
This is messenger.yaml:
`
framework:
  messenger:
    transports:
      movies:
        dsn: "%env(MESSENGER_TRANSPORT_DSN)%"
        options:
          vhost: "/"
    routing:
      App\Message\Tmdb\MovieRequestedMessage: movies
`
`
DATABASE_URL="mysql://user_name:password@mysql/database_name?serverVersion=5.7"
`
And when I dispatch the message, I get this error in php_consumer:
`02:46:24 WARNING [messenger] Error thrown while handling message App\Message\Tmdb\MovieRequestedMessage. Sending for retry #1 using 1000 ms delay. Error: "Handling "App\Message\Tmdb\MovieRequestedMessage" failed: An exception occurred in the driver: SQLSTATE[HY000] [2002] No such file or directory" ["class" => "App\Message\Tmdb\MovieRequestedMessage","retryCount" => 1,"delay" => 1000,"error" => "Handling "App\Message\Tmdb\MovieRequestedMessage" failed: An exception occurred in the driver: SQLSTATE[HY000] [2002] No such file or directory","exception" => Symfony\Component\Messenger\Exception\HandlerFailedException { …}]
docker-compose.yml:
php-consumer:
  container_name: php_consumer
  build:
    context: docker/php-consumer
  env_file:
    - .env
  volumes:
    - .:/project
  depends_on:
    - mysql
    - rabbitmq
    - php-cli
    - php-fpm
  environment:
    - "MESSENGER_TRANSPORT_DSN=${MESSENGER_TRANSPORT_DSN}"
    - "DATABASE_URL=${DATABASE_URL}"
  networks:
    - network
I tried changing mysql to the MySQL container's IP, but it doesn't work.
I would be very grateful for any clarification and for any help. Thank you so much!
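For context on the error itself: SQLSTATE[HY000] [2002] "No such file or directory" from the MySQL driver means it tried to connect through a local Unix socket, which happens when the host part of the DSN resolves to localhost (or is empty) inside the consumer container. A sketch of what has to line up, assuming a mysql service defined elsewhere in the same compose file (the service definition below is an assumption, not taken from the question):
# the consumer and the database must share a network, and the DSN host
# must be the compose service name, not localhost or a hard-coded IP
mysql:
  image: mysql:5.7          # assumed; serverVersion=5.7 in the DSN suggests it
  networks:
    - network

php-consumer:
  # ... as in the question ...
  environment:
    - "DATABASE_URL=mysql://user_name:password@mysql:3306/database_name?serverVersion=5.7"
  networks:
    - network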

Docker container failing to start with connection to database from shinyproxy

I want to connect an individual app within ShinyProxy to a Docker network.
I have a few apps on ShinyProxy; only one needs to connect to the database.
It is a PostgreSQL DB running on the same machine in Docker, set up to receive connections through the network my-docker-network.
In application.yml, should I use
container-network: my-docker-network
or
container-network-connections: ["my-docker-network"]
?
Even though I don't need internal networks in ShinyProxy, do I still need to set `internal-networking: true` under `docker:`?
At the moment the container isn't starting from ShinyProxy, but since it runs fine on its own with docker run --net my-docker-network --env-file /mypath/.Renviron my_app_image, it seems to be a connection issue. The container also works if I run it with --network="host".
I've tried various options for putting the .Renviron in different places and don't think that is the issue.
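For what it's worth, as I understand the ShinyProxy docs the two settings do different things: container-network sets the Docker network the app container is started on, while container-network-connections attaches the container to additional networks after it starts; internal-networking: true under docker: is only needed when ShinyProxy itself runs as a container and must reach the apps over that internal network. A sketch of the first variant using the names from this question (not a verified configuration):
specs:
  - id: 06_rshiny_dashboard_r_ver
    container-image: asela_r_app_r_ver:latest
    # start the app container directly on the existing network
    container-network: my-docker-network
# docker:
#   internal-networking: true   # only if ShinyProxy itself runs in Docker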
Full Dockerfile (other apps deleted and pseudonymised):
FROM rocker/r-ver:3.6.3
RUN apt-get update --allow-releaseinfo-change && apt-get install -y \
lbzip2 \
libfftw3-dev \
libgdal-dev \
libgeos-dev \
libgsl0-dev \
libgl1-mesa-dev \
libglu1-mesa-dev \
libhdf4-alt-dev \
libhdf5-dev \
libjq-dev \
liblwgeom-dev \
libpq-dev \
libproj-dev \
libprotobuf-dev \
libnetcdf-dev \
libsqlite3-dev \
libssl-dev \
libudunits2-dev \
netcdf-bin \
postgis \
protobuf-compiler \
sqlite3 \
tk-dev \
unixodbc-dev \
libssh2-1-dev \
r-cran-v8 \
libv8-dev \
net-tools \
libsqlite3-dev \
libxml2-dev
#for whatever reason it wasn't working
#RUN export ADD=shiny && bash /etc/cont-init.d/add
#install packages
RUN R -e "install.packages(c('somepackages'))"
#copy app script and variables into docker
RUN mkdir /home/app
COPY .Renviron /home/app/
COPY global.R /home/app/
COPY ui.R /home/app/
COPY server.R /home/app/
COPY Rprofile.site /usr/lib/R/etc/
#add run script
CMD ["R", "-e", "shiny::runApp('home/app')"]
The useful parts of the application.yml are below.
At the moment I always get "500 / container doesn't respond/run" on the ShinyProxy side, even though the container runs fine standalone.
proxy:
  title: apps - page
  # logo-url: https://link/to/your/logo.png
  landing-page: /
  favicon-path: favicon.ico
  heartbeat-rate: 10000
  heartbeat-timeout: 60000
  container-wait-time: 40000
  port: 8080
  authentication: simple
  admin-groups: admins
  container-log-path: /etc/shinyproxy/logs
  # Example: 'simple' authentication configuration
  users:
    - name: admin
      password: password
      groups: admins
    - name: user
      password: password
      groups: users
  # Docker configuration
  docker:
    cert-path: /home/none
    url: http://localhost:2375
    port-range-start: 20000
    # internal-networking: true
  specs:
    - id: 06_rshiny_dashboard_r_ver
      display-name: app r_ver container r_app_r_ver
      description: using simple rver set up docker and the r_app_r_ver image
      container-cmd: ["R", "-e", "shinyrunApp('/home/app')"]
      #container-cmd: ["R", "-e", "shiny::runApp('/home/app', shiny.port = 3838, shiny.host = '0.0.0.0')"]
      container-image: asela_r_app_r_ver:latest
      #container-network: my-docker-network
      container-network-connections: [ "my-docker-network" ]
      container-env-file: /home/app/.Renviron
      access-groups: [admins]
logging:
  file:
    name: /etc/shinyproxy/shinyproxy.log
The various commented-out lines show the current setup; I have tried with and without them.
Fixed it by using a Shiny Server version of the Docker image - not sure why, but this sorted out the connection issue.
Dockerfile:
FROM rocker/r-ver:3.6.3
RUN apt-get update --allow-releaseinfo-change && apt-get install -y \
lbzip2 \
libfftw3-dev \
libgdal-dev \
libgeos-dev \
libgsl0-dev \
libgl1-mesa-dev \
libglu1-mesa-dev \
libhdf4-alt-dev \
libhdf5-dev \
libjq-dev \
liblwgeom-dev \
libpq-dev \
libproj-dev \
libprotobuf-dev \
libnetcdf-dev \
libsqlite3-dev \
libssl-dev \
libudunits2-dev \
netcdf-bin \
postgis \
protobuf-compiler \
sqlite3 \
tk-dev \
unixodbc-dev \
libssh2-1-dev \
r-cran-v8 \
libv8-dev \
net-tools \
libsqlite3-dev \
libxml2-dev \
wget \
gdebi
##No version control
#then install shiny
RUN wget --no-verbose https://download3.rstudio.org/ubuntu-14.04/x86_64/VERSION -O "version.txt" && \
VERSION=$(cat version.txt) && \
wget --no-verbose "https://download3.rstudio.org/ubuntu-14.04/x86_64/shiny-server-$VERSION-amd64.deb" -O ss-latest.deb && \
gdebi -n ss-latest.deb && \
rm -f version.txt ss-latest.deb
#install packages
RUN R -e "install.packages(c('xtable', 'stringr', 'glue', 'data.table', 'pool', 'RPostgres', 'palettetown', 'deckgl', 'sf', 'shinyWidgets', 'shiny', 'stats', 'graphics', 'grDevices', 'datasets', 'utils', 'methods', 'base'))"
##No version control over
##with version control and renv.lock file
##With version control over
#copy shiny server config over
COPY shiny-server.conf /etc/shiny-server/shiny-server.conf
#avoid some errors
#already in there
#RUN echo 'sanitize_errors off;disable_protocols xdr-streaming xhr-streaming iframe-eventsource iframe-htmlfile;' >> /etc/shiny-server/shiny-server.conf
# copy the app to the image
COPY .Renviron /srv/shiny-server/
COPY global.R /srv/shiny-server/
COPY server.R /srv/shiny-server/
COPY ui.R /srv/shiny-server/
# select port
EXPOSE 3838
# Copy further configuration files into the Docker image
COPY shiny-server.sh /usr/bin/shiny-server.sh
RUN ["chmod", "+x", "/usr/bin/shiny-server.sh"]
# run app
CMD ["/usr/bin/shiny-server.sh"]
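The shiny-server.sh script copied into the image above is not shown in the post; the conventional rocker-style script is roughly the following (a sketch of the usual pattern, not the author's file):
#!/bin/sh
# make sure the per-app log directory exists, then run Shiny Server in the foreground
mkdir -p /var/log/shiny-server
exec shiny-server 2>&1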
application.yml:
proxy:
  title: apps - page
  # logo-url: https://link/to/your/logo.png
  landing-page: /
  favicon-path: favicon.ico
  heartbeat-rate: 10000
  heartbeat-timeout: 60000
  container-wait-time: 40000
  port: 8080
  authentication: simple
  admin-groups: admins
  container-log-path: /etc/shinyproxy/logs
  # Example: 'simple' authentication configuration
  users:
    - name: admin
      password: password
      groups: admins
    - name: user
      password: password
      groups: users
  # Docker configuration
  docker:
    cert-path: /home/none
    url: http://localhost:2375
    port-range-start: 20000
    # internal-networking: true
  specs:
    - id: 10_asela_rshiny_shinyserv
      display-name: ASELA Dash internal shiny server version
      description: container has own shinyserver within it functions on docker network only not on host container-network version
      container-cmd: ["/usr/bin/shiny-server.sh"]
      access-groups: [admins]
      container-image: asela_r_app_shinyserv_ver:latest
      container-network: asela-docker-net
logging:
  file:
    name: /etc/shinyproxy/shinyproxy.log

Compiling a .NET Core Console App with Npgsql and CoreRT

I'm trying to compile a .NET Core console application into a native executable (linux-x64) in an Ubuntu 18.04 Docker container, using both CoreRT and Npgsql. I'm currently using docker-compose to set up the DB and application containers.
docker-compose.yml
version: '3'
services:
  database:
    image: postgres:10
    environment:
      - POSTGRES_USER=dbuser
      - POSTGRES_PASSWORD=dbpassword
      - POSTGRES_DB=dbsample
    ports:
      - 5432:5432
    tmpfs:
      - /var/lib/postgresql/data:rw,noexec,nosuid,size=400m
    volumes:
      - ./db-init:/docker-entrypoint-initdb.d
  prototype:
    build: .
    depends_on:
      - database
    links:
      - database:database
Dockerfile
FROM ubuntu:18.04
RUN apt-get update \
&& apt-get install -y \
apt-transport-https \
build-essential \
clang \
cmake \
curl \
git-core \
gpg \
libbz2-dev \
libkrb5-dev \
libncurses5-dev \
libncursesw5-dev \
libreadline-dev \
libsqlite3-dev \
libssl-dev \
llvm \
make \
parallel \
wget \
zlib1g-dev
RUN wget -qO- https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.asc.gpg \
&& mv microsoft.asc.gpg /etc/apt/trusted.gpg.d/ \
&& wget -q https://packages.microsoft.com/config/ubuntu/18.04/prod.list \
&& mv prod.list /etc/apt/sources.list.d/microsoft-prod.list \
&& chown root:root /etc/apt/trusted.gpg.d/microsoft.asc.gpg \
&& chown root:root /etc/apt/sources.list.d/microsoft-prod.list \
&& apt-get update \
&& apt-get install -y dotnet-sdk-2.2
ENV CppCompilerAndLinker=clang-6.0
ENV DOTNET_CLI_TELEMETRY_OPTOUT=true
WORKDIR /home/app
COPY ./HelloWorld.fsproj /home/app
COPY ./nuget.config /home/app
RUN dotnet restore
COPY ./ /home/app
RUN dotnet publish -r linux-x64 -c Release -v detailed -o outside
CMD ./outside/HelloWorld
When it gets to the compilation step (dotnet publish -r linux-x64 -c Release -v detailed -o outside), it seems to enter an infinite loop, consuming all the memory available to the container until it shows this error:
Task "Exec"
"/root/.nuget/packages/runtime.linux-x64.microsoft.dotnet.ilcompiler/1.0.0-alpha-27919-02/tools/ilc" #"obj/Release/netcoreapp2.2/linux-x64/native/HelloWorld.ilc.rsp"
Killed
1:7>/root/.nuget/packages/microsoft.dotnet.ilcompiler/1.0.0-alpha-27919-02/build/Microsoft.NETCore.Native.targets(249,5): error MSB3073: The command ""/root/.nuget/packages/runtime.linux-x64.microsoft.dotnet.ilcompiler/1.0.0-alpha-27919-02/tools/ilc" #"obj/Release/netcoreapp2.2/linux-x64/native/HelloWorld.ilc.rsp"" exited with code 137. [/home/app/HelloWorld.fsproj]
Done executing task "Exec" -- FAILED.
1:7>Done building target "IlcCompile" in project "HelloWorld.fsproj" -- FAILED.
1:7>Done Building Project "/home/app/HelloWorld.fsproj" (Publish target(s)) -- FAILED.
It seems to be somehow related to the use of generics and reflection in F#. I've looked in both the Npgsql and CoreRT repos and couldn't find anyone who got them both working. Has anyone faced this problem, or managed to use Npgsql with CoreRT?
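A note on the failure mode: exit code 137 is 128 + 9, i.e. the ilc process was killed with SIGKILL, which on Linux almost always means the kernel OOM killer stopped it because the container or the Docker VM ran out of memory, rather than ilc looping forever. A quick way to confirm and to give the build more headroom (the commands are a sketch; on Docker Desktop you would also raise the VM memory in its settings):
# on the Docker host, confirm the kill came from the OOM killer
dmesg | grep -i -E "out of memory|killed process"

# build the image directly with a higher memory limit for the publish step
docker build --memory=6g --memory-swap=8g -t prototype .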

How to run only one thing as root in docker

I'm trying to create a Dockerfile that runs as a non-root user.
When I build this, everything works fine, but nginx cannot write its log file because it doesn't have enough permissions. Can I, when building a Docker image, give root permissions only to nginx?
I've tried chmod and chown on the blocked directories; it doesn't work.
FROM php:7.1-fpm-alpine
RUN apk add --no-cache shadow
RUN apk add --no-cache --virtual .ext-deps \
openssl \
unzip \
libjpeg-turbo-dev \
libwebp-dev \
libpng-dev \
freetype-dev \
libmcrypt-dev \
imagemagick-dev \
nodejs-npm \
nginx \
git \
inkscape
# imagick
RUN apk add --update --no-cache autoconf g++ imagemagick-dev libtool make pcre-dev \
&& pecl install imagick \
&& docker-php-ext-enable imagick \
&& apk del autoconf g++ libtool make pcre-dev
# Install Blackfire
RUN version=$(php -r "echo PHP_MAJOR_VERSION.PHP_MINOR_VERSION;") \
&& curl -A "Docker" -o /tmp/blackfire-probe.tar.gz -D - -L -s https://blackfire.io/api/v1/releases/probe/php/linux/amd64/$version \
&& tar zxpf /tmp/blackfire-probe.tar.gz -C /tmp \
&& mv /tmp/blackfire-*.so $(php -r "echo ini_get('extension_dir');")/blackfire.so \
&& printf "extension=blackfire.so\nblackfire.agent_socket=tcp://blackfire:8707\n" > $PHP_INI_DIR/conf.d/blackfire.ini
RUN apk add -y icu-dev \
&& docker-php-ext-configure intl \
&& docker-php-ext-install intl
RUN docker-php-ext-configure pdo_mysql && \
docker-php-ext-configure opcache && \
docker-php-ext-configure exif && \
docker-php-ext-configure pdo && \
docker-php-ext-configure zip && \
docker-php-ext-configure gd \
--with-jpeg-dir=/usr/include --with-png-dir=/usr/include --with-webp-dir=/usr/include --with-freetype-dir=/usr/include && \
docker-php-ext-configure sockets && \
docker-php-ext-configure mcrypt
RUN docker-php-ext-install pdo zip pdo_mysql opcache exif gd sockets mcrypt && \
docker-php-source delete
RUN ln -s /usr/bin/php7 /usr/bin/php && \
curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer && \
mkdir -p /run/nginx
COPY ./init.sh /
COPY ./default.conf /etc/nginx/conf.d/default.conf
COPY ./.env /
RUN chmod +x /init.sh
EXPOSE 80
RUN addgroup -g 1001 node \
&& adduser -u 1001 -G node -s /bin/sh -D node
ARG UID=1001
ARG GID=1001
ENV UID=${UID}
ENV GID=${GID}
RUN usermod -u $UID node \
&& groupmod -g $GID node
RUN chown 1001:1001 /var/lib/nginx -R
RUN mkdir -p /var/tmp/nginx
RUN chown 1001:1001 /var/tmp/nginx -R
USER node
ENTRYPOINT [ "/init.sh" ]
There are quite a few unknowns in your question, for example the contents of your default.conf file. By default the nginx logs are stored in /var/log/nginx, but I'll assume you're overriding that in the configuration.
The next thing is that the nginx master process needs to run as root if you want it to be able to bind to system ports (0-1023), so if you are using nginx as a web server and intend to use ports 80 and 443, you should stick with running the nginx master process as root.
In case you plan to use other ports and are set on the idea of running the master process as non-root, you can check this answer for suggestions on how to do that: https://stackoverflow.com/a/42329561/5359953
I am using the term master process a lot here because nginx spawns worker processes to handle the actual requests, and those can run as a different user (defined in the nginx configuration file).
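If you do go the non-root route, every path nginx writes to (pid file, logs, client body temp dir) has to be writable by that user, and the listen port must be above 1023. A sketch of the relevant directives (the paths and the port are assumptions, not taken from your default.conf):
# main context (nginx.conf), not inside conf.d/default.conf
pid        /var/tmp/nginx/nginx.pid;
error_log  /var/tmp/nginx/error.log warn;

# server block (e.g. conf.d/default.conf)
server {
    listen 8080;                                    # unprivileged port
    access_log            /var/tmp/nginx/access.log;
    client_body_temp_path /var/tmp/nginx/client_body;
    # ... the rest of your existing configuration ...
}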
I found the solution. I just changed RUN chown 1001:1001 /var/lib/nginx -R to RUN chown -R 1001:1001 /var/. That works fine.
RUN chown -R 1001:1001 /var/ can sometimes actually be a bad decision.
You can try adding permissions more narrowly, like this:
RUN chown -R 1001:1001 /var/tmp/nginx
RUN chown -R 1001:1001 /var/lib/nginx
RUN chown -R 1001:1001 /var/log/nginx
RUN chown -R 1001:1001 /run/nginx
I guess RUN chown 1001:1001 /var/lib/nginx -R worked incorrectly because I put the -R flag too late.

Transfer Large Files Asynchronously In Flask

What is the best approach for transferring large files asynchronously in Flask? I have read this article, but I want to know if there is a way to do this without using Celery.
Flask is a synchronous framework; you can try Flask + gevent and streaming responses, as explained here: http://flask.pocoo.org/docs/0.12/patterns/streaming/
Anyway, if you want to handle uploads of very large files properly, I suggest a different approach: instead of trying to do asynchronous networking with a synchronous framework, delegate the transfer to the nginx upload_module, as explained here: http://blog.thisisfeifan.com/2013/03/nginx-upload-module-vs-flask.html
nginx is faster and won't load the files into memory, which regular frameworks like Flask or Django will do even in asynchronous mode. Remember to configure the upload_pass directive so that nginx forwards the post-upload request to your Flask app. The only caveat is that you'll have to learn how to compile a full-fledged nginx from source; here is an example of a working Dockerfile:
FROM buildpack-deps:jessie
##### NGINX #####
# Base Stuff
RUN apt-get update && apt-get install -y -qq \
libssl-dev
# Nginx with upload_module and upload_progress_module
# "Stable version".
ENV ZLIB_VERSION 1.2.11
ENV PCRE_VERSION 8.39
ENV NGX_UPLOAD_MODULE_VERSION 2.2
ENV NGX_UPLOAD_PROGRESS_VERSION 0.9.1
ENV NGX_HEADERS_MORE_VERSION 0.32
ENV NGX_SPPEDPAGE_VERSION 1.11.33.4
ENV NGINX_VERSION 1.11.8
RUN cd /tmp \
&& wget http://nginx.org/download/nginx-${NGINX_VERSION}.tar.gz \
&& tar xvf nginx-${NGINX_VERSION}.tar.gz \
&& wget https://github.com/openresty/headers-more-nginx-module/archive/v${NGX_HEADERS_MORE_VERSION}.tar.gz \
&& tar -xzvf v${NGX_HEADERS_MORE_VERSION}.tar.gz \
&& wget https://github.com/pagespeed/ngx_pagespeed/archive/latest-stable.tar.gz \
&& tar -xzvf latest-stable.tar.gz \
&& wget https://dl.google.com/dl/page-speed/psol/${NGX_SPPEDPAGE_VERSION}.tar.gz \
&& tar -xzvf ${NGX_SPPEDPAGE_VERSION}.tar.gz \
&& mv psol ngx_pagespeed-latest-stable/ \
&& git clone -b ${NGX_UPLOAD_MODULE_VERSION} https://github.com/Austinb/nginx-upload-module \
&& wget http://zlib.net/zlib-${ZLIB_VERSION}.tar.gz \
&& tar xvf zlib-${ZLIB_VERSION}.tar.gz \
&& wget ftp://ftp.csx.cam.ac.uk/pub/software/programming/pcre/pcre-${PCRE_VERSION}.tar.bz2 \
&& tar -xjf pcre-${PCRE_VERSION}.tar.bz2 \
&& wget https://github.com/masterzen/nginx-upload-progress-module/archive/v${NGX_UPLOAD_PROGRESS_VERSION}.tar.gz \
&& tar xvf v${NGX_UPLOAD_PROGRESS_VERSION}.tar.gz \
&& cd nginx-${NGINX_VERSION} \
&& ./configure \
--with-pcre=../pcre-${PCRE_VERSION}/ \
--with-zlib=../zlib-${ZLIB_VERSION}/ \
--add-module=../nginx-upload-module \
--add-module=../nginx-upload-progress-module-${NGX_UPLOAD_PROGRESS_VERSION} \
--add-module=../ngx_pagespeed-latest-stable \
--add-module=../headers-more-nginx-module-${NGX_HEADERS_MORE_VERSION} \
--with-select_module \
--with-poll_module \
--with-file-aio \
--with-http_ssl_module \
--with-ipv6 \
--with-pcre-jit \
--with-http_gzip_static_module \
--with-http_ssl_module \
--with-http_v2_module \
--with-http_realip_module \
--user=nginx --group=nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --with-cpu-opt=CPU --with-ld-opt="-Wl,-E" \
&& make \
&& make install
EXPOSE 80 443
CMD ["nginx", "-g", "daemon off;"]
NOTE: this image does not include an nginx.conf or default.conf; you have to provide them yourself.
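Since the image ships no configuration, a minimal default.conf sketch for the upload_pass flow could look like the following (the location names, upload directory and Flask upstream address are assumptions, not part of the original answer):
server {
    listen 80;

    # nginx receives the body and writes the file to disk itself
    location /upload {
        upload_store /tmp/nginx_uploads;
        upload_set_form_field $upload_field_name.name "$upload_file_name";
        upload_set_form_field $upload_field_name.path "$upload_tmp_path";
        # once the file is stored, pass a small metadata-only request on
        upload_pass /internal_upload;
    }

    # Flask only ever sees the metadata, never the file body
    location /internal_upload {
        proxy_pass http://127.0.0.1:5000;   # assumed Flask/gunicorn upstream
    }
}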
