I am preparing a Docker image that uses a static build of Nginx:
RUN set -ex \
&& wget -qO- https://github.com/nginx/nginx/archive/"$NGINX_HASH".tar.gz | tar zx --strip-components=1 \
&& ./auto/configure --without-http_rewrite_module --without-http_gzip_module \
&& make CFLAGS="-O2 -s" LDFLAGS="-static" -j$(nproc) \
&& ldd ./objs/nginx
Unfortunately, even with the -static flag, it still seems to be dynamically linked against musl:
ldd ./objs/nginx
/lib/ld-musl-x86_64.so.1 (0x7fcce5ebe000)
libc.musl-x86_64.so.1 => /lib/ld-musl-x86_64.so.1 (0x7fcce5ebe000)
What do I need to do to link statically?
Solved with
RUN ./auto/configure --without-http_rewrite_module --without-http_gzip_module --with-cc-opt="-O2" --with-ld-opt="-s -static"
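For context: nginx's generated Makefile does not reference an LDFLAGS variable in its link rule, so passing LDFLAGS on the make command line never reaches the final link; linker flags have to go in at configure time via --with-ld-opt. A quick way to verify the result (exact wording varies by toolchain):
file ./objs/nginx   # should report "statically linked"
ldd ./objs/nginx    # should now fail or report the binary as static instead of listing libraries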
I'm trying to compile a .NET Core console application into a native executable (linux-x64) on an Ubuntu 18.04 Docker container, using both CoreRT and Npgsql. I'm currently using docker-compose to set up the DB and application containers.
docker-compose.yml
version: '3'
services:
database:
image: postgres:10
environment:
- POSTGRES_USER=dbuser
- POSTGRES_PASSWORD=dbpassword
- POSTGRES_DB=dbsample
ports:
- 5432:5432
tmpfs:
- /var/lib/postgresql/data:rw,noexec,nosuid,size=400m
volumes:
- ./db-init:/docker-entrypoint-initdb.d
prototype:
build: .
depends_on:
- database
links:
- database:database
Dockerfile
FROM ubuntu:18.04
RUN apt-get update \
&& apt-get install -y \
apt-transport-https \
build-essential \
clang \
cmake \
curl \
git-core \
gpg \
libbz2-dev \
libkrb5-dev \
libncurses5-dev \
libncursesw5-dev \
libreadline-dev \
libsqlite3-dev \
libssl-dev \
llvm \
make \
parallel \
wget \
zlib1g-dev
RUN wget -qO- https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.asc.gpg \
&& mv microsoft.asc.gpg /etc/apt/trusted.gpg.d/ \
&& wget -q https://packages.microsoft.com/config/ubuntu/18.04/prod.list \
&& mv prod.list /etc/apt/sources.list.d/microsoft-prod.list \
&& chown root:root /etc/apt/trusted.gpg.d/microsoft.asc.gpg \
&& chown root:root /etc/apt/sources.list.d/microsoft-prod.list \
&& apt-get update \
&& apt-get install -y dotnet-sdk-2.2
ENV CppCompilerAndLinker=clang-6.0
ENV DOTNET_CLI_TELEMETRY_OPTOUT=true
WORKDIR /home/app
COPY ./HelloWorld.fsproj /home/app
COPY ./nuget.config /home/app
RUN dotnet restore
COPY ./ /home/app
RUN dotnet publish -r linux-x64 -c Release -v detailed -o outside
CMD ./outside/HelloWorld
When it gets to the compile step (dotnet publish -r linux-x64 -c Release -v detailed -o outside), it enters an infinite loop, consuming all the memory available to the container, until it shows this error:
Task "Exec"
"/root/.nuget/packages/runtime.linux-x64.microsoft.dotnet.ilcompiler/1.0.0-alpha-27919-02/tools/ilc" #"obj/Release/netcoreapp2.2/linux-x64/native/HelloWorld.ilc.rsp"
Killed
1:7>/root/.nuget/packages/microsoft.dotnet.ilcompiler/1.0.0-alpha-27919-02/build/Microsoft.NETCore.Native.targets(249,5): error MSB3073: The command ""/root/.nuget/packages/runtime.linux-x64.microsoft.dotnet.ilcompiler/1.0.0-alpha-27919-02/tools/ilc" @"obj/Release/netcoreapp2.2/linux-x64/native/HelloWorld.ilc.rsp"" exited with code 137. [/home/app/HelloWorld.fsproj]
Done executing task "Exec" -- FAILED.
1:7>Done building target "IlcCompile" in project "HelloWorld.fsproj" -- FAILED.
1:7>Done Building Project "/home/app/HelloWorld.fsproj" (Publish target(s)) -- FAILED.
It seems to be somehow related to the usage of generics and reflection in F#. I've looked in both the Npgsql and CoreRT repos and couldn't find anyone close to getting them both working. Has anyone faced this problem, or managed to use Npgsql with CoreRT?
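One thing worth ruling out before blaming generics or reflection: exit code 137 is 128 + 9 (SIGKILL), which almost always means the kernel's OOM killer terminated ilc, consistent with the memory exhaustion described above. The compiler can apparently need several gigabytes, so try giving the container more memory first. A minimal sketch (the 4g figure is an assumption; mem_limit is honored by the version 2 compose format, while version 3 reserves resource limits for swarm's deploy section):
version: '2.4'
services:
  prototype:
    build: .
    mem_limit: 4g    # hypothetical value; raise until ilc stops getting killed
Also remember that Docker Desktop caps the VM's total memory, so the host-side allocation may need raising as well.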
I'm trying to create a Dockerfile which runs as a non-root user.
When I build it, everything works fine, but nginx cannot write its log file because it doesn't have enough permissions. Can I, when building a Docker image, give root permissions only to nginx?
I've tried chmod and chown on the blocked directories; it doesn't work.
FROM php:7.1-fpm-alpine
RUN apk add --no-cache shadow
RUN apk add --no-cache --virtual .ext-deps \
openssl \
unzip \
libjpeg-turbo-dev \
libwebp-dev \
libpng-dev \
freetype-dev \
libmcrypt-dev \
imagemagick-dev \
nodejs-npm \
nginx \
git \
inkscape
# imagick
RUN apk add --update --no-cache autoconf g++ imagemagick-dev libtool make pcre-dev \
&& pecl install imagick \
&& docker-php-ext-enable imagick \
&& apk del autoconf g++ libtool make pcre-dev
# Install Blackfire
RUN version=$(php -r "echo PHP_MAJOR_VERSION.PHP_MINOR_VERSION;") \
&& curl -A "Docker" -o /tmp/blackfire-probe.tar.gz -D - -L -s https://blackfire.io/api/v1/releases/probe/php/linux/amd64/$version \
&& tar zxpf /tmp/blackfire-probe.tar.gz -C /tmp \
&& mv /tmp/blackfire-*.so $(php -r "echo ini_get('extension_dir');")/blackfire.so \
&& printf "extension=blackfire.so\nblackfire.agent_socket=tcp://blackfire:8707\n" > $PHP_INI_DIR/conf.d/blackfire.ini
RUN apk add --no-cache icu-dev \
&& docker-php-ext-configure intl \
&& docker-php-ext-install intl
RUN docker-php-ext-configure pdo_mysql && \
docker-php-ext-configure opcache && \
docker-php-ext-configure exif && \
docker-php-ext-configure pdo && \
docker-php-ext-configure zip && \
docker-php-ext-configure gd \
--with-jpeg-dir=/usr/include --with-png-dir=/usr/include --with-webp-dir=/usr/include --with-freetype-dir=/usr/include && \
docker-php-ext-configure sockets && \
docker-php-ext-configure mcrypt
RUN docker-php-ext-install pdo zip pdo_mysql opcache exif gd sockets mcrypt && \
docker-php-source delete
RUN ln -s /usr/bin/php7 /usr/bin/php && \
curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer && \
mkdir -p /run/nginx
COPY ./init.sh /
COPY ./default.conf /etc/nginx/conf.d/default.conf
COPY ./.env /
RUN chmod +x /init.sh
EXPOSE 80
RUN addgroup -g 1001 node \
&& adduser -u 1001 -G node -s /bin/sh -D node
ARG UID=1001
ARG GID=1001
ENV UID=${UID}
ENV GID=${GID}
RUN usermod -u $UID node \
&& groupmod -g $GID node
RUN chown 1001:1001 /var/lib/nginx -R
RUN mkdir -p /var/tmp/nginx
RUN chown 1001:1001 /var/tmp/nginx -R
USER node
ENTRYPOINT [ "/init.sh" ]
There are quite a few unknowns in your question, for example, the contents of your default.conf file. By default the nginx logs are stored in /var/log/nginx, but I'll assume you're overriding that in the configuration.
The next thing is that the nginx master process needs to run as root if you want it to be able to bind to system ports (0-1023), so if you are using nginx as a web server and intend to use ports 80 and 443, you should stick with running the master process as root.
In case you plan to use other ports and are set on the idea of running the master process as non-root, then you can check this answer for suggestions on how to do that - https://stackoverflow.com/a/42329561/5359953
I am using the term master process a lot here because nginx spawns worker processes to handle the actual requests, and those can run as a different user (defined in the nginx configuration file).
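If you do go the non-root route, the usual ingredients are an unprivileged listen port plus pid and log paths the user can write to. A minimal sketch of such an nginx.conf (paths and port are assumptions, adjust to your setup; temp paths such as client_body_temp may need the same treatment):
pid /tmp/nginx.pid;                       # must be writable by the unprivileged user
error_log /var/log/nginx/error.log;       # chown'd to 1001:1001 in the Dockerfile above
events {}
http {
    access_log /var/log/nginx/access.log;
    server {
        listen 8080;                      # ports >= 1024 need no root
    }
}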
I found the solution. I just changed RUN chown 1001:1001 /var/lib/nginx -R to RUN chown -R 1001:1001 /var/. That works fine.
RUN chown -R 1001:1001 /var/
Sometimes that will actually be a bad decision. You can try adding permissions more selectively, like this:
RUN chown -R 1001:1001 /var/tmp/nginx
RUN chown -R 1001:1001 /var/lib/nginx
RUN chown -R 1001:1001 /var/log/nginx
RUN chown -R 1001:1001 /run/nginx
I guess RUN chown 1001:1001 /var/lib/nginx -R worked wrong because I put the -R flag too late.
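That guess is plausible: Alpine's chown is a BusyBox applet, and BusyBox generally expects options before operands, while GNU coreutils permutes arguments so a trailing -R still works. In other words:
chown -R 1001:1001 /var/lib/nginx    # recursive, as intended
chown 1001:1001 /var/lib/nginx -R    # on BusyBox, "-R" is likely parsed as an operand, not a flag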
How can I compile libgnat into a single LLVM bitcode file? The latest dragonegg release is very old, so I provide a Dockerfile to make testing easier. My end goal is to run Ada in LLVM IR bitcode interpreters.
Dockerfile for the latest official dragonegg release
FROM ubuntu:trusty
COPY . /usr/src/workdir
WORKDIR /usr/src/workdir
RUN apt-get update \
&& apt-get -y install build-essential gnat-4.6 libgmp-dev libmpfr-dev libmpc-dev libz-dev gcc-4.6-plugin-dev
# libz-dev for ld when compiling dragonegg 3.3
# gcc-4.6-plugin-dev needed when compiling dragonegg 3.3
RUN tar -xzf gcc-4.6.4.tar.gz \
&& cd gcc-4.6.4 \
&& mkdir build \
&& cd build \
&& CC=gcc-4.6 ../configure --disable-multilib --enable-languages=ada,c,c++ --prefix=/opt/gcc-4.6.4 \
&& make -j4 \
&& make install
RUN tar -xzf clang+llvm-3.3-amd64-Ubuntu-12.04.2.tar.gz \
&& mv clang+llvm-3.3-amd64-Ubuntu-12.04.2 /opt/llvm-3.3
ENV PATH="/opt/llvm-3.3/bin:/opt/gcc-4.6.4/bin:${PATH}"
RUN tar -xzf dragonegg-3.3.src.tar.gz \
&& mv dragonegg-3.3.src dragonegg-3.3 \
&& cd dragonegg-3.3 \
&& GCC=/opt/gcc-4.6.4/bin/gcc make \
&& cp dragonegg.so /opt/dragonegg.so
download gcc-4.6.4.tar.gz
download clang+llvm-3.3-amd64-Ubuntu-12.04.2.tar.gz
download dragonegg-3.3.src.tar.gz
hello.adb
with Ada.Text_IO;
procedure Hello is
begin
Ada.Text_IO.Put_Line("Hello world from Ada (dragonegg)!");
end Hello;
Run gcc hello.adb -S -O1 -o hello.ll -fplugin=/opt/dragonegg.so -fplugin-arg-dragonegg-emit-ir to compile the hello.adb file. When I try to build the binary with llc -filetype=obj hello.ll and gcc hello.o, I get the following error:
/usr/lib/x86_64-linux-gnu/crt1.o: In function `_start':
(.text+0x20): undefined reference to `main'
hello.o: In function `_ada_hello':
hello.ll:(.text+0xb): undefined reference to `ada__text_io__put_line__2'
collect2: ld returned 1 exit status
The error message indicates that the Ada runtime library is missing. Currently I have no idea how I can compile libgnat into a single LLVM bitcode file so that I can link it with the program.
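For context, two separate pieces are missing from that link. The undefined main is normally produced by the GNAT binder: gnatbind reads hello.ali and generates a b~hello.adb defining main plus the elaboration code. And ada__text_io__put_line lives in libgnat. A hedged sketch of the extra steps (the adalib path is an assumption for this GCC 4.6 layout, and flags that gnatlink would normally add are omitted):
gnatbind hello.ali
gcc -S -O1 b~hello.adb -fplugin=/opt/dragonegg.so -fplugin-arg-dragonegg-emit-ir -o b_hello.ll
llc -filetype=obj hello.ll && llc -filetype=obj b_hello.ll
gcc hello.o b_hello.o -L/opt/gcc-4.6.4/lib/gcc/x86_64-linux-gnu/4.6.4/adalib -lgnat
For the single-bitcode goal, the same dragonegg invocation would in principle have to be run over every libgnat source file and the results merged with llvm-link, which is presumably why no prebuilt libgnat bitcode exists.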
Why are you using DragonEgg? That's well and truly dead! See https://github.com/AdaCore/gnat-llvm instead.
What is the best approach for transferring large files asynchronously in Flask? I have read this article, but I want to know if there is a way to do this without using Celery.
Flask is a synchronous framework; you can try Flask + gevent and streaming responses, as explained here: http://flask.pocoo.org/docs/0.12/patterns/streaming/.
Anyway, if you want to handle very large uploads properly, I suggest a different approach: instead of trying to do asynchronous networking with a synchronous framework, delegate the transfer to Nginx's upload_module, as explained here: http://blog.thisisfeifan.com/2013/03/nginx-upload-module-vs-flask.html
Nginx is faster and won't load the files into memory, which regular frameworks like Flask or Django will do even in asynchronous mode. Remember to configure the upload_pass directive so the post-upload request is handed off to Flask. The only caveat is that you'll have to learn how to compile a full-fledged Nginx from source; here is an example of a working Dockerfile:
FROM buildpack-deps:jessie
##### NGINX #####
# Base Stuff
RUN apt-get update && apt-get install -y -qq \
libssl-dev
# Nginx with upload_module and upload_progress_module
# "Stable version".
ENV ZLIB_VERSION 1.2.11
ENV PCRE_VERSION 8.39
ENV NGX_UPLOAD_MODULE_VERSION 2.2
ENV NGX_UPLOAD_PROGRESS_VERSION 0.9.1
ENV NGX_HEADERS_MORE_VERSION 0.32
ENV NGX_SPEEDPAGE_VERSION 1.11.33.4
ENV NGINX_VERSION 1.11.8
RUN cd /tmp \
&& wget http://nginx.org/download/nginx-${NGINX_VERSION}.tar.gz \
&& tar xvf nginx-${NGINX_VERSION}.tar.gz \
&& wget https://github.com/openresty/headers-more-nginx-module/archive/v${NGX_HEADERS_MORE_VERSION}.tar.gz \
&& tar -xzvf v${NGX_HEADERS_MORE_VERSION}.tar.gz \
&& wget https://github.com/pagespeed/ngx_pagespeed/archive/latest-stable.tar.gz \
&& tar -xzvf latest-stable.tar.gz \
&& wget https://dl.google.com/dl/page-speed/psol/${NGX_SPEEDPAGE_VERSION}.tar.gz \
&& tar -xzvf ${NGX_SPEEDPAGE_VERSION}.tar.gz \
&& mv psol ngx_pagespeed-latest-stable/ \
&& git clone -b ${NGX_UPLOAD_MODULE_VERSION} https://github.com/Austinb/nginx-upload-module \
&& wget http://zlib.net/zlib-${ZLIB_VERSION}.tar.gz \
&& tar xvf zlib-${ZLIB_VERSION}.tar.gz \
&& wget ftp://ftp.csx.cam.ac.uk/pub/software/programming/pcre/pcre-${PCRE_VERSION}.tar.bz2 \
&& tar -xjf pcre-${PCRE_VERSION}.tar.bz2 \
&& wget https://github.com/masterzen/nginx-upload-progress-module/archive/v${NGX_UPLOAD_PROGRESS_VERSION}.tar.gz \
&& tar xvf v${NGX_UPLOAD_PROGRESS_VERSION}.tar.gz \
&& cd nginx-${NGINX_VERSION} \
&& ./configure \
--with-pcre=../pcre-${PCRE_VERSION}/ \
--with-zlib=../zlib-${ZLIB_VERSION}/ \
--add-module=../nginx-upload-module \
--add-module=../nginx-upload-progress-module-${NGX_UPLOAD_PROGRESS_VERSION} \
--add-module=../ngx_pagespeed-latest-stable \
--add-module=../headers-more-nginx-module-${NGX_HEADERS_MORE_VERSION} \
--with-select_module \
--with-poll_module \
--with-file-aio \
--with-http_ssl_module \
--with-ipv6 \
--with-pcre-jit \
--with-http_gzip_static_module \
--with-http_v2_module \
--with-http_realip_module \
--user=nginx --group=nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --with-ld-opt="-Wl,-E" \
&& make \
&& make install
EXPOSE 80 443
CMD ["nginx", "-g", "daemon off;"]
Note: this image does not include nginx.conf or default.conf; you will need to provide them yourself.
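For reference, a minimal default.conf sketch wiring the upload module to a Flask backend (the paths and the backend address are assumptions; the directive names follow the nginx-upload-module documentation):
server {
    listen 80;
    location /upload {
        upload_pass /internal;               # where nginx sends the request after storing the file
        upload_store /tmp/uploads;           # uploads are streamed to disk here, never into Flask
        upload_pass_form_field "^.*$";       # forward the original form fields
        upload_cleanup 400 404 499 500-505;  # delete stored files on failed responses
    }
    location /internal {
        proxy_pass http://flask:5000;        # hypothetical Flask container
    }
}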
I'm using Docker to work on a Symfony 3 project. Here is the stack:
- a custom PHP 7.1 FPM image; here's the Dockerfile:
FROM php:7.1.0-fpm
MAINTAINER xxxxx xxxxxx <xxxx.xxxxxx@gmail.com>
ENV PHP_APCU_VERSION 5.1.8
ENV PHP_XDEBUG_VERSION 2.5.0
RUN apt-get update \
&& apt-get install -y \
libicu-dev \
zlib1g-dev \
&& docker-php-source extract \
&& curl -L -o /tmp/apcu-$PHP_APCU_VERSION.tgz https://pecl.php.net/get/apcu-$PHP_APCU_VERSION.tgz \
&& curl -L -o /tmp/xdebug-$PHP_XDEBUG_VERSION.tgz http://xdebug.org/files/xdebug-$PHP_XDEBUG_VERSION.tgz \
&& tar xfz /tmp/apcu-$PHP_APCU_VERSION.tgz \
&& tar xfz /tmp/xdebug-$PHP_XDEBUG_VERSION.tgz \
&& rm -r \
/tmp/apcu-$PHP_APCU_VERSION.tgz \
/tmp/xdebug-$PHP_XDEBUG_VERSION.tgz \
&& mv apcu-$PHP_APCU_VERSION /usr/src/php/ext/apcu \
&& mv xdebug-$PHP_XDEBUG_VERSION /usr/src/php/ext/xdebug \
&& docker-php-ext-install \
apcu \
intl \
mbstring \
mysqli \
xdebug \
zip \
&& pecl install apcu_bc-1.0.3 \
&& docker-php-source delete \
&& php -r "readfile('https://getcomposer.org/installer');" | php -- --install-dir=/usr/local/bin --filename=composer \
&& chmod +x /usr/local/bin/composer
- the latest nginx image
- mysql:8.0.0
I use docker-compose to build those 3 containers; here's the docker-compose.yml:
front:
image: nginx
ports:
- "81:80"
links:
- "engine:engine"
volumes:
- ".:/home/docker:ro"
- "./docker/front/default.conf:/etc/nginx/conf.d/default.conf:ro"
engine:
build: ./docker/engine/
volumes:
- ".:/home/docker:rw"
- "./docker/engine/php.ini:/usr/local/etc/php/conf.d/custom.ini:ro"
links:
- "db:db"
working_dir: "/home/docker"
db:
image: mysql:8.0.0
ports:
- "3306:3306"
environment:
- MYSQL_ROOT_PASSWORD=pwd
- MYSQL_USER=myUSer
- MYSQL_PASSWORD=pwd
- MYSQL_DATABASE=bddProject
The first load, without cache, takes 1700 ms (the profiler screenshots are not reproduced here).
With cache it is faster, but about half of the remaining time is still initialisation.
So what kind of problem could slow down the page rendering of my project?
I'm running the latest Docker version with 2 GB of RAM, on Windows with Hyper-V.
Thank you for your help.
So I made another image without Xdebug, and the result is the same (700 ms with cache):
My DockerFile :
FROM php:7.1.0-fpm
MAINTAINER XXXXX XXXXXX <XXXXXX.XXXXXX@gmail.com>
ENV PHP_APCU_VERSION 5.1.8
RUN apt-get update \
&& apt-get install -y \
libicu-dev \
zlib1g-dev \
&& docker-php-source extract \
&& curl -L -o /tmp/apcu-$PHP_APCU_VERSION.tgz https://pecl.php.net/get/apcu-$PHP_APCU_VERSION.tgz \
&& tar xfz /tmp/apcu-$PHP_APCU_VERSION.tgz \
&& rm -r \
/tmp/apcu-$PHP_APCU_VERSION.tgz \
&& mv apcu-$PHP_APCU_VERSION /usr/src/php/ext/apcu \
&& docker-php-ext-install \
apcu \
intl \
mbstring \
mysqli \
zip \
&& pecl install apcu_bc-1.0.3 \
&& docker-php-source delete \
&& php -r "readfile('https://getcomposer.org/installer');" | php -- --install-dir=/usr/local/bin --filename=composer \
&& chmod +x /usr/local/bin/composer
So it's Windows' handling of Docker volumes that causes this. @Geoffrey Brier, do you know if Microsoft has planned to improve this performance problem?
Is there a tool or anything else to improve it?
Thank you for your help.
As far as I can see, there are two things responsible for this performance:
Xdebug
Windows: no trolling, but it's a well-known problem that Docker handles container volumes on Windows far less efficiently than on Linux.
You have three solutions: struggle to find a method that slightly improves performance, use Linux (in a VM, for instance), or deal with it :)
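If moving off Windows is not an option, one common mitigation is to keep Symfony's hottest paths (vendor/ and var/) out of the slow bind mount by shadowing them with container-local volumes. A sketch against the compose file above (the directory names assume a standard Symfony layout):
engine:
  build: ./docker/engine/
  volumes:
    - ".:/home/docker:rw"
    - "/home/docker/vendor"    # anonymous volume: stays inside the Linux VM
    - "/home/docker/var"       # cache/logs avoid the Windows filesystem round-trip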