So, I have this and it works, but I want to change it so that the file is downloaded and unpacked directly during the build:
Dockerfile
FROM wordpress:fpm
# Copying themes from local
COPY ./wordpress/ /var/www/html/wp-content/themes/wordpress/
RUN chmod -R 777 /var/www/html/
How can I download the file from the site and unzip it into the appropriate folder as part of the build?
docker-compose.yml
wordpress:
  build: .
  links:
    - db:mysql
nginx:
  image: raulr/nginx-wordpress
  links:
    - wordpress
  ports:
    - "8080:80"
  volumes_from:
    - wordpress
db:
  image: mariadb
  environment:
    MYSQL_ROOT_PASSWORD: qwerty
I tried:
# install unzip and wget
RUN apt-get update && \
    apt-get install -y unzip wget && \
    rm -rf /var/lib/apt/lists/*
RUN wget -O /var/www/html/type.zip http://wp-templates.ru/download/2405 && \
    unzip /var/www/html/type.zip -d /var/www/html/wp-content/themes/ && \
    rm /var/www/html/type.zip || true
The Dockerfile ADD instruction has native support for extracting local .tar.gz archives as it copies them.
So you can change the archive type from .zip to .tar.gz (maybe zip will also be supported in future versions, I'm not sure) and use ADD instead of COPY.
Read more about ADD
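For example, a minimal sketch of that approach (theme.tar.gz is a hypothetical archive placed next to the Dockerfile; note that ADD only auto-extracts local tar archives, not files fetched from remote URLs):
FROM wordpress:fpm
# ADD unpacks the local .tar.gz into the destination directory as it copies
ADD ./theme.tar.gz /var/www/html/wp-content/themes/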
It's best to use a multi-stage Docker build. You will need a recent version of Docker with BuildKit enabled. Then do something along these lines:
# syntax=docker/dockerfile:1
FROM alpine:latest AS unzipper
RUN apk add unzip wget curl
RUN mkdir /opt/ && \
    curl <some-url> | tar xvzf - -C /opt
FROM wordpress:fpm
COPY --from=unzipper /opt/ /var/www/html/wp-content/themes/wordpress/
Even better: if there is already a Docker image containing the files you want, you just need the COPY --from line, giving it the image name instead of a stage name.
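For example (the image name here is hypothetical, purely to show the syntax):
# copy files straight out of a published image instead of a build stage
COPY --from=some/prebuilt-themes:latest /themes/ /var/www/html/wp-content/themes/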
Finally, don't worry about any mess in the first stage, as it is discarded when the build completes; the fact that it's Alpine, and that apk runs without --no-cache, is irrelevant, and none of the installed packages end up in the final image.
I found more guidance for fetching remote archives in the Docker documentation:
Because image size matters, using ADD to fetch packages from remote URLs is strongly discouraged; you should use curl or wget instead. That way you can delete the files you no longer need after they've been extracted and you don't have to add another layer in your image.
For example, you should avoid doing things like:
ADD https://example.com/big.tar.xz /usr/src/things/
RUN tar -xJf /usr/src/things/big.tar.xz -C /usr/src/things
RUN make -C /usr/src/things all
And instead, do something like:
RUN mkdir -p /usr/src/things \
&& curl -SL https://example.com/big.tar.xz \
| tar -xJC /usr/src/things \
&& make -C /usr/src/things all
Related
I'm trying to install WordPress plugins by executing a script right after the wordpress image is built.
Here is my Dockerfile:
FROM wordpress
# Update aptitude with new repo
RUN apt-get update
# Install software
RUN apt-get install -y sudo vim curl less git python-dev python3.5
# Add WP-CLI
RUN curl -o /bin/wp-cli.phar https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar
COPY wp-su.sh /bin/wp
RUN chmod +x /bin/wp-cli.phar /bin/wp && chown www-data:www-data /bin/wp-cli.phar /bin/wp
# Copy scripts into the image
COPY install.py /usr/src/wordpress
COPY plugins.json /usr/src/wordpress
COPY wait-for-it.sh /usr/src/wordpress
RUN chmod +x /usr/src/wordpress/install.py
RUN chmod +x /usr/src/wordpress/plugins.json
RUN chmod +x /usr/src/wordpress/wait-for-it.sh
COPY --chown=www-data:www-data uploads/ /usr/src/wordpress/wp-content/uploads
# Cleanup
RUN apt-get clean
RUN rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
ENTRYPOINT ["/usr/src/wordpress/wait-for-it.sh", "db:3306", "--", "python" , "/usr/src/wordpress/install.py" ]
CMD ["apache2-foreground"]
Once the scripts runs I get the following error:
Error: This does not seem to be a WordPress installation.
Pass --path=`path/to/wordpress` or run `wp core download`.
I tried doing what the error suggested but it did not work. From my understanding, the WordPress installation should be located at /usr/src/wordpress along with all the scripts I copied into it. Is what I'm doing correct? Should it even be possible to do what I'm attempting? Any help would be appreciated.
Note: This image is run from docker-compose.yml
UPDATE:
It looks to me like the reason for the above error is that the WordPress installation isn't there yet: by specifying the entrypoint, I'm overriding the entrypoint provided by the official wordpress image, which copies the WordPress installation into the /var/www/html directory at runtime. I'm not sure how to get around this.
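One way around this (a sketch, untested: it assumes the official image's docker-entrypoint.sh copies WordPress into /var/www/html whenever the command is apache2-foreground, which is how the current entrypoint script behaves) is to keep the original entrypoint and launch the installer from a small wrapper instead:
# In the Dockerfile: delegate to the official entrypoint via a wrapper
COPY custom-entrypoint.sh /usr/local/bin/custom-entrypoint.sh
RUN chmod +x /usr/local/bin/custom-entrypoint.sh
ENTRYPOINT ["custom-entrypoint.sh"]
CMD ["apache2-foreground"]
custom-entrypoint.sh:
#!/bin/bash
set -e
# Run the installer in the background once the database is reachable; the
# sleep is a crude wait for docker-entrypoint.sh to finish copying WordPress.
(/usr/src/wordpress/wait-for-it.sh db:3306 -- sleep 10 && \
    python /usr/src/wordpress/install.py) &
# Hand over to the official entrypoint, which copies WordPress into
# /var/www/html and then starts Apache.
exec docker-entrypoint.sh "$@"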
I have a Dockerfile that runs dotnet publish, and the DLLs are copied into an intermediate Docker container. I would like to copy the DLLs generated in that container to my local system (the host) as well.
I believe we can use the cp command to do that, but I am not able to find a way to get the intermediate container ID to use with the cp command.
syntax: docker cp CONTAINER:Container_Path Host_Path.
Please suggest a better solution for this scenario.
Dockerfile:
FROM microsoft/aspnetcore-build:1.1.4 as builder
COPY . /Code
RUN dotnet restore /Code/MyProj.csproj
RUN dotnet publish -c Release /Code/MyProj.csproj
RUN cp CONTAINER: /Code/bin/Release/netcoreapp1.1/publish /binaries
Thanks.
This answer is outside of the Dockerfile.
First, your Dockerfile would have to declare a volume:
VOLUME /my/path/in/container
To get files into and out of a volume, try using tar -cvf and tar -xvf to put and get files between a container and a host.
To put files from the host's newfiles.tar (in the current directory) into the container's /my/path/in/container mount:
docker run --rm \
-v my-volume-data:/my/path/in/container -v $(pwd):/newfiles ubuntu bash -c \
"cd /my/path/in/container && tar -xf /newfiles/newfiles.tar"
To get files from the container's /my/path/in/container mount into origfiles.tar on the host:
docker run --rm \
-v my-volume-data:/my/path/in/container -v $(pwd):/newfiles ubuntu bash -c \
"cd /my/path/in/container && tar -cf /newfiles/origfiles.tar"
Adding --user 1000:1000 to these docker run commands is optional, for the case where your container has a user with a uid of 1000.
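If the goal is just to get build artifacts out of the image, another option outside the Dockerfile (a sketch, assuming Docker 17.05+ for --target, and that the broken RUN cp line is removed first) is to create a stopped container from the built image and use docker cp, which answers the original question about getting a container ID:
# build the "builder" stage and tag it (myapp-builder is an arbitrary name)
docker build --target builder -t myapp-builder .
# create (but don't start) a container purely so we have an ID for docker cp
id=$(docker create myapp-builder)
docker cp "$id":/Code/bin/Release/netcoreapp1.1/publish ./binaries
docker rm -v "$id"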
Dockerfile
FROM wordpress
ENV REFRESHED_AT 2015-08-12
ADD \
composer.json /var/www/html
ADD \
composer.lock /var/www/html
# install the PHP extensions
RUN \
apt-get -qq update && \
apt-get -y upgrade && \
apt-get install -y vim wget && \
rm -rf /var/lib/apt/lists/*
# Symlink User's "wp-content" folder into the newly installed Wordpress
RUN \
rm -rf /usr/src/wordpress/wp-content/plugins/* && \
rm -rf /usr/src/wordpress/wp-content/themes/* && \
cp -fr /usr/src/wordpress/* /var/www/html/ && \
chown -R www-data:www-data /var/www/html/
# volume for mysql database and wordpress install
VOLUME ["/var/www/html/wp-content/plugins", "/var/www/html/wp-content/themes"]
# Define working directory.
WORKDIR /var/www/html/
EXPOSE 80 3306
CMD ["apache2-foreground"]
Docker Compose File
wordpress:
  build: .
  links:
    - mysql
    - composer
  volumes:
    - wp-content/plugins/:/var/www/html/wp-content/plugins
    - wp-content/themes/:/var/www/html/wp-content/themes
  environment:
    - WORDPRESS_DB_PASSWORD=__WORDPRESS_DB_PASSWORD__
    - WORDPRESS_DB_NAME=__WORDPRESS_DB_NAME__
    # - WORDPRESS_DB_USER=__WORDPRESS_DB_USER__
  ports:
    - "9888:80"
mysql:
  image: mysql:5.7
  environment:
    - MYSQL_ROOT_PASSWORD=__WORDPRESS_DB_PASSWORD__
    - MYSQL_DATABASE=__WORDPRESS_DB_NAME__
composer:
  image: composer/composer
Question details
I'm able to ADD the composer.json and composer.lock files to the working directory. I can confirm that these two files are in the working directory.
What I need is for the Dockerfile (or wherever) to also automatically install the dependencies into the working directory.
According to Docker Hub (https://hub.docker.com/r/composer/composer/), I should be able to run docker run -v $(pwd):/app composer/composer install to install the dependencies, but how do I do this in the Dockerfile?
Also, I'm confused because the -v flag (https://docs.docker.com/engine/userguide/dockervolumes/) has to do with mounting a specified host directory into a container, but I've already ADDed the necessary files to the working directory. All I want to do is install the dependencies.
Thank you for your help.
You just need to mount the current directory to /app when running your composer container. I've put together a simple example to illustrate this working at https://gist.github.com/andyshinn/e2c428f2cd234b718239.
The key parts here are the volumes for the composer part of the application and the restart: 'yes' on the primary PHP application (the application likely will not run until Composer has finished, so you will want it to restart).
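A minimal sketch of that pattern (service names and paths are illustrative, not copied from the gist):
composer:
  image: composer/composer
  command: install
  volumes:
    - .:/app                # mount the project so vendor/ lands on the host
app:
  build: .
  restart: always           # restart until Composer has finished installing
  volumes:
    - .:/var/www/html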
I'm looking for how to install GraphicsMagick in a Meteor Up Docker deployment.
I found this solution (Access binaries inside docker), but I couldn't make it work. Where do I put those lines in start.sh?
meteorDockerId=$(docker ps | grep meteorhacks/meteord:base | awk '{print $1}')
docker exec $meteorDockerId apt-get install graphicsmagick -y
That's my start.sh:
#!/bin/bash
APPNAME=instagatas
APP_PATH=/opt/$APPNAME
BUNDLE_PATH=$APP_PATH/current
ENV_FILE=$APP_PATH/config/env.list
PORT=80
USE_LOCAL_MONGO=0
# remove previous version of the app, if exists
docker rm -f $APPNAME
# remove frontend container if exists
docker rm -f $APPNAME-frontend
set -e
docker pull meteorhacks/meteord:base
if [ "$USE_LOCAL_MONGO" == "1" ]; then
docker run \
-d \
--restart=always \
--publish=$PORT:80 \
--volume=$BUNDLE_PATH:/bundle \
--env-file=$ENV_FILE \
--link=mongodb:mongodb \
--hostname="$HOSTNAME-$APPNAME" \
--env=MONGO_URL=mongodb://mongodb:27017/$APPNAME \
--name=$APPNAME \
meteorhacks/meteord:base
else
docker run \
-d \
--restart=always \
--publish=$PORT:80 \
--volume=$BUNDLE_PATH:/bundle \
--hostname="$HOSTNAME-$APPNAME" \
--env-file=$ENV_FILE \
--name=$APPNAME \
meteorhacks/meteord:base
fi
docker pull meteorhacks/mup-frontend-server:latest
docker run \
-d \
--restart=always \
--volume=/opt/$APPNAME/config/bundle.crt:/bundle.crt \
--volume=/opt/$APPNAME/config/private.key:/private.key \
--link=$APPNAME:backend \
--publish=443:443 \
--name=$APPNAME-frontend \
meteorhacks/mup-frontend-server /start.sh
Re-installing the graphicsmagick package every time you restart the containers seems like a hack I wouldn't want to do.
If you're modifying the start script already, might as well use a Dockerfile:
FROM meteorhacks/meteord:base
RUN apt-get update && apt-get install -y graphicsmagick
Then modify the start.sh template to build a new Docker image with graphicsmagick, tag it, and use that image instead:
see: https://gist.github.com/so0k/7d4be21c5e2d9abd3743/revisions
EDIT: Where should the Dockerfile go?
The start.sh template is copied to /opt/<appName>/config/, so currently the Dockerfile would need to be in that same directory (/opt/<appName>/config/Dockerfile).
see Linux init Task
Alternatively, you can point docker build at a specific Dockerfile with the -f flag.
Or, as a third option, pipe the Dockerfile to docker build using a here document:
I've updated the start.sh gist, we no longer pull the meteord:base image and build it instead:
docker build -t meteorhacks/meteord:app - << EOF
FROM meteorhacks/meteord:base
RUN apt-get update && apt-get install -y graphicsmagick
EOF
The docker build will run every time, but as long as the requirements aren't changing, Docker will use its cached image layers.
The development Version of Meteor Up at Kadirahq allows specification of a custom Docker Image in the config file (mup.js).
MeteorD-Images with Graphicsmagick installed are available on Docker Hub.
This got me a working deployment (Meteor 1.3.2.4, Meteor Up 309cefb, Node v5.4.1):
mup.js:
module.exports = {
  …
  meteor: {
    dockerImage: 'ianmartorell/meteord-graphicsmagick',
    …
  },
};
I couldn't get the Docker image that @bskp mentioned to work, so I figured out how to write one that uses abernix/meteord:base and then installs graphicsmagick. Very simple, but it seems to be working for me on Meteor 1.4.1.1.
I just did this in my mup.js file:
docker: {
  image: "joshjoe/meteor-graphicsmagick",
},
This was a huge pain to get working, so I'd be happy to help anyone who is struggling with this.
https://github.com/c316/meteor-graphicsmagick
If the if statement succeeds, you should be able to see a running container corresponding to the image you are grepping for. In my opinion, you can add the two lines right after the fi, so the variable picks up the running container's ID.
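For example, the end of start.sh could look like this (a sketch; note the added apt-get update, which most images need before an install):
# ... end of the if/else block in start.sh
fi
# the app container is now running, so we can exec into it
meteorDockerId=$(docker ps | grep meteorhacks/meteord:base | awk '{print $1}')
docker exec "$meteorDockerId" apt-get update
docker exec "$meteorDockerId" apt-get install -y graphicsmagick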
Building an image is the way to get things right, but as a temporary fix you can do:
docker exec -it MeteorAppName apt-get install imagemagick -y
docker restart MeteorAppName
Check imagemagick: docker exec -it MeteorAppName convert -version
Why don't you add the following package: meteor add cfs:graphicsmagick
https://atmospherejs.com/cfs/graphicsmagick
It tries to make sure GraphicsMagick is available. It worked for my use case; I think it will work with Docker too.
I set up an angular development environment using the following Dockerfile (don't try to build this unless you're really enthusiastic, it takes an age).
FROM ubuntu:14.04
# build environment
RUN ["apt-get", "update"]
RUN ["apt-get", "install", "-y", "nodejs", "npm", "git"]
RUN ["ln", "-s", "/usr/bin/nodejs", "/usr/bin/node"]
RUN ["npm", "install", "-g", "yo"]
RUN ["npm", "install", "-g", "bower"]
RUN ["npm", "install", "-g", "grunt-cli"]
WORKDIR /home/angular
ADD ./package.json /home/angular/package.json
ADD ./bower.json /home/angular/bower.json
ADD ./dist /home/angular/dist
RUN ["npm", "install"]
RUN ["bower", "install", "--allow-root"]
# sass depedencies
ENV RUBY_MAJOR 2.2
ENV RUBY_VERSION 2.2.2
ENV RUBY_DOWNLOAD_SHA256 5ffc0f317e429e6b29d4a98ac521c3ce65481bfd22a8cf845fa02a7b113d9b44
# some of ruby's build scripts are written in ruby
# we purge this later to make sure our final image uses what we just built
RUN ["apt-get", "install", "-y", "curl"]
RUN apt-get update \
&& apt-get install -y autoconf bison libgdbm-dev ruby \
&& rm -rf /var/lib/apt/lists/* \
&& mkdir -p /usr/src/ruby \
&& curl -fSL -o ruby.tar.gz "http://cache.ruby-lang.org/pub/ruby/$RUBY_MAJOR/ruby-$RUBY_VERSION.tar.gz" \
&& echo "$RUBY_DOWNLOAD_SHA256 *ruby.tar.gz" | sha256sum -c - \
&& tar -xzf ruby.tar.gz -C /usr/src/ruby --strip-components=1 \
&& rm ruby.tar.gz \
&& cd /usr/src/ruby \
&& autoconf \
&& ./configure --disable-install-doc \
&& make -j"$(nproc)" \
&& make install \
&& apt-get purge -y --auto-remove bison libgdbm-dev ruby \
&& rm -r /usr/src/ruby
# skip installing gem documentation
RUN echo 'gem: --no-rdoc --no-ri' >> "$HOME/.gemrc"
# install things globally, for great justice
ENV GEM_HOME /usr/local/bundle
ENV PATH $GEM_HOME/bin:$PATH
ENV BUNDLER_VERSION 1.10.5
RUN gem install bundler --version "$BUNDLER_VERSION" \
&& bundle config --global path "$GEM_HOME" \
&& bundle config --global bin "$GEM_HOME/bin"
# don't create ".bundle" in all our apps
ENV BUNDLE_APP_CONFIG $GEM_HOME
RUN gem install compass
VOLUME ["/home/me/code/correspondence/client/dist"]
ADD ./ /home/angular
If I run this with:
sudo docker run -it me/angular /bin/bash
I can use grunt build with no problems. Since I haven't attached a volume to dist, that build is of no use to other containers such as the webserver. But running:
sudo docker run -itv /home/me/code/correspondence/client/dist:/home/angular me/angular /bin/bash
results in the grunt build command no longer being usable in the container:
grunt-cli: The grunt command line interface. (v0.1.13)
Fatal error: Unable to find local grunt.
If you're seeing this message, either a Gruntfile wasn't found or grunt
hasn't been installed locally to your project. For more information about
installing and configuring grunt, please see the Getting Started guide:
http://gruntjs.com/getting-started
The only difference is adding the volume. How does adding the volume result in this different behaviour?
I suppose that's because you have placed files at /home/angular in the image, and when you mount your volume at the same path (/home/angular), the volume hides the original files.
Quote from documentation:
Note: If the path /opt/webapp already exists inside the container's image, its contents will be replaced by the contents of /src/webapp on the host to stay consistent with the expected behavior of mount
Try mounting the volume at another directory, /home/angular/dist for instance, so it only overlays the build output.
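For example, with the paths from the question, mounting only the build output keeps the image's node_modules and bower components visible inside the container:
sudo docker run -itv /home/me/code/correspondence/client/dist:/home/angular/dist me/angular /bin/bash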