I have a project using ASP.NET Core and SQL Server. I am trying to put everything in Docker containers. For my app I need to have some initial data in the database.
I am able to use the SQL Server Docker image from Microsoft (microsoft/mssql-server-linux), but it is (obviously) empty. Here is my docker-compose.yml:
version: "3"
services:
web:
build: .\MyProject
ports:
- "80:80"
depends_on:
- db
db:
image: "microsoft/mssql-server-linux"
environment:
SA_PASSWORD: "your_password1!"
ACCEPT_EULA: "Y"
I have an SQL script file that I need to run on the database to insert initial data. I found an example for MongoDB, but I cannot find which tool I can use instead of mongoimport.
You can achieve this by building a custom image. I'm currently using the following solution. Somewhere in your Dockerfile you should have:
RUN mkdir -p /opt/scripts
COPY database.sql /opt/scripts
ENV MSSQL_SA_PASSWORD=Passw#rd
ENV ACCEPT_EULA=Y
RUN /opt/mssql/bin/sqlservr --accept-eula & sleep 30 & /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'Passw#rd' -d master -i /opt/scripts/database.sql
Alternatively, you can wait for a certain text to be output, which is useful when working on the Dockerfile setup, as it is immediate. It's less robust, of course, since it relies on some 'random' log text:
RUN ( /opt/mssql/bin/sqlservr --accept-eula & ) | grep -q "Service Broker manager has started" \
&& /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'Passw#rd' -i /opt/scripts/database.sql
Don't forget to put a database.sql (with your script) next to the dockerfile, as that is copied into the image.
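If you go this route, the db service in the docker-compose.yml from the question can build this custom image instead of pulling the stock one. A minimal sketch, assuming the Dockerfile and database.sql sit in a ./db folder next to the compose file:
db:
  build: ./db   # builds the Dockerfile above (which already sets the SA password and seeds the database)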
Roet's answer https://stackoverflow.com/a/52280924/10446284 didn't work for me.
The trouble was with the bash ampersands firing sqlcmd too early, without waiting for sleep 30 to finish.
Our Dockerfile now looks like this:
FROM microsoft/mssql-server-linux:2017-GA
RUN mkdir -p /opt/scripts
COPY db-seed/seed.sql /opt/scripts/
ENV MSSQL_SA_PASSWORD=Passw#rd
ENV ACCEPT_EULA=true
RUN /opt/mssql/bin/sqlservr & sleep 60; /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'Passw#rd' -d master -i /opt/scripts/seed.sql
Footnotes:
Now the bash command works like this: sqlservr runs in the background (the single &), while sleep 60; sqlcmd runs in the foreground, i.e. wait 60 seconds, then run the seed script.
We chose sleep 60 because the build of the Docker image happens "offline", before the runtime environment is set up, so those 60 seconds don't occur at container runtime anymore. Giving the sqlservr command more time gives our teammates' machines more time to complete the docker build phase successfully.
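If the fixed sleep still turns out to be too short on a slow machine, one possible variant (my own assumption, not part of the original answer) is to retry sqlcmd in a loop until SQL Server accepts connections:
# Retry the seed script for up to ~60 seconds (30 attempts, 2 seconds apart)
RUN /opt/mssql/bin/sqlservr & \
    for i in $(seq 1 30); do \
        /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'Passw#rd' -d master -i /opt/scripts/seed.sql && break; \
        sleep 2; \
    done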
One simple option is to just navigate to the container file system and copy the database files in, and then use a script to attach.
This guide https://learn.microsoft.com/en-us/sql/linux/quickstart-install-connect-docker has an example of using sqlcmd in a Docker container, although I'm not sure how you would add this to whatever build process you have.
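For the attach step, the script would be something along these lines (just a sketch; the database name and file names are hypothetical, and the .mdf/.ldf files are assumed to have been copied into /var/opt/mssql/data):
-- Attach database files that were copied into the container
CREATE DATABASE MyDb
    ON (FILENAME = '/var/opt/mssql/data/MyDb.mdf'),
       (FILENAME = '/var/opt/mssql/data/MyDb_log.ldf')
    FOR ATTACH;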
I'm working on a Read the Docs documentation project where I use Docker. To customize it, I'd like to share the css folder between the container and the host, in order to avoid always building a new image to see the changes. The goal is that I can just refresh the browser and see the changes.
I tried something like this, but it doesn't work:
docker run -v ~/docs/source/_static/css:/docs/source/_static/css -p 80:80 -it my-docu:latest
What is wrong in this command?
The path of the folder I'd like to share is:
Documents/my-documentation/docs/source/_static/css
Thanks for your help!
I'm guessing that the ~ does not resolve correctly. The tilde character ("~") refers to the home directory of your user; usually something like /home/your_username.
In your case, it sounds like your document isn't in this directory anyway.
Try:
docker run -v Documents/my-documentation/docs/source/_static/css:/docs/source/_static/css -p 80:80 -it my-docu:latest
I have no Mac to test with, but I suspect the command should be as below (Documents is a subfolder inside your home directory, which is denoted by ~):
docker run -v ~/Documents/my-documentation/docs/source/_static/css:/docs/source/_static/css -p 80:80 -it my-docu:latest
In your OP you mount the host folder ~/docs/source/_static/css, which does not make sense if your files are in Documents/my-documentation/docs/source/_static/css, as that would correspond to ~/Documents/my-documentation/docs/source/_static/css.
Keep in mind that Docker is still running inside a VM on Mac, so you will need to give a host path that is valid on that VM.
What you can do to get a better view of the situation is to start an interactive container in which you mount the root file system of the host VM into /mnt/vm-root. That way you can see which paths are available to mount and how they should be formatted when you pass them to the docker run command with the -v flag:
docker run --rm -it -w /mnt/vm-root -v /:/mnt/vm-root ubuntu:latest bash
I have a frontend-only web application hosted in Docker. The backend already exists, but it has a "custom IP" address, so I had to update my local /etc/hosts file to access it. So, from my local machine I am able to access the backend API without problems.
But the problem is that Docker somehow cannot resolve this "custom IP", even when the host is written in the container's (image's?) /etc/hosts file.
When the Docker container starts up I see this error
$ docker run media-saturn:dev
2016/05/11 07:26:46 [emerg] 1#1: host not found in upstream "my-server-address.com" in /etc/nginx/sites/ms.dev.my-company.com:36
nginx: [emerg] host not found in upstream "my-server-address.com" in /etc/nginx/sites/ms.dev.my-company.com:36
I update the /etc/hosts file via command in Dockerfile, like this
# install wget
RUN apt-get update \
&& apt-get install -y wget \
&& rm -rf /var/lib/apt/lists/*
# The trick is to add the hostname on the same line as you use it, otherwise the hosts file will get reset, since every RUN command starts a new intermediate container
# it has to be https, otherwise authentication is required
RUN echo "123.45.123.45 my-server-address.com" >> /etc/hosts && wget https://my-server-address.com
When I ssh into the machine to check the current content of /etc/hosts, the line "123.45.123.45 my-server-address.com" is indeed there.
Can anyone help me out with this? I am a Docker newbie.
I have solved this. There are two things at play.
One is how it works locally and the other is how it works in Docker Cloud.
Local workflow
cd into root of project, where Dockerfile is located
build image: docker build -t media-saturn:dev .
run the built image: docker run -it --add-host="my-server-address.com:123.45.123.45" -p 80:80 media-saturn:dev
Docker cloud workflow
Add an extra_hosts directive to your Stackfile, like the one below,
and then click Redeploy in Docker Cloud, so that the changes take effect:
extra_hosts:
  - 'my-server-address.com:123.45.123.45'
Optimization tip
ignore as many folders as possible to speed up the process of sending data to the Docker daemon
add a .dockerignore file
typically you want to add folders like node_modules, bower_components and tmp
in my case the tmp contained about 1.3GB of small files, so ignoring it sped up the process significantly
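A minimal .dockerignore along those lines might look like this (the folder names are just the examples mentioned above):
node_modules
bower_components
tmp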
I have a multi-container Symfony application that uses docker-compose to handle the relationships between the containers. To simplify a little, I have 4 main services:
code:
  image: mycode
web:
  image: mynginx
  volumes_from:
    - code
  ports:
    - "80:80"
  links:
    - php-fpm
php-fpm:
  image: myphpfpm
  volumes_from:
    - code
  links:
    - mongo
mongo:
  image: mongo
The "mycode" image contains the code of my application and is built from the following Dockerfile :
FROM composer/composer
RUN apt-get update && apt-get install -y \
libfreetype6-dev \
libmcrypt-dev \
libxml2-dev \
libicu-dev \
libcurl4-openssl-dev \
libssl-dev \
pkg-config
RUN docker-php-ext-install iconv mcrypt mbstring bcmath json ctype iconv posix intl
RUN pecl install mongo \
&& echo extension=mongo.so >> /usr/local/etc/php/conf.d/mongo.ini
COPY . /code
WORKDIR /code
RUN rm -rf /code/app/cache/* \
&& rm -rf /code/app/logs/* \
&& chown -R root /code/app/cache \
&& chown -R root /code/app/logs \
&& chmod -R 777 /code/app/cache \
&& chmod -R 777 /code/app/logs \
&& composer install \
&& rm -f /code/web/app_dev.php \
&& rm -f /code/web/config.php
VOLUME ["/code", "/code/app/logs", "/code/app/cache"]
At first, deploying this application was easy. I just had to do a simple docker-compose up -d and it created all the containers and ran them without any issue. But then I had to deploy a new version.
This configuration uses volumes to store data:
the source code is mounted on the /code volume, and shared between 3 containers (code, web, php-fpm). It has to be replaced by a new version when deploying.
the MongoDB data is on another volume, mounted only by the mongo container. I have to keep this data between deployments.
When I deploy an update to my code, I publish the new version of the mycode image and re-create the container. But since the /code volume is still used by the web and php-fpm containers, the old volume can't be replaced by the new one. I have to stop all the running services to delete the old volume, and if I use the docker-compose rm -v command, it will delete the MongoDB data too!
Can't I replace only one volume with a new version, without any downtime?
So I'm kind of stuck here. I'm thinking of having a permanent volume to store the code and updating it through SSH with Capistrano, old style. That would also allow me to run Doctrine migration scripts after deployment. But I have other issues with it, as Capistrano uses symlinks to handle versions, so I can't just mount the /current folder to /code.
Do you have a solution to handle the deployment of a Docker application without losing data and without downtime ?
Should I use manual scripts instead of docker-compose?
the source code is mounted on the /code volume
This is the problem, it is not what you want.
Code never goes into a volume, it should change when the image changes. Volumes are for things that you want to preserve between changes to the image (data, logs, state, etc).
Code is the immutable thing that you want to replace when you change a container. So remove the /code volume from the Dockerfile entirely, and instead do an ADD . /code in the mynginx and myphpfpm Dockerfiles.
With that change, you can deploy with just up -d. It will recreate any container that has changed, and your volumes will be copied over. You don't need rm anymore.
If you have your Dockerfile for myphpfpm and mynginx in a different directory, you can build using docker build -f path/to/dockerfile .
Using a host volume (as suggested in another answer) is another option; however, that's not usually what you want outside of development. With a host volume you would still remove the /code VOLUME from the Dockerfile.
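As a rough sketch (the base image and paths are assumptions, not taken from the original post), the mynginx Dockerfile would then bake the code in rather than declare a volume:
FROM nginx:latest
# Bake the application code into the image instead of sharing it through a volume
ADD . /code
Built from the project root with something like docker build -f docker/nginx/Dockerfile -t mynginx ., each new image version then carries the new code, and recreating the container replaces it without touching the mongo volume.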
Do not copy the code via the Dockerfile, just attach volumes to the 'code' container.
A few edits:
code:
  image: mycode
  volumes:
    - .:/code
    - /code
web:
  image: mynginx
  volumes_from:
    - code
  ports:
    - "80:80"
  links:
    - php-fpm
php-fpm:
  image: myphpfpm
  volumes_from:
    - code
  links:
    - mongo
mongo:
  image: mongo
The same thing applies to mongo: mount it to an external volume so the data persists when the container shuts down. Actually, there is also another method; they mention it on their Docker Hub page https://hub.docker.com/_/mongo/
Where to Store Data
Important note: There are several ways to store data used by applications that run in Docker containers. We encourage users of the mongo images to familiarize themselves with the options available, including:
Let Docker manage the storage of your database data by writing the database files to disk on the host system using its own internal volume management. This is the default and is easy and fairly transparent to the user. The downside is that the files may be hard to locate for tools and applications that run directly on the host system, i.e. outside containers.
Create a data directory on the host system (outside the container) and mount this to a directory visible from inside the container. This places the database files in a known location on the host system, and makes it easy for tools and applications on the host system to access the files. The downside is that the user needs to make sure that the directory exists, and that e.g. directory permissions and other security mechanisms on the host system are set up correctly.
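For the second option, a compose sketch along these lines would do it (the host path ./data/db is an assumption; /data/db is where the mongo image keeps its data inside the container):
mongo:
  image: mongo
  volumes:
    - ./data/db:/data/db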
I'm a Docker newbie and I'm trying to set up my first project.
To test how to play with it, I just cloned a ready-to-go project and set it up (Project repo).
As the guide claims, if I access a specific URL I reach the homepage; to be more specific, a Symfony start page.
Moreover, with this command
docker run -i -t testdocker_application /bin/bash
I'm able to login to the container.
My problem is that if I try to go to the application folder through bash, the folder that I shared with my host is empty.
I tried with another project, but the result is the same.
Where am I going wrong?
Here some infos about my env:
Ubuntu 12.04
Docker version 1.8.3, build f4bf5c7
Config:
application:
  build: code
  volumes:
    - ./symfony:/var/www/symfony
    - ./logs/symfony:/var/www/symfony/app/logs
  tty: true
Looks like you have a docker-compose.yml file but are running the image with docker. You don't actually need docker-compose to start a single container. If you just want to start the container your command should look like this:
docker run -ti -v $(pwd)/symfony:/var/www/symfony -v $(pwd)/logs/symfony:/var/www/symfony/app/logs testdocker_application /bin/bash
To use your docker-compose.yml, start your container with docker-compose up. You would also need to add the following to drop into a shell:
stdin_open: true
command: /bin/bash
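Putting that together, the application service from the question would look something like this (only the last two lines are new):
application:
  build: code
  volumes:
    - ./symfony:/var/www/symfony
    - ./logs/symfony:/var/www/symfony/app/logs
  tty: true
  stdin_open: true
  command: /bin/bash
Then docker-compose up starts the container with both volumes mounted.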
I'm setting up a simple image: one that holds Riak (a NoSQL database). The image starts the Riak service with riak start as a CMD. Now, if I run it as a daemon with docker run -d quintenk/riak-dev, it does start the Riak process (I can see that in the logs). However, it closes automatically after a few seconds. If I run it using docker run -i -t quintenk/riak-dev /bin/bash the riak process is not started (UPDATE: see answers for an explanation for this). In fact, no services are running at all. I can start it manually using the terminal, but I would like Riak to start automatically. I figure this behavior would occur for other services as well, Riak is just an example.
So, running/restarting the container should automatically start Riak. What is the correct approach to setting this up?
For reference, here is the Dockerfile with which the image can be created (UPDATE: altered using the chosen answer):
FROM ubuntu:12.04
RUN apt-get update
RUN apt-get install -y openssh-server curl
RUN curl http://apt.basho.com/gpg/basho.apt.key | apt-key add -
RUN bash -c "echo deb http://apt.basho.com precise main > /etc/apt/sources.list.d/basho.list"
RUN apt-get update
RUN apt-get -y install riak
RUN perl -p -i -e 's/(?<=\{http,\s\[\s\{")127\.0\.0\.1/0.0.0.0/g' /etc/riak/app.config
EXPOSE 8098
CMD /bin/riak start && tail -F /var/log/riak/erlang.log.1
EDIT: -f changed to -F in the CMD in accordance with sesm's remark
MY OWN ANSWER
After working with Docker for some time, I picked up the habit of using supervisord to run my processes. If you would like example code for that, check out https://github.com/Krijger/docker-cookbooks. I use my supervisor image as a base for all my other images. I blogged about using supervisor here.
To keep docker containers running, you need to keep a process active in the foreground.
So you could probably replace that last line in your Dockerfile with
CMD /bin/riak console
Or even
CMD /bin/riak start && tail -F /var/log/riak/erlang.log.1
Note that you can't have multiple CMD statements; only the last one gets run.
Using tail to keep the container alive is a hack. Also note that with the -f option the container will terminate when log rotation happens (this can be avoided by using -F instead).
A better solution is to use supervisor. Take a look at this tutorial about running Riak in a Docker container.
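A minimal supervisord sketch for that idea (the file layout and program name are assumptions, not taken from the linked tutorial):
; supervisord.conf
[supervisord]
nodaemon=true

[program:riak]
command=/bin/riak console
autorestart=true
The Dockerfile would then end with something like CMD ["supervisord", "-c", "/etc/supervisord.conf"], so supervisor is the foreground process keeping the container alive.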
The explanation for:
If I run it using docker run -i -t quintenk/riak-dev /bin/bash the riak process is not started
is as follows. Using CMD in the Dockerfile is actually the same functionality as starting the container using docker run {image} {command}. As Gigablah remarked, only the last CMD is used, so the one written in the Dockerfile is overridden in this case.
By using CMD /bin/riak start && tail -f /var/log/riak/erlang.log.1 in the Dockerfile, you can start the container as a background process using docker run -d {image}, which works like a charm.
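To illustrate (a sketch using the same commands discussed above):
# The command given here overrides the Dockerfile CMD, so riak is never started:
docker run -i -t quintenk/riak-dev /bin/bash

# With no command, the Dockerfile CMD runs and tail keeps the container alive:
docker run -d quintenk/riak-dev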
"If I run it using docker run -i -t quintenk/riak-dev /bin/bash the riak process is not started"
It sounds like you only want to be able to monitor the log when you attach to the container. My use case is a little different in that I want commands started automatically, but I want to be able to attach to the container and be in a bash shell. I was able to solve both of our problems as follows:
In the image/container, add the commands you want automatically started to the end of the /etc/bash.bashrc file.
In your case just add the line /bin/riak start && tail -F /var/log/riak/erlang.log.1, or put /bin/riak start and tail -F /var/log/riak/erlang.log.1 on separate lines depending on the functionality desired.
Now commit your changes to your container, and run it again with: docker run -i -t quintenk/riak-dev /bin/bash. You'll find the commands you put in the bashrc are already running as you attach.
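In other words, the end of /etc/bash.bashrc inside the image would contain something like this (a sketch for the Riak case):
# Start Riak and follow its log whenever an interactive shell starts
/bin/riak start
tail -F /var/log/riak/erlang.log.1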
Because I want a clean way to have the process exit later, I make the last command a call to the shell's read, which causes that process to block until I later attach to it and hit enter.
arthur#macro:~/docker$ sudo docker run -d -t -i -v /raid:/raid -p 4040:4040 subsonic /bin/bash -c 'service subsonic start && read -p "waiting"'
WARNING: Docker detected local DNS server on resolv.conf. Using default external servers: [8.8.8.8 8.8.4.4]
f27229a260c9
arthur#macro:~/docker$ sudo docker ps
[sudo] password for arthur:
ID IMAGE COMMAND CREATED STATUS PORTS
35f253bdf45a subsonic:latest /bin/bash -c service 2 days ago Up 2 days 4040->4040
arthur#macro:~/docker$ sudo docker attach 35f253bdf45a
arthur#macro:~/docker$ sudo docker ps
ID IMAGE COMMAND CREATED STATUS PORTS
As you can see, the container exits after you attach to it and unblock the read.
You can of course use a more sophisticated script than read -p if you need to do other clean-up, such as stopping services and saving logs, etc.
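For example, a slightly more elaborate wrapper (just a sketch, reusing the subsonic service from above) could trap the exit and stop the service cleanly:
#!/bin/bash
# Start the service, block on read, and stop the service when the shell exits
service subsonic start
trap 'service subsonic stop' EXIT
read -p "waiting"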
I use a simple trick whenever I start building a new Docker container: to keep it alive, I use a ping in the entrypoint script.
So in the Dockerfile, when using Debian for instance, I make sure I can ping.
This is, by the way, always nice to have, for checking what is accessible from within the container.
...
RUN DEBIAN_FRONTEND=noninteractive apt-get update \
&& apt-get install -y iputils-ping
...
ENTRYPOINT ["entrypoint.sh"]
And in the entrypoint.sh file
#!/bin/bash
...
ping 10.10.0.1 >/dev/null 2>/dev/null
I use this instead of CMD bash, as I always wind up using a startup file.