About docker links and the container's /etc/hosts file - networking

I am following the official Docker doc "Linking Containers Together". At the bottom of this doc, it shows that Docker defines IP addresses for both ends of a link in the container's /etc/hosts file.
$ sudo docker run -t -i --rm --link db:db training/webapp /bin/bash
root@aed84ee21bde:/opt/webapp# cat /etc/hosts
172.17.0.7 aed84ee21bde
. . .
172.17.0.5 db
And then, it says:
If you restart the source container, the linked containers' /etc/hosts files will be automatically updated with the source container's new IP address, allowing linked communication to continue.
$ sudo docker restart db
db
$ sudo docker run -t -i --rm --link db:db training/webapp /bin/bash
root@aed84ee21bde:/opt/webapp# cat /etc/hosts
172.17.0.7 aed84ee21bde
. . .
172.17.0.9 db
I am wondering: what about links created before the 'db' container restarts? The old recipient container's /etc/hosts file still keeps the old IP of 'db', so once 'db' comes back with a new IP, that /etc/hosts entry no longer works.

There is a known bug in Docker (#6350) that affects some versions. Some versions also have this problem when you use link aliases. If you upgrade Docker to the latest version (currently 1.8.1), the problem should be solved.
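To check the behavior on your own version, you can compare the IP the daemon reports for db with the entry inside the linked container; a quick sketch using standard docker commands (webapp is a placeholder name for your linked container):
$ sudo docker inspect -f '{{ .NetworkSettings.IPAddress }}' db
$ sudo docker exec webapp grep db /etc/hosts
If the two addresses diverge after restarting db, you are seeing the bug described above.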

Related

Change mapped IP in wordpress docker container

I'm a bit of a newbie with Docker. The problem is that my server provider recently changed the public IP. When I ran my wordpress container I used the following:
docker run -e WORDPRESS_DB_PASSWORD=xxx --name wordpress-xx --link wordpressdb-xx -p 185.166.xx.xx:8081:80 -v "$PWD/docker/data/wordpress/xx":/var/www/html -d wordpress
How can I change the old IP in order to assign the new one in a container that is already running?
Is it possible to run these containers with the localhost IP? For example:
docker run -e WORDPRESS_DB_PASSWORD=xxx --name wordpress-xx --link wordpressdb-xx -p 127.0.0.1:8081:80 -v "$PWD/docker/data/wordpress/xx":/var/www/html -d wordpress
You can also try to save the current container as an image using docker commit and then run the image as a new container with the new IP.
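A minimal sketch of that approach, reusing the run flags from the question (wordpress-xx-snapshot is a hypothetical image tag):
docker commit wordpress-xx wordpress-xx-snapshot
docker rm -f wordpress-xx
docker run -e WORDPRESS_DB_PASSWORD=xxx --name wordpress-xx --link wordpressdb-xx -p 127.0.0.1:8081:80 -v "$PWD/docker/data/wordpress/xx":/var/www/html -d wordpress-xx-snapshot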
If you have only a single network interface, you can pass just the port. You can also use the 127.0.0.1 address.
See https://docs.docker.com/engine/reference/commandline/run/#publish-or-expose-port--p-expose
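For reference, the three forms of -p bind differently; a quick sketch (image and ports as in the question, other flags elided):
docker run -p 8081:80 ... wordpress             # binds 0.0.0.0, i.e. all host interfaces
docker run -p 127.0.0.1:8081:80 ... wordpress   # loopback only
docker run -p 185.166.xx.xx:8081:80 ... wordpress   # one specific host IP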

My docker container isn't starting on localhost (0.0.0.0) on Docker for Windows (Native using Hyper-V)

I'm following Digital Ocean's tutorial on how to start an nginx docker container (currently on Step 4). This is their output:
$ docker run --name docker-nginx -p 80:80 -d nginx
d3ccb73a91985651ec61231bca9f9c716f0dec807e354a29eeef2144f883a01c
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b91f3ce26553 nginx "nginx -g 'daemon off" About a minute ago Up About a minute 0.0.0.0:80->80/tcp, 443/tcp docker-nginx
But when I run it, this is my output (notice the different IP of the container):
C:\>docker run --name docker-nginx -p 80:80 -d nginx
d3ccb73a91985651ec61231bca9f9c716f0dec807e354a29eeef2144f883a01c
C:\>docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d3ccb73a9198 nginx "nginx -g 'daemon off" 14 hours ago Up 2 seconds 10.0.75.2:80->80/tcp, 443/tcp docker-nginx
Why does this happen? And how can I get the same results as Digital Ocean's? (Getting the server to start on localhost)
Edit: I'm using Docker for windows (recently released) which apparently runs native using Hyper-V. My output for docker-machine ls is this:
C:\>docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
C:\>
But when I run it, this is my output (notice the different IP of the container)
Since this is a Windows machine, I assume that you're using Docker for Windows rather than Docker Toolbox. 10.0.75.2 is the IP of the underlying Linux virtual machine that runs the Docker engine.
If you are using Windows or Mac OS, you will need some form of virtualization in order to run Docker. The IP you just saw is the IP of that lightweight virtual machine.
And how can I get the same results as Digital Ocean's? (Getting the server to start on localhost)
Use a Linux distribution! Alternatively, you can enable "Expose container ports on localhost" in the Docker for Windows settings.
Although you created the containers on your local machine, they are actually running on a different machine (a virtual machine).
First, check the IP of your docker machine (the virtual machine):
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM
default * virtualbox Running tcp://192.168.99.100
Then run a curl command (or open a browser) to view the default web site on your nginx web server inside the container:
curl http://192.168.99.100:80
If you are using a virtual machine on Windows:
docker-machine ip default
https://docs.docker.com/machine/concepts/
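Combining the two commands above, you can avoid hard-coding the VM's IP; a small sketch assuming the machine is named default, as in the listing:
curl http://$(docker-machine ip default):80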
When I ran this command for the first time: docker run -d -p 80:80 --name docker-tutorial docker101tutorial
I got this error:
docker: Error response from daemon: Conflict. The container name
"/docker-tutorial" is already in use by container "LONG_CONTAINER_ID".
You have to remove (or rename) that container to be able to reuse that
name.
So I tried to remove this container using: docker rm -f LONG_CONTAINER_ID
then I did: docker run -d -p 3080:80 --name docker-tutorial docker101tutorial
Note 3080:80 instead of 80:80. Had I run this from Docker Desktop, it would have suggested an available host port as the default option.
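If you would rather keep the old container around, renaming it also frees up the name; a sketch using the standard docker rename command (docker-tutorial-old is an arbitrary new name):
docker rename docker-tutorial docker-tutorial-old
docker run -d -p 80:80 --name docker-tutorial docker101tutorial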

Docker run results in "host not found in upstream" error

I have a frontend-only web application hosted in Docker. The backend already exists, but it has a "custom" IP address, so I had to update my local /etc/hosts file to access it. From my local machine I can therefore access the backend API without problems.
But the problem is that Docker somehow cannot resolve this "custom" IP, even when the host is written in the container's (image's?) /etc/hosts file.
When the Docker container starts up, I see this error:
$ docker run media-saturn:dev
2016/05/11 07:26:46 [emerg] 1#1: host not found in upstream "my-server-address.com" in /etc/nginx/sites/ms.dev.my-company.com:36
nginx: [emerg] host not found in upstream "my-server-address.com" in /etc/nginx/sites/ms.dev.my-company.com:36
I update the /etc/hosts file via a command in the Dockerfile, like this:
# install wget
RUN apt-get update \
&& apt-get install -y wget \
&& rm -rf /var/lib/apt/lists/*
# The trick is to add the hostname on the same line as you use it, otherwise the hosts file will get reset, since every RUN command starts a new intermediate container
# it has to be https, otherwise authentication is required
RUN echo "123.45.123.45 my-server-address.com" >> /etc/hosts && wget https://my-server-address.com
When I ssh into the machine to check the current content of /etc/hosts, the line "123.45.123.45 my-server-address.com" is indeed there.
Can anyone help me out with this? I am a Docker newbie.
I have solved this. There are two things at play.
One is how it works locally and the other is how it works in Docker Cloud.
Local workflow
cd into the root of the project, where the Dockerfile is located
build the image: docker build -t media-saturn:dev .
run the built image: docker run -it --add-host="my-server-address.com:123.45.123.45" -p 80:80 media-saturn:dev
Docker cloud workflow
Add an extra_hosts directive to your Stackfile, like this:
extra_hosts:
- 'my-server-address.com:123.45.123.45'
Then click Redeploy in Docker Cloud so that the changes take effect.
Optimization tip
Ignore as many folders as possible to speed up sending the build context to the Docker daemon:
add a .dockerignore file
typically you want to ignore folders like node_modules, bower_modules and tmp
in my case the tmp folder contained about 1.3 GB of small files, so ignoring it sped up the build significantly
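A minimal .dockerignore sketch based on the folders mentioned above:
node_modules
bower_modules
tmp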

gitlab docker ssh issue

I looked at the various posts concerning GitLab, Docker, and SSH issues without finding any help, so I am asking my question here.
I have the following setup:
a Linux box with Ubuntu Server 14.04 and IP 192.168.1.104
DNS: git.mydomain.com = 192.168.1.104
a GitLab docker container that I start, according to the official doc, this way:
sudo docker run --detach --name gitlab_app --publish 8080:80 --publish 2222:22 --volumes-from gitlab_data gitlab_image
or
sudo docker run --detach --name gitlab_app --publish 8080:80 --publish 2222:22 -e "GITLAB_SHELL_SSH_PORT=2222" --volumes-from gitlab_data gitlab_image
The Linux box runs nginx, which proxies (proxy_pass) git.mydomain.com to 192.168.1.104:8080.
I access git.mydomain.com without any issue, everything works.
I generated an SSH key that I added to my profile on GitLab, and I added the following lines to my ~/.ssh/config:
Host git.mydomain.com
User git
Port 2222
IdentityFile /home/user/.ssh/id_rsa
If I try
ssh -p 2222 git@git.mydomain.com
the connection is closed. I assume it is because only a git-shell is permitted.
But, if I try
mkdir test
cd test
git init
touch README.md
git add README.md
git commit -m "first commit"
git remote add origin git@git.domain.com:user/test.git
git push -u origin master
it gets stuck with:
Connection closed by 192.168.1.104
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
I also tried with
git remote add origin git@git.domain.com:2222/user/
and the result was the same.
Note that the logs of the gitlab docker container include:
[2015-03-06T11:04:43+00:00] INFO: group[git] created
[2015-03-06T11:04:43+00:00] INFO: user[git] created
[2015-03-06T11:04:44+00:00] INFO: group[gitlab-www] created
[2015-03-06T11:04:44+00:00] INFO: user[gitlab-www] created
Any idea how I can fix this issue?
Thanks in advance for your help.
I would guess that you have an authentication problem.
Here are a few things you can try:
Make sure you added your public key in GitLab.
Check the permissions of your id_rsa file.
Try temporarily disabling host verification with:
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
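For example, scoped to just the GitLab host, a ~/.ssh/config sketch that combines the entry from the question with the two options above:
Host git.mydomain.com
User git
Port 2222
IdentityFile /home/user/.ssh/id_rsa
StrictHostKeyChecking no
UserKnownHostsFile /dev/null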
I have the same setup as you (docker container in VM, DNS points to VM). I also configured .ssh/config like you.
But when I log-in with ssh I get:
ssh -p 2222 git@gitlab
PTY allocation request failed on channel 0
Welcome to GitLab, tomzo!
Connection to gitlab closed.
Git remotes do not need port 2222 configured. This is OK (works for me):
$ git remote -v
origin git@gitlab:lab/workflow.git (fetch)
origin git@gitlab:lab/workflow.git (push)
And I can push and pull with git.
$ git push
Everything up-to-date
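As a side note, if you do want the port inside the remote URL instead of in ~/.ssh/config, git only accepts an explicit port in the ssh:// form; a sketch using the host and repository names from the question:
git remote add origin ssh://git@git.mydomain.com:2222/user/test.git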

Setting Up Docker Dnsmasq

I'm trying to set up a docker dnsmasq container so that all my docker containers can look up domain names rather than relying on hard-coded IPs (if they are on the same host). This works around the fact that one cannot alter the /etc/hosts file in docker containers, and it allows me to easily update all my containers in one go by altering a single file that the dnsmasq container references.
It looks like someone has already done the hard work for me and created a dnsmasq container. Unfortunately, it is not "working" for me. I wrote a bash script to start the container as shown below:
name="dnsmasq_"
timenow=$(date +%s)
name="$name$timenow"
sudo docker run \
-v="$(pwd)/dnsmasq.hosts:/dnsmasq.hosts" \
--name=$name \
-p='127.0.0.1:53:5353/udp' \
-d sroegner/dnsmasq
Before running that, I created the dnsmasq.hosts directory and inserted a single file within it called hosts.txt with the following contents:
192.168.1.3 database.mydomain.com
Unfortunately, whenever I try to ping that domain from within:
the host
the dnsmasq container
another container on the same host
I always receive the ping: unknown host error message.
I tried starting the dnsmasq container without daemon mode so I could debug its output, which is below:
dnsmasq: started, version 2.59 cachesize 150
dnsmasq: compile time options: IPv6 GNU-getopt DBus i18n DHCP TFTP conntrack IDN
dnsmasq: reading /etc/resolv.dnsmasq.conf
dnsmasq: using nameserver 8.8.8.8#53
dnsmasq: read /etc/hosts - 7 addresses
dnsmasq: read /dnsmasq.hosts//hosts.txt - 1 addresses
I am guessing that I have not specified the -p parameter correctly when starting the container. Can somebody tell me what it should be so that other docker containers can use it for DNS lookups, or whether what I am trying to do is actually impossible?
The build script for the docker dnsmasq service needs to be changed so that it binds to your server's public IP, which in this case is 192.168.1.12 on my eth0 interface:
#!/bin/bash
NIC="eth0"
name="dnsmasq_"
timenow=$(date +%s)
name="$name$timenow"
MY_IP=$(ifconfig $NIC | grep 'inet addr:'| grep -v '127.0.0.1' | cut -d: -f2 | awk '{ print $1}')
sudo docker run \
-v="$(pwd)/dnsmasq.hosts:/dnsmasq.hosts" \
--name=$name \
-p=$MY_IP:53:5353/udp \
-d sroegner/dnsmasq
On the host (in this case Ubuntu 12), you need to update resolv.conf or the /etc/network/interfaces file so that your public IP (on the eth0 or eth1 device) is registered as the nameserver.
You may want to set a secondary nameserver, such as Google's, for whenever the container is not running, by changing the line to dns-nameservers xxx.xxx.xxx.xxx 8.8.8.8 (i.e. both addresses on one line, separated by a space, with no comma).
If you updated the /etc/network/interfaces file, you then need to restart your networking service with sudo /etc/init.d/networking restart, so that the /etc/resolv.conf file that Docker copies into containers during the build is updated automatically.
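As a sketch, the relevant stanza of /etc/network/interfaces might then look like this (the address matches the example server IP above; netmask and gateway are placeholders for your network):
auto eth0
iface eth0 inet static
    address 192.168.1.12
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 192.168.1.12 8.8.8.8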
Now restart all of your containers
sudo docker stop $CONTAINER_ID
sudo docker start $CONTAINER_ID
This causes their /etc/resolv.conf files to be updated so that they point to the new nameserver settings.
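If you have many containers, a one-liner sketch that cycles every running container (uses the standard docker ps -q flag; assumes you really do want to restart them all):
for id in $(sudo docker ps -q); do sudo docker stop "$id" && sudo docker start "$id"; done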
DNS lookups in all your docker containers (that you have built or restarted since making the changes) should now work using your dnsmasq container!
As a side note, this means that docker containers on other hosts can also take advantage of your dnsmasq service on this host, as long as their host's nameserver settings are set to use this server's public IP.
