How do I find the network IP using Docker and Travis? - networking

In my local setup, I can run ...
docker run --name myapp -e HOST=$(docker-machine ip default) --user root myapp
... and then use $HOST to connect to any other container (e.g. one running mongodb).
However, in Travis, docker-machine does not exist. Thus, I cannot simply put that line in my .travis.yml.
How do I get the network IP?

The --link flag adds an entry to /etc/hosts with the IP address of the specified running container:
docker run --name myapp --link mongodb:mongodb myapp
However, please note that:
The default docker0 bridge network supports the use of port mapping and docker run --link to allow communications between containers in the docker0 network. These techniques are cumbersome to set up and prone to error. While they are still available to you as techniques, it is better to avoid them and define your own bridge networks instead.
Another option is the --add-host flag, if you want to add a known IP address:
docker run --name myapp --add-host mongodb:10.10.10.1 myapp
Option 2
Create a network
docker network create --subnet=172.18.0.0/16 mynet123
Run the mongodb container, assigning a static IP
docker run --network mynet123 --ip 172.18.0.22 -d mongodb
Add that IP to the other container
docker run --network mynet123 --add-host mongodb:172.18.0.22 -d myapp
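For Travis specifically, these are plain docker commands, so they can go straight into the build (e.g. under before_script in .travis.yml) with no docker-machine involved. A minimal sketch, reusing the image names mongodb and myapp from above, and relying on the fact that containers on a user-defined network can also resolve each other by name:
# create a user-defined bridge network with a known subnet
docker network create --subnet=172.18.0.0/16 mynet123
# start mongodb on that network with a static IP (and a resolvable container name)
docker run --network mynet123 --ip 172.18.0.22 --name mongodb -d mongodb
# the app can reach it as "mongodb" (or 172.18.0.22); no host IP lookup is needed
docker run --network mynet123 -e HOST=mongodb --name myapp myapp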

Related

Change mapped IP in WordPress Docker container

I'm a bit of a newbie with Docker. The problem is that my server provider changed the public IP recently. When I ran my WordPress container I used the following:
docker run -e WORDPRESS_DB_PASSWORD=xxx --name wordpress-xx --link wordpressdb-xx -p 185.166.xx.xx:8081:80 -v "$PWD/docker/data/wordpress/xx":/var/www/html -d wordpress
How can I change the old IP in order to assign the new one in a container that is already running?
Is it possible to run these containers with the localhost IP? For example:
docker run -e WORDPRESS_DB_PASSWORD=xxx --name wordpress-xx --link wordpressdb-xx -p 127.0.0.1:8081:80 -v "$PWD/docker/data/wordpress/xx":/var/www/html -d wordpress
You can also try to save the current container as an image using docker commit and then run the image as a new container with the new IP.
If you have only a single network interface, you can just pass the port. You can also use the 127.0.0.1 address.
See https://docs.docker.com/engine/reference/commandline/run/#publish-or-expose-port--p-expose
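If you go the docker commit route, a rough sketch could look like the following (container names are the ones from the question; the wordpress-xx:snapshot tag is just an illustrative name):
# snapshot the running container as an image
docker commit wordpress-xx wordpress-xx:snapshot
# replace the old container (the site data lives in the host volume, so it is kept)
docker stop wordpress-xx && docker rm wordpress-xx
# re-run it bound to the new address, to 127.0.0.1, or with just the port for all interfaces
docker run -e WORDPRESS_DB_PASSWORD=xxx --name wordpress-xx --link wordpressdb-xx \
  -p 127.0.0.1:8081:80 -v "$PWD/docker/data/wordpress/xx":/var/www/html -d wordpress-xx:snapshot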

How to expose ports only within the docker network?

I have a few apps running in a Docker network with their ports (3000, 4200, etc.) exposed. I also have an nginx container running within the same Docker network which hosts these apps on port 80 with different domain names (site1.com, site2.com).
But right now if I go directly to the ports the apps are running on (localhost:3000) I can access them that way too.
How do I expose those ports only to the nginx container and not the host system?
But right now if I go directly to the ports the apps are running on (localhost:3000) I can access them that way too.
That's because you are using the -p (aka --publish) flag in your docker run command.
Explanation:
If you want to expose ports between containers only, do not use -p or --publish; just put them on the same Docker network.
Example:
Let's create a new user-defined network:
sudo docker network create appnet
Let's create the nginx container for the reverse proxy. It should be available to the outside world, so here we do publish its port:
sudo docker run --name nginx -d --network appnet -p 80:80 nginx
Now put your apps on the same network, but do not publish their ports:
sudo docker run --name app1 -d --network appnet <app image name/id>
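To check the result, a quick sketch (assuming app1 listens on port 3000 inside its container, as in the question):
# from the host this now fails, because nothing is published on port 3000
curl http://localhost:3000
# from any container on appnet, the app is reachable by its container name
sudo docker run --rm --network appnet busybox wget -qO- http://app1:3000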
You have to use Docker networks.
The default network is shared with the host and is thus accessible from localhost. You can either configure Docker yourself, creating a network manually, or let tools like docker-compose or Kubernetes do it for you.

docker userdefined network - no connection to the outside world

I am trying to expose a Docker container to the outside world (well, to be specific, just to my internal network, 10.0.0.0/24) with a static IP address. In my example the container should have the IP address 10.0.0.200.
Docker version is 1.10.3.
Therefore I created a user-defined network (in bridge mode):
docker network create --subnet 10.0.0.0/24 --gateway 10.0.0.254 dn
Then I created a container and attached it to the network.
docker run -d \
--name testhost \
-it \
-h testhost \
--net dn \
--ip 10.0.0.200 \
-p 8080:8080 \
some image
The container has the correct IP and gateway assigned (10.0.0.200 and 10.0.0.254, the latter also being the IP of the bridge interface Docker created), but no communication from the container to the outside world, nor from the outside to the container, is possible. The only thing that works is nslookup, but to be honest I don't know why that works.
From another host in the network I can ping the bridge interface that was created by the docker network create command.
A second container connected to the dn network can ping my first container, so communication inside the network seems fine.
According to the Docker network documentation (https://docs.docker.com/engine/userguide/networking/#a-bridge-network, see the second picture in the bridge network section), this should be possible.
It seems that I'm missing some step or configuration. Any advice is appreciated, thank you.

Docker EXPOSE vs command line -p option (boot2docker)

After spending way too long trying to access my node server running from a docker container within a boot2docker instance I realised the issue came down to a difference between expose and docker run -p.
In my Dockerfile I had EXPOSE 3001, and I could not access this via my host machine.
After running "docker run -p 3001:3001 myappinst myapp" I was able to access the port.
Up until now I thought "docker run -p 3001:3001" was essentially the same as EXPOSE 3001 in the dockerfile.
I noticed, however, that when running docker ps I get the following for "EXPOSE":
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
16341e2b9968 housemation-crawler:latest "npm start" 2 minutes ago Up 2 minutes 3001/tcp housemation-crawler-inst
(note: 3001/tcp)
vs the below with docker run -p 3001:3001
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0b14f736033c housemation-crawler:latest "npm start" 8 seconds ago Up 2 seconds 0.0.0.0:3001->3001/tcp housemation-crawler-inst
(0.0.0.0:3001->3001/tcp)
Looks like the latter is doing some kind of port forwarding, whereas the former is just opening up the port? Would that be right?
If I wanted to access a non forwarded exposed port how would I go about doing so? Also, if I wanted to have port forwarding within the dockerfile, what would be the correct syntax?
Your assumptions about how EXPOSE in the Dockerfile and the -p option of docker run work are right. As you can read in the Docker online documentation:
EXPOSE <port> [<port>...]
The EXPOSE instruction informs Docker that the container will listen on the specified network ports at runtime. Docker uses this information to interconnect containers using links (see the Docker User Guide) and to determine which ports to expose to the host when using the -P flag. Note: EXPOSE doesn't define which ports can be exposed to the host or make ports accessible from the host by default. To expose ports to the host, at runtime, use the -p flag or the -P flag.
So the EXPOSE instruction in the Dockerfile tells Docker which ports to map to the host when you run the container with the -P flag, but the host ports used are not deterministic; they are chosen by Docker at run time. Apart from this, Docker uses the ports in EXPOSE to export information as environment variables in linked containers.
If you want to choose the host port yourself, you have to use the -p option of docker run.
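A quick way to see the difference, reusing the housemation-crawler image and port 3001 from the question (the container names here are just illustrative):
# EXPOSE 3001 in the Dockerfile plus -P: Docker picks a free host port at run time
docker run -d -P --name crawler-random housemation-crawler
docker port crawler-random 3001      # e.g. 0.0.0.0:32768 (chosen by Docker)
# -p pins the host port explicitly, with or without EXPOSE in the Dockerfile
docker run -d -p 3001:3001 --name crawler-fixed housemation-crawler
docker port crawler-fixed 3001       # 0.0.0.0:3001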

Setting Up Docker Dnsmasq

I'm trying to set up a docker dnsmasq container so that I can have all my docker containers look up the domain names rather than having hard-coded IPs (if they are on the same host). This fixes an issue with the fact that one cannot alter the /etc/hosts file in docker containers, and this allows me to easily update all my containers in one go, by altering a single file that the dnsmasq container references.
It looks like someone has already done the hard work for me and created a dnsmasq container. Unfortunately, it is not "working" for me. I wrote a bash script to start the container as shown below:
name="dnsmasq_"
timenow=$(date +%s)
name="$name$timenow"
sudo docker run \
-v="$(pwd)/dnsmasq.hosts:/dnsmasq.hosts" \
--name=$name \
-p='127.0.0.1:53:5353/udp' \
-d sroegner/dnsmasq
Before running that, I created the dnsmasq.hosts directory and inserted a single file within it called hosts.txt with the following contents:
192.168.1.3 database.mydomain.com
Unfortunately, whenever I try to ping that domain from within:
the host,
the dnsmasq container, or
another container on the same host,
I always receive the ping: unknown host error message.
I tried starting the dnsmasq container without daemon mode so I could debug its output, which is below:
dnsmasq: started, version 2.59 cachesize 150
dnsmasq: compile time options: IPv6 GNU-getopt DBus i18n DHCP TFTP conntrack IDN
dnsmasq: reading /etc/resolv.dnsmasq.conf
dnsmasq: using nameserver 8.8.8.8#53
dnsmasq: read /etc/hosts - 7 addresses
dnsmasq: read /dnsmasq.hosts//hosts.txt - 1 addresses
I am guessing that I have not specified the -p parameter correctly when starting the container. Can somebody tell me what it should be so that other docker containers can look up the DNS, or whether what I am trying to do is actually impossible?
The script that starts the docker dnsmasq service needs to be changed so that it binds to your server's public IP, which in this case is 192.168.1.12 on my eth0 interface:
#!/bin/bash
NIC="eth0"
name="dnsmasq_"
timenow=$(date +%s)
name="$name$timenow"
MY_IP=$(ifconfig $NIC | grep 'inet addr:'| grep -v '127.0.0.1' | cut -d: -f2 | awk '{ print $1}')
sudo docker run \
-v="$(pwd)/dnsmasq.hosts:/dnsmasq.hosts" \
--name=$name \
-p=$MY_IP:53:5353/udp \
-d sroegner/dnsmasq
On the host (in this case Ubuntu 12), you need to update /etc/resolv.conf or the /etc/network/interfaces file so that your public IP (the eth0 or eth1 device) is registered as the nameserver.
You may want to add Google as a secondary nameserver for whenever the container is not running, by changing the line to dns-nameservers xxx.xxx.xxx.xxx 8.8.8.8, i.e. both servers on one line separated by a space, with no comma and no extra line, as sketched below.
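For example, the relevant stanza of /etc/network/interfaces might look like this (192.168.1.12 is the example IP from above; the netmask is just a placeholder):
iface eth0 inet static
    address 192.168.1.12
    netmask 255.255.255.0
    dns-nameservers 192.168.1.12 8.8.8.8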
You then need to restart your networking service (sudo /etc/init.d/networking restart) if you updated the /etc/network/interfaces file, so that /etc/resolv.conf is regenerated automatically; Docker copies this file into each container when it is created.
Now restart all of your containers
sudo docker stop $CONTAINER_ID
sudo docker start $CONTAINER_ID
This causes their /etc/resolv.conf files to be updated so that they point to the new nameserver settings.
DNS lookups in all your docker containers (that you built since making the changes) should now work using your dnsmasq container!
As a side note, this means that docker containers on other hosts can also take advantage of your dnsmasq service on this host, as long as their host's nameserver settings point to this server's public IP.
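To verify the setup, a couple of quick checks (using the example host entry from dnsmasq.hosts above, and assuming the port mapping in the script works as intended):
# query the dnsmasq container through the port published on the host IP
dig @192.168.1.12 database.mydomain.com +short    # expect 192.168.1.3
# from a freshly started container, the name should resolve via /etc/resolv.conf
sudo docker run --rm busybox nslookup database.mydomain.com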
