I plan to split my monolithic server up into many small Docker containers but haven't found a good solution for "inter-container communication" yet. This is my target scenario:
I know how to link containers together and how to expose ports, but none of these solutions are satisfying to me.
Is there any solution to communicate via hostnames (container names) between the containers like in a traditional server network?
The new networking feature allows you to connect to containers by
their name, so if you create a new network, any container connected to
that network can reach other containers by their name. Example:
1) Create new network
$ docker network create <network-name>
2) Connect containers to network
$ docker run --net=<network-name> ...
or
$ docker network connect <network-name> <container-name>
3) Ping container by name
docker exec -ti <container-name-A> ping <container-name-B>
64 bytes from c1 (172.18.0.4): icmp_seq=1 ttl=64 time=0.137 ms
64 bytes from c1 (172.18.0.4): icmp_seq=2 ttl=64 time=0.073 ms
64 bytes from c1 (172.18.0.4): icmp_seq=3 ttl=64 time=0.074 ms
64 bytes from c1 (172.18.0.4): icmp_seq=4 ttl=64 time=0.074 ms
See this section of the documentation.
Note: Unlike legacy links, the new networking will not create environment variables, nor share environment variables with other containers.
This feature currently doesn't support aliases.
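A quick hedged check of that note (reusing the placeholder names from the example above): linked-style environment variables are absent, yet name resolution still works:
$ docker exec <container-name-A> env | grep -i <container-name-B>   # nothing injected
$ docker exec <container-name-A> ping -c 1 <container-name-B>       # still resolves by name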
Edit: After Docker 1.9, the docker network command (see below https://stackoverflow.com/a/35184695/977939) is the recommended way to achieve this.
My solution is to set up dnsmasq on the host and have its DNS records updated automatically: each "A" record carries a container's name and points to that container's IP address, refreshed every 10 seconds. The updating script is pasted here:
#!/bin/bash
# Update interval in seconds (10 by default)
INTERVAL=${INTERVAL:-10}
# dnsmasq config directory
DNSMASQ_CONFIG=${DNSMASQ_CONFIG:-.}
# Commands used in this script
DOCKER=${DOCKER:-docker}
SLEEP=${SLEEP:-sleep}
TAIL=${TAIL:-tail}
# Maps container name -> last known IP address
declare -A service_map

while true
do
    changed=false
    while read -r line
    do
        name=${line##* }    # the container name is the last column of `docker ps`
        ip=$(${DOCKER} inspect --format '{{.NetworkSettings.IPAddress}}' "$name")
        if [ -z "${service_map[$name]}" ] || [ "${service_map[$name]}" != "$ip" ]   # IP address changed
        then
            service_map[$name]=$ip
            # Write a dnsmasq host-record file for this container
            echo "$name has a new IP address $ip" >&2
            echo "host-record=$name,$ip" > "${DNSMASQ_CONFIG}/docker-$name"
            changed=true
        fi
    done < <(${DOCKER} ps | ${TAIL} -n +2)
    # An IP address changed, so restart dnsmasq to pick up the new records
    if [ "$changed" = true ]
    then
        systemctl restart dnsmasq
    fi
    ${SLEEP} "$INTERVAL"
done
Make sure your dnsmasq service is listening on docker0. Then start your containers with --dns HOST_ADDRESS to use this mini DNS service.
Reference: http://docs.blowb.org/setup-host/dnsmasq.html
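As a rough sketch of how to consume this mini DNS service (172.17.0.1 is an assumption for the docker0 address; substitute whatever address your bridge actually has):
# Find the address dnsmasq listens on
$ ip addr show docker0
# Point a container at the mini DNS service
$ docker run --dns 172.17.0.1 -it busybox ping some-container-name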
That should be what --link is for, at least for the hostname part.
With docker 1.10, and PR 19242, that would be:
docker run --net-alias=[]: Add network-scoped alias for the container
(see the last section below)
That is what the docs section "Updating the /etc/hosts file" details:
In addition to the environment variables, Docker adds a host entry for the source container to the /etc/hosts file.
For instance, launch an LDAP server:
docker run -t --name openldap -d -p 389:389 larrycai/openldap
And define an image to test that LDAP server:
FROM ubuntu
RUN apt-get update && apt-get -y install ldap-utils
RUN touch /root/.bash_aliases
RUN echo "alias lds='ldapsearch -H ldap://internalopenldap -LL -b ou=Users,dc=openstack,dc=org -D cn=admin,dc=openstack,dc=org -w password'" > /root/.bash_aliases
ENTRYPOINT bash
You can expose the 'openldap' container as 'internalopenldap' within the test image with --link:
docker run -it --rm --name ldp --link openldap:internalopenldap ldaptest
Then, if you type 'lds', that alias will work:
ldapsearch -H ldap://internalopenldap ...
That would return the people entries, meaning internalopenldap is correctly reached from the ldaptest image.
Of course, docker 1.7 will add libnetwork, which provides a native Go implementation for connecting containers. See the blog post.
It introduced a more complete architecture, with the Container Network Model (CNM)
That will update the Docker CLI with new "network" commands, and document how the "--net" flag is used to assign containers to networks.
docker 1.10 has a new section Network-scoped alias, now officially documented in network connect:
While links provide private name resolution that is localized within a container, the network-scoped alias provides a way for a container to be discovered by an alternate name by any other container within the scope of a particular network.
Unlike the link alias, which is defined by the consumer of a service, the network-scoped alias is defined by the container that is offering the service to the network.
Continuing with the above example, create another container in isolated_nw with a network alias.
$ docker run --net=isolated_nw -itd --name=container6 --net-alias app busybox
8ebe6767c1e0361f27433090060b33200aac054a68476c3be87ef4005eb1df17
--alias=[]
Add network-scoped alias for the container
You can use the --link option to link another container with a preferred alias.
You can pause, restart, and stop containers that are connected to a network. Paused containers remain connected and can be revealed by docker network inspect. When the container is stopped, it does not appear on the network until you restart it.
If specified, the container's IP address(es) is reapplied when a stopped container is restarted. If the IP address is no longer available, the container fails to start.
One way to guarantee that the IP address is available is to specify an --ip-range when creating the network, and choose the static IP address(es) from outside that range. This ensures that the IP address is not given to another container while this container is not on the network.
$ docker network create --subnet 172.20.0.0/16 --ip-range 172.20.240.0/20 multi-host-network
$ docker network connect --ip 172.20.128.2 multi-host-network container2
$ docker network connect --link container1:c1 multi-host-network container2
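To check both kinds of aliases, a small hedged sketch reusing the names from the documentation examples above (c1 is the link alias visible only to container2; app is the network-scoped alias offered by container6 on isolated_nw):
$ docker exec container2 ping -c 1 c1
$ docker run --rm --net=isolated_nw busybox ping -c 1 app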
EDIT: It is not bleeding edge anymore: http://blog.docker.com/2016/02/docker-1-10/
Original Answer
I battled with it the whole night.
If you're not afraid of bleeding edge, the latest version of Docker engine and Docker compose both implement libnetwork.
With the right config file (which needs to declare version 2), you can create services that all see each other. And, as a bonus, you can scale them with docker-compose as well (you can scale any service that doesn't bind a port on the host).
Here is an example file:
version: "2"
services:
router:
build: services/router/
ports:
- "8080:8080"
auth:
build: services/auth/
todo:
build: services/todo/
data:
build: services/data/
And the reference for this new version of the compose file:
https://github.com/docker/compose/blob/1.6.0-rc1/docs/networking.md
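A hedged usage sketch with the service names from the example file above (assuming the router image ships ping):
$ docker-compose up -d
$ docker-compose scale auth=3                # only services without host port bindings can be scaled
$ docker-compose run --rm router ping auth   # service names resolve on the compose network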
As far as I know, by using only Docker this is not possible. You need some DNS to map container IPs to hostnames.
If you want an out-of-the-box solution, one option is to use, for example, Kontena. It comes with network overlay technology from Weave, and this technology is used to create virtual private LAN networks for each service, so every service can be reached via a service_name.kontena.local address.
Here is a simple example of a WordPress application's YAML file, where the WordPress service connects to the MySQL server via the wordpress-mysql.kontena.local address:
wordpress:
  image: wordpress:4.1
  stateful: true
  ports:
    - 80:80
  links:
    - mysql:wordpress-mysql
  environment:
    - WORDPRESS_DB_HOST=wordpress-mysql.kontena.local
    - WORDPRESS_DB_PASSWORD=secret
mysql:
  image: mariadb:5.5
  stateful: true
  environment:
    - MYSQL_ROOT_PASSWORD=secret
I am trying to walk through a tutorial that brings up an application in a docker/podman container instance.
I have attempted to use -p port:port and --expose port but neither seems to work.
I've ensured that I see the port in a listen state with ss -an.
I've made sure there isn't anything else trying to bind to that port.
No matter what I do, I can never hit localhost:port or ip_address:port.
I feel like I am fundamentally missing something but don't know where to look next.
Any suggestions for things to try or documentation to review would be appreciated.
Thanks,
Shawn
Expose: (Reference)
Expose tells Podman that the container requests that port be open, but
does not forward it. All exposed ports will be forwarded to random
ports on the host if and only if --publish-all is also specified
As per Red Hat documentation for Containerfile,
EXPOSE indicates that the container listens on the specified network
port at runtime. The EXPOSE instruction defines metadata only; it does
not make ports accessible from the host. The -p option in the podman
run command exposes container ports from the host.
To specify the port number:
The -p option in the podman run command exposes container ports from
the host.
Example:
podman run -d -p 8080:80 --name httpd-basic quay.io/httpd-parent:2.4
In the above example, port 80 is the port the container listens on (and exposes), and we can access it from outside the container via port 8080 on the host.
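To verify both behaviours, a hedged sketch with the image from the example above:
$ curl http://localhost:8080        # host port 8080 forwards to port 80 in the container
$ podman run -d -P --name httpd-random quay.io/httpd-parent:2.4   # --publish-all maps EXPOSEd ports to random host ports
$ podman port httpd-random          # show which random host port was assigned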
Hello Helpful Developers,
I'm having issues connecting docker containers. I have built a subversion docker container and a mongo docker container.
docker run -d -p 3343:3343 -p 4434:4434 -p 18080:18080 --name svn-server mamohr/subversion-edge
docker run -p 27017:27017 --name my-mongo -d mongo
I'm able to hit http://x.x.x.x:18080/ from a browser, but unable to curl from the my-mongo instance. I can talk to each container from my development machine, but unable to talk from container to container.
I see things like --net=bridge, host, ????, but I'm getting confused.
Please help.....
Borrowing this schema from SDN hub, imagine that C1 is your SVN container and C2 is your Mongo container:
Both containers are connected to the docker0 bridge and NATed to the external 192.168.50.16 network.
To connect from your Mongo container, check the bridge0 IP address of the SVN container:
# docker inspect <svn-container-name>
"Networks": {
"bridge0": {
"IPAddress": "172.17.0.19",
}
then curl directly to its bridge0 IP address:
curl http://172.17.0.19:18080/
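To avoid copying the address by hand, you can extract it with a format string; a hedged one-liner, assuming the container name svn-server from the question:
SVN_IP=$(docker inspect --format '{{.NetworkSettings.IPAddress}}' svn-server)
curl "http://${SVN_IP}:18080/"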
To get you immediately going, you can start your containers with --net=host, and then both the containers and the host will be able to communicate.
Or you can use a link (--link) from mongo to the other container.
There is a lot to explain about Docker networking, and the Docker documentation is a good point to start.
Read the documentation at https://docs.docker.com/engine/userguide/networking/dockernetworks/
I would advise you to take a look at Docker Compose. I think it's the best way to manage a system that is composed of many containers.
Here is the official guide: https://docs.docker.com/compose/
Docker containers by default start attached to the default bridge network (named bridge). You can do docker network ls and see the networks you have available. You can also create networks with different attributes, etc.
So in your case, both your containers are being started on the same default network, which means they should be able to communicate with each other just fine. In fact, if you only want your SVN server to be able to talk to Mongo (and don't need to connect to Mongo from your host) you don't even need to expose ports on the Mongo container. Containers on the same network can communicate with each other just fine without ports being exposed. Exposing ports is to allow host > container connectivity.
So, what hostname / port are you using when you try to curl from the mongo instance to your SVN instance? You should be using svn-server as that will resolve to the SVN container (using Docker's built-in DNS resolution).
Direct container to container networking via container name can be achieved with a user defined network.
docker network create mynet
docker run -d --net=mynet --name svn-server mamohr/subversion-edge
docker run -d --net=mynet --name my-mongo mongo
docker exec <svn-id> ping my-mongo
docker exec <mongo-id> ping svn-server
You should always be able to connect to mapped ports though, even in your current setup. The host runs a process that listens on that port, so any host IP should do.
$ docker run -d -p 8080:80 --net=mynet --name sleep busybox nc -lp 80 -e echo here!
63115ef88664f1186ea012e41138747725790383c741c12ca8675c3058383e68
$ ss -lntp | grep 8080
LISTEN 0 128 :::8080 :::* users:(("exe",pid=6287,fd=4))
$ docker run busybox nc <any_host_ip> 8080
here!
Please remember, a container is not available to the outside world by default.
When you ran the svn-server container, you published the container's port 18080 and mapped it to the host's port 18080, so you can access it at http://your_host_IP:18080.
From your two docker run commands, both the svn-server container and the my-mongo container are on the default bridge network. These two containers are connected by docker0, so they can reach each other directly by their container IP addresses.
But if you try to access http://your_host_IP:18080 from within your my-mongo container, the request would first be sent to docker0, and docker0 will drop it because you're trying to access the host, not the svn-server container.
So try curl http://svn-server_IP:18080 from the my-mongo container to access the svn-server container.
Since Docker 1.10 (and the libnetwork update) we can manually give an IP to a container inside a user-defined network, and that's cool!
I want to give a container an IP address in my LAN (like we can do with Virtual Machines in "bridge" mode). My LAN is 192.168.1.0/24, all my computers have IP addresses inside it. And I want my containers having IPs in this range, in order to reach them from anywhere in my LAN (without NAT/PAT/etc...).
I obviously read Jessie Frazelle's blog post and a lot of other posts here and elsewhere, like:
How to set a docker container's iP?
How to assign specific IP to container and make that accessible outside of VM host?
and so much more, but nothing worked out; my containers still have IP addresses "inside" my Docker host, and are not reachable from other computers on my LAN.
Reading Jessie Frazelle's blog post, I thought (since she uses public IPs) we could do what I want to do.
Edit: Indeed, if I do something like:
docker network create --subnet 192.168.1.0/24 --gateway 192.168.1.1 homenet
docker run --rm -it --net homenet --ip 192.168.1.100 nginx
The new interface on the Docker host (br-[a-z0-9]+) takes the '--gateway' IP, which is my router's IP. And that puts the same IP on two machines on the network... BOOM
Thanks in advance.
EDIT: This solution is now useless. Since version 1.12, Docker provides two network drivers: macvlan and ipvlan. They allow assigning a static IP from the LAN network. See the answer below.
After looking for people who have the same problem, we came up with a workaround:
To sum up:
(V)LAN is 192.168.1.0/24
Default Gateway (= router) is 192.168.1.1
Multiple Docker Hosts
Note: We have two NICs: eth0 and eth1 (which is dedicated to Docker)
What do we want:
We want to have containers with IPs in the 192.168.1.0/24 network (like computers) without any NAT/PAT/translation/port-forwarding/etc.
Problem
When doing this:
docker network create --subnet 192.168.1.0/24 --gateway 192.168.1.1 homenet
we are able to give containers the IPs we want, but the bridge created by Docker (br-[a-z0-9]+) will have the IP 192.168.1.1, which is our router's.
Solution
1. Set up the Docker network
Use the DefaultGatewayIPv4 parameter:
docker network create --subnet 192.168.1.0/24 --aux-address "DefaultGatewayIPv4=192.168.1.1" homenet
By default, Docker will give the bridge interface (br-[a-z0-9]+) the first IP, which might already be taken by another machine. The solution is to use the --gateway parameter to tell Docker to assign an arbitrary IP that is available:
docker network create --subnet 192.168.1.0/24 --aux-address "DefaultGatewayIPv4=192.168.1.1" --gateway=192.168.1.200 homenet
We can specify the bridge name by adding -o com.docker.network.bridge.name=br-home-net to the previous command.
2. Bridge the bridge!
Now we have a bridge (br-[a-z0-9]+) created by Docker. We need to bridge it to a physical interface (in my case I have two NICs, so I'm using eth1 for that):
brctl addif br-home-net eth1
3. Delete the bridge IP
We can now delete the IP address from the bridge, since we don't need one:
ip a del 192.168.1.200/24 dev br-home-net
The IP 192.168.1.200 can be used as the bridge IP on multiple Docker hosts, since we don't actually use it, and we remove it.
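With the bridged setup in place, a quick hedged check (addresses and network name follow the example above):
$ docker run --rm -it --net homenet --ip 192.168.1.50 busybox ping -c 1 192.168.1.1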
Docker now supports Macvlan and IPvlan network drivers. The Docker documentation for both network drivers can be found here.
With both drivers you can implement your desired scenario (configure a container to behave like a virtual machine in bridge mode):
Macvlan: Allows a single physical network interface (master device) to have an arbitrary number of slave devices, each with its own MAC address.
Requires Linux kernel v3.9–3.19 or 4.0+.
IPvlan: Allows you to create an arbitrary number of slave devices for your master device which all share the same MAC address.
Requires Linux kernel v4.2+ (support for earlier kernels exists but is buggy).
See the kernel.org IPVLAN Driver HOWTO for further information.
Container connectivity is achieved by putting one of the slave devices into the network namespace of the container to be configured. The master devices remains on the host operating system (default namespace).
As a rule of thumb you should use the IPvlan driver if the Linux host that is connected to the external switch / router has a policy configured that allows only one MAC per port. That's often the case in VMWare ESXi environments!
Another important thing to remember (Macvlan and IPvlan): Traffic to and from the master device cannot be sent to and from slave devices. If you need to enable master-to-slave communication, see the section "Communication with the host (default-ns)" in the "IPVLAN – The beginning" paper published by one of the IPvlan authors (Mahesh Bandewar).
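For a first feel of the Macvlan driver, a minimal hedged sketch (the parent interface eth0 and the 192.168.1.0/24 LAN are assumptions; adapt them to your network):
$ docker network create -d macvlan --subnet=192.168.1.0/24 --gateway=192.168.1.1 -o parent=eth0 lan-net
$ docker run --rm -it --network lan-net --ip 192.168.1.100 alpine ip addr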
Use the official Docker driver:
As of Docker v1.12.0-rc2, the new MACVLAN driver is now available in an official Docker release:
MacVlan driver is out of experimental #23524
These new drivers have been well documented by the author(s), with usage examples.
At the end of the day it should provide similar functionality, be easier to set up, and come with fewer bugs and other quirks.
Seeing Containers on the Docker host:
The only caveat with the new official macvlan driver is that the Docker host machine cannot see / communicate with its own containers, which might or might not be desirable, depending on your specific situation.
This issue can be worked around if you have more than one NIC on your Docker host machine and both NICs are connected to your LAN. Then you can either A) dedicate one of your Docker host's two NICs exclusively to Docker and use the remaining NIC for the host to access the LAN, or B) add specific routes to only those containers you need to access via the 2nd NIC. For example:
sudo route add -host $container_ip gw $lan_router_ip $if_device_nic2
Method A) is useful if you want to access all your containers from the Docker host and you have multiple hardwired links.
Whereas method B) is useful if you only require access to a few specific containers from the Docker host, or if your 2nd NIC is a wifi card and would be much slower for handling all of your LAN traffic, for example on a laptop computer.
Installation:
If you cannot see the pre-release -rc2 candidate on Ubuntu 16.04, temporarily add or modify this line in your /etc/apt/sources.list to say:
deb https://apt.dockerproject.org/repo ubuntu-xenial testing
instead of main (which holds the stable releases).
I no longer recommend this solution, so it's been removed. It was using the bridge driver and brctl.
There is a better and official driver now. See the other answer on this page: https://stackoverflow.com/a/36470828/287510
Here is an example of using macvlan. It starts a web server at http://10.0.2.1/.
These commands and Docker Compose file work on QNAP and QNAP's Container Station. Notice that QNAP's network interface is qvs0.
Commands:
The blog post "Using Docker macvlan networks" by Lars Kellogg-Stedman explains what the commands mean.
docker network create -d macvlan -o parent=qvs0 --subnet 10.0.0.0/8 --gateway 10.0.0.1 --ip-range 10.0.2.0/24 --aux-address "host=10.0.2.254" macvlan0
ip link del macvlan0-shim link qvs0 type macvlan mode bridge
ip link add macvlan0-shim link qvs0 type macvlan mode bridge
ip addr add 10.0.2.254/32 dev macvlan0-shim
ip link set macvlan0-shim up
ip route add 10.0.2.0/24 dev macvlan0-shim
docker run --network="macvlan0" --ip=10.0.2.1 -p 80:80 nginx
Docker Compose
Use version 2 because version 3 does not support the other network configs, such as gateway, ip_range, and aux_address.
version: "2.3"
services:
HTTPd:
image: nginx:latest
ports:
- "80:80/tcp"
- "80:80/udp"
networks:
macvlan0:
ipv4_address: "10.0.2.1"
networks:
macvlan0:
driver: macvlan
driver_opts:
parent: qvs0
ipam:
config:
- subnet: "10.0.0.0/8"
gateway: "10.0.0.1"
ip_range: "10.0.2.0/24"
aux_address: "host=10.0.2.254"
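A hedged usage note: after bringing the stack up, the server should be reachable from other machines on the LAN, while the Docker host itself reaches it only through the macvlan0-shim interface configured above:
$ docker-compose up -d
$ curl http://10.0.2.1/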
It's possible to map a physical interface into a container via pipework.
Connect a container to a local physical interface
pipework eth2 $(docker run -d hipache /usr/sbin/hipache) 50.19.169.157/24
pipework eth3 $(docker run -d hipache /usr/sbin/hipache) 107.22.140.5/24
There may be a native way now but I haven't looked into that for the 1.10 release.
I have a running nginx container: # docker run --name mynginx1 -P -d nginx;
And got its PORT info by docker ps: 0.0.0.0:32769->80/tcp, 0.0.0.0:32768->443/tcp
Then I could get a response from within the container (id: c30991a04b2f):
docker exec -i -t c3099 bash
curl http://localhost => returns the default index.html page content; it works.
However, when I run curl http://localhost:32769 outside of the container, I get this:
curl: (7) failed to connect to localhost port 32769: Connection refused
I am running on a mac with docker version 1.9.0; nginx latest
Does anyone know what causes this? Any help? Thank you.
If you are on OSX, you are probably using a VirtualBox VM for your Docker environment.
Make sure you have forwarded your port 32769 to your actual host (the mac), in order for that port to be visible from localhost.
This is valid for the old boot2docker, or the new docker machine.
VBoxManage controlvm "boot2docker-vm" --natpf1 "tcp-port32769,tcp,,32769,,32769"
VBoxManage controlvm "boot2docker-vm" --natpf1 "udp-port32769,udp,,32769,,32769"
(controlvm if the VM is running, modifyvm if the VM is stopped)
(replace "boot2docker-vm" by the name of your VM: see docker-machine ls)
I would recommend not using -P, but rather a static port mapping like -p xxx:80 -p yyy:443.
That way, you can do that port forwarding once, using fixed values.
Of course, you can access the VM directly through docker-machine ip vmname
curl http://$(docker-machine ip vmname):32769
Solved. I misunderstood how Docker port mapping works.
Since I'm using a Mac, the host for the nginx container is a VM; 0.0.0.0:32769->80/tcp maps port 80 of the container to port 32769 of the VM.
solution:
docker-machine ip vm-name => 192.168.99.xx
curl http://192.168.99.xx:32769
Not exactly an answer to your question, but I spent some time trying to figure out a similar thing in the context of "why is my docker container not connecting to elasticsearch on localhost:9200", and this was the first S.O. question that popped up, so I hope it helps some other googling person.
if you are linking containers together (e.g. docker run --rm --name web2 --link db:db training/webapp env)
... then Docker adds environment variables:
DB_NAME=/web2/db
DB_PORT=tcp://172.17.0.5:5432
DB_PORT_5432_TCP=tcp://172.17.0.5:5432
DB_PORT_5432_TCP_PROTO=tcp
DB_PORT_5432_TCP_PORT=5432
DB_PORT_5432_TCP_ADDR=172.17.0.5
... and also updates your /etc/hosts
# /etc/hosts
#...
172.17.0.9 db
so you can technically connect to it, e.g. ping db
https://docs.docker.com/v1.8/userguide/dockerlinks/
so for elasticsearch it is
# /etc/hosts
# ...
172.17.0.28 elasticsearch f9db83d0dfb5 ecs-awseb-qa-3Pobblecom-env-f7yq6jhmpm-10-elasticsearch-fcbfe5e2b685d0984a00
so wget elasticsearch:9200 will work
When I create my container, I want to set a specific IP address for it in the same LAN.
Is that possible? If not, can I edit the DHCP-assigned IP address after creation?
Considering the conclusion of the (now old October 2013) article "How to configure Docker to start containers on a specific IP address range", this doesn't seem to be possible (or at least "done automatically for you by Docker") yet.
Update Nov 2015: a similar problem is discussed in docker/machine issue 1709, which includes a recent workaround (Nov 2015) proposed by Tobias Munk (schmunk42) for docker-machine
(for containers, see the next section):
A workaround for some use-cases could be to create machines like so:
192.168.98.100
docker-machine create -d virtualbox --virtualbox-hostonly-cidr "192.168.98.1/24" m98
192.168.97.100
docker-machine create -d virtualbox --virtualbox-hostonly-cidr "192.168.97.1/24" m97
192.168.96.100
docker-machine create -d virtualbox --virtualbox-hostonly-cidr "192.168.96.1/24" m96
If there's no other machine with the same cidr (Classless Inter-Domain Routing), the machine should always get the .100 IP upon start.
Another workaround:
(see my script in "How do I create a docker machine with a specific URL using docker-machine and VirtualBox?")
My VirtualBox has the DHCP range 192.168.99.100-255, and I wanted to set an IP below 100.
I've found a simple trick to set a static IP: after creating a machine, I run this command and restart the machine:
echo "ifconfig eth1 192.168.99.50 netmask 255.255.255.0 broadcast 192.168.99.255 up" \
| docker-machine ssh prova-discovery sudo tee /var/lib/boot2docker/bootsync.sh > /dev/null
This command creates a file, bootsync.sh, which the boot2docker startup scripts look for and execute.
Now, during machine boot, the command is executed and sets the static IP.
docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM
test-1 - virtualbox Running tcp://192.168.99.50:2376 test-1 (mast
Michele Tedeschi (micheletedeschi) adds
I've updated the commands with:
echo "kill `more /var/run/udhcpc.eth1.pid`\nifconfig eth1 192.168.99.50 netmask 255.255.255.0 broadcast 192.168.99.255 up" | docker-machine ssh prova-discovery sudo tee /var/lib/boot2docker/bootsync.sh > /dev/null
then run command (only the first time)
docker-machine regenerate-certs prova-discovery
now the IP will not be changed by the DHCP
(replace prova-discovery by the name of your docker-machine)
April 2015:
The article mentions the possibility of creating your own bridge (though that by itself doesn't assign one of those IP addresses to a container):
create your own bridge, configure it with a fixed address, tell Docker to use it. Done.
If you do it manually, it will look like this (on Ubuntu):
stop docker
ip link add br0 type bridge
ip addr add 172.30.1.1/20 dev br0
ip link set br0 up
docker -d -b br0
To assign a static IP within the range of an existing bridge IP range, you can try "How can I set a static IP address in a Docker container?", using a static script which creates the bridge and a pair of peer interfaces.
Update July 2015:
The idea mentioned above is also detailed in "How can I set a static IP address in a Docker container?" using:
Building your own bridge
The result should be that the Docker server starts successfully and is now prepared to bind containers to the new bridge.
After pausing to verify the bridge’s configuration, try creating a container — you will see that its IP address is in your new IP address range, which Docker will have auto-detected.
you can use the brctl show command to see Docker add and remove interfaces from the bridge as you start and stop containers, and can run ip addr and ip route inside a container to see that it has been given an address in the bridge’s IP address range and has been told to use the Docker host’s IP address on the bridge as its default gateway to the rest of the Internet.
Start docker with: -b=br0 (that is also what echo 'DOCKER_OPTS="-b=br0"' >> /etc/default/docker can set for you by default)
Use pipework (192.168.1.1 below being the default gateway ip address):
pipework br0 container-name 192.168.1.10/24#192.168.1.1