How to create docker containers with the same internal IP address? - networking

I have an environment where I need to run some external software in Docker containers. This software tries to connect to our product at a specific IP address - let's say 192.168.255.2 - and this address is fixed and cannot be changed. Moreover, the host IP address must also be set to a specific IP - let's say 192.168.255.3.
The product supports 2 ethernet interfaces:
the first has strict restrictions regarding IP addressing - let's call it "first"
the second does not have such restrictions and provides similar functionality - for this example let's assume its IP address is set to 10.1.1.2/24 - let's call it "second"
I need to run multiple Docker containers simultaneously, each connected to one product (a 1:1 relationship).
The software running inside each container must believe it reaches the product through the "first" network interface (the one with the static IP assignment that cannot be changed).
So all I want to do is create containers with the same IP address, to pretend that the application inside each container is using the product's "first" ethernet interface, and then at the host level redirect all of that traffic to the "second" interface using iptables.
Therefore I have one major problem: how to create multiple docker containers with the same IP address?

Taking the exact phrasing of your question, Docker has an option to share the network stack of another container. Simply run:
docker run -d --name containera yourimage
docker run -d --net container:containera anotherimage
And you'll see that the second container has the same IP interfaces and can even see ports being used by the first container.
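A quick way to confirm the shared stack (busybox is used here just for illustration):
docker exec containera ip addr show eth0
docker run --rm --net container:containera busybox ip addr show eth0
# both commands print the same interface and IP, since the two containers share one network namespace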
I'd recommend instead that you configure both interfaces on your Docker host and bind to the host IP you need, then don't worry about the actual IP of the container. The result will be much simpler to manage. Here's how you bind to a single IP on the host, with ports 8080 and 8888 mapped to two different containers' port 80:
docker run -d -p 192.168.255.2:8080:80 --name nginx8080 nginx
docker run -d -p 192.168.255.2:8888:80 --name nginx8888 nginx
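If the containers must still see the product at the fixed address, the iptables redirect you describe could look roughly like this - only a sketch, reusing the example addresses from the question (192.168.255.2 for "first", 10.1.1.2 for "second"):
# rewrite traffic aimed at the fixed "first" address to the "second" interface
iptables -t nat -A PREROUTING -d 192.168.255.2 -j DNAT --to-destination 10.1.1.2
# masquerade the rewritten traffic so replies come back through the host
iptables -t nat -A POSTROUTING -d 10.1.1.2 -j MASQUERADE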

Multiple static IPs for Docker containers

I have a Docker host that should allow each container to have multiple static IP addresses. The application inside the container should then be able to choose from which address it will send traffic to remote hosts (e.g. ping -I <source-address> example.com).
Imagine a setup like this: IP addresses 10.0.0.10 - 10.0.0.19 are assigned to ContainerA, 10.0.0.20 - 10.0.0.29 to ContainerB, and so on. Any traffic to ContainerA's address range is forwarded to ContainerA, while outgoing traffic originates from an address in that range that ContainerA can choose. The same applies to ContainerB.
The default --net=bridge mode does not seem to support this. The closest I could get is that incoming traffic to any of ContainerA's addresses is correctly forwarded to the container, but outgoing traffic always originates from the same single address.
When using --net=host, the first container will attach to all available IP addresses, thus the second container will not be able to open the sockets in its IP range.
The --ip option of the docker run command seems to come close to what I need, as explained in this blog post. Unfortunately, it does not seem to support multiple static IPs per container.
If more convenient, using CIDR subnets instead of IP ranges is fine.
How do I need to configure Docker to achieve this?
I think you can do it by customizing the docker0 bridge, or even by creating your own network bridge.
Every Docker container has a single IP only. We can also set a custom IP by creating a bridge network:
docker network create net1 --driver=bridge --subnet="192.168.0.0/27"
If you don't specify the driver, it defaults to a bridge network.
Here, using --subnet, you give a custom address range to the network, and within that network you can then assign custom IP addresses to the containers.
Then run a container as,
docker run -it --network=net1 --ip="192.168.0.3" --name=MyContainer image_name
Now, a /27 leaves 32 - 27 = 5 host bits, i.e. (2^5) - 2 = 30 usable addresses, so this network can hold up to 30 containers (29 in practice, since Docker reserves one address for the gateway).
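You can verify the assignment afterwards, e.g. (using the names from above):
docker network inspect net1
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' MyContainer
# prints 192.168.0.3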
Hmm, I'm wondering if I really got the right meaning of your question.
You say:
"while outgoing traffic originates from an address from that range that ContainerA can chose."
This means that your connection would have to be UDP - or the TCP connection would break without the same IP for inbound and outbound traffic, right?
I think you could make a network and assign IP addresses on that network to your containers.
You can do this on the command line, but I'd rather go for a docker-compose file.
It could be something like this:
version: '2.1'
services:
  containerA:
    image: xxx
    networks:
      local_net:
        ipv4_address: 10.0.0.10
        # note: compose supports a single ipv4_address per network; a second
        # entry (e.g. 10.0.0.11) would be a duplicate YAML key and is invalid.
        # Extra addresses would have to be added inside the container.
    ...
  containerB:
    image: xxx
    networks:
      local_net:
        ipv4_address: 10.0.0.20
    ...
networks:
  local_net:
    ipam:
      driver: default
      config:
        - subnet: 10.0.0.0/24
          gateway: 10.0.0.1   # the gateway must lie inside the subnet
If you want to automate the creation of the file, you can script it, I think.
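A throwaway generator might look like this (just a sketch; the image name and address ranges are placeholders):
#!/bin/sh
# emit one service per container, each pinned to the first address of its range
{
  echo "version: '2.1'"
  echo "services:"
  i=1
  for base in 10 20 30; do
    echo "  container$i:"
    echo "    image: xxx"
    echo "    networks:"
    echo "      local_net:"
    echo "        ipv4_address: 10.0.0.$base"
    i=$((i+1))
  done
  echo "networks:"
  echo "  local_net:"
  echo "    ipam:"
  echo "      driver: default"
  echo "      config:"
  echo "        - subnet: 10.0.0.0/24"
} > docker-compose.yml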

Docker on CentOS with bridge to LAN network

I have a server VLAN of 10.101.10.0/24 and my Docker host is 10.101.10.31. How do I configure a bridge network on my Docker host (a VM) so that all the containers can connect directly to my LAN network without having to redirect ports around on the default 172.17.0.0/16? I tried searching, but all the howtos I've found so far resulted in losing my SSH session, and I had to go into the VM from a console to revert the steps I did.
There are multiple ways this can be done. The two I've had the most success with are routing a subnet to a Docker bridge, and using a custom bridge on the host LAN.
Docker Bridge, Routed Network
This has the benefit of only needing native Docker tools to configure the network. It has the downside of needing to add a route to your network, which is outside of Docker's remit and usually manual (or relies on the "networking team").
Enable IP forwarding
/etc/sysctl.conf: net.ipv4.ip_forward = 1
sysctl -p /etc/sysctl.conf
Create a docker bridge with a new subnet on your VM network, say 10.101.11.0/24
docker network create routed0 --subnet 10.101.11.0/24
Tell the rest of the network that 10.101.11.0/24 should be routed via 10.101.10.X, where X is the IP of your docker host. This is the external router/gateway/"network guy" config. On a linux gateway you could add a route with:
ip route add 10.101.11.0/24 via 10.101.10.31
Create containers on the bridge with 10.101.11.0/24 addresses.
docker run --net routed0 busybox ping 10.101.10.31
docker run --net routed0 busybox ping 8.8.8.8
Then you're done. Containers have routable IP addresses.
If you're OK with the network side, or you run something like RIP/OSPF on the network, or Calico, which takes care of routing, then this is the cleanest solution.
Custom Bridge, Existing Network (and interface)
This has the benefit of not requiring any external network setup. The downside is that the setup on the Docker host is more complex. The main interface requires this bridge at boot time, so it's not a native Docker network setup - Pipework or manual container setup is required.
Using a VM can make this a little more complicated, as you are running extra interfaces with extra MAC addresses over the main VM interface, which will need additional "promiscuous" config first to allow this to work.
The permanent network config for bridged interfaces varies by distro. The following commands outline how to set the interface up, but the changes will disappear after a reboot. You are going to need console access or a separate route into your VM, as you are changing the main network interface config.
Create a bridge on the host.
ip link add name shared0 type bridge
ip link set shared0 up
In /etc/sysconfig/network-scripts/ifcfg-shared0 (the file name should match the DEVICE):
DEVICE=shared0
TYPE=Bridge
BOOTPROTO=static
DNS1=8.8.8.8
GATEWAY=10.101.10.1
IPADDR=10.101.10.31
NETMASK=255.255.255.0
ONBOOT=yes
Attach the primary interface to the bridge, usually eth0
ip link set eth0 up
ip link set eth0 master shared0
In /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
TYPE=Ethernet
IPV6INIT=no
USERCTL=no
BRIDGE=shared0
Reconfigure your bridge to have eth0's ip config.
ip addr add dev shared0 10.101.10.31/24
ip route add default via 10.101.10.1
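Depending on the existing config, you may also need to clear eth0's old address so the host doesn't answer on two interfaces (an extra step, not strictly part of the listing above):
ip addr flush dev eth0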
Attach containers to bridge with 10.101.10.0/24 addresses.
CONTAINERID=$(docker run -d --net=none busybox sleep 600)
pipework shared0 $CONTAINERID 10.101.10.43/24@10.101.10.Y
Or use a DHCP client inside the container
pipework shared0 $CONTAINERID dhclient
Docker macvlan network
Docker has since added a network driver called macvlan that can make a container appear to be directly connected to the physical network the host is on. The container is attached to a parent interface on the host.
docker network create -d macvlan \
--subnet=10.101.10.0/24 \
--gateway=10.101.10.1 \
-o parent=eth0 pub_net
This will suffer from the same VM/soft-switch problems, where the network and the interface will need to be promiscuous with regard to MAC addresses.
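Once the network exists, attaching a container with a static LAN address is straightforward (a sketch; the address is assumed to be unused on the VLAN):
docker run --rm --net pub_net --ip 10.101.10.50 busybox ping -c 3 10.101.10.1
# caveat: with macvlan the host itself cannot reach the container through the parent interface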

Publishing docker swarm mode port only to localhost

I've created a Docker swarm with a website inside the swarm, publishing port 8080 to the outside. I want to consume that port with Nginx running outside the swarm on port 80, which will perform server name resolution and host the static files.
The problem is that swarm automatically publishes port 8080 to the internet using iptables, and I don't know if it is possible to allow only the local nginx instance to use it. Currently users can access the site on both port 80 and port 8080, and the latter is broken (no images).
I tried playing with ufw, but it's not working. Manually changing iptables would also be a nightmare, as I would have to do it on every swarm node after every update. Any solutions?
EDIT: I can't use the same network for the swarm and for nginx outside the swarm, because an overlay network is incompatible with normal, single-host containers. Theoretically I could put nginx into the swarm, but I prefer to keep it separate, on the same host that holds the static files.
No, right now you are not able to bind a published port to a specific IP (not even to 127.0.0.1) or to an interface (like the loopback interface lo). But there are two issues dealing with this problem:
github.com - moby/moby - Assigning service published ports to IP
github.com - moby/moby - docker swarm mode: ports on 127.0.0.1 are exposed to 0.0.0.0
So you could subscribe to them and/or participate in the discussion.
Further reading:
How to bind the published port to specific eth[x] in docker swarm mode
Yes, if the containers are in the same network you don't need to publish ports for containers to access each other.
In your case you can publish port 80 from the nginx container and not publish any ports from the website container. Nginx can still reach the website container on port 8080 as long as both containers are in the same Docker network.
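As a rough sketch of that layout with standalone containers (the network name and website image are placeholders):
docker network create webnet
docker run -d --net webnet --name website mywebsite
# no -p on the website: port 8080 stays internal to webnet
docker run -d --net webnet --name nginx -p 80:80 nginx
# nginx config then proxies to http://website:8080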
"Temp" solution that I am using is leaning on alpine/socat image.
Idea:
use additional lightweight container that is running outside of swarm and use some port forwarding tool to (e.g. socat is used here)
add that container to the same network of the swarm service we want to expose only to localhost
expose this helper container at localhost:HOST_PORT:INTERNAL_PORT
use socat of this container to forward trafic to swarm's machine
Command:
docker run --name socat-elasticsearch -p 127.0.0.1:9200:9200 --network elasticsearch --rm -it alpine/socat tcp-listen:9200,reuseaddr,fork tcp:elasticsearch:9200
The -it flags can be removed once you have confirmed that everything works for you. Add -d to run it daemonized.
Daemon command:
docker run --name socat-elasticsearch -d -p 127.0.0.1:9200:9200 --network elasticsearch --rm alpine/socat tcp-listen:9200,reuseaddr,fork tcp:elasticsearch:9200
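A quick check that the bind really is loopback-only (illustrative; <host-ip> stands for the machine's LAN address):
curl -s http://127.0.0.1:9200/
# works on the host itself
curl -s http://<host-ip>:9200/
# refused - nothing listens on the external address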
My use case:
Sometimes I need to access ES directly, so this approach is just fine for me.
Would like to see some docker's native solution, though.
P.S. Docker's auto-restart feature could be used if this needs to stay up and running after a host machine restart.
See restart policy docs here:
https://docs.docker.com/engine/reference/commandline/run/#restart-policies---restart
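That would mean dropping --rm (it conflicts with a restart policy) and adding, for example, --restart unless-stopped:
docker run --name socat-elasticsearch -d --restart unless-stopped -p 127.0.0.1:9200:9200 --network elasticsearch alpine/socat tcp-listen:9200,reuseaddr,fork tcp:elasticsearch:9200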

How do I get the external host's ip address from inside a docker container

We are transitioning (we hope) from CoreOS to CentOS, from fleet to swarm. I need to determine the ip address of the machine running docker from inside the container.
The problem is that we use nginx to determine which of the machines in our Docker cluster runs which service. To make this work, the container needs to post the IP address of the machine it is located on to our etcd repository. Everything I have seen so far gets me the 172.17.0.1 address for the external machine, but ALL of our containers on ALL of our Docker hosts will have that private address. I need an EXTERNAL address that nginx can use to reach the service.
I could use the '--hostname ip ...' option or the '-e EXT_HOST_IP=ip ...' option to set an ip address, but if I include these in the 'docker run' command, the shell processing the docker command will expand the 'ip...' and return the ip address of the current machine -- NOT the machine upon which swarm will eventually run the container.
The best I have come up with so far is to create a file/directory on the host machine that contains the ip address of the host machine. I can then use the docker '-v' option to mount the directory inside the container, and get the ip address from that. It just seems like there should be an easier way to do this.
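That file-based workaround could look like this (a sketch; the path, the interface choice and the image name are assumptions):
# on each host, e.g. from a boot script: record the machine's external IP
mkdir -p /etc/hostinfo
ip -4 addr show eth0 | awk '/inet / { sub(/\/.*/, "", $2); print $2 }' > /etc/hostinfo/ip
# mount it read-only into the container; the process inside reads /etc/hostinfo/ip and posts it to etcd
docker run -d -v /etc/hostinfo:/etc/hostinfo:ro myimage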
Swarm issue 1106 mentions in a recent answer
inside swarm I can get the ip of the host machine like so
ip route | awk '/default/ { print $3 }'
Which is fine for many purposes.
But when I talk to that IP I need to use the proper dns name for TLS to work.
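If PTR records exist for that address (an assumption about your DNS setup), a reverse lookup can recover a name the certificate might match:
HOST_IP=$(ip route | awk '/default/ { print $3 }')
getent hosts "$HOST_IP"
# prints "IP  hostname" when a PTR record resolves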

Difference between --link and --alias in overlay docker network?

I am reading this official Docker 1.10.3 documentation (at this time, it is still in a branch) and it says:
--net-alias=ALIAS
In addition to --name as described above, a container is discovered by one or more of its configured --net-alias (or --alias in docker network connect command) within the user-defined network. The embedded DNS server maintains the mapping between all of the container aliases and its IP address on a specific user-defined network. A container can have different aliases in different networks by using the --alias option in docker network connect command.
--link=CONTAINER_NAME:ALIAS
Using this option as you run a container gives the embedded DNS an extra entry named ALIAS that points to the IP address of the container identified by CONTAINER_NAME. When using --link, the embedded DNS will guarantee that the localized lookup result is only on that container where the --link is used. This lets processes inside the new container connect to the container without having to know its name or IP.
So is a network alias on one container effectively a link from another container in the same network?
There are two differences between --net-alias and --link:
With --net-alias, one container can access the other container only if they are on the same network. In other words, in addition to --net-alias foo and --net-alias bar, you need to start both containers with --net foobar_net after creating the network with docker network create foobar_net.
With --net-alias foo, all containers in the same network can reach the container by using its alias foo. With --link, only the linked container can reach the container by using the name foo.
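To make that concrete, a minimal sketch using the names from above:
docker network create foobar_net
docker run -d --net foobar_net --net-alias foo --name c1 nginx
# any other container on foobar_net can now resolve the alias
docker run --rm --net foobar_net busybox ping -c 1 foo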
Historically, --link was created before libnetwork and all network-related features. Before libnetwork, all containers ran in the same network bridge, and --link only added names to /etc/hosts. Then, custom networks were added and the behavior of --link in user-defined networks was changed.
See also Legacy container links for more information on --link.
