Adding a new NIC to a Docker container in a specific order - networking

I'm trying to have a CentOS container with two network interfaces.
After going through the Docker docs and some googling, I found this GitHub issue comment that explains how to achieve it.
Following it, I created a new network (default type: bridge):
docker network create my-network
Inspecting the new network, I can see that Docker assigned it the subnet 172.18.0.0/16 and the gateway 172.18.0.1.
Then, when creating the container, I specifically attach the new network:
docker create -ti --privileged --net=my-network --mac-address 08:00:AA:AA:AA:FF <imageName>
Inside the container, I can check with ifconfig that the interface is indeed present, with that IP and MAC address:
eth0 Link encap:Ethernet HWaddr 08:00:AA:AA:AA:FF
inet addr:172.18.0.2 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::a00:aaff:feaa:aaff/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:3 errors:0 dropped:0 overruns:0 frame:0
TX packets:3 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:258 (258.0 b) TX bytes:258 (258.0 b)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)
The problem comes when I connect the container to the default Docker network (docker0, a.k.a. bridge):
docker network connect bridge <my-container>
Checking the interfaces in the container now:
eth0 Link encap:Ethernet HWaddr 02:42:AC:11:00:02
inet addr:172.17.0.2 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::42:acff:fe11:2/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:17 errors:0 dropped:0 overruns:0 frame:0
TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:2941 (2.8 KiB) TX bytes:508 (508.0 b)
eth1 Link encap:Ethernet HWaddr 08:00:AA:AA:AA:FF
inet addr:172.18.0.2 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::a00:aaff:feaa:aaff/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:17 errors:0 dropped:0 overruns:0 frame:0
TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:2941 (2.8 KiB) TX bytes:508 (508.0 b)
The interface for my new network gets moved to eth1, while the interface for the default network becomes eth0.
Also, when checking the configuration file for the interface (/etc/sysconfig/network-scripts/ifcfg-eth0), I can see that the MAC address specified there differs from the one I set manually when creating the container (08:00:AA:AA:AA:FF):
DEVICE="eth0"
BOOTPROTO="dhcp"
HWADDR="52:54:00:85:11:33"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
MTU="1500"
NM_CONTROLLED="yes"
ONBOOT="yes"
TYPE="Ethernet"
UUID="25016937-1ff9-40d7-b4c3-18e08af0f98d"
In /etc/sysconfig/network-scripts there is only the configuration file for eth0. The file for eth1 (the newly added interface) is missing.
Due to the requirements of the project I'm working on, the first interface must always be disabled and its MAC address must be set to a specific value.
Any other network-related work must go through the newly attached NIC.
My question is:
How can I attach a new NIC to the container so that eth0 gets the desired MAC address?
Doing this at the image level is also fine.

The goal is to have a running container with two NICs: eth0 and eth1.
eth0 will have a specific MAC address (let's say, AA:AA:AA:AA:AA:AA) and will be disabled. All networking will be done through eth1.
I will assume that the Docker image has a user with rights to execute ifdown and/or ifconfig.
eth0 is already present in the image and "talks" to the default Docker network, bridge (created when Docker was installed).
We have to modify the config file for eth0 in the image (/etc/sysconfig/network-scripts/ifcfg-eth0), setting its HWADDR field to the desired MAC address.
After this, we have to commit the changes to a new image. Let's call it myImage.
Now, we have to create a new network for the second interface:
docker network create myNetwork
By default it is a bridge network (which is enough in my case).
Since the requirement is to have eth0 with a custom MAC address, we create the container without specifying a network, which connects it to the default bridge network:
docker create -ti --mac-address=AA:AA:AA:AA:AA:AA --privileged --hostname=myHostname --name=myContainer myImage
It is important to create the container with the --privileged switch so we can take down the eth0 interface.
Now, before starting the container, we connect it to the new network:
docker network connect myNetwork myContainer
Now the container has two interfaces: the original eth0 for the bridge network and the new eth1 for myNetwork network.
At this point, we can start the container:
docker start myContainer
and then execute the command to take down eth0:
docker exec myContainer /bin/bash -c "sudo ifdown eth0"
The interface must be taken down in the running container: changes to the networking files only persist inside a running container, so a downed interface cannot be committed to an image (an old note, but still relevant).
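Putting the steps above together, the whole sequence looks roughly like this (a sketch using the names from the answer; myImage is the committed image with the edited ifcfg-eth0, and it assumes a running Docker daemon):

```shell
# Create the extra bridge network that will become eth1.
docker network create myNetwork

# Create (but do not start) the container on the default bridge,
# forcing eth0's MAC address; --privileged is needed to ifdown eth0 later.
docker create -ti --mac-address=AA:AA:AA:AA:AA:AA --privileged \
    --hostname=myHostname --name=myContainer myImage

# Attach the second network before starting, so it shows up as eth1.
docker network connect myNetwork myContainer

# Start the container, then take eth0 down inside it.
docker start myContainer
docker exec myContainer /bin/bash -c "sudo ifdown eth0"
```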

Related

Issue Configuring my Shiny Server on a Virtual Machine with Bridged Adapter

I am trying to deploy my Shiny app on a virtual machine (CentOS 6.7). I have configured a bridged connection for the virtual machine (I think I did it correctly), and I have a static IP address for the web application. The sample application works on localhost:3838.
I am behind a corporate proxy, so the proxy is set in http_proxy, and I can connect to the internet successfully from the virtual machine.
When I try to access <my_VM_static_IP_Address>:3838, the website does not load.
I can ping both the host IP address and the guest (static) IP address successfully from another PC that is connected to the network.
br0 Link encap:Ethernet HWaddr 70:F3:95:03:B5:CC
inet addr:<My Static IP Address> Bcast:<my_broadcast_address> Mask:255.255.254.0
inet6 addr: fe80::72f3:95ff:fe03:b5cc/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:69753 errors:0 dropped:0 overruns:0 frame:0
TX packets:9698 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:34003791 (32.4 MiB) TX bytes:843817 (824.0 KiB)
eth0 Link encap:Ethernet HWaddr 70:F3:95:03:B5:CC
inet6 addr: fe80::72f3:95ff:fe03:b5cc/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:130187 errors:0 dropped:0 overruns:0 frame:0
TX packets:9704 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:45704171 (43.5 MiB) TX bytes:845299 (825.4 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:439 errors:0 dropped:0 overruns:0 frame:0
TX packets:439 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:814679 (795.5 KiB) TX bytes:814679 (795.5 KiB)
The default gateway and subnet mask are the same on the VM guest and the host.
Any support is greatly appreciated!
Investigate whether a firewall on the host prevents port 3838 from being accessible from the rest of the network. It seems like you have a Windows host, correct?
A quick and easy way to check is to telnet to that port on your guest OS's IP from another machine on the network. If the connection is refused with an error, you know something is blocking the port. A blank screen means an open port.
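If telnet isn't available on the other machine, the same check can be sketched with bash's built-in /dev/tcp pseudo-device (bash-specific; the host and port below are placeholders you'd replace with the guest VM's IP and 3838):

```shell
# Report whether a TCP port accepts connections, using bash's /dev/tcp.
check_port() {
  local host=$1 port=$2
  # Opening the pseudo-device in a subshell succeeds only if the port is reachable.
  if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
    echo open
  else
    echo closed
  fi
}

check_port 127.0.0.1 3838   # replace 127.0.0.1 with your guest VM's IP
```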

How to customize docker0 in Docker to use a different IP range?

I am referring to Docker Networking#docker0 for customizing the docker0 virtual bridge in Docker.
My ifconfig shows this:
docker0 Link encap:Ethernet HWaddr d6:0d:76:37:ee:04
inet addr:172.17.42.1 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::d40d:76ff:fe37:ee04/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:648 (648.0 B)
eth0 Link encap:Ethernet HWaddr 08:00:27:51:e4:40
inet addr:10.0.2.15 Bcast:10.0.2.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:fe51:e440/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:947 errors:0 dropped:0 overruns:0 frame:0
TX packets:618 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:86885 (86.8 KB) TX bytes:71855 (71.8 KB)
I want to give the range 10.0.2.15/24 to the docker0 interface.
Note:
I am assuming that if I customize docker0 with the same IP range as eth0, then containers should get IPs from that same range (please correct me if I am wrong).
For this, I tried adding --fixed-cidr=10.0.2.15/24 to the /etc/default/docker file, but it is not working.
Any idea how to achieve it?
Also, if I am following wrong way, please guide me how to achieve it in proper way.
Instead of changing the ip-range of your docker0 interface, you should think about what you want to be available to the outside.
If you want certain ports to link to your application and be available from the outside, take a look at the -p flag.
docker run -p IP:host_port:container_port
Take a look at the documentation for network configuration http://docs.docker.com/articles/networking/
This documentation also explains how to change the IP range of the docker0 interface. However, I think there are other, better and easier ways to achieve what you want.
(Could you explain exactly what you want by the way?)
What you are trying to do borders on networking suicide... If Docker let you assign your external NIC's IP (eth0) to the internal bridging interface (docker0), your system would have two interfaces working on the same route but with different networks behind them. Think of a postman working in a city where some genius gave all the streets the same name and made them all start counting house numbers at 1. Could he deliver any mail? ;)
What I assume you want to achieve is to be able to connect, from the "outside" network, to whatever your Docker container is running. In that case, simply add
-p port_on_host:port_inside_container
to your docker run command, and your container's content becomes available to the outside world via your host IP (if your firewall is properly set up).
Cheers
D
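As a concrete (hypothetical) example of the -p flag mentioned above, publishing a container's port 80 on host port 8080 would look like this (my-web-image is a placeholder image name):

```shell
# Publish container port 80 as port 8080 on all host interfaces.
docker run -d -p 8080:80 my-web-image

# Or bind the published port to one specific host IP only.
docker run -d -p 10.0.2.15:8080:80 my-web-image
```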
I think you should delete the docker0 bridge and recreate it with a subnet of your own.
For a more practical view, with images, visit: https://support.zenoss.com/hc/en-us/articles/203582809-How-to-Change-the-Default-Docker-Subnet
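For reference, the docker0 range can also be changed without deleting the bridge by hand, by passing --bip to the Docker daemon. On installations that use /etc/default/docker (as in the question), a sketch would be (the subnet is a placeholder; it must not overlap the subnet eth0 already uses, and the daemon must be restarted afterwards):

```shell
# /etc/default/docker
# --bip sets the IP and netmask that docker0 itself gets;
# containers then receive addresses from the same subnet.
DOCKER_OPTS="--bip=10.10.0.1/24"
```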

cannot connect CentOS VM NAT Connection to actual NIC

I have configured a NAT connection on a CentOS 6.5 VM, and it seems it does not connect to my NIC.
service network restart returns
Bringing up interface eth1: Error: No suitable device found: no device found for connection 'eth1'
ifconfig -a is shown below
eth2 Link encap:Ethernet HWaddr 00:0C:29:90:6C:31
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX Packets:90 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 Base address:0x2024
Well, your ifconfig output shows that you have an eth2 device. My assumption is that it was originally bridged and you changed it to NAT.
In /etc/sysconfig/network-scripts/
create a file named ifcfg-eth2, and fill it so the interface is configured automatically by the DHCP server:
DEVICE=eth2
BOOTPROTO=dhcp
ONBOOT=yes
It does not need execute rights (chmod +x); ifcfg files are sourced, not executed.
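After creating the file, the interface can be brought up without rebooting (assuming the SysV network service used on CentOS 6):

```shell
# Bring up just the new interface...
ifup eth2
# ...or restart networking entirely.
service network restart
```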

Bringing up freedompop on beaglebone black angstrom

I have a FreedomPop Ubee stick that I would like to connect to my BeagleBone Black (running Angstrom with a 3.2.0-54-generic kernel). After solving some issues with hot-swapping (it's apparently not possible), I can see the interface in ifconfig. But when I try bringing it up, nothing happens:
root@beaglebone:~# ifconfig eth1 up
root@beaglebone:~# udhcpc eth1
udhcpc (v1.20.2) started
Sending discover...
Sending discover...
Sending discover...
Another strange thing is that the interface initially has an address:
root@beaglebone:~# ifconfig eth1
eth1 Link encap:Ethernet HWaddr 00:1D:88:53:2F:52
inet addr:192.168.14.2 Bcast:192.168.14.255 Mask:255.255.255.0
inet6 addr: fe80::21d:88ff:fe53:2f52/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:22 errors:0 dropped:0 overruns:0 frame:0
TX packets:48 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:2542 (2.4 KiB) TX bytes:9062 (8.8 KiB)
But a few moments (< 1 minute) later, if I run the same command, eth1 no longer has an address, broadcast address, etc.:
root@beaglebone:~# ifconfig eth1
eth1 Link encap:Ethernet HWaddr 00:1D:88:53:2F:52
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:25 errors:0 dropped:0 overruns:0 frame:0
TX packets:51 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:2730 (2.6 KiB) TX bytes:9240 (9.0 KiB)
Under no circumstances (before or after the address is stripped in ifconfig) can I ever ping anything.
I have tried re-assigning the address, mask, etc., but nothing helps. Bringing the interface up or down does not help. I tried manually creating an interfaces file, and that didn't help either.
To solve this problem, I had to:
Add an inet dhcp interface in /etc/network/interfaces:
iface eth1 inet dhcp
Add the FreedomPop stick as a nameserver in /etc/resolv.conf:
nameserver 192.168.14.1
Bring up the interface
ifup eth1
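The three steps above can be sketched as a root shell session (paths and the nameserver address as given above; this needs root and the actual hardware present):

```shell
# 1. Configure eth1 for DHCP in /etc/network/interfaces.
cat >> /etc/network/interfaces <<'EOF'
iface eth1 inet dhcp
EOF

# 2. Use the FreedomPop stick as a nameserver.
echo "nameserver 192.168.14.1" >> /etc/resolv.conf

# 3. Bring the interface up.
ifup eth1
```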

Using SSH to connect to virtual box guest machine using IPv6 address

I am using Windows, and I'm also running an Ubuntu server in VirtualBox. I've SSH'd into the guest machine countless times in the recent past, when the guest machine was connected to a network using IPv4 addresses. This worked both at home and at work. Right now, I'm connected to the university network. Here's the result of ifconfig when executed in my VM:
eth0 Link encap:Ethernet HWaddr 08:00:27:ae:e4:a0
inet6 addr: fe80::a00:27ff:feae:e4a0/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:65404 errors:0 dropped:0 overruns:0 frame:0
TX packets:43 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:7588239 (7.5 MB) TX bytes:10610 (10.6 KB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:16 errors:0 dropped:0 overruns:0 frame:0
TX packets:16 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1296 (1.2 KB) TX bytes:1296 (1.2 KB)
I did some research and found this post on SO:
IPv6 link-local address format
So I ran netsh interface ipv6 show address on my host machine, and this is my VirtualBox network info:
Interface 208: VirtualBox Host-Only Network
Addr Type DAD State Valid Life Pref. Life Address
--------- ----------- ---------- ---------- ------------------------
Other Preferred infinite infinite fe80::f8cd:e410:b1b1:c081%208
I then tried pinging the address, and it was successful. I then tried to SSH into the server using the following command:
ssh -6 fe80::f8cd:e410:b1b1:c081%208
And I got this error
"no address associated with name"
I don't understand why I'm getting this error. I've SSH'd into machines by specifying their IPv4 addresses before, and I've never gotten this error. Could anyone tell me what I might be doing wrong?
Thanks for the help!
Try specifying the interface (zone) to the ssh client. ssh does not have a separate switch for that; you have to append it to the address with this syntax:
ipv6_address%interface
For example:
fe80::f8cd:e410:b1b1:c081%eth0
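A sketch of the resulting command, using the address from the question (user is a placeholder login name; the zone ID after % names the outgoing interface on the connecting machine — an interface name on Linux, a numeric interface index such as 208 on Windows):

```shell
# From a Linux client, scope the link-local address to the local interface name:
ssh -6 user@fe80::f8cd:e410:b1b1:c081%eth0

# From the Windows host in the question, use the interface index instead:
ssh -6 user@fe80::f8cd:e410:b1b1:c081%208
```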
