Most of the configuration examples I found are for giving the container a static or private network. But I want it to act as a separate machine, so that it gets its own IP address from the DHCP server, and I want to do it through nmcli.
Thanks in advance.
If you are using Docker as tagged, rather than LXC, use pipework to map the wlan interface from the host into the container:
pipework eth2 $CONTAINERID 10.10.9.9/24
or, alternatively, let the container do the DHCP negotiation for you:
pipework eth1 $CONTAINERID dhclient
This setup is based on a macvlan interface, so the same concept should work with LXC; you just won't get the easy front end.
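If you prefer to wire it up by hand for LXC instead of using pipework, a minimal sketch of the equivalent macvlan setup in the container's config file might look like the following (the host NIC eth1 and the pre-3.0 lxc.network.* key names are assumptions; adjust for your setup):
# /var/lib/lxc/$NAME/config
lxc.network.type = macvlan
lxc.network.macvlan.mode = bridge
lxc.network.link = eth1
lxc.network.flags = up
The container can then run its own DHCP client on the resulting interface, just as with the pipework dhclient example above.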
I'm confused whether this is a Docker question or an LXC question.
EDIT: as per the comments, wlan interface support in a bridge depends on the wlan vendor. It may work, or it may not work at all.
In any case, you should be able to create a bridge, add your wlan0 interface to the bridge, and then have your LXC container connect to this bridge directly. Then, when you run your DHCP client in the container, it will obtain an address via the wlan0 interface.
Configure bridge (manually for now)
# ifconfig wlan0 up
# brctl addbr br0
# brctl addif br0 wlan0
# ifconfig br0 up
# dhclient br0
Configure the LXC container
If using traditional privileged LXC, edit the container's config file at /var/lib/lxc/$NAME/config and update this value to point to your new bridge:
lxc.network.link = br0
Run DHCP in container
# lxc-attach -n $NAME
# dhclient eth0
# ip a
If the output of ip a shows the desired IP, you're all set!
If you want to make the configuration persistent, you'll have to add the bridge to your /etc/network/interfaces file.
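A minimal sketch of such a stanza, assuming Debian/Ubuntu ifupdown with the bridge-utils package installed (adjust the interface names to your host):
# /etc/network/interfaces -- bring up br0 on top of wlan0 and run DHCP on it
auto br0
iface br0 inet dhcp
    bridge_ports wlan0
    bridge_stp off
    bridge_fd 0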
IEEE 802.11 doesn’t like multiple MAC addresses on a single client, so bridges and macvlan are not the right solution here.
Use ipvlan in L2 mode.
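A minimal sketch of what that looks like with plain ip commands (the interface names and $CONTAINER_PID are placeholders; note that all ipvlan slaves share the master's MAC address, so the DHCP client inside the container must send a unique client identifier to get its own lease):
# create an ipvlan slave in L2 mode on top of the wireless uplink
ip link add ipvl0 link wlan0 type ipvlan mode l2
# move it into the container's network namespace
ip link set ipvl0 netns $CONTAINER_PID
# bring it up and run DHCP from inside that namespace
nsenter -t $CONTAINER_PID -n ip link set ipvl0 up
nsenter -t $CONTAINER_PID -n dhclient ipvl0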
Related
Since Docker 1.10 (and libnetwork update) we can manually give an IP to a container inside a user-defined network, and that's cool!
I want to give a container an IP address in my LAN (like we can do with Virtual Machines in "bridge" mode). My LAN is 192.168.1.0/24, all my computers have IP addresses inside it. And I want my containers having IPs in this range, in order to reach them from anywhere in my LAN (without NAT/PAT/etc...).
I obviously read Jessie Frazelle's blog post and a lot of other posts here and elsewhere, like:
How to set a docker container's IP?
How to assign specific IP to container and make that accessible outside of VM host?
and much more, but nothing worked out; my containers still have IP addresses "inside" my Docker host and are not reachable by other computers on my LAN.
Reading Jessie Frazelle's blog post, I thought (since she uses public IPs) that we could do what I want to do.
Edit: Indeed, if I do something like:
docker network create --subnet 192.168.1.0/24 --gateway 192.168.1.1 homenet
docker run --rm -it --net homenet --ip 192.168.1.100 nginx
The new interface on the Docker host (br-[a-z0-9]+) takes the --gateway IP, which is my router's IP. And so the same IP is on two machines on the network... BOOM
Thanks in advance.
EDIT: This solution is now obsolete. Since version 1.12, Docker provides two network drivers, macvlan and ipvlan, which allow assigning a static IP from the LAN network. See the answer below.
After looking for people who had the same problem, we came up with a workaround:
To sum up:
(V)LAN is 192.168.1.0/24
Default Gateway (= router) is 192.168.1.1
Multiple Docker Hosts
Note: We have two NICs: eth0 and eth1 (which is dedicated to Docker)
What we want:
We want containers with IPs in the 192.168.1.0/24 network (like regular computers), without any NAT/PAT/translation/port-forwarding/etc.
Problem
When doing this:
docker network create --subnet 192.168.1.0/24 --gateway 192.168.1.1 homenet
we are able to give containers the IPs we want, but the bridge created by Docker (br-[a-z0-9]+) will take the IP 192.168.1.1, which is our router's address.
Solution
1. Setup the Docker Network
Use the DefaultGatewayIPv4 parameter:
docker network create --subnet 192.168.1.0/24 --aux-address "DefaultGatewayIPv4=192.168.1.1" homenet
By default, Docker will give the bridge interface (br-[a-z0-9]+) the first IP of the subnet, which might already be taken by another machine. The solution is to use the --gateway parameter to tell Docker to assign an arbitrary, available IP:
docker network create --subnet 192.168.1.0/24 --aux-address "DefaultGatewayIPv4=192.168.1.1" --gateway=192.168.1.200 homenet
We can specify the bridge name by adding -o com.docker.network.bridge.name=br-home-net to the previous command.
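Putting both options together, the full command looks like this (same addresses as above):
docker network create --subnet 192.168.1.0/24 \
  --aux-address "DefaultGatewayIPv4=192.168.1.1" \
  --gateway=192.168.1.200 \
  -o com.docker.network.bridge.name=br-home-net homenet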
2. Bridge the bridge!
Now we have a bridge (br-[a-z0-9]+) created by Docker. We need to bridge it to a physical interface (in my case I have two NICs, so I'm using eth1 for that):
brctl addif br-home-net eth1
3. Delete the bridge IP
We can now delete the IP address from the bridge, since we don't need one:
ip a del 192.168.1.200/24 dev br-home-net
The IP 192.168.1.200 can be reused as the bridge IP on multiple Docker hosts, since we don't actually use it and we remove it.
Docker now supports Macvlan and IPvlan network drivers. The Docker documentation for both network drivers can be found here.
With both drivers you can implement your desired scenario (configure a container to behave like a virtual machine in bridge mode):
Macvlan: Allows a single physical network interface (master device) to have an arbitrary number of slave devices, each with its own MAC address.
Requires Linux kernel v3.9–3.19 or 4.0+.
IPvlan: Allows you to create an arbitrary number of slave devices for your master device which all share the same MAC address.
Requires Linux kernel v4.2+ (support for earlier kernels exists but is buggy).
See the kernel.org IPVLAN Driver HOWTO for further information.
Container connectivity is achieved by putting one of the slave devices into the network namespace of the container to be configured. The master device remains on the host operating system (default namespace).
As a rule of thumb, you should use the IPvlan driver if the Linux host that is connected to the external switch/router has a policy configured that allows only one MAC per port. That's often the case in VMware ESXi environments!
Another important thing to remember (Macvlan and IPvlan): traffic to and from the master device cannot be sent to and from slave devices. If you need to enable master-to-slave communication, see the section "Communication with the host (default-ns)" in the "IPVLAN – The beginning" paper published by one of the IPvlan authors (Mahesh Bandewar).
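For the scenario in the question (containers directly on the 192.168.1.0/24 LAN), a minimal sketch with the ipvlan driver could look like the following; the parent interface eth0 and the container address are assumptions, and depending on your Docker version the ipvlan driver may still be marked experimental:
docker network create -d ipvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=eth0 -o ipvlan_mode=l2 lan_net

docker run --rm -it --network lan_net --ip 192.168.1.100 alpine sh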
Use the official Docker driver:
As of Docker v1.12.0-rc2, the new MACVLAN driver is now available in an official Docker release:
MacVlan driver is out of experimental #23524
These new drivers have been well documented by the author(s), with usage examples.
At the end of the day it should provide similar functionality, be easier to set up, and have fewer bugs and other quirks.
Seeing Containers on the Docker host:
The only caveat with the new official macvlan driver is that the Docker host machine cannot see / communicate with its own containers, which might or might not be desirable, depending on your specific situation.
This issue can be worked around if you have more than one NIC on your Docker host machine and both NICs are connected to your LAN. Then you can either A) dedicate one of your Docker host's two NICs to Docker exclusively and use the remaining NIC for the host to access the LAN, or B) add specific routes via the 2nd NIC only for those containers you need to reach from the host. For example:
sudo route add -host $container_ip gw $lan_router_ip $if_device_nic2
Method A) is useful if you want to access all your containers from the docker host and you have multiple hardwired links.
Whereas method B) is useful if you only require access to a few specific containers from the Docker host, or if your 2nd NIC is a Wi-Fi card and would be much slower for handling all of your LAN traffic (for example, on a laptop).
Installation:
If you cannot see the pre-release -rc2 candidate on Ubuntu 16.04, temporarily add or change this line in your /etc/apt/sources.list to say:
deb https://apt.dockerproject.org/repo ubuntu-xenial testing
instead of main (which is for stable releases).
I no longer recommend this solution, so it has been removed. It was using the bridge driver and brctl.
There is a better and official driver now. See other answer on this page: https://stackoverflow.com/a/36470828/287510
Here is an example of using macvlan. It starts a web server at http://10.0.2.1/.
These commands and Docker Compose file work on QNAP and QNAP's Container Station. Notice that QNAP's network interface is qvs0.
Commands:
The blog post "Using Docker macvlan networks"[1][2] by Lars Kellogg-Stedman explains what the commands mean.
docker network create -d macvlan -o parent=qvs0 --subnet 10.0.0.0/8 --gateway 10.0.0.1 --ip-range 10.0.2.0/24 --aux-address "host=10.0.2.254" macvlan0
ip link del macvlan0-shim link qvs0 type macvlan mode bridge
ip link add macvlan0-shim link qvs0 type macvlan mode bridge
ip addr add 10.0.2.254/32 dev macvlan0-shim
ip link set macvlan0-shim up
ip route add 10.0.2.0/24 dev macvlan0-shim
docker run --network="macvlan0" --ip=10.0.2.1 -p 80:80 nginx
Docker Compose
Use version 2 because version 3 does not support the other network configs, such as gateway, ip_range, and aux_addresses.
version: "2.3"

services:
  HTTPd:
    image: nginx:latest
    ports:
      - "80:80/tcp"
      - "80:80/udp"
    networks:
      macvlan0:
        ipv4_address: "10.0.2.1"

networks:
  macvlan0:
    driver: macvlan
    driver_opts:
      parent: qvs0
    ipam:
      config:
        - subnet: "10.0.0.0/8"
          gateway: "10.0.0.1"
          ip_range: "10.0.2.0/24"
          aux_addresses:
            host: "10.0.2.254"
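To bring the stack up (assuming the file above is saved as docker-compose.yml in the current directory):
docker-compose up -d
The nginx welcome page should then be reachable at http://10.0.2.1/ from other hosts on the LAN.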
It's possible to map a physical interface into a container via pipework.
Connect a container to a local physical interface
pipework eth2 $(docker run -d hipache /usr/sbin/hipache) 50.19.169.157/24
pipework eth3 $(docker run -d hipache /usr/sbin/hipache) 107.22.140.5/24
There may be a native way now but I haven't looked into that for the 1.10 release.
When I create my container, I want to set a specific IP address for it in the same LAN.
Is that possible? If not, can I edit the DHCP-assigned IP address after creation?
Considering the conclusion of the (now old October 2013) article "How to configure Docker to start containers on a specific IP address range", this doesn't seem to be possible (or at least "done automatically for you by Docker") yet.
Update Nov 2015: a similar problem is discussed in docker/machine issue 1709, which includes a workaround (Nov 2015) proposed by Tobias Munk (schmunk42) for docker-machine
(for containers, see the next section):
A workaround for some use-cases could be to create machines like so:
# m98 gets 192.168.98.100
docker-machine create -d virtualbox --virtualbox-hostonly-cidr "192.168.98.1/24" m98
# m97 gets 192.168.97.100
docker-machine create -d virtualbox --virtualbox-hostonly-cidr "192.168.97.1/24" m97
# m96 gets 192.168.96.100
docker-machine create -d virtualbox --virtualbox-hostonly-cidr "192.168.96.1/24" m96
If there's no other machine with the same CIDR (Classless Inter-Domain Routing) range, the machine should always get the .100 IP upon start.
Another workaround:
(see my script in "How do I create a docker machine with a specific URL using docker-machine and VirtualBox?")
My VirtualBox has a DHCP range of 192.168.99.100-255 and I want to set an IP below .100.
I've found a simple trick to set a static IP: after creating a machine, I run this command and restart the machine:
echo "ifconfig eth1 192.168.99.50 netmask 255.255.255.0 broadcast 192.168.99.255 up" \
| docker-machine ssh prova-discovery sudo tee /var/lib/boot2docker/bootsync.sh > /dev/null
This command creates a file, bootsync.sh, which is looked for by the boot2docker startup scripts and executed.
Now, during machine boot, the command is executed and sets the static IP.
docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM
test-1 - virtualbox Running tcp://192.168.99.50:2376 test-1 (mast
Michele Tedeschi (micheletedeschi) adds
I've updated the commands with:
echo "kill `more /var/run/udhcpc.eth1.pid`\nifconfig eth1 192.168.99.50 netmask 255.255.255.0 broadcast 192.168.99.255 up" | docker-machine ssh prova-discovery sudo tee /var/lib/boot2docker/bootsync.sh > /dev/null
then run command (only the first time)
docker-machine regenerate-certs prova-discovery
Now the IP will not be changed by the DHCP server.
(Replace prova-discovery with the name of your docker-machine.)
April 2015:
The article mentions the possibility of creating your own bridge (though that doesn't assign one of those IP addresses to a container):
create your own bridge, configure it with a fixed address, tell Docker to use it. Done.
If you do it manually, it will look like this (on Ubuntu):
stop docker
ip link add br0 type bridge
ip addr add 172.30.1.1/20 dev br0
ip link set br0 up
docker -d -b br0
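If you want that bridge to come back after a reboot, a sketch of a matching /etc/network/interfaces stanza on Ubuntu (assuming ifupdown and bridge-utils; same address as in the commands above):
auto br0
iface br0 inet static
    address 172.30.1.1
    netmask 255.255.240.0
    bridge_ports none
    bridge_stp off
    bridge_fd 0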
To assign a static IP within the range of an existing bridge IP range, you can try "How can I set a static IP address in a Docker container?", using a static script which creates the bridge and a pair of peer interfaces.
Update July 2015:
The idea mention above is also detailed in "How can I set a static IP address in a Docker container?" using:
Building your own bridge
The result should be that the Docker server starts successfully and is now prepared to bind containers to the new bridge.
After pausing to verify the bridge’s configuration, try creating a container — you will see that its IP address is in your new IP address range, which Docker will have auto-detected.
You can use the brctl show command to watch Docker add and remove interfaces from the bridge as you start and stop containers, and you can run ip addr and ip route inside a container to see that it has been given an address in the bridge's IP address range and told to use the Docker host's IP address on the bridge as its default gateway to the rest of the Internet.
Start Docker with -b=br0 (that is also what echo 'DOCKER_OPTS="-b=br0"' >> /etc/default/docker can set for you persistently).
Use pipework (192.168.1.1 below being the default gateway IP address):
pipework br0 container-name 192.168.1.10/24#192.168.1.1
I am new to OpenStack and I followed the Icehouse installation guide for Ubuntu 12.04/14.04.
I chose the 3-node architecture: Controller, Nova, Neutron.
The 3 nodes are installed in VMs. I used nested KVM; inside the VMs, KVM is supported, so Nova will use virt_type=kvm. On the controller I created 2 NICs: eth0 is a NAT interface with IP 203.0.113.94 and eth1 a host-only interface with IP 10.0.0.11.
On the Nova node there are 3 NICs: eth0 NAT 203.0.113.23, eth1 host-only 10.0.0.31, and eth2 another host-only 10.0.1.31.
On the Neutron node, 3 NICs: eth0 NAT 203.0.113.234, eth1 host-only 10.0.0.21, and eth2 another host-only 10.0.1.21. Following the installation guide, on the Neutron node I created a br-ex (with a port to eth0) which took over the settings of eth0; the eth0 settings are:
auto eth0
iface eth0 inet manual
    up ifconfig $IFACE 0.0.0.0 up
    up ip link set $IFACE promisc on
    down ip link set $IFACE promisc off
    down ifconfig $IFACE down
Everything seemed fine: I can create networks, routers, etc., and boot instances, but I have this problem.
When I launch an instance it gets a fixed IP, but when I log in to the instance (CirrOS) I can't ping anything; ifconfig shows no IP.
I noticed that in the demo-net (tenant network) properties, under the subnet, the ports field shows 3 ports:
172.16.1.1   network:router_interface   active
172.16.1.3   network:dhcp               active
172.16.1.6   compute:nova               down
I searched for solutions over the net but couldn't find anything!
Any help?
Ask me if you want specific logs because I don't know which ones to post!
Thanks anyway!
It looks like you are using the fixed IP to ping. If so, please assign a floating IP to your instance, and then try to ping.
If you have already assigned a floating IP and you are pinging using that IP, please upload the log of your instance.
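A minimal sketch of what that looks like with the nova CLI of that era (the pool name public and the placeholders are assumptions; use your own instance name and the address returned by the first command):
nova floating-ip-create public
nova floating-ip-associate <instance-name> <allocated-floating-ip>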
I am facing an issue with accessing OpenStack VMs on the LAN.
I have set up single-machine (192.168.2.15) OpenStack using DevStack, so all VMs are running inside this machine.
My machine (192.168.2.15) has one network card (eth0), and I am using nova-network; I have not installed Neutron.
I have assigned static IPs on eth0 of all the LAN machines (such as 192.168.2.15 and 192.168.2.16) in the /etc/network/interfaces file.
System information of the OpenStack machine is as below:
Memory usage: 19% IP address for virbr0: 192.168.122.1
Swap usage: 0% IP address for br100: 10.0.0.1
The following works fine:
I can access the internet from VM1 (10.0.0.2, an auto-assigned IP).
I can ping the LAN machine (192.168.2.16) from VM1.
The OpenStack machine (192.168.2.15) can ping VM1 (10.0.0.2).
VM1 (10.0.0.2) can ping VM2 (10.0.0.3).
But the LAN machine 192.168.2.16 is not able to ping VM1 (10.0.0.2).
So please suggest how this can be achieved. And please consider me very new to OpenStack and networking.
Thanks!
You need to assign a floating IP to the VMs you create if you want a host from outside the OpenStack network to connect to them. The internal IPs are only accessible from inside the OpenStack network.
See how to assign a floating IP to a VM here: http://docs.openstack.org/user-guide/content/floating_ip_allocate.html
To access the VM's floating IP from another host (that is not the devstack host) you should make sure that the devstack host is configured to forward packets. You can do this with:
sudo bash
echo 1 > /proc/sys/net/ipv4/ip_forward
echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
See more details here:
http://barakme.tumblr.com/post/70895539608/openstack-in-a-box-setting-up-devstack-havana-on-your
Adding a route on the client machine to the OpenStack VMs helped me.
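A minimal sketch of such a route, using the addresses from the question (run on the LAN client, e.g. 192.168.2.16; 10.0.0.0/24 is the fixed-IP range behind br100 and 192.168.2.15 is the DevStack host):
sudo ip route add 10.0.0.0/24 via 192.168.2.15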
I am trying to set up a traffic control server between the network and the firewall-router.
The server has two network devices:
Firewall <--> Server <---> NETWORK
It is running CentOS 6.4 x64 and I would like to use Etherape.
My idea is to have eth0 connected directly to our router and eth1 to our network.
eth1 would have two virtual interfaces, one with an IP to SSH into the server and the other just forwarding with iptables to eth0, with no IP. Of course, eth0 would not have any IP (we don't want to change the gateway).
Any suggestion or better way to do this?
Thank you very much!!
OK, in the end it was quite easy. Install bridge-utils (for brctl) and EtherApe, then:
brctl addbr br0
brctl stp br0 off
brctl addif br0 eth0
brctl addif br0 eth1
echo 1 > /proc/sys/net/ipv4/ip_forward
ifconfig eth0 up
ifconfig eth1 up
ifconfig br0 up
service network restart
ifconfig br0 XX.YY.ZZ.AA
That is a temporary configuration; if you reboot, you have to redo it. Here is a way to make it persistent:
http://www.tldp.org/HOWTO/Ethernet-Bridge-netfilter-HOWTO.html#toc3.3
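For CentOS 6 specifically, a sketch of the equivalent persistent configuration using ifcfg files (XX.YY.ZZ.AA as in the commands above; the netmask is an assumption):
# /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=XX.YY.ZZ.AA
NETMASK=255.255.255.0

# /etc/sysconfig/network-scripts/ifcfg-eth0 (and likewise ifcfg-eth1)
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
BRIDGE=br0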
Finally, connect as root to XX.YY.ZZ.AA with X11 forwarding (if you are on a Windows box, install Xming and PuTTY first), run etherape, and you will have your remote traffic monitor.
To make it easier, and to avoid capturing the X11 traffic between the server and your box, I recommend adding the filter:
ip and not ((src net XX.YY.ZZ.AA) or dst net XX.YY.ZZ.AA)