Multiple IPs + bridge for KVM

I've got a problem at the moment and really don't know where the mistake is. I have a root server from my ISP. The root server already comes with one IP, and today I booked two more IP addresses. What I want to do now is map these two new IP addresses to two virtual machines while keeping the included IP for the root server itself. How do I realize this?
I thought something like:
br0 - holds the original IP of the Root-Server
br0:0 - holds first IP of first virtual Machine
br0:1 - holds second IP of second virtual Machine
But this doesn't work. Any ideas? I'm really frustrated; I've worked on it the whole day without finding a solution.

I was also struggling with a similar scenario. I've got a server, and I got to the point where setting up a bridge cut me off and I had to restart to be able to reach it again; anyway, I managed to handle it with iptables:
# create an alias for your second IP address (let's say it's 111.222.333.2, local address 192.168.1.2)
ifconfig eth0:1 111.222.333.2
# add the proper netmask if you have a subnet
# now you should be able to ping this second address from the outside world - try it
# (that is, if you have not set up a firewall that blocks pings; flush your iptables rules if you are not sure)
# set up NAT rules (network address translation: outside IP -> local IP and back, local IP -> outside IP)
# assumes your virtual machine lives at 192.168.1.2
iptables -t nat -A PREROUTING -d 111.222.333.2 -j DNAT --to-destination 192.168.1.2
iptables -t nat -A POSTROUTING -s 192.168.1.2 -j SNAT --to-source 111.222.333.2
This helped me with a server that has multiple IP addresses and KVM virtual machines
which were originally running in the default network (forward mode=nat), so at first they only had internet access through NAT and an internal IP; the rules above also give them an outside-world public IP address.
You can also do these redirects on a port-by-port basis by making the iptables rules match a port, e.g. -d 111.222.333.2 -p tcp --dport 80, and adding the port to the local address in --to-destination, as sketched below.
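A minimal sketch of that port-by-port variant, using the same placeholder addresses as the rules above:
iptables -t nat -A PREROUTING -d 111.222.333.2 -p tcp --dport 80 \
         -j DNAT --to-destination 192.168.1.2:80
iptables -t nat -A POSTROUTING -s 192.168.1.2 -p tcp --sport 80 \
         -j SNAT --to-source 111.222.333.2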
You may also need to turn on IP forwarding. You can check it with, for example, sysctl -a | grep forward (you should see it enabled for your eth0 device), and adjust it if needed with the proper sysctl command:
sysctl -w net.ipv4.ip_forward=1
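If you want the forwarding setting to survive a reboot, a minimal sketch (the exact file location may vary by distribution):
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf
sysctl -p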

Map br0 into VM1 and VM2 as a TAP device; inside VM1 and VM2 you will see it as an eth device.
Assign IP1 and IP2 to VM1 and VM2 respectively. With this configuration you can ping from VM1 to VM2 and from the host machine to either guest machine (VM1 or VM2).
The following link will help you set up a TAP device for a VM via the bridge; see the qemu-ifup script specified there and make sure you understand it well.
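For reference, a typical qemu-ifup script is only a few lines; a minimal sketch, assuming the bridge is named br0 (the script behind the link may differ in detail):
#!/bin/sh
# qemu-ifup: QEMU calls this with the tap device name as $1
BRIDGE=br0
ip link set dev "$1" up
ip link set dev "$1" master "$BRIDGE"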

Using Squid and/or iptables to share ip address over a bridge

Edited for additional clarity and added links to other attempted solutions.
I have been attempting this for several days now with one other developer, and we are getting nowhere. There are a number of comments online about how there are no examples for this sort of thing (including someone who wrote some C code to do something similar, though not exactly this). We have attempted to implement the solution described on SuperUser as well, but so far the local HTTP server does not seem to receive any of the requests as expected.
What we are trying to do:
We have a device (the test device) that sits between another device (a mini computer) and the network. We want the test device to use the IP address of the mini computer to communicate with the control server; in other words, it should not need its own IP address but should use the mini computer's address for control commands (e.g., block network traffic, resume network traffic). Things are set up like so:
Mini Computer |    |             Test Device              |    | LAN
   Ethernet   |<-->| eth_minicomp <--> br0 <--> eth_network |<-->| Ethernet
So for traffic that is:
coming from the control IP address, AND
destined for the mini computer IP address
We want the test device to intercept it (and NOT forward it), and handle it locally.
Whereas for traffic that is:
coming from the test device, AND
destined for the control IP address
We want it to go out the eth_network interface with the source address being the mini computer's IP address.
Latest Attempt
I have a device set up as a transparent bridge which works:
# Bring interfaces down
ip link set dev eth_minicomp down
ip link set dev eth_network down
# Create bridge
ip link add name br0 type bridge
ip link set dev br0 up
# Remove IP addresses from interfaces
ip address flush dev eth_minicomp
ip address add 0.0.0.0 dev eth_minicomp
ip address flush dev eth_network
ip address add 0.0.0.0 dev eth_network
# Bring interfaces back up
ip link set dev eth_minicomp up
ip link set dev eth_network up
# Set promisc (not sure about on br0, but should not have an effect)
ip link set dev eth_minicomp promisc on
ip link set dev eth_network promisc on
ip link set dev br0 promisc on
# Add interfaces to bridge
ip link set dev eth_minicomp master br0
ip link set dev eth_network master br0
I had been hoping to use iptables/TPROXY or perhaps Squid to handle this by routing the desired TCP/IP traffic to lo (127.0.0.1), but I cannot seem to get this to work. My latest attempt was:
sysctl net.ipv4.ip_forward=1
sysctl net.ipv4.conf.lo.rp_filter=1
iptables -t mangle -F
iptables -t mangle -X
iptables -t mangle -N DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 0x01/0x01
iptables -t mangle -A DIVERT -j ACCEPT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A PREROUTING -s $CONTROLLER_IP -p tcp -j TPROXY \
--tproxy-mark 0x1/0x1 --on-port 80
ip route flush table 100
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100
TPROXY seems to require at least net.ipv4.ip_forward set to 1; however, the procedure on the Squid TPROXY feature page does not seem to be aimed at this type of setup.
I have also tried various permutations of -s, -d, --on-port, and so on. It seems that I could use the Squid man-in-the-middle setup to do something like this, but I do not see how; searching for Squid man-in-the-middle or Squid localhost proxy on SO returns a lot of not-quite-what-I'm-looking-for questions.
So how do we route these packets to a local server on the test device for handling? RTFM responses are more than welcome; we just can't find the fabulous manual.
Got it working with help from a team member using ebtables and iptables.
The biggest surprise in getting this working was finding out that if you use ebtables on an Ethernet bridge, you have to use the DROP target in the broute table for the frames to get kicked up to the network layer: in BROUTING, DROP does not discard the frame, it means "do not bridge this frame, route it instead". We all thought that DROP actually dropped the Ethernet frame and therefore the TCP/IP packets. Go figure.
We now have a device that can share the MAC and IP address of the computer to which it is attached and still communicate without disrupting the computer.
INT_IP=169.254.1.1
SRC_IP=192.168.1.2
DST_IP=192.168.1.3
EXT_PORT=80
INT_PORT=54321
# Bring interfaces to bridge down
ip link set dev eth1 down
ip link set dev eth2 down
# Remove any ip addresses on the interfaces
ip address flush dev eth1
ip address flush dev eth2
ip address add 0.0.0.0 dev eth1
ip address add 0.0.0.0 dev eth2
# Bring interfaces back up
ip link set dev eth1 up
ip link set dev eth2 up
# Set promiscuous on the interfaces
ip link set dev eth1 promisc on
ip link set dev eth2 promisc on
# Create bridge
ip link add name br0 type bridge
ip link set dev br0 up
# Add interfaces to bridge
ip link set dev eth1 master br0
ip link set dev eth2 master br0
# Add a local private IP to the bridge
ip address add $INT_IP dev "br0"
# Allow forwarding
sysctl -w net.ipv4.ip_forward=1
# Set up ethernet bridge with ebtables.
# NOTE the drop. Completely counterintuitive.
ebtables -t broute -A BROUTING -p IPv4 --ip-source $SRC_IP \
--ip-destination $DST_IP --ip-proto tcp --ip-dport \
$EXT_PORT -j redirect --redirect-target DROP
ebtables -t broute -A BROUTING -p IPv4 --ip-proto tcp \
--ip-sport $INT_PORT -j redirect --redirect-target \
DROP
# Set up iptables to divert requests that originate
# from $SRC_IP destined for $DST_IP on port $EXT_PORT and send
# them to $INT_IP and $INT_PORT instead, where you can have a
# service listening to handle them.
iptables -t nat -A PREROUTING -p tcp -s $SRC_IP -d $DST_IP \
--dport $EXT_PORT -j DNAT \
--to-destination $INT_IP:$INT_PORT
iptables -t nat -A POSTROUTING -p tcp -d $INT_IP \
--dport $EXT_PORT -j SNAT --to-source \
$DST_IP:$EXT_PORT
iptables -t nat -A POSTROUTING -j MASQUERADE
Now if you try to reach $DST_IP on port $EXT_PORT from $SRC_IP, it will be routed to $INT_IP on $INT_PORT instead. Conversely, if you try to send data to $INT_IP on $INT_PORT from the system on which you configured this, all traffic will go to $SRC_IP on $EXT_PORT.
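One way to sanity-check the redirect (the tools here are just an example and assume nothing else is bound to $INT_PORT):
# On the test device: listen on the internal port
nc -l -p 54321
# On the machine holding $SRC_IP: talk to the mini computer's address and port;
# the connection should land on the listener above instead of on the mini computer
curl http://192.168.1.3:80/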

How does docker0 bridge work internally inside the host?

I am trying to understand how the bridged docker0 interface works.
When the docker daemon starts up, it creates a bridge device docker0;
When a container starts up, Docker creates a veth interface pair (vethXXXX on the host side) and attaches it to docker0;
Say we issue a ping command from inside the container to an external host:
[root@f505f022eb5b app]# ping 130.49.40.130
PING 130.49.40.130 (130.49.40.130) 56(84) bytes of data.
64 bytes from 130.49.40.130: icmp_seq=1 ttl=52 time=11.9 ms
So apparently my host's eth0 is receiving this ping reply, but how does this packet get forwarded to the container? There are several questions to ask:
eth0 and docker0 are not bridged together, so how does docker0 get the packets from eth0?
Even if docker0 gets the packets, how does it internally send them on to the container's veth interface? Does it maintain some map internally so it can rewrite packets between the different MAC addresses?
How is iptables related here?
Cheers.
Docker is not doing anything particularly magical here, and your question is not really Docker-specific.
docker0 is just a network bridge. As soon as this bridge is created (upon starting the docker service), you can assume that a new machine (in this case in VM/container form) has joined your network.
When pinging the docker container from the host or vice versa, you are basically pinging another machine inside your network.
As far as docker is concerned, unless you have created a new network interface (which I doubt, since you are pinging eth0), you are basically pinging yourself.
If you run the container as:
docker run -i -t --rm -p 10.0.0.99:80:8080 ubuntu:16.04
You are telling docker to create a NAT rule in iptables to forward any packets going to 10.0.0.99:80 to your docker container on port 8080.
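You can see the rule this creates by listing the nat table; the exact output depends on your Docker version and the container's internal address, but it typically looks something like this:
sudo iptables -t nat -L DOCKER -n
# Chain DOCKER (2 references)
# target  prot opt source      destination
# DNAT    tcp  --  0.0.0.0/0   10.0.0.99    tcp dpt:80 to:172.17.0.2:8080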
When you run the container as:
docker run -i -t --rm --net=host ubuntu:16.04
then you are saying the docker container should share the host's network stack, so all packets arriving at the host are also visible to your docker container (the docker0 bridge is not involved in this mode).
To answer your question of how a container pings an external host: this is also achieved via NAT.
If you list your iptables NAT rules using sudo iptables -t nat -L,
you will likely see something similar to the below (the docker subnet may be different):
Chain POSTROUTING (policy ACCEPT)
target       prot opt source            destination
MASQUERADE   all  --  172.17.0.0/16     anywhere
This basically says: NAT any outgoing packets originating from the docker subnet, so they appear to originate from the docker host machine. When the ping reply comes back, the NAT (connection tracking) table is used to determine that a docker container actually made the request, and the packet gets forwarded to that container's veth interface.
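If you want to watch that reverse translation happen, the conntrack tool (if installed) can list the live NAT entries:
sudo conntrack -L -p icmp   # shows the tracked ICMP flows and their NAT mappings; output varies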

How to set a specific fixed IP address when I create a docker machine or container?

When I create my container, I want to set a specific IP address for it in the same LAN as the host.
Is that possible? If not, can I edit the DHCP-assigned IP address after creation?
Considering the conclusion of the (now old, October 2013) article "How to configure Docker to start containers on a specific IP address range", this doesn't seem to be possible (or at least not "done automatically for you by Docker") yet.
Update Nov 2015: a similar problem is discussed in docker/machine issue 1709, which includes a recent workaround (Nov 2015) proposed by Tobias Munk (schmunk42) for docker-machine
(for containers, see the next section):
A workaround for some use-cases could be to create machines like so:
192.168.98.100
docker-machine create -d virtualbox --virtualbox-hostonly-cidr "192.168.98.1/24" m98
192.168.97.100
docker-machine create -d virtualbox --virtualbox-hostonly-cidr "192.168.97.1/24" m97
192.168.96.100
docker-machine create -d virtualbox --virtualbox-hostonly-cidr "192.168.96.1/24" m96
If there's no other machine with the same cidr (Classless Inter-Domain Routing), the machine should always get the .100 IP upon start.
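You can confirm which address each machine actually got with docker-machine ip, for example:
docker-machine ip m98    # should print 192.168.98.100 if nothing else claimed that address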
Another workaround:
(see my script in "How do I create a docker machine with a specific URL using docker-machine and VirtualBox?")
My VirtualBox DHCP range is 192.168.99.100-255 and I want to set an IP below 100.
I've found a simple trick to set a static IP: after creating a machine, I run this command and then restart the machine:
echo "ifconfig eth1 192.168.99.50 netmask 255.255.255.0 broadcast 192.168.99.255 up" \
| docker-machine ssh prova-discovery sudo tee /var/lib/boot2docker/bootsync.sh > /dev/null
This command creates a file bootsync.sh, which the boot2docker startup scripts look for and execute.
Now, during machine boot, the command is executed and the static IP is set.
docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM
test-1 - virtualbox Running tcp://192.168.99.50:2376 test-1 (mast
Michele Tedeschi (micheletedeschi) adds
I've updated the commands with:
echo "kill `more /var/run/udhcpc.eth1.pid`\nifconfig eth1 192.168.99.50 netmask 255.255.255.0 broadcast 192.168.99.255 up" | docker-machine ssh prova-discovery sudo tee /var/lib/boot2docker/bootsync.sh > /dev/null
then run this command (only the first time):
docker-machine regenerate-certs prova-discovery
Now the IP will not be changed by DHCP.
(Replace prova-discovery with the name of your docker machine.)
April 2015:
The article mentions the possibility of creating your own bridge (though that by itself doesn't assign one of those IP addresses to a container):
create your own bridge, configure it with a fixed address, tell Docker to use it. Done.
If you do it manually, it will look like this (on Ubuntu):
stop docker
ip link add br0 type bridge
ip addr add 172.30.1.1/20 dev br0
ip link set br0 up
docker -d -b br0
To assign a static IP within the range of an existing bridge, you can try "How can I set a static IP address in a Docker container?", using a static script which creates the bridge and a pair of peer interfaces.
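The core of such a script is a veth pair with one end attached to the bridge and the other end moved into the container's network namespace; a minimal sketch, with the container name, addresses and interface names as placeholders (newer Docker releases also offer --ip on user-defined networks, which is usually simpler):
# resolve the container's PID so we can enter its network namespace
CONTAINER_PID=$(docker inspect -f '{{.State.Pid}}' my-container)
# create the veth pair and put the host end on the bridge from the example above
ip link add veth-host type veth peer name veth-cont
ip link set dev veth-host master br0
ip link set dev veth-host up
# move the other end into the container and configure the static address there
ip link set dev veth-cont netns "$CONTAINER_PID"
nsenter -t "$CONTAINER_PID" -n ip addr add 172.30.1.10/20 dev veth-cont
nsenter -t "$CONTAINER_PID" -n ip link set dev veth-cont up
nsenter -t "$CONTAINER_PID" -n ip route add default via 172.30.1.1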
Update July 2015:
The idea mentioned above is also detailed in "How can I set a static IP address in a Docker container?" using:
Building your own bridge
The result should be that the Docker server starts successfully and is now prepared to bind containers to the new bridge.
After pausing to verify the bridge’s configuration, try creating a container — you will see that its IP address is in your new IP address range, which Docker will have auto-detected.
You can also use the brctl show command to see Docker add and remove interfaces from the bridge as you start and stop containers, and you can run ip addr and ip route inside a container to see that it has been given an address in the bridge's IP address range and has been told to use the Docker host's IP address on the bridge as its default gateway to the rest of the Internet.
Start docker with -b=br0 (that is also what echo 'DOCKER_OPTS="-b=br0"' >> /etc/default/docker can set for you by default).
Or use pipework (192.168.1.1 below being the default gateway IP address):
pipework br0 container-name 192.168.1.10/24#192.168.1.1

Openstack VM is not accessible on LAN

I am facing an issue with accessing OpenStack VMs on the LAN.
I have set up a single-machine (192.168.2.15) OpenStack using devstack, so
all VMs are running inside this machine.
My machine (192.168.2.15) has one network card (eth0) and
I am using nova networking; I have not installed neutron.
I have assigned static IPs on eth0 of all the LAN machines (such as 192.168.2.15 and 192.168.2.16) in the /etc/network/interfaces file.
System information of the OpenStack machine is as below:
Memory usage: 19%    IP address for virbr0: 192.168.122.1
Swap usage:    0%    IP address for br100:  10.0.0.1
The following works fine:
I can access internet from VM1(10.0.0.2 which is auto assigned IP).
I can ping LAN machine(192.168.2.16) from VM1.
Openstack machine(192.168.2.15) can ping VM1(10.0.0.2).
VM1(10.0.0.2) can ping VM2(10.0.0.3).
But the LAN machine 192.168.2.16 is not able to ping VM1 (10.0.0.2).
So please suggest how this can be achieved. Please consider me very new to OpenStack and networking.
Thanks!
You need to assign a floating IP to the VMs you create if you want a host from outside the OpenStack network to connect to them. The internal IPs are only accessible from inside the OpenStack network.
See how to assign a floating IP to a VM here: http://docs.openstack.org/user-guide/content/floating_ip_allocate.html
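With the nova CLI of that era this is roughly the following (pool name, VM name and address are placeholders; depending on the client version the second command may be nova add-floating-ip instead):
nova floating-ip-create public
nova floating-ip-associate my-vm 192.168.2.100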
To access the VM's floating IP from another host (that is not the devstack host) you should make sure that the devstack host is configured to forward packets. You can do this with:
sudo bash
echo 1 > /proc/sys/net/ipv4/ip_forward
echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
See more details here:
http://barakme.tumblr.com/post/70895539608/openstack-in-a-box-setting-up-devstack-havana-on-your
Adding a route on the client machine to the OpenStack VM helped me.

Forwarding within local network to same network

I have X-Wrt based on OpenWrt 8.09 on my router.
I have a home LAN of a few computers on which I run some network services (SVN, web, etc.). For each service I set up forwarding on my router (Linksys WRT54GL) so it can be accessed from the Internet (<my_external_ip>:<external_port> -> <some_internal_ip>:<internal_port>).
But from within my local network these resources are unreachable through that external address (so I have to reconfigure clients to use <some_internal_ip>:<internal_port> to access them).
I added this line to my /etc/hosts:
<my_external_ip> localhost
So now all requests from the local network to <my_external_ip> reach my router, but the further redirection to the appropriate port does not work.
Please advise on the proper redirection.
You need to set up a redirect for connections that leave the internal network but are directed at the public IP; normally these packets just get discarded. You want to reroute them, DNATing them to the destination server, but also masquerading them, so that the server, seeing that you (its client) are in its own network, doesn't reply to you directly from its internal IP (a reply which you, the client, not having sent the packet there, would discard).
I found this on the OpenWrt forum:
iptables -t nat -A prerouting_rule -d YOURPUBLICIP -p tcp --dport PORT -j DNAT --to YOURSERVER
iptables -A forwarding_rule -p tcp --dport PORT -d YOURSERVER -j ACCEPT
iptables -t nat -A postrouting_rule -s YOURNETWORK -p tcp --dport PORT -d YOURSERVER -j MASQUERADE
https://forum.openwrt.org/viewtopic.php?id=4030
If I remember correctly, OpenWrt allows you to define custom DNS entries. So maybe simply give proper local names to your services (e.g. svnserver.local) and map them to the specific local IPs. That way you do not even need to go through the router's external address to access local resources from the local network.
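On that generation of OpenWrt the DNS server is dnsmasq, so such an entry can be a single line (name and address are placeholders):
# in /etc/dnsmasq.conf on the router, then restart dnsmasq
address=/svnserver.local/192.168.1.10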
