Setting up a connection between a virtio and a mininet host - networking

I am trying to create an environment like this:
(https://i.stack.imgur.com/sUOWb.png)
Right now I have a connection (eth1 -> eth2) and (eth3 -> eth4), as the black arrows show. I also have a connection between docker1 and docker2 in Mininet. How can I make a connection from eth2 to docker1 in Mininet, and from docker2 to eth3, so that packets follow the blue route? I need to create the red connection somehow.
I tried using bridges, but unfortunately docker1 and docker2 are already connected via one bridge, and a device can only be attached to a single bridge. If I added eth2 and eth3 to that bridge, the packets wouldn't pass through docker1 and docker2, so that is not a solution for me :/

If you add eth2 and eth3 to the bridge that includes docker1 and docker2, the traffic should pass through docker1 and docker2 and then on to eth4.
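One way to patch physical interfaces into a Mininet topology is to add them as ports on the Open vSwitch bridge that Mininet creates for its switch. The switch name s1 below is an assumption; check yours with ovs-vsctl show:

```shell
# Sketch, assuming the Mininet switch carrying docker1/docker2 is an
# OVS bridge named "s1". This makes eth2 and eth3 ports of that switch:
ovs-vsctl add-port s1 eth2
ovs-vsctl add-port s1 eth3

# Bridged ports must not carry their own IP addresses:
ip addr flush dev eth2
ip addr flush dev eth3
```

Whether traffic then traverses docker1 and docker2 rather than being switched straight between eth2 and eth3 depends on the flow rules installed on the switch.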

Related

Pi 4: Wireless network interface doesn't work if both eth0 and wlan0 are active/connected

I'm running a Pi 4 with Raspbian for my home automation, and it's connected on both of its network interfaces: eth0 (ethernet) and wlan0 (wifi).
wlan0 is connected to the network 10.10.10.0/24, which is the management VLAN. This VLAN is configured on the Unifi EdgeRouter X and UAP-AC-Lite access point. If only wlan0 is active (i.e., I only use wifi on the Pi), the Pi can see devices on the other VLANs, for example 10.10.50.0/24 for IoT devices.
However, as the Pi is running the Unifi controller, I also need to connect it to the edge router's physical network 192.168.10.0/24 so I can manage the access point. This means eth0 is active, which somehow makes VLAN 10.10.50.0/24 inaccessible. If I disconnect the ethernet cable, 10.10.50.0/24 is reachable again.
My best guess is that when both interfaces are up, only one of them (eth0 in this case) is used for the default route. Is it possible to keep both paths usable, depending on the destination network?
Never mind, I have found the answer: simply change the priority of the wifi route by adding metric 100 to the wlan0 section in dhcpcd.conf.
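The relevant dhcpcd.conf stanza looks roughly like this. dhcpcd prefers the interface with the lower metric, and its default metrics start above 200, so 100 makes wlan0 win:

```shell
# /etc/dhcpcd.conf -- give wlan0's routes priority over eth0's
interface wlan0
metric 100
```

After editing, restart dhcpcd (or reboot) for the new metric to take effect.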

Sending packets through a virtual interface whose subnet is also same as subnet of another interface

I have a Linux machine with two interfaces eth0 and eth1.
eth0 has 192.168.2.30 and eth1 has 172.16.30.20. eth0 is connected to a router, which is also the gateway to the WAN. eth1 is connected to the LAN. All was working well until I had to connect a set of devices with the IP range 192.168.2.5 - 192.168.2.15 to the LAN that eth1 is on.
I want to send a multicast packet to these devices. Since the multicast works on the same subnet, I created an IP alias using following.
system("ifconfig eth1:1 192.168.2.100 netmask 255.255.255.0 up");
Despite adding the above, the packets are not going out through eth1. This turned out to be because eth0 is on the same subnet as eth1:1.
I tried calling ip route add <multicast ip> dev eth1. But, no success.
Appreciate if anyone could offer suggestions.
From the looks of it you have at least two problems here and depending on the solution you choose other issues may arise.
Problem one, overlapping subnets: The absolute 100% correct way to resolve this is to change the subnets so they don't overlap. I can't stress enough how important this is in your situation. If the computers on 192.168.2.5 - 192.168.2.15 are supposed to be connected to the same network as eth0, then you need to reconsider your setup, as this will never work: you will create a networking loop or bad routes.
In the situation where the 192.168.2.5 - 192.168.2.15 devices and eth0's network aren't physically connected in any way, and someone above you says you can't renumber, you can try creating a NAT on eth1 so that your system sees the subnet as a different network. But this can make the routing confusing and may interfere with multicast traffic.
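A minimal sketch of such a NAT, assuming the eth1:1 alias address 192.168.2.100 from the question is used as the translated source:

```shell
# Hypothetical NAT sketch: rewrite traffic leaving eth1 for the device
# range so it appears to come from the alias address 192.168.2.100.
iptables -t nat -A POSTROUTING -o eth1 -d 192.168.2.0/24 \
         -j SNAT --to-source 192.168.2.100
```

This only changes the source address on the way out; the routing decision that picks eth1 over eth0 still has to be made separately, which is where the overlap keeps causing trouble.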
After this is done, run a traceroute to ensure traffic is passing correctly. If not, please provide the output and the route you expect it to take, along with the current setup.
If multicast still doesn't work after that, then I recommend creating another question for it.

Multicast from docker to host's eth

Currently I am trying to send some multicast data from my Docker application out through my host's eth2 interface. I used the --net=host option in Docker and this worked perfectly; unfortunately, since I need to run multiple instances that use the same port, it's impossible for me to use --net=host anymore. I have to go through the bridged mode of docker0.
Inside my container I have eth0, which is linked to the host's docker0 as 10.101.131.60.
Therefore I ran: route add -net 225.1.1.0/28 dev eth0 to send all multicast packets that my app addresses to 225.1.1.0/28 out eth0, which is connected to the host via docker0.
I then used wireshark inside the container to check whether my application really puts the multicast packets on eth0, and it really does send them.
I also used wireshark to listen on docker0, and the packets were there. Now, how do I "forward all my multicast packets" from docker0 to my eth2? I tried several iptables approaches, but none of them seemed to help; perhaps they are being ignored?
Any help would be appreciated... thanks!
You can use an IGMP proxy such as this one: https://sourceforge.net/projects/igmpproxy/
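A rough igmpproxy.conf for this layout might look like the following. The interface roles here are an assumption based on the question: the multicast source sits behind docker0, and receivers are reached via eth2:

```shell
# /etc/igmpproxy.conf -- hypothetical sketch for this setup
phyint docker0 upstream ratelimit 0 threshold 1
phyint eth2 downstream ratelimit 0 threshold 1
```

igmpproxy forwards groups from the upstream interface to downstream interfaces where it has seen IGMP joins, so receivers on the eth2 side need to join the group for traffic to flow.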
Good luck

Docker on CentOS with bridge to LAN network

I have a server VLAN of 10.101.10.0/24 and my Docker host is 10.101.10.31. How do I configure a bridge network on my Docker host (VM) so that all the containers can connect directly to my LAN network without having to redirect ports around on the default 172.17.0.0/16? I tried searching, but all the howtos I've found so far have resulted in me losing my SSH session, and I had to get into the VM from a console to revert the steps I did.
There's multiple ways this can be done. The two I've had most success with are routing a subnet to a docker bridge and using a custom bridge on the host LAN.
Docker Bridge, Routed Network
This has the benefit of only needing native Docker tools to configure the network. It has the downside of needing to add a route to your network, which is outside of Docker's remit and usually manual (or relies on the "networking team").
Enable IP forwarding
/etc/sysctl.conf: net.ipv4.ip_forward = 1
sysctl -p /etc/sysctl.conf
Create a docker bridge with new subnet on your VM network, say 10.101.11.0/24
docker network create routed0 --subnet 10.101.11.0/24
Tell the rest of the network that 10.101.11.0/24 should be routed via 10.101.10.X where X is IP of your docker host. This is the external router/gateway/"network guy" config. On a linux gateway you could add a route with:
ip route add 10.101.11.0/24 via 10.101.10.31
Create containers on the bridge with 10.101.11.0/24 addresses.
docker run --net routed0 busybox ping 10.101.10.31
docker run --net routed0 busybox ping 8.8.8.8
Then you're done. Containers have routable IP addresses.
If you're ok with the network side, or run something like RIP/OSPF on the network or Calico that takes care of routing then this is the cleanest solution.
Custom Bridge, Existing Network (and interface)
This has the benefit of not requiring any external network setup. The downside is the setup on the docker host is more complex. The main interface requires this bridge at boot time so it's not a native docker network setup. Pipework or manual container setup is required.
Using a VM can make this a little more complicated, as you are running extra interfaces with extra MAC addresses over the main VM interface, which needs additional "promiscuous" config first to allow this to work.
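On the guest itself you can at least flip the interface into promiscuous mode; the hypervisor-side vSwitch usually needs a matching setting (promiscuous mode / forged transmits), which varies by platform:

```shell
# Allow frames for the containers' extra MAC addresses to be
# received on the VM's main interface
ip link set eth0 promisc on
```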
The permanent network config for bridged interfaces varies by distro. The following commands outline how to set the interface up, and the changes will disappear after a reboot. You are going to need console access or a separate route into your VM, as you are changing the main network interface config.
Create a bridge on the host.
ip link add name shared0 type bridge
ip link set shared0 up
In /etc/sysconfig/network-scripts/ifcfg-shared0
DEVICE=shared0
TYPE=Bridge
BOOTPROTO=static
DNS1=8.8.8.8
GATEWAY=10.101.10.1
IPADDR=10.101.10.31
NETMASK=255.255.255.0
ONBOOT=yes
Attach the primary interface to the bridge, usually eth0
ip link set eth0 up
ip link set eth0 master shared0
In /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
TYPE=Ethernet
IPV6INIT=no
USERCTL=no
BRIDGE=shared0
Reconfigure your bridge to have eth0's ip config.
ip addr add dev shared0 10.101.10.31/24
ip route add default via 10.101.10.1
Attach containers to bridge with 10.101.10.0/24 addresses.
CONTAINERID=$(docker run -d --net=none busybox sleep 600)
pipework shared0 $CONTAINERID 10.101.10.43/24@10.101.10.1
Or use a DHCP client inside the container
pipework shared0 $CONTAINERID dhclient
Docker macvlan network
Docker has since added a network driver called macvlan that can make a container appear to be directly connected to the physical network the host is on. The container is attached to a parent interface on the host.
docker network create -d macvlan \
--subnet=10.101.10.0/24 \
--gateway=10.101.10.1 \
-o parent=eth0 pub_net
This will suffer from the same VM/soft-switch problems, where the network and interface will need to be promiscuous with regard to MAC addresses.
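Usage mirrors the routed example; the static address below is just an illustration:

```shell
# Attach a container directly to the 10.101.10.0/24 LAN via macvlan
docker run --rm --net pub_net --ip 10.101.10.50 busybox ping -c 1 10.101.10.1
```

Note that with macvlan the host itself cannot reach its containers over the parent interface; that is a known limitation of the macvlan driver.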

debian networking sets wrong ip

I'm currently trying to automate our BeagleBone flashing, and as part of that we have to change the IP address manually.
I created a script which basically adds something like:
# The primary network interface
auto eth0
iface eth0 inet static
address theip
netmask 255.255.255.0
gateway gateway
to /etc/network/interfaces
After adding this I restart networking via:
service networking restart
This returns "ok", but ifconfig doesn't show theip; it seems to just ignore the changes and still uses DHCP.
When rebooting the system, the IP changes and everything works as expected, but I don't want to restart the system. So how do I correctly restart the networking?
Thanks in advance,
Lukas
Do ip addr flush dev eth0 first and then restart the networking service.
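Put together, assuming Debian's ifupdown is managing eth0:

```shell
# Drop the address dhclient assigned, so ifupdown starts from a clean state
ip addr flush dev eth0
# Then re-read /etc/network/interfaces and apply the static config
service networking restart
```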
Explanation
The /etc/network/interfaces file is used by the ifupdown system. This is different from the graphical NetworkManager system that manages the network by default.
When you add lines to control eth0 in the /etc/network/interfaces file, most graphical network managers assume you are now using the ifupdown service for that interface and drop the option to manage it.
The ifupdown system is less complicated and less sophisticated. Since eth0 is new to the ifupdown system, it assumes that it is unconfigured and tries to "add" the specified address using the ip command. Since the interface already has an IP address assigned by dhclient on that network, I suspect it is erroring out. You then need to put the interface in a known state for ifupdown to be able to start managing it: that is, with no address assigned to the interface via the ip command.
