Bridging commands and concept: Ubuntu 12.04 LTS - networking

I am using bridging as a technique to connect 2 virtual interfaces together in Ubuntu 12.04.
One of the interfaces is a mininet interface (www.mininet.org).
I am getting a lot of TCP retransmission packets, and the connectivity is extremely slow.
I am trying to debug this issue.
I have tried to enable STP on the bridge, but it doesn't take effect:
~$ brctl show
bridge name bridge id STP enabled interfaces
s1 0000.f643bed86249 no s1-eth1
s1-eth2
s1-eth3
s2 0000.caf874f68248 no s2-eth1
~$ sudo brctl stp s2 on
~$ brctl show
bridge name bridge id STP enabled interfaces
s1 0000.f643bed86249 no s1-eth1
s1-eth2
s1-eth3
s2 0000.caf874f68248 no s2-eth1
I am confused as to why this command does not work.
Also, auto-negotiation is off on these interfaces.
Does auto-negotiation matter for virtual interfaces?
Should I manually turn auto-negotiation on, or set the duplex and speed of the virtual interfaces?
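(For reference, ethtool shows what a virtual interface currently reports; the interface name below is just one taken from the brctl output above, and veth-style interfaces often do not allow changing speed or duplex at all.)
~$ ethtool s1-eth1
This prints the Speed, Duplex and Auto-negotiation values the kernel reports for that virtual interface.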
Also, ping and DNS work perfectly fine. For HTTP traffic, the SYN, SYN-ACK and ACK go through as expected; however, the GET/POST request gets retransmitted 5-6 times immediately after the first GET/POST.
This is confusing me right now, and any links/pointers/commands would be helpful.
Please direct me to the right forum if this is not a question for Stack Overflow. TIA.

STP was designed to solve Layer 2 loops and the broadcast storms such loops cause. It has nothing to do with TCP retransmissions.
Maybe you can check the DNS resolver timeout in your case, and turn on the web server's debug log.
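If it helps to look at the retransmissions directly, a capture limited to the web traffic usually makes the pattern obvious (port 80 and the s1-eth1 interface here are assumptions; adjust to your setup):
# capture only the HTTP flow on one of the bridged interfaces
sudo tcpdump -i s1-eth1 -n -s 0 -w http.pcap tcp port 80
Opening http.pcap in Wireshark and following the TCP stream will show whether the repeated GET/POST segments are identical retransmissions, i.e. whether the request or its ACK is being lost somewhere along the path.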

Related

How to use the RJ45 tool in the CORE network emulator?

I have recently installed the CORE Network Emulator and have already read the relevant parts of the docs. CORE promises to be able to connect the virtual networks you create in it with physical ones. However, I am having trouble connecting my virtual network to the physical one, which the RJ45 tool promises to do. From what I have read, in the CORE NetEm you can assign a network interface to the RJ45 tool, which then bridges your physical device to the network.
I have tried creating a basic topology with one virtual host, a router, and then my computer attached with the RJ45 tool. I am trying to see if I can reach my computer from the host, or vice versa, with a ping command, but all I get is "network is unreachable."
Unfortunately, the CORE docs don't go into detail on how to use this tool, and I wasn't able to find any other sources on the internet that have anything to do with it.
Here you can find the documentation: http://coreemu.github.io/core/usage.html#connecting-with-physical-networks
Does anyone have any experience with CORE and can help me out with this?
Many thanks!
The CORE RJ45 tool creates a Linux bridge between a virtual interface and a physical one.
Example: if you have node n1 linked to an RJ45 node assigned to eth0, then after pressing "Start", on the underlying host you'll have a bridge with the veth pair device for n1's eth0 and your host's eth0 device enslaved.
You'll need to configure routing between your virtual and physical networks. In the above example, suppose n1:eth0 is 10.0.0.1/24. When you plug a physical device into eth0, that device needs a route back to 10.0.0.1. That device may be on the same subnet, for example if it has the address 10.0.0.2/24. If your physical device has an address on a different subnet, you'll need to manually add a route to reach the 10.0.0.0/24 network, via the connecting interface.
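For example, using the addresses above (the interface name eth1 on the physical device is an assumption), the route on a physical device numbered on a different subnet would look something like:
# on the physical device: reach the emulated 10.0.0.0/24 network via the NIC cabled to the CORE host
sudo ip route add 10.0.0.0/24 dev eth1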
I had the same problem. My CORE version is v5.3.0 (20190615) on Ubuntu 18.04 LTS with Linux 5.0.0-37-generic on x86_64. I have OSPFv2, OSPFv3, Zebra, and IPForward correctly configured at r1, so that vpc1 can send and receive data successfully.
The RJ45 node was mapped to a built-in physical interface on the CORE host and acts as the endpoint for connecting the second, real computer, rpc (192.168.10.10/24), to a virtual switch sw1. The same switch also connects a virtual PC, vpc1 (192.168.10.20/24), and a router r1 with two interfaces, 192.168.10.1/24 and 10.0.10.1/24.
I can ping from rpc to vpc1 and to r1 at 192.168.10.1, but not to 10.0.10.1 or beyond. However, using the two-node tool or the virtual terminal of vpc1, I can traceroute and ping r1 and beyond.
The reason the traffic of the real remote PC rpc could not be routed by r1 from 192.168.10.1 to 10.0.10.1 and back was that its WiFi was left on, with the default gateway pointing at a FiOS router. You cannot have two default gateways. Once the WiFi was turned off, traceroute and ping could reach r1 and beyond.
This could also be the root cause of your problem.
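A quick way to spot this on the real PC is to list its routing table; two default routes are the tell-tale sign (the WiFi gateway address below is hypothetical):
ip route show
# if you see two "default via ..." lines, e.g. one via the FiOS router on wlan0 and
# one via 192.168.10.1 on the wired NIC, remove or disable the WiFi one:
sudo ip route del default via 192.168.1.1 dev wlan0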

GRE tunnel issues - one-sided communication

I have two machines:
Ubuntu 16.04 server VM (172.18.6.10)
Proxmox VE5 station (192.168.6.30)
They communicate through a third machine that forwards packets between the two. I want to create a GRE tunnel between the two machines; to do that and make it persistent, I edited /etc/network/interfaces and added a GRE interface and tunnel to be created on boot, as follows:
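As an illustrative sketch only (the interface name gre1, the /30 netmask and the exact layout are assumptions; the endpoint addresses are the ones listed above), the Ubuntu side of such a persistent GRE definition might look like:
auto gre1
iface gre1 inet static
    address 10.10.10.1
    netmask 255.255.255.252
    pre-up ip tunnel add gre1 mode gre local 172.18.6.10 remote 192.168.6.30 ttl 255
    post-down ip tunnel del gre1
The Proxmox side mirrors this with 10.10.10.2 and the local/remote addresses swapped.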
After they were created, I tried to ping one machine from the other to check connectivity, pinging the GRE interface IP addresses (10.10.10.1 and 10.10.10.2). The issue is that when I ping the Proxmox machine from Ubuntu I get no feedback, but when I run tcpdump on gre1 on Proxmox I see that the packets are received and there is an ICMP reply outgoing:
When I run the ping the other way around and check it with tcpdump on the Ubuntu machine, I get nothing. I understand the issue to be that packets leaving Proxmox for Ubuntu via gre1 get lost or blocked, because Ubuntu can clearly send packets to Proxmox but the reply never comes back. How can I fix this?
Check if you have packet forwarding enabled in the kernel of the 3rd machine that you use for the communication between the other 2 machines.
Check /etc/sysctl.conf and see if you have this:
net.ipv4.ip_forward = 1
if it's commented (#) uncomment it, save the file and issue a:
sysctl -p
Then try the pings again...
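If forwarding is already enabled, it also helps to confirm that the encapsulated packets actually cross the middle box in both directions. GRE is IP protocol 47, so on the forwarding machine (interface name assumed) something like this will show it:
# watch GRE traffic passing through the third machine in both directions
sudo tcpdump -n -i eth0 ip proto 47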

Proxmox with OPNsense as pci-passthrough setup used as Firewall/Router/IPsec/PrivateLAN/MultipleExtIPs

This setup is based on a Proxmox host sitting behind an OPNsense VM hosted on the Proxmox itself. The OPNsense VM protects Proxmox, offers a firewall, a private LAN with DHCP/DNS for the VMs, and provides an IPsec connection into the LAN to access all VMs/Proxmox services which are not NATed.
The server is a typical Hetzner server, so there is only one NIC, but multiple IPs and/or subnets on that NIC.
Proxmox server with 1 NIC (eth0)
3 public IPs; IP2/IP3 are routed by MAC in the datacenter (to eth0)
eth0 is PCI-Passthroughed to the OPNsense KVM
A private network on vmbr30, 10.1.7.0/24
An IPsec mobile client connect (172.16.0.0/24) to LAN
To better outline the setup, I created this [drawing][1] (not sure it's perfect, tell me what to improve):
Questions:
How to set up such a scenario using PCI passthrough instead of the bridged mode.
Follow-ups:
I) Why can I not access PROXMOX.2, while I can access VMEXT.11 (ARP?)
II) Why do I need a "from * to *" IPsec chain rule to get IPsec running? That is most probably a very OPNsense-related question.
III) I tried to handle the 2 additional external IPs by adding virtual IPs in OPNsense, adding a 1:1 NAT to the internal LAN IP and opening the firewall for the ports needed (for each private LAN IP), but I could not get it running. The question is, should each private IP have a separate MAC or not? What is specifically needed to get a multi-IP setup on the WAN?
General high level perspective
Adding the pci-passthrough
A bit out of scope, but what you will need is:
a serial console/LARA to the Proxmox host.
a working LAN connection from OPNsense (in my case vmbr30) to the Proxmox private address (10.1.7.2) and vice versa. You will need this when you only have the tty console and need to reconfigure the OPNsense interfaces to add em0 as the new WAN device.
You might also want a working IPsec connection beforehand, or WAN SSH/GUI access opened, for further configuration of OPNsense after the passthrough.
In general it follows this guide; in short:
vi /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
update-grub
vi /etc/modules
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
Then reboot and ensure you have an IOMMU table:
find /sys/kernel/iommu_groups/ -type l
/sys/kernel/iommu_groups/0/devices/0000:00:00.0
/sys/kernel/iommu_groups/1/devices/0000:00:01.0
Now find your network card
lspci -nn
in my case
00:1f.6 Ethernet controller [0200]: Intel Corporation Ethernet Connection (2) I219-LM [8086:15b7] (rev 31)
The following command detaches eth0 from Proxmox, and you lose the network connection. Ensure you have a tty! Please replace "8086 15b7" and 00:1f.6 with your PCI ID and slot (see above).
echo "8086 15b7" > /sys/bus/pci/drivers/pci-stub/new_id && echo 0000:00:1f.6 > /sys/bus/pci/devices/0000:00:1f.6/driver/unbind && echo 0000:00:1f.6 > /sys/bus/pci/drivers/pci-stub/bind
Now edit your VM and add the PCI network card:
vim /etc/pve/qemu-server/100.conf
and add ( replace 00:1f.6)
machine: q35
hostpci0: 00:1f.6
Boot OPNsense, connect using ssh root@10.1.7.1 from your Proxmox tty host, edit the interfaces, add em0 as your WAN interface and set it to DHCP. Reboot your OPNsense instance and it should be up again.
Add a serial console to your OPNsense
In case you need fast disaster recovery, or your OPNsense instance is borked, a CLI-based serial console is very handy, especially if you connect using LARA/iLO or similar.
To get this done, edit
vim /etc/pve/qemu-server/100.conf
and add
serial0: socket
Now in your opnsense instance
vim /conf/config.xml
and add / change this
<secondaryconsole>serial</secondaryconsole>
<serialspeed>9600</serialspeed>
Be sure to replace the current serialspeed with 9600. Now reboot your OPNsense VM and then run
qm terminal 100
Press Enter again and you should see the login prompt
Hint: you can also set your primaryconsole to serial; this helps you get into boot prompts and the like, and debug them.
more on this under https://pve.proxmox.com/wiki/Serial_Terminal
Network interfaces on Proxmox
auto vmbr30
iface vmbr30 inet static
address 10.1.7.2
gateway 10.1.7.1
netmask 255.255.255.0
bridge_ports none
bridge_stp off
bridge_fd 0
pre-up sleep 2
metric 1
OPNsense
WAN is External-IP1, attached em0 (eth0 pci-passthrough), DHCP
LAN is 10.1.7.1, attached to vmbr30
Multi IP Setup
So far, I only cover the extra-IP part, not the extra-subnet part. To be able to use the extra IPs, you have to disable separate MACs for each IP in the Hetzner Robot, so all extra IPs share the same MAC (IP1, IP2, IP3).
Then, in OPNsense, for each external IP you add a Virtual IP under Firewall -> Virtual IPs (for every extra IP, not the main IP you bound WAN to). Give each Virtual IP a good description, since it will appear in the select box later.
Now you can go to Firewall -> NAT -> Forward and, for each port, set:
Destination: The ExtIP you want to forward from (IP2/IP3)
Destination port range: the ports you want to forward, like SSH
Redirect target IP: your LAN VM/IP to map on, like 10.1.7.52
Redirect target port: the matching port, like SSH
Now you have two options; the first one is considered better, but can mean more maintenance.
For every domain you access the IP2/IP3 services with, you should define local DNS "overrides" that map to the actual private IP. This ensures that you can reach your services from the inside and avoids the issues you would otherwise have because of the NATing.
Otherwise you need to take care of NAT reflection; without it, your LAN boxes will not be able to access the external IP2/IP3, which can lead to issues, in web applications at least. Do this setup and activate outbound rules and NAT reflection:
What is working:
OPN can route/access the internet and has the right IP on WAN
OPN can access any client in the LAN ( VMPRIV.151 and VMEXT.11 and PROXMOX.2)
I can connect with an IPsec mobile client to OPNsense, giving access to the LAN (10.1.7.0/24) from a virtual IP range 172.16.0.0/24
I can access 10.1.7.1 (OPNsense) while connected via IPsec
I can access VMEXT using the IPsec client
I can forward ports or 1:1-NAT from the extra IP2/IP3 to specific private VMs
Bottom Line
This setup works out a lot better than the bridged-mode alternative I described. There is no asymmetric routing anymore, no need for Shorewall on Proxmox, no need for a complex bridge setup on Proxmox, and it performs a lot better since we can use checksum offloading again.
Downsides
Disaster recovery
For disaster recovery, you need some more skills and tools. You need a LARA/iLO serial console to the Proxmox HV (since you have no internet connection), and you will need to configure your OPNsense instance to allow serial consoles as mentioned above, so you can access OPNsense while you have no VNC connection at all and no SSH connection either (even from the local LAN, since the network could be broken). It works fairly well, but it needs to be practised once to be as fast as the alternatives.
Cluster
As far as I can see, this setup cannot be used in a clustered Proxmox environment. You can set up a cluster initially; I did so by using a tinc-switch setup locally on the Proxmox HV with a separate cluster network. Setting up the first node is easy, with no interruption. The second node's join already needs to be done in LARA/iLO mode, since you have to shut down and remove the VMs for the join (so the gateway will be down). You can do that by temporarily using the eth0 NIC for internet. But after you have joined and moved your VMs back in, you will not be able to start the VMs (and thus the gateway will not be started). You cannot start the VMs because you have no quorum, and you have no quorum because you have no internet to join the cluster. So in the end it is a chicken-and-egg issue I cannot see how to overcome. If it can be handled at all, it would only be with a KVM that is not part of the Proxmox VMs but rather a standalone QEMU instance, which is not what I want right now.

Sending Multicast Packets from Docker Container (to multicast group)

I have an application that sends messages over UDP multicast, which I have been attempting to put under Docker. I have been running into a lot of headwind trying to send multicast packets from a Docker container.
I have been able to send messages by using the --net=host option when running the Docker container. I would, however, like to stick with a bridge configuration.
I would like to get some insight into what needs to be done in order to publish messages through the standard Docker bridge configuration. I'm attempting to publish messages to 239.9.60.250 on port 16000. I have tried publishing UDP port 16000 with the following argument to docker run:
-P 0.0.0.0:16000:16000/udp
This doesn't give me any change in behavior and my host doesn't see any multicast traffic.
Docker network drivers have no IGMP/PIM support, so you should really establish a direct Layer 2 connection from the container to the physical switch/router.
As you have found out yourself, docker's default bridge network will not help you here.
I haven't tested it with multicast, but you should be able to achieve that with Pipework.
The macvlan driver should help you with your problem, but it is currently experimental as of Docker Engine 1.11.
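A minimal sketch of the macvlan approach (the parent interface eth0, the subnet and the image name are assumptions, not values from the question):
# create a macvlan network attached directly to the host NIC
docker network create -d macvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=eth0 mcast_net
# run the multicast publisher on that network (image name is hypothetical)
docker run --rm --network=mcast_net my-multicast-publisher
The container then gets its own interface on the physical segment, so multicast traffic leaves through the real network instead of the NATed Docker bridge.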

Multicast packets are there but cannot be accessed

My box runs Ubuntu 14.04. It is an old 32-bit box with 4 Ethernet NICs.
What I want to achieve is multicast routing from an upstream interface (eth2.8 - dynamic IP) to a downstream interface (eth0.13 - 192.168.40.1).
My laptop, attached to the above box via eth0.13, can receive multicast from .40.1 like a charm.
I verified that by running VLC as a server on .40.1:
cvlc -vvv ./POS-Movie-927x521.mov --sout udp:239.255.12.42 --ttl 12
and receiving the stream on my laptop with
vlc udp://#239.255.12.42
That works even the other way round, sending from my laptop and receiving on the server side.
So why is it not possible to receive multicast packets via eth2.8?
Joining works. I can verify the arriving packets with
sudo tcpdump -i eth2.8 -n multicast
but it seems simply impossible to receive these packets outside of tcpdump!
This describes exactly what I am experiencing, only the solution is not the same for me.
Here are some sysctl parameters:
net.ipv4.conf.eth2/8.rp_filter = 1
net.ipv4.conf.eth2/8.mc_forwarding = 1
net.ipv4.conf.eth2/8.forwarding = 1
There is no difference between the sysctl params of eth2.8 and eth0.13.
And yes, this happens even if the firewall is down!
Any hint appreciated, you'll make my week!
/markus
The unicast route to the upstream hosts was missing!
The interface accepted incoming IGMP traffic from an IP in its own class C net but refused packets from other hosts.
Unluckily, the upstream comes from a completely different network.
A simple "ip route add ip/mask dev eth2.8" finally solved all the problems.

Resources