Terminal not seeing ping messages from TUN port - ip

Hi, I'm working on a project and I have a question about ping and how it interacts with TUN interfaces.
Basically I'm sending out ping requests which are routed to my TUN interface, and the replies come back to the TUN interface over the VPN. There are no other internet interfaces (i.e. no wifi/ethernet). Using Wireshark and tcpdump I can see the correct reply messages on tun0, but the terminal does not see the replies and instead shows a 100% drop rate. The issue seems to be that tun0 is not properly handing the replies back to the kernel? (Total guess, I'm quite new to IP routing.)
The IP address of the TUN is 10.0.0.73 and I am pinging a computer with IP address 10.0.0.28
Below is a snippet from the tcpdump on tun0; this is a request and reply that, to my untrained eye, should work:
23:08:52.257566 IP (tos 0x0, ttl 64, id 11185, offset 0, flags [DF], proto ICMP (1), length 84)
10.0.0.73 > 10.0.0.28: ICMP echo request, id 24667, seq 2, length 64
23:09:11.508002 IP (tos 0x0, ttl 64, id 13315, offset 0, flags [none], proto ICMP (1), length 84)
10.0.0.28 > 10.0.0.73: ICMP echo reply, id 24667, seq 2, length 64
Based on other posts I checked my ip route list, and the output is as follows:
pi@raspberrypi:~$ sudo ip route list
10.0.0.0/24 dev tun0 proto kernel scope link src 10.0.0.73
and the ifconfig output is this:
pi@raspberrypi:~$ ifconfig tun0
tun0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
inet addr:10.0.0.73 P-t-P 10.0.0.73 Mask:255.255.255.0
...

It turns out the issue was that the replies were arriving out of order and greatly delayed. Once I fixed the network connections, the issue went away without changing any iptables configuration.
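For others who land here with the same symptom on a stable link, here is a minimal set of checks, assuming a Debian-like system and the tun0 setup above (names are illustrative, not taken from the original post), that often explain why tcpdump sees a reply the terminal never reports:
sysctl net.ipv4.conf.tun0.rp_filter   # 1 = strict reverse-path filtering, which can silently drop replies on asymmetric paths
sudo iptables -L INPUT -v -n          # look for DROP/REJECT rules whose counters climb while the ping runs
ip -s link show tun0                  # RX errors/drops here point at the interface rather than the firewall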

Related

Receive specific multicast message on a client connected over VPN

Case:
[ Subnet A , 192.168.2.0/24, Padavan firmware based internet gw ]
[ Subnet B , 192.168.1.0/24, Padavan firmware based internet gw ]
A host from subnet A (2.155) is connected via VPN (possible options: PPTP, OpenVPN, L2TP without IPsec) to subnet B, and receives an address, say 1.245/32.
In subnet B there is a host (1.10/32) which sends multicast datagrams to 224.0.0.50:9898; on the router I see them with
tcpdump -i br0 -c 10 dst host 224.0.0.50 and port 9898 and multicast
13:46:54.345369 IP 192.168.1.10.4321 > 224.0.0.50.9898: UDP, length 135
I am looking for a solution to receive/forward those multicast messages so that they can be seen by the hosts connected via the VPN.
On router B, which is Padavan firmware based, I am limited to the udpxy and igmproxy utilities, if needed.
The client host is Debian based and generally not limited in tools.
The datagrams carry a proprietary protocol, i.e. this is not IPTV or a video stream.
Any ideas are welcome.
[UPD] Additional info - per discussion in comments
It is a very specific hardware device which is not very chatty in Ethernet terms (at most 1-2 datagrams every 5 seconds), so the traffic should certainly be forwardable. Unfortunately, it sends its status updates purely via this multicast. In subnet A there is a similar device plus its control software. So I am looking for a way for the datagrams sent to 224.0.0.50:9898 in subnet B to re-appear in subnet A, possibly with the help of some tool: maybe smcroute, maybe udpxy, maybe igmproxy.
As I don't like to leave resolved questions unanswered, here is the currently working solution.
In subnet B I have installed an OpenVPN server endpoint, configured as L2 (tap).
In subnet A, on a control host, I have installed an OpenVPN client that connects to subnet B; the assigned interface is tapz:
20: tapz: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 100
link/ether 0a:da:be:96:78:d9 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.245/24 brd 192.168.1.255 scope global noprefixroute tapz
valid_lft forever preferred_lft forever
inet6 fe80::8da:beff:fe96:78d9/64 scope link
valid_lft forever preferred_lft forever
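For context, the relevant OpenVPN directives for an L2 (tap) setup like this look roughly as follows. This is only a sketch: the address pool, port and server name are assumptions, not values from the original configuration.
Server side (router B), with tap0 bridged into the router's LAN bridge:
dev tap0
proto udp
port 1194
server-bridge 192.168.1.1 255.255.255.0 192.168.1.240 192.168.1.249   # gateway, netmask, client pool (assumed values)
Client side (control host in subnet A):
client
dev tap
proto udp
remote vpn.example.net 1194   # hypothetical server address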
So now, on the control host, I have:
multicasts from the local device on the physical Ethernet enp5s0:
sudo tcpdump -i enp5s0 -c 10 dst host 224.0.0.50 and port 9898 and multicast
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp5s0, link-type EN10MB (Ethernet), capture size 262144 bytes
13:55:05.642963 IP lumi-gateway-v3_miio56591509.4321 > 224.0.0.50.9898: UDP, length 136
and now I also receive the multicasts from the remote network device on tapz:
sudo tcpdump -i tapz -c 10 dst host 224.0.0.50 and port 9898 and multicast
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tapz, link-type EN10MB (Ethernet), capture size 262144 bytes
13:53:49.141751 IP 192.168.1.10.4321 > 224.0.0.50.9898: UDP, length 135
So far this is what I was looking for: I am getting the necessary datagrams on the VPN client. OpenVPN on the remote side can also be tuned to filter which multicast traffic is forwarded.
For those who come here with the same question: once you have the necessary multicast traffic on tap0, you can create a bridge from, say, eth0 and tap0:
ip link add br0 type bridge
ip link set tap0 master br0
ip link set eth0 master br0
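One hedged addition to the recipe above: the new bridge also has to be brought up, and if eth0 carried the host's own address (192.168.2.155/24 for the control host mentioned earlier, used here as an assumed example), that address normally has to move to br0, otherwise the host loses connectivity on that segment:
ip link set br0 up
ip addr del 192.168.2.155/24 dev eth0   # only if eth0 held the host's address
ip addr add 192.168.2.155/24 dev br0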
PoC - both multicasts on a single interface:
sudo tcpdump -i br0 dst host 224.0.0.50 and port 9898
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on br0, link-type EN10MB (Ethernet), capture size 262144 bytes
21:09:51.823632 IP 192.168.1.10.4321 > 224.0.0.50.9898: UDP, length 135
21:09:55.045138 IP 192.168.2.214.4321 > 224.0.0.50.9898: UDP, length 136

Cannot ping gateway for external Openstack network

I installed OpenStack-Ansible, Pike version. There is a separate network controller node with one physical network interface. We created VLAN 139, which carries the traffic to the gateway. The config file for that part looks like this:
/etc/network/interfaces
...
auto eno1.139
iface eno1.139 inet manual
vlan-raw-device eno1
# OpenStack Networking VLAN bridge
auto br-vlan
iface br-vlan inet manual
bridge_stp off
bridge_waitport 0
bridge_fd 0
bridge_ports eno1.139
We created an external Openstack network using:
openstack network create --external --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 139 provider1
and all the other steps (subnet, router, etc)
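For completeness, those other steps usually look something like the following. This is a hedged sketch with assumed names and an assumed subnet range, not the exact commands used here:
openstack subnet create --network provider1 --subnet-range 139.25.25.192/26 --gateway 139.25.25.193 --no-dhcp provider1-subnet
openstack router create router1
openstack router set --external-gateway provider1 router1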
As per the documentation, the first test should be pinging the default gateway from the router namespace. When I try that, it does not work:
root@infra1-neutron-agents-container-e800e983:/# ip netns exec qrouter-eb842b12-9a35-4a93-baa9-38cc73531d9f ping 139.25.25.193
When I run tcpdump on the physical network interface of the controller node, I can see the packets going out without any problem:
openstackadmin@clcontroller:~$ sudo tcpdump -i eno1 --immediate-mode -e -n | grep 139.25.25.193
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eno1, link-type EN10MB (Ethernet), capture size 262144 bytes
16:30:09.182894 fa:16:3e:d4:b6:a1 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 50: vlan 139, p 0, ethertype 802.1Q, vlan 139, p 0, ethertype ARP, Request who-has 139.25.25.193 tell 139.25.25.200, length 28
I can see the ARP requests reaching the gateway that holds 139.25.25.193, the address I am trying to ping:
hpadmin@hos-gw01:~$ sudo tcpdump -i any --immediate-mode -e -n | grep 139.25.25.193
[sudo] password for hpadmin:
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
15:53:29.857281 B fa:16:3e:d4:b6:a1 ethertype 802.1Q (0x8100), length 62: vlan 139, p 0, ethertype 802.1Q, vlan 139, p 0, ethertype ARP, Request who-has 139.25.25.193 tell 139.25.25.200, length 38
15:53:29.857281 B fa:16:3e:d4:b6:a1 ethertype 802.1Q (0x8100), length 58: vlan 139, p 0, ethertype ARP, Request who-has 139.25.25.193 tell 139.25.25.200, length 38
but what is confusing is that my gateway is not responding to those ARP requests.
If I do the same thing from a standalone Linux machine connected to the same network segment and the same VLAN, everything works perfectly.
Any idea what the problem might be? Thanks in advance.
It seems the problem was that the external OpenStack network was set up to be on VLAN 139 while eno1.139 was already tagging that VLAN. Once we changed the network to flat, everything started working without any problems. I am still confused, though, as to why the gateway did not send ARP responses; the captures above do show the ARP requests leaving with two stacked VLAN 139 tags, which would presumably make the gateway ignore them.
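For reference, the flat variant of the network creation command would look roughly like this (a sketch; it assumes the physnet's bridge already carries the VLAN tag, which br-vlan on top of eno1.139 does here):
openstack network create --external --share --provider-physical-network vlan --provider-network-type flat provider1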

Connectivity issue - tcpdump reports ping a success, ping itself does not

I am having some connectivity issues. The ARP table is not populated even though the ARP requests are successfully transferred on the wire, which leads to a failing ping. When I add an ARP entry manually, tcpdump shows the replies but ping itself does not: ping packets get a reply in tcpdump but not in ping.
Pinging 12.12.12.10 from 12.12.12.7
Ping output:
From 12.12.12.7 icmp_seq=594 Destination Host Unreachable
From 12.12.12.7 icmp_seq=595 Destination Host Unreachable
From 12.12.12.7 icmp_seq=596 Destination Host Unreachable
From 12.12.12.7 icmp_seq=598 Destination Host Unreachable
From 12.12.12.7 icmp_seq=599 Destination Host Unreachable
Tcpdump output:
08:18:38.117478 IP 12.12.12.7 > 12.12.12.10: ICMP echo request, id 64559, seq 197, length 64
08:18:38.118852 IP 12.12.12.10 > 12.12.12.7: ICMP echo reply, id 64559, seq 197, length 64
08:18:39.117471 IP 12.12.12.7 > 12.12.12.10: ICMP echo request, id 64559, seq 198, length 64
08:18:39.119015 IP 12.12.12.10 > 12.12.12.7: ICMP echo reply, id 64559, seq 198, length 64
All help is much appreciated.
I have seen this issue too; could you check that your firewall policy is set up correctly?
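To expand on that with a concrete check: tcpdump hooks in before netfilter's INPUT chain, so a reply can appear in the capture and still be dropped by the firewall. A hedged way to verify, assuming iptables is in use (nftables systems would use nft list ruleset instead):
sudo iptables -L INPUT -v -n                                        # watch DROP/REJECT counters while the ping runs
sudo iptables -I INPUT 1 -p icmp --icmp-type echo-reply -j ACCEPT   # temporary test rule, remove it afterwards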

How to forward packet from eth0 to a tun/tap device?

Here is the topo: HostA(eth0) ---- (eth0)HostB
I have created a tun/tap device on HostB, say tun0 or tap0. When eth0 on HostB receives a packet from HostA, maybe an ICMPv6 packet (NS, echo request, etc.) or a UDP/TCP packet (encapsulated in an IPv6 header), I want to forward this packet from eth0 to tap0. After doing something with this packet, I also want to send a reply back to HostA, through tap0 and eth0.
I cannot find a way to do that; can someone help me or give some hints?
This is an extremely basic routing question, probably unsuitable for Stack Overflow.
You need something like this on Host B:
HostB# sysctl -w net.ipv6.conf.all.forwarding=1
HostB# ip -6 addr add 2001:db8:0:0::1/64 dev eth0
HostB# ip -6 addr add 2001:db8:0:1::1/64 dev tun0
Then on Host A:
HostA# ip -6 addr add 2001:db8:0:0::2/64 dev eth0
HostA# ip -6 route add default via 2001:db8:0:0::1 dev eth0
HostA# ping6 2001:db8:0:1::2 # <-- should work if that host exists on tun0
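If tun0 does not already exist on Host B, it can be created and brought up as below; note this is just a sketch, and in practice a user-space program has to attach to the device and actually read/write the packets, otherwise they go nowhere:
HostB# ip tuntap add dev tun0 mode tun
HostB# ip link set tun0 up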

Pinging between two tap devices on the same machine

I have two virtual TAP interfaces tap0 and tap1 on my machine. They have IPs 10.0.0.1 and 10.0.0.2 respectively. They are both connected to each other using socat. Both have netmasks 255.255.255.0 (and hence are on the same subnet). With this setup, I try pinging 10.0.0.2 through tap0 and vice versa. This doesn't seem to work for some reason. Although tcpdump shows ARP packets from tap0 reaching tap1, there are no ARP replies and hence no ICMP requests and hence no ICMP replies. Using a TUN device instead of a TAP device bypasses the ARP request/response cycle, but now the ICMP requests show up at tap1 with no ICMP response coming back.
I have tried a couple of things, like enabling ip_forward (echo 1 > /proc/sys/net/ipv4/ip_forward) and disabling reverse path filtering (echo 0 > /proc/sys/net/ipv4/conf/tap0/rp_filter and echo 0 > /proc/sys/net/ipv4/conf/tap1/rp_filter).
Here are the commands to reproduce my problem:
sudo socat TUN:10.0.0.1/24,tun-type=tap TUN:10.0.0.2/24,tun-type=tap
sudo ifconfig tap0 10.0.0.1 netmask 255.255.255.0
sudo ifconfig tap1 10.0.0.2 netmask 255.255.255.0
ping -I tap0 10.0.0.2
tcpdump -i tap0 -n
tcpdump -i tap1 -n
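No answer was posted here, but a commonly suggested workaround (a sketch, untested in this exact setup) is to move one tap into its own network namespace, so the kernel stops treating both addresses as local and the traffic really has to cross the socat link:
sudo ip netns add ns1
sudo ip link set tap1 netns ns1
sudo ip netns exec ns1 ip addr add 10.0.0.2/24 dev tap1
sudo ip netns exec ns1 ip link set tap1 up
ping -I tap0 10.0.0.2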
