Cannot read packets from a tun interface

My plan is to read from one tun interface and write to another.
Here are the commands I used when setting up the interface:
sudo ip tuntap add dev router0 mode tun
sudo ip addr add 10.0.0.138/24 dev router0
sudo ip link set dev router0 up
Here is the output of ip addr show dev router0:
8: router0: <NO-CARRIER,POINTOPOINT,MULTICAST,NOARP,UP> mtu 1500 qdisc fq_codel state DOWN group default qlen 500
link/none
inet 10.0.0.138/24 scope global router0
valid_lft forever preferred_lft forever
When I ping 10.0.0.138 and listen on the interface using tshark (sudo tshark -i router0), nothing happens.
Here is my ping 10.0.0.138 output:
PING 10.0.0.138 (10.0.0.138) 56(84) bytes of data.
64 bytes from 10.0.0.138: icmp_seq=1 ttl=64 time=0.063 ms
64 bytes from 10.0.0.138: icmp_seq=2 ttl=64 time=0.058 ms
Here is my sudo tshark -i router0 output:
Capturing on 'router0'
Nothing is coming through.
What is the problem?
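A hedged diagnosis (my reading of the output above, not a confirmed answer): 10.0.0.138 is the host's own address, so the ping is answered over the loopback interface and never traverses router0; note also the NO-CARRIER/state DOWN in the ip output, which suggests no process has the tun device open yet. A quick way to check both points:
# Replies to a local address travel over loopback, not the tun device
sudo tshark -i lo icmp
# To see traffic on router0, keep a process attached to the tun device and
# ping a different address in its subnet (10.0.0.1 here is illustrative)
ping 10.0.0.1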

Getting ping 'DUP' response from host machine running raspbian buster lite image with QEMU

I ran the raspbian image with the following command:
qemu-system-arm -kernel kernel-qemu-4.19.50-buster -cpu arm1176 -m 256 -M versatilepb -dtb versatile-pb.dtb -no-reboot -serial stdio -append "root=/dev/sda2 panic=1 rootfstype=ext4 rw" -drive "file=2020-02-13-raspbian-buster-lite.img,index=0,media=disk,format=raw" -net user,hostfwd=tcp::5022-:22 -net nic -net user,smb=/dev/shm/
Booting the image completed successfully.
Within the guest machine I get the following routing table:
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.0.2.2 0.0.0.0 UG 202 0 0 eth0
10.0.2.0 0.0.0.0 255.255.255.0 U 202 0 0 eth0
Pinging the gateway at 10.0.2.2 works fine, but when I ping the host machine or the host gateway at 10.0.0.138 I get:
pi@raspberrypi:~$ ping 10.0.0.138
PING 10.0.0.138 (10.0.0.138) 56(84) bytes of data.
64 bytes from 10.0.0.138: icmp_seq=1 ttl=255 time=1.19 ms
64 bytes from 10.0.0.138: icmp_seq=1 ttl=255 time=1.23 ms (DUP!)
I verified that 10.0.0.138 isn't defined as a broadcast address, and there are no duplicate IPs. Any idea how to debug from here? Thanks
As Peter Maydell suggested, merging the two options into one, "-net user,smb=/dev/shm/,hostfwd=tcp::5022-:22", solved the problem.
This is because QEMU creates a new 'user' network backend for each use of '-net user' on the command line, so the original command line had two backends, each of which was responding to ping packets.
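For reference, the full corrected invocation (the original command with the two '-net user' options merged, otherwise unchanged) would be:
qemu-system-arm -kernel kernel-qemu-4.19.50-buster -cpu arm1176 -m 256 -M versatilepb -dtb versatile-pb.dtb -no-reboot -serial stdio -append "root=/dev/sda2 panic=1 rootfstype=ext4 rw" -drive "file=2020-02-13-raspbian-buster-lite.img,index=0,media=disk,format=raw" -net nic -net user,smb=/dev/shm/,hostfwd=tcp::5022-:22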

Problem with connecting IPSec IKEv2 from Ubuntu 18.04

There is a computer with Ubuntu 18.04; it is located behind a NAT router and receives an address in the subnet 192.168.1.0/24, for example 192.168.1.11.
I connect from this computer to the VPN server using the IPsec IKEv2 protocol, but neither systemctl start strongswan nor ipsec start brings the connection up. I can connect in only one way:
sudo charon-cmd --cert ca-cert.pem --host vpn_domain_or_IP --identity your_username
After connecting I get an address from the NAT subnet on the VPN server, 10.10.10.0/24, for example 10.10.10.11. The VPN works and all traffic goes through the tunnel. But connectivity to the local network completely disappears: requests from the subnet 192.168.1.0/24 to the address 192.168.1.11, and from my computer to any address in 192.168.1.0/24, do not get through.
Output of ip a:
3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 18:d6:c7:14:ff:04 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.11/24 brd 192.168.1.255 scope global dynamic noprefixroute eth0
valid_lft 562sec preferred_lft 562sec
15: ipsec0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1400 qdisc fq_codel state UNKNOWN group default qlen 500
link/none
inet 10.10.10.11/32 scope global ipsec0
valid_lft forever preferred_lft forever
inet6 fe80::5b2:78:42:d7/64 scope link stable-privacy
valid_lft forever preferred_lft forever
Ping output:
:~# ping 192.168.1.11
PING 192.168.1.11 (192.168.1.11) 56(84) bytes of data.
64 bytes from 192.168.1.11: icmp_seq=1 ttl=64 time=0.071 ms
64 bytes from 192.168.1.11: icmp_seq=2 ttl=64 time=0.070 ms
64 bytes from 192.168.1.11: icmp_seq=3 ttl=64 time=0.069 ms
64 bytes from 192.168.1.11: icmp_seq=4 ttl=64 time=0.072 ms
64 bytes from 192.168.1.11: icmp_seq=5 ttl=64 time=0.067 ms
^C
--- 192.168.1.11 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4075ms
rtt min/avg/max/mdev = 0.067/0.069/0.072/0.010 ms
:~# ping 192.168.1.1
PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
^C
--- 192.168.1.1 ping statistics ---
6 packets transmitted, 0 received, 100% packet loss, time 5105ms
All configurations are identical to this resource.
The referenced resource has leftsubnet=0.0.0.0/0 set, which causes the VPN connection to route everything through the VPN by default. The simplest fix is to change that if you can: add all public ranges to that list and omit the private ranges, perhaps keeping one special private range to reach the server's LAN. Otherwise you have to manage the local routing on the connecting client manually. (If both sides use strongSwan it should be possible to narrow the traffic selectors on either side without breaking the SA completely, but I am not certain whether specifying multiple subnets works with IKEv1 between a strongSwan client and server, or whether you would need to define multiple SAs then.)
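As a sketch only (the conn name and the subnet are illustrative assumptions, not taken from the referenced resource), narrowing the server-side traffic selector in ipsec.conf could look like this:
conn ikev2-rw
    # instead of leftsubnet=0.0.0.0/0 (tunnel everything), list only the
    # ranges that should go through the VPN, e.g. the server-side LAN:
    leftsubnet=192.168.100.0/24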
Regarding "only way to establish connection"... I'm wondering whether that means You really have the example confiuration (ike2-rw in ipsec.conf) and started daemon and it is not working - but the example is working on server. I had problems with the Strongswan on Ubuntu 18.04 server side (the VPN gateway), it was connecting but connection came not up. The client I did not try. But I found the Ubuntu 18.04 package is broken (or was back then, a few monmth ago) and upgraded my Ubuntu. With 19.04 it works like a charm. (What is Your journal for the strongswan service saying and syslog - or better the /var/log/charon.log when You try to bring up the client as per documentation?)

Receive specific multicast message on a client connected over VPN

Case:
[ Subnet A , 192.168.2.0/24, Padavan firmware based internet gw ]
[ Subnet B , 192.168.1.0/24, Padavan firmware based internet gw ]
A host from subnet A (2.155) is connected via VPN (possible options: PPTP, OpenVPN, L2TP without IPsec) to subnet B, and receives an address, say 1.245/32.
In subnet B there is a host (1.10/32) which sends multicast datagrams to 224.0.0.50:9898; on the router I see them with:
tcpdump -i br0 -c 10 dst host 224.0.0.50 and port 9898 and multicast
13:46:54.345369 IP 192.168.1.10.4321 > 224.0.0.50.9898: UDP, length 135
I am looking for a solution to receive/forward those multicast messages so they can be seen by hosts connected via VPN.
On router B, which is Padavan-firmware based, I am limited to the udpxy and igmpproxy utilities, if needed.
The client host is Debian-based and generally not limited in tools.
The datagrams are a proprietary protocol, i.e. not an IPTV or video stream.
Any ideas are welcomed.
[UPD] Additional info - per discussion in comments
That's a very specific hardware device which is not very chatty in Ethernet terms (say, at most 1-2 datagrams per 5 seconds), so it should certainly be forwardable. Unfortunately, it sends status updates purely via multicast. A similar device plus control software exists in subnet A. Thus I am looking for a way for datagrams multicast to 224.0.0.50:9898 in subnet B to re-appear in subnet A, perhaps with the help of some tool: maybe smcroute, maybe udpxy, maybe igmpproxy.
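(For the record, a routed attempt with smcroute, one of the tools named above, would be configured roughly as below; the interface names are illustrative assumptions. Note that routed multicast forwarding decrements the TTL, so if the device sends with TTL 1 the datagrams would not survive routing, which is one reason the working solution below bridges at L2 instead.)
# /etc/smcroute.conf (sketch; interface names are assumptions)
# join the group on the LAN side so the kernel receives the datagrams
mgroup from br0 group 224.0.0.50
# statically forward them to the VPN interface
mroute from br0 group 224.0.0.50 to tap0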
As I don't like to leave resolved questions unanswered, here is the currently working solution.
In subnet B I have installed an OpenVPN server endpoint, configured as L2 (TAP).
In subnet A, on a control host, I have installed an OpenVPN client that connects to subnet B; the assigned interface is tapz.
20: tapz: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 100
link/ether 0a:da:be:96:78:d9 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.245/24 brd 192.168.1.255 scope global noprefixroute tapz
valid_lft forever preferred_lft forever
inet6 fe80::8da:beff:fe96:78d9/64 scope link
valid_lft forever preferred_lft forever
So now on the control host I have:
multicasts from the local device on the physical Ethernet enp5s0:
sudo tcpdump -i enp5s0 -c 10 dst host 224.0.0.50 and port 9898 and multicast
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp5s0, link-type EN10MB (Ethernet), capture size 262144 bytes
13:55:05.642963 IP lumi-gateway-v3_miio56591509.4321 > 224.0.0.50.9898: UDP, length 136
and I also receive multicasts from the remote network device on tapz:
sudo tcpdump -i tapz -c 10 dst host 224.0.0.50 and port 9898 and multicast
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tapz, link-type EN10MB (Ethernet), capture size 262144 bytes
13:53:49.141751 IP 192.168.1.10.4321 > 224.0.0.50.9898: UDP, length 135
So far this is what I was looking for: I am getting the necessary datagrams on the VPN client. OpenVPN on the remote side can also be tuned to filter which multicast traffic is forwarded.
For those who come here with the same question: once you have the necessary multicast on tap0, you can create a bridge from, say, eth0 and tap0 (commands and a short follow-up note below):
ip link add br0 type bridge
ip link set tap0 master br0
ip link set eth0 master br0
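One hedged follow-up (not part of the original answer): the new bridge must be brought up before traffic flows across it, and if eth0 carried the host's address, that address typically has to move to br0:
ip link set br0 up
# if eth0 held the host's IP, move it to the bridge (address is illustrative):
# ip addr del 192.168.2.155/24 dev eth0
# ip addr add 192.168.2.155/24 dev br0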
PoC - both multicasts on a single interface:
sudo tcpdump -i br0 dst host 224.0.0.50 and port 9898
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on br0, link-type EN10MB (Ethernet), capture size 262144 bytes
21:09:51.823632 IP 192.168.1.10.4321 > 224.0.0.50.9898: UDP, length 135
21:09:55.045138 IP 192.168.2.214.4321 > 224.0.0.50.9898: UDP, length 136

Why does forwarding a public IP to a Docker NAT IP fail?

For example, on the physical machine:
# ip addr
5: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 ..
inet 10.32.230.90/24 scope global eth2
valid_lft forever preferred_lft forever
inet 10.32.230.61/24 scope global secondary eth2
valid_lft forever preferred_lft forever
8: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
link/ether 02:42:65:1b:b0:25 brd ff:ff:ff:ff:ff:ff
inet 172.17.42.1/16 scope global docker0
"10.32.230.90" is the main IP of this machine, and "10.32.230.61" is secondary added with "ip addr add 10.32.230.61/24 dev eth2".
After creating a docker instance, with IP = 172.17.0.10, I add the following rules to connect native IP with secondary IP:
# iptables -A POSTROUTING -j MASQUERADE
# iptables -t nat -A PREROUTING -d 10.32.230.61 -j DNAT --to 172.17.0.10
# echo 1 > /proc/sys/net/ipv4/ip_forward
But it doesn't work: an external PC still cannot reach 10.32.230.61, though it can reach 10.32.230.90. What's the solution?
(From another PC, whose IP is, for example, 10.32.230.95:)
# ping 10.32.230.90
PING 10.32.230.90 (10.32.230.90) 56(84) bytes of data.
64 bytes from 10.32.230.90: icmp_seq=1 ttl=52 time=280 ms
64 bytes from 10.32.230.90: icmp_seq=2 ttl=52 time=336 ms
^C
# ping 10.32.230.61
(Timeout..)
I am sure there is no IP conflict: 10.32.230.61 is not used by any other host.
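A hedged sketch of a likely fix (an assumption on my part; no accepted answer is quoted here): the MASQUERADE rule above was given without -t nat, and the filter table has no POSTROUTING chain, so that rule was probably never installed. Both rules belong in the nat table:
# rewrite the destination of traffic arriving at the secondary IP
iptables -t nat -A PREROUTING -d 10.32.230.61 -j DNAT --to-destination 172.17.0.10
# masquerade container traffic leaving through the physical interface
iptables -t nat -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
# keep forwarding enabled
echo 1 > /proc/sys/net/ipv4/ip_forward
(Depending on the Docker version, the FORWARD chain policy may also need an ACCEPT rule for the container traffic.)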

How to mount over an IPv6 address on Linux

I am trying to mount a folder from an Ubuntu system that has both an IPv6 and an IPv4 address.
root@:/home# ifconfig
br0 Link encap:Ethernet HWaddr 16:37:81:2e:ce:e9
inet addr:10.0.3.24 Bcast:10.0.7.255 Mask:255.255.248.0
inet6 addr: 2001:db8::60fe:5bff:febc:912/64 Scope:Global
inet6 addr: 2001:db8::e8a6:7d68:16b8:3d86/64 Scope:Global
I am able to ping the IPv6 address from a different Linux system:
[root@Abhitesh home]# ping6 2001:db8::60fe:5bff:febc:912
PING 2001:db8::60fe:5bff:febc:912(2001:db8::60fe:5bff:febc:912) 56 data bytes
64 bytes from 2001:db8::60fe:5bff:febc:912: icmp_seq=1 ttl=64 time=0.968 ms
64 bytes from 2001:db8::60fe:5bff:febc:912: icmp_seq=2 ttl=64 time=1.07 ms
I get an error when I try to mount over IPv6:
[root@Abhitesh home]# mount -t nfs 2001:db8::60fe:5bff:febc:912:/home/abhitesh /home/mount/
mount.nfs: mount system call failed
[root@Abhitesh home]#
With IPv4 the mount command works.
Is my command wrong, or do I need to configure something to mount over IPv6?
On my system IPv6 is enabled:
cat /proc/sys/net/ipv6/conf/all/disable_ipv6
0
The command is wrong. The NFS mount command should be:
mount -t nfs [2001:db8::60fe:5bff:febc:912]:/home/abhitesh /home/mount/
The IPv6 address should be in square brackets.
For CIFS the mount command should be:
mount -t cifs -o username=xxxx,password=yyyyy //2001:db8::60fe:5bff:febc:914/public /home/mount
For those who might ever need it, below are the entries that worked for me, using Arch Linux and link-local IPv6 addresses.
/etc/exports entry on the server
/dir1 fe80::blah:blah:blah:blah(rw,sync,nohide)
/etc/fstab entry on client
[fe80::boom:boom:boom:boom%wlan0]:/dir1 /home/a/b/c nfs noatime,noauto,users 0 0
Command when mounting manually
sudo mount [fe80::boom:boom:boom:boom%wlan0]:/dir1 /home/a/b/c
