Capturing TCP packets via tcpdump

I'm trying to capture TCP packets from a GPS device (the client) configured to send to port 11050 on my server's eth1 interface. I want to capture these packets to a file, but the result is not in a human-readable format. Below is a list of the commands I tried, with no results. Please help...
tcpdump -w test.pcap -i eth1 tcp port 11050
tcpdump -i eth1 -X -s 11050 -w test1
Both test1 and test.pcap read like the below!
��ق���ق���ʣ�N�%��S�U$��E�MO#5�&߶y?vf��6++�qe��>ۀ

tcpdump writes the captured data in a binary format suitable for re-parsing later with tcpdump, Wireshark, tshark, etc., which is why the file looks like garbage in a text editor. (Note also that in your second command, -s 11050 sets the snapshot length in bytes, not the port; the port belongs in the filter expression, as in your first command.)
Re-read the file with tcpdump -r test.pcap and you'll get human-readable output:
$ tcpdump -r ./test.pcap
reading from file ./test.pcap, link-type EN10MB (Ethernet)
23:25:32.646075 ARP, Request who-has moxi-00067F274580 tell haig, length 28
23:25:32.646322 ARP, Reply moxi-00067F274580 is-at 00:06:7f:27:45:80 (oui Unknown), length 46
23:25:34.567932 IP haig.36941 > 192.168.0.1.domain: 36648+ A? www.google.com. (32)
...
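If you want to inspect the payload bytes coming from the GPS device as well, -r can be combined with -X and the same port filter; a sketch using the capture file from the question:
$ tcpdump -r ./test.pcap -X 'tcp port 11050'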

Related

TCP push packet not delivered from tun

I set up a simple packet-intercept program using two tun devices, configured like this:
# ip tuntap add mode tun name tun0
# ip link set tun0 up
# ip addr add 10.0.0.0/31 dev tun0
# ip tuntap add mode tun name tun1
# ip link set tun1 up
# ip addr add 10.0.1.0/31 dev tun1
and redirect traffic to the program like this:
# ip rule add fwmark 1 table 1
# ip route add default dev tun0 table 1
# iptables -t mangle -A OUTPUT --source 192.168.1.0 -o enp34s0 -p tcp --dport 9732 -j MARK --set-mark 1
# iptables -t nat -A POSTROUTING --source 10.0.1.1 -o enp34s0 -j MASQUERADE
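One way to confirm that marked traffic really is steered into table 1 is ip route get with the fwmark set (the destination address below is just a placeholder); the reply should mention dev tun0 and table 1:
# ip route get 203.0.113.1 mark 1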
I enabled ip_forward and disabled rp_filter. Packets received on tun0 are processed, modified, and the IP/TCP checksums are updated.
I can even correctly intercept the TCP handshake (SYN -> SYN,ACK -> ACK) part of the communication, but after that any incoming packet is correctly intercepted, modified, and sent out of the tun, yet it is never delivered to the local application.
OK, found the problem: while I did recalculate the checksum, it came out correct only for TCP packets without any payload, so the TCP handshake got through and nothing else.
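For anyone debugging something similar: tcpdump verifies checksums when run with enough verbosity, which makes this kind of bug visible immediately. A minimal check, assuming the interface and port from the setup above:
# tcpdump -i tun1 -vv 'tcp port 9732'
Packets with a wrong TCP checksum are printed with a note like "cksum 0x1a2b (incorrect -> 0x3c4d)". (On physical NICs, checksum offload can make outgoing packets look incorrect even when they are fine; a tun device has no offload, so the report can be trusted.)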

Receive specific multicast message on a client connected over VPN

Case:
[ Subnet A , 192.168.2.0/24, Padavan firmware based internet gw ]
[ Subnet B , 192.168.1.0/24, Padavan firmware based internet gw ]
A host from subnet A (2.155) is connected via VPN (possible options: PPTP, OpenVPN, L2TP without IPsec) to subnet B, and receives an address, say 1.245/32.
In subnet B there is a host (1.10/32) which sends multicast datagrams to 224.0.0.50:9898; on the router I see them with:
tcpdump -i br0 -c 10 dst host 224.0.0.50 and port 9898 and multicast
13:46:54.345369 IP 192.168.1.10.4321 > 224.0.0.50.9898: UDP, length 135
I am looking for a solution to receive/forward those multicast messages so they can be seen by the hosts connected via VPN.
Router B runs Padavan-based firmware, so there I am limited to the udpxy and igmproxy utilities, if needed.
The client host is Debian-based and generally not limited in tools.
The datagrams are a proprietary protocol, i.e. not an IPTV or video stream.
Any ideas are welcome.
[UPD] Additional info, per the discussion in the comments:
It's a very specific hardware device which is not very chatty in Ethernet terms (at most 1-2 datagrams every 5 seconds), so it should certainly be forwardable. Unfortunately, it sends its status updates purely via multicast. A similar device plus control software already exists in subnet A, so I am looking for a way for the datagrams multicast to 224.0.0.50:9898 in subnet B to re-appear in subnet A, possibly with the help of some tool: maybe smcroute, maybe udpxy, maybe igmproxy.
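Of the tools mentioned, smcroute handles exactly this kind of static multicast forwarding; a minimal /etc/smcroute.conf sketch for a routed (L3) setup, with tun0/eth0 as placeholder interface names, would be:
mgroup from tun0 group 224.0.0.50
mroute from tun0 group 224.0.0.50 to eth0
Note, however, that 224.0.0.50 sits in the link-local control block 224.0.0.0/24, which multicast routers do not forward; that is likely why the L2 (bridged) approach described below is what actually worked.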
As I don't like to leave resolved questions unanswered, here is the currently working solution.
In subnet B I installed an OpenVPN server endpoint, configured as L2 (tap).
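The server config itself was not posted; for reference, the L2-relevant directives of an OpenVPN server.conf look roughly like this (the addresses are assumptions based on the subnets above, and tap0 must be enslaved to the server's LAN bridge):
dev tap0
server-bridge 192.168.1.1 255.255.255.0 192.168.1.240 192.168.1.250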
In subnet A, on a control host, I installed an OpenVPN client that connects to subnet B; the assigned interface is tapz:
20: tapz: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 100
    link/ether 0a:da:be:96:78:d9 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.245/24 brd 192.168.1.255 scope global noprefixroute tapz
       valid_lft forever preferred_lft forever
    inet6 fe80::8da:beff:fe96:78d9/64 scope link
       valid_lft forever preferred_lft forever
So now on a control host I have:
multicast from the local device on the physical Ethernet interface enp5s0:
sudo tcpdump -i enp5s0 -c 10 dst host 224.0.0.50 and port 9898 and multicast
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp5s0, link-type EN10MB (Ethernet), capture size 262144 bytes
13:55:05.642963 IP lumi-gateway-v3_miio56591509.4321 > 224.0.0.50.9898: UDP, length 136
and now I also receive the multicast from the remote network's device on tapz:
sudo tcpdump -i tapz -c 10 dst host 224.0.0.50 and port 9898 and multicast
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tapz, link-type EN10MB (Ethernet), capture size 262144 bytes
13:53:49.141751 IP 192.168.1.10.4321 > 224.0.0.50.9898: UDP, length 135
So this is what I was looking for: I am getting the necessary datagrams on the VPN client. OpenVPN on the remote side can also be tuned to filter which multicast traffic gets forwarded.
For those who come here with the same question: once you have the necessary multicast on tap0, you can create a bridge from, say, eth0 and tap0 (and bring it up):
ip link add br0 type bridge
ip link set tap0 master br0
ip link set eth0 master br0
ip link set br0 up
POC: both multicasts on a single interface:
sudo tcpdump -i br0 dst host 224.0.0.50 and port 9898
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on br0, link-type EN10MB (Ethernet), capture size 262144 bytes
21:09:51.823632 IP 192.168.1.10.4321 > 224.0.0.50.9898: UDP, length 135
21:09:55.045138 IP 192.168.2.214.4321 > 224.0.0.50.9898: UDP, length 136

Cannot ping gateway for external Openstack network

I installed OpenStack-Ansible, Pike version. There is a separate network controller with one physical network interface on it. We created VLAN 139 that carries the traffic to the gateway. The config file for that part looks like:
/etc/network/interfaces
...
auto eno1.139
iface eno1.139 inet manual
    vlan-raw-device eno1

# OpenStack Networking VLAN bridge
auto br-vlan
iface br-vlan inet manual
    bridge_stp off
    bridge_waitport 0
    bridge_fd 0
    bridge_ports eno1.139
We created an external OpenStack network using:
openstack network create --external --share --provider-physical-network vlan --provider-network-type vlan --provider-segment 139 provider1
and did all the other steps (subnet, router, etc.).
As per the documentation, the first test should be pinging the default gateway from the router namespace. When I try that, it does not work:
root@infra1-neutron-agents-container-e800e983:/# ip netns exec qrouter-eb842b12-9a35-4a93-baa9-38cc73531d9f ping 139.25.25.193
When I do a tcpdump on the physical network interface of the controller node, I can see the packets going out without any problem:
openstackadmin@clcontroller:~$ sudo tcpdump -i eno1 --immediate-mode -e -n | grep 139.25.25.193
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eno1, link-type EN10MB (Ethernet), capture size 262144 bytes
16:30:09.182894 fa:16:3e:d4:b6:a1 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 50: vlan 139, p 0, ethertype 802.1Q, vlan 139, p 0, ethertype ARP, Request who-has 139.25.25.193 tell 139.25.25.200, length 28
I see the ARP request reaching the gateway that has 139.25.25.193, which I am trying to ping:
hpadmin@hos-gw01:~$ sudo tcpdump -i any --immediate-mode -e -n | grep 139.25.25.193
[sudo] password for hpadmin:
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
15:53:29.857281 B fa:16:3e:d4:b6:a1 ethertype 802.1Q (0x8100), length 62: vlan 139, p 0, ethertype 802.1Q, vlan 139, p 0, ethertype ARP, Request who-has 139.25.25.193 tell 139.25.25.200, length 38
15:53:29.857281 B fa:16:3e:d4:b6:a1 ethertype 802.1Q (0x8100), length 58: vlan 139, p 0, ethertype ARP, Request who-has 139.25.25.193 tell 139.25.25.200, length 38
What is confusing is that my gateway is not responding to those ARP requests.
If I try the same thing from a standalone Linux machine connected to the same network segment and the same VLAN, everything works perfectly.
Any idea what the problem might be? Thanks in advance.
It seems the problem was that the external OpenStack network was set up to be on VLAN 139. Once we changed it to flat, everything started working without any problems. I am still confused, though, about why the gateway did not send ARP responses. (The captures above suggest an explanation: the frames carry two 802.1Q tags for VLAN 139, because Neutron tagged the traffic again on top of the already-tagged eno1.139 sub-interface, and the gateway presumably dropped the double-tagged frames.)
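For reference, the flat variant of the create command would look roughly like this; a sketch only, since the physical network label has to match the flat_networks mapping in the (unposted) ML2 configuration:
openstack network create --external --share --provider-physical-network flat --provider-network-type flat provider1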

iptables rules to restrict eth1 access to ports 80 and 443

I have a service listening for customer traffic on ports 80 and 443 of eth1. The servers hosting my service also host other admin/privileged-access content on eth0 and localhost.
I am trying to set up iptables rules to lock down eth1, which is on the same network as the clients (block things like SSH through eth1, or access to internal services running on port 9904, etc.). I also want to make sure that the rules don't forbid regular access to eth1:80 and eth1:443. I have come up with the rules below, but wanted to check with iptables gurus for possible issues with them.
-A INPUT -i eth1 -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -i eth1 -p tcp -m tcp --dport 443 -j ACCEPT
-A INPUT -i eth1 -j DROP
Do the rules above suffice?
How does the above differ from the rules found when googling?
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-A INPUT -i eth1 -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -i eth1 -p tcp -m tcp --dport 443 -j ACCEPT
-A INPUT -i eth1 -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -j DROP
-A INPUT -i eth1 -p tcp -j ACCEPT
-A INPUT -i eth1 -j DROP
Thanks, I got this answered at https://serverfault.com/questions/834534/iptable-rules-to-restrict-eth1-access-to-ports-80-and-443; adding it here for completeness.
The first set of rules first allows all incoming packets on your ports 80 and 443, then drops ALL other incoming packets (except those already accepted).
The second set of rules first allows all incoming packets on ports 80 and 443. Then it drops new incoming connections (excluding those to 80 and 443, which are already accepted), which are packets with only the SYN flag set. Then it allows all remaining incoming TCP packets.
The difference here is what happens to your OUTGOING connections. In the first ruleset, if you attempt to connect to another server, any packets that server sends in response will be dropped, so you will never receive any data. In the second case, those packets will be allowed, since the first packet from the remote server will have both SYN and ACK set and therefore pass the SYN test, and any following packets will not have SYN set at all, and therefore pass the test.
This has traditionally been done using conntrack, which requires the kernel to keep track of every connection in the firewall, with a command like
-A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
that matches the incoming packet either to an existing connection, or to a connection related to some other existing connection (e.g. FTP data connections). If you aren't using FTP or other protocols that use multiple random ports, then the second ruleset achieves basically the same result without the overhead of tracking and inspecting these connections.
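Putting that together, a conntrack-based variant of the first ruleset might look like the sketch below; it is still restricted to eth1 as in the question, not a tested drop-in ruleset:
-A INPUT -i eth1 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
-A INPUT -i eth1 -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -i eth1 -p tcp -m tcp --dport 443 -j ACCEPT
-A INPUT -i eth1 -j DROP
This keeps replies to the server's own outgoing connections working on eth1 while still blocking new inbound connections to anything but ports 80 and 443.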

Netperf reporting zero throughput

I've installed netperf 2.6 at two sites and am trying to run the netperf benchmark, but all I'm getting is zero throughput... Does anyone know how to use netperf properly? (I was following the official documentation.)
I run this on the server:
./netserver -p xxxxx
the output is:
Starting netserver with host 'IN(6)ADDR_ANY' port 'xxxxx' and family AF_UNSPEC
On the other side I run:
./netperf -s 5 -H a.b.c.d -p xxxxx
The output is:
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to a.b.c.d () port 0 AF_INET : demo
Recv Send Send
Socket Socket Message Elapsed
Size Size Size Time Throughput
bytes bytes bytes secs. 10^6bits/sec
87380 16384 16384 10.00 0.00
Any ideas?
A netperf test has two "connections." The first is the "control connection" over which information about the test setup and result is exchanged. For the benchmarking itself a "data connection" is used. The control connection will use the control port you've specified with the global "-p" option. The data connection will by default use a port number chosen by the networking stack where the netserver runs.
Both have to be open through firewalls for a test to be successful.
If only the control port is open, you will see the test banner get displayed, because the control connection is established. Since the data connection cannot be established, the test will report zero throughput.
You can specify an explicit port number for the data connection with a test-specific "-P" option. So, if you opened a second port number, 9992, you would start the netserver as before, and then your netperf command would become:
./netperf -s 5 -H a.b.c.d -p xxxxx -- -P ,9992
That comma is important. The test-specific -P option allows specifying both the local and remote port numbers for the data connection. The remote port number follows a comma.
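For example, if a firewall on the netserver side is what blocks the data connection, opening both ports with iptables would look something like this (9991 and 9992 as the control and data ports, matching the examples in this thread; a sketch, not part of the original answer):
iptables -A INPUT -p tcp --dport 9991 -j ACCEPT
iptables -A INPUT -p tcp --dport 9992 -j ACCEPT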
terminal1:
$ sudo netserver -D -4 -L 0.0.0.0 -p 9991
Starting netserver with host '0.0.0.0' port '9991' and family AF_INET
terminal2:
$ sudo netperf -H 192.168.2.103 -l 60 -t TCP_STREAM
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.2.103 (192.168.2.103) port 0 AF_INET
Recv Send Send
Socket Socket Message Elapsed
Size Size Size Time Throughput
bytes bytes bytes secs. 10^6bits/sec
87380 524288 524288 60.02 89.66
