Multicast packets are there but cannot be accessed - networking

My box runs Ubuntu 14.04. It is an old 32-bit box with 4 Ethernet NICs.
What I want to achieve is multicast routing from an upstream interface (eth2.8 - dynamic IP) to a downstream interface (eth0.13 - 192.168.40.1).
My laptop, attached to the above box via eth0.13, can read multicast from 40.1 like a charm.
I verified that by running VLC as a server on 40.1
cvlc -vvv ./POS-Movie-927x521.mov --sout udp:239.255.12.42 --ttl 12
and receiving the stream on my laptop with
vlc udp://#239.255.12.42
That works even the other way round, sending from my laptop and receiving on the server side.
So why is it not possible to access multicast packets via eth2.8?
Joining works. I can verify arriving packets with
sudo tcpdump -i eth2.8 -n multicast
but it seems simply impossible to access these packets without tcpdump!
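To double-check that the join itself succeeded, a quick sketch (both commands are standard on 14.04):
ip maddr show dev eth2.8   # multicast groups joined on that interface
cat /proc/net/igmp         # the kernel's per-interface IGMP membership table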
This describes exactly what I am experiencing, except the solution is not the same.
Here are some sysctl parameters:
net.ipv4.conf.eth2/8.rp_filter = 1
net.ipv4.conf.eth2/8.mc_forwarding = 1
net.ipv4.conf.eth2/8.forwarding = 1
There is no difference between the sysctl params of eth2.8 and eth0.13.
And yes, this happens even if the firewall is down!
Any hint appreciated, you'll make my week!
/markus

The unicast route to the upstream hosts was missing!
The interface accepted incoming IGMP traffic from an IP in its own class C net but refused packets from other hosts.
Unluckily, the upstream is in some completely different network.
A simple "ip route add ip/mask dev eth2.8" finally solved all problems.
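For anyone hitting the same wall, a sketch of the diagnosis and fix; 10.20.30.0/24 is a made-up stand-in for the real upstream source network:
# strict reverse-path filtering (rp_filter = 1) drops packets whose source
# address has no route back out the receiving interface
sysctl net.ipv4.conf.eth2/8.rp_filter
# add a unicast route covering the upstream multicast sources
sudo ip route add 10.20.30.0/24 dev eth2.8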

Related

Unable to receive snmp-traps on UDP 162

Thanks in advance for the help.
Issue:
Unable to receive SNMP traps on UDP port 162.
Scenario: Trying to put a Nexus 5672 into OpenNMS for monitoring.
Pre-checks done:
I am able to snmpwalk the Nexus 5k from the Linux node on which OpenNMS is installed.
I am even able to do snmpgets.
I see SNMP traffic on UDP 161, but that is primarily due to the snmpgets that OpenNMS is doing.
I am unable to see anything when I do a tcpdump on port 162 :(
I have checked whether any ACLs are set locally; they are not, and iptables as a service is stopped.
I have verified that the SNMP configs are properly pushed.
The configs are pushed on the loopback interface, there are no ACL groups on the Nexus 5k either, and there is no firewall between the Nexus 5k and the Linux system hosting OpenNMS.
Please help, I do not know what I am missing.
OK, first of all, there are two concepts with SNMP. The first one is polling: the monitoring application sends requests to your Nexus device to read sensor data or discover elements. This is what you do when you issue an snmpwalk or snmpget command. The Nexus device has an SNMP agent running which listens on port 161/UDP.
The second one is that your Nexus device can send unsolicited messages, called SNMP traps or informs, to your monitoring application. Your monitoring application with OpenNMS needs to have a listener running on port 162/UDP.
So trying to debug the problem of not getting SNMP traps with snmpget or snmpwalk does not help in the first place. The communication is initiated by the Nexus device, and OpenNMS is the listener for the traps.
I would try to debug the problem with the following steps:
Ensure OpenNMS has Trapd enabled and is listening on the right interfaces, e.g. with ss -lnpu sport = :162
Make sure you don't have a firewall on your OpenNMS server which blocks traffic to 162/UDP, e.g. iptables -L
Use tcpdump to see if the SNMP trap from your Nexus arrives on the OpenNMS server, by looking at UDP traffic with destination port 162 (see the sketch after this list).
If your SNMP trap is received by the OpenNMS server, you can then start looking in trapd.log on your OpenNMS server and verify whether the community settings for the sender's IP are correct. OpenNMS will use the community configured for the sender's IP address to process the trap.
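For the capture step, a concrete invocation could look like this (the interface choice is an assumption; -i any watches all interfaces):
sudo tcpdump -i any -n udp dst port 162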
Hope this helps.
This got resolved. Everything on the Linux end and the OpenNMS SNMP end was good. However, the network device had its SNMP configs wrongly pushed. I changed it to use the default VRF rather than the loopback address, and it started working.

GRE tunnel issues - one-sided communication

I have two machines:
Ubuntu 16.04 server VM (172.18.6.10)
Proxmox VE5 station (192.168.6.30)
They are communicating through a third machine that forwards packets between the two. I want to create a GRE tunnel between the two machines, and to make it persistent I have edited /etc/network/interfaces and added a GRE interface and tunnel to be created on boot (the exact stanza is omitted here).
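A minimal sketch of what such an ifupdown stanza could look like on the Ubuntu side, assuming the addresses from the question (mirror it on the Proxmox side with local/remote swapped and address 10.10.10.2):
auto gre1
iface gre1 inet static
    address 10.10.10.1
    netmask 255.255.255.252
    pre-up ip tunnel add gre1 mode gre local 172.18.6.10 remote 192.168.6.30 ttl 255
    post-down ip tunnel del gre1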
After they were created, I tried to ping one machine from the other to check connectivity, pinging the GRE interface IP addresses (10.10.10.1 and 10.10.10.2). The issue is that when I ping the Proxmox machine from Ubuntu I get no feedback, but when I run tcpdump on gre1 on Proxmox I see that the packets are received and there is an outgoing ICMP reply.
When I run the ping the other way around and check with tcpdump on the Ubuntu machine, I get nothing. I understand the issue is that packets leaving Proxmox for Ubuntu via gre1 get lost or blocked, because Ubuntu can clearly send packets to Proxmox but the reply never comes back. How can I fix this?
Check if you have packet forwarding enabled in the kernel of the third machine that you use for communication between the other two.
Check /etc/sysctl.conf and see if you have this:
net.ipv4.ip_forward = 1
If it's commented out (#), uncomment it, save the file, and issue:
sysctl -p
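The value can also be read and flipped at runtime without editing the file (a quick check; this change is lost on reboot):
sysctl net.ipv4.ip_forward            # prints net.ipv4.ip_forward = 0 or 1
sudo sysctl -w net.ipv4.ip_forward=1  # enable forwarding immediately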
Then try the pings again...

Sending Multicast Packets from Docker Container (to multicast group)

I have an application that sends messages over UDP multicast, which I've been attempting to put under Docker. I've been running into a lot of headwind trying to send multicast packets from a Docker container.
I have been able to send messages with the --net=host option when running the Docker container. I would, however, like to stick with a bridge configuration.
I would like some insight into what needs to be done to publish messages through the standard Docker bridge configuration. I'm attempting to publish messages to 239.9.60.250 on port 16000. I have tried publishing UDP port 16000 with the following argument on docker run:
-P 0.0.0.0:16000:16000/udp
This doesn't give me any change in behavior, and my host doesn't see any multicast traffic.
Docker network drivers have no IGMP/PIM support, so you should really establish a direct layer 2 connection from the container to the physical switch/router.
As you have found out yourself, Docker's default bridge network will not help you here.
I haven't tested it with multicast, but you should be able to achieve that with Pipework.
The macvlan driver should help with your problem, but it is still experimental as of Docker Engine 1.11.
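A sketch of the macvlan approach, once it fits your engine version; the subnet, gateway, parent interface (eth0), network name, and image name are all assumptions to adapt:
docker network create -d macvlan \
    --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
    -o parent=eth0 mcast_net
docker run --rm --net=mcast_net my_multicast_app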

Capturing Windows XP localhost TCP traffic

I have done a fair amount of reading on this subject: capturing Windows XP localhost TCP traffic.
There seem to be a couple of methods:
1. Using RawCap.exe won't work, as Windows XP handles localhost outside the normal network stack.
2. Using a tool like SocketSniff, which looks at Winsock calls for a particular process (I may try this).
3. Using Proxocket DLLs to output a cap file for all Winsock traffic for a particular application (may not work depending on the version of the application or the version of Windows).
4. Wireshark won't work in this scenario, for the same kind of reason that RawCap.exe won't work.
I have read this Wireshark wiki article in detail: https://wiki.wireshark.org/CaptureSetup/Loopback and my question references this section:
So let's say I decide to install a Windows loopback adapter.
Next I need to do this:
1. go to MS Loopback adapter properties, set IP 10.0.0.10, MASK 255.255.255.0
2. ipconfig /all and look at the MAC-ID for your new adapter.
3. arp -s 10.0.0.10 <MAC-ID>
4. route add 10.0.0.10 10.0.0.10 mask 255.255.255.255
5. to test: "telnet 10.0.0.10"
Now there are some things I don't understand, which I would like explained about this sequence of steps. I have an application I want to watch which makes calls to 127.0.0.1 or 'localhost'.
I install my MS Loopback adapter, set its IP and Mask.
I then grab the MAC address.
Via arp, I then add a static cache entry so 10.0.0.10 resolves to the physical device.
I then add a route from 10.0.0.10 to itself, 10.0.0.10
Now at this point, surely capturing on this MS Loopback adapter still won't pick up 127.0.0.1 or localhost, will it? It would only pick that up if I had my application pointing at 10.0.0.10 as 'localhost'?
Can somebody please clarify? Perhaps my understanding is incorrect and it would indeed work.
I decided to try SocketSniff, and it solved my problem entirely: it picked up the calls the application I wanted to monitor was making, and I was able to continue happily programming after that!

Bridging commands and concept: Ubuntu 12.04 LTS

I am using bridging as a technique to connect 2 virtual interfaces together in Ubuntu 12.04.
One of the interfaces is a mininet interface (www.mininet.org).
I am getting a lot of TCP retransmission packets, and the connectivity is extremely slow.
I am trying to debug this issue.
I have tried to enable STP on the bridge, but it doesn't take effect:
~$ brctl show
bridge name     bridge id               STP enabled     interfaces
s1              0000.f643bed86249       no              s1-eth1
                                                        s1-eth2
                                                        s1-eth3
s2              0000.caf874f68248       no              s2-eth1
~$ sudo brctl stp s2 on
~$ brctl show
bridge name     bridge id               STP enabled     interfaces
s1              0000.f643bed86249       no              s1-eth1
                                                        s1-eth2
                                                        s1-eth3
s2              0000.caf874f68248       no              s2-eth1
I am confused as to why this command does not work.
Also, auto-negotiation is off in these interfaces.
Does autonegotiation matter for virtual interfaces?
Should I manually set auto-negotiation to 'on' or set the duplex and speed of virtual interfaces?
Also, ping and DNS work perfectly fine. For HTTP traffic, the SYN, SYN-ACK and ACK are as expected; however, the GET/POST request gets retransmitted 5-6 times immediately after the first GET/POST.
This is confusing for me right now, and any links/pointers/commands would be helpful.
Please direct me to the right forum if this is not a question for Stack Overflow. TIA.
STP exists to solve Layer 2 loops and the broadcast storms they cause. It has nothing to do with TCP retransmissions.
Maybe you can check the DNS resolve timeout in your case, and turn on the web server's debug log.
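To verify whether the kernel actually enabled STP, regardless of what brctl prints, the bridge state is also exposed in sysfs (a quick check, assuming the standard kernel bridge module):
cat /sys/class/net/s2/bridge/stp_state                 # 0 = disabled, 1 = kernel STP enabled
echo 1 | sudo tee /sys/class/net/s2/bridge/stp_state   # force it on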
