I am starting to learn SDN with ovs-ofctl and Mininet. I am configuring a switch by following some tutorials, but there is something I don't understand.
I start my topology with:
sudo mn --topo single,2 --controller remote --switch ovsk
Now if I want to add a simple flow between h1 and h2, I do:
sh ovs-ofctl add-flow s1 in_port=1,actions=output:2
sh ovs-ofctl add-flow s1 in_port=2,actions=output:1
If I test connectivity between the hosts, everything works.
But if I now delete all flows and instead try:
sh ovs-ofctl add-flow s1 in_port=1,dl_type=0x806,nw_dst=10.0.0.2,actions=output:2
sh ovs-ofctl add-flow s1 in_port=2,dl_type=0x806,nw_dst=10.0.0.1,actions=output:1
Now if I try to ping, there is no reachability, but if I execute:
sh ovs-ofctl add-flow s1 actions=NORMAL
Now I can ping again between hosts.
What am I missing here? Isn't specifying dl_type=0x806 enough to let the ARP traffic through? Why does the ping fail?
I think the main reason is confusion between all the protocols involved.
(1) Ping uses ICMP, specifically ICMP echo request and ICMP echo reply messages. These messages are encapsulated in IP packets, which are in turn encapsulated in Ethernet frames. In this case the Ethernet next-header field (generally called the EtherType; dl_type here) is set to IP, which is 0x0800.
(2) ARP is necessary for end systems to map IP addresses to MAC addresses. ARP is encapsulated directly in Ethernet frames, with the EtherType set to 0x0806.
Thus
sh ovs-ofctl add-flow s1 in_port=1,dl_type=0x806,nw_dst=10.0.0.2,actions=output:2
will allow only ARP packets through; every non-ARP Ethernet frame matches no rule and is dropped. Thus the ICMP packets carrying the ping are dropped.
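To make ping work with explicit flows, you would need to admit both ARP and IP traffic. A minimal sketch, keeping the same ports and addresses:
sh ovs-ofctl add-flow s1 in_port=1,dl_type=0x0806,nw_dst=10.0.0.2,actions=output:2
sh ovs-ofctl add-flow s1 in_port=2,dl_type=0x0806,nw_dst=10.0.0.1,actions=output:1
sh ovs-ofctl add-flow s1 in_port=1,dl_type=0x0800,nw_dst=10.0.0.2,actions=output:2
sh ovs-ofctl add-flow s1 in_port=2,dl_type=0x0800,nw_dst=10.0.0.1,actions=output:1
With dl_type=0x0806 the nw_dst field matches the ARP target address, and with dl_type=0x0800 it matches the IP destination, so both the ARP exchange and the ICMP echo packets are forwarded.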
(3) The last question is why this works.
sh ovs-ofctl add-flow s1 actions=NORMAL
I am not familiar with the details of OVS, but from what I understand, actions=NORMAL makes OVS act as a normal Linux bridge: it performs standard Ethernet bridging, forwarding frames according to normal MAC-learning rules.
Also, since this rule has no match part, it matches every packet. I am not sure how a rule that combines a match with NORMAL, like the one below, would behave.
sh ovs-ofctl add-flow s1 in_port=1,dl_type=0x806,nw_dst=10.0.0.2,actions=NORMAL
Related
Could someone kindly explain why the following flow config (these flows are the only flows on the bridge) does not work as expected?
I can ping the hosts on each side, but other traffic (e.g. web/ssh) does not pass.
ovs-ofctl add-flow xl1 dl_type=0x800,nw_src=10.2.0.0/20,nw_dst=10.2.1.0/24,actions=output:73
ovs-ofctl add-flow xl1 dl_type=0x800,nw_src=10.2.0.0/20,nw_dst=10.2.2.0/24,actions=output:76
ovs-ofctl add-flow xl1 arp,nw_dst=10.2.1.0/24,actions=output:73
ovs-ofctl add-flow xl1 arp,nw_dst=10.2.2.0/24,actions=output:76
The traces certainly seem to suggest the traffic should pass:
ovs-appctl ofproto/trace xl1 in_port=73,tcp,nw_src=10.2.1.1,nw_dst=10.2.2.1,tcp_dst=22
Bridge: xl1
Flow: tcp,metadata=0,in_port=73,vlan_tci=0x0000,dl_src=00:00:00:00:00:00,dl_dst=00:00:00:00:00:00,nw_src=10.2.1.1,nw_dst=10.2.2.1,nw_tos=0,nw_ecn=0,nw_ttl=0,tp_src=0,tp_dst=22,tcp_flags=0x000
Rule: table=0 cookie=0 ip,nw_src=10.2.0.0/20,nw_dst=10.2.2.0/24
OpenFlow actions=output:76
Final flow: unchanged
Megaflow: skb_priority=0,ip,in_port=73,nw_src=10.2.0.0/20,nw_dst=10.2.2.1,nw_frag=no
Datapath actions: 76
ovs-appctl ofproto/trace xl1 in_port=76,tcp,nw_src=10.2.2.1,nw_dst=10.2.1.1,tcp_dst=22
Bridge: xl1
Flow: tcp,metadata=0,in_port=76,vlan_tci=0x0000,dl_src=00:00:00:00:00:00,dl_dst=00:00:00:00:00:00,nw_src=10.2.2.1,nw_dst=10.2.1.1,nw_tos=0,nw_ecn=0,nw_ttl=0,tp_src=0,tp_dst=22,tcp_flags=0x000
Rule: table=0 cookie=0 ip,nw_src=10.2.0.0/20,nw_dst=10.2.1.0/24
OpenFlow actions=output:73
Final flow: unchanged
Megaflow: skb_priority=0,ip,in_port=76,nw_src=10.2.0.0/20,nw_dst=10.2.1.1,nw_frag=no
Datapath actions: 73
One issue is that 10.2.1.0/24 and 10.2.2.0/24 are not in the same network.
So if a host such as 10.2.1.1 is looking for a host in 10.2.2.0/24, it may not send out an ARP request for that host at all; it may instead ARP for its own gateway on the 10.2.1.0/24 network.
You can use this command:
arp -n
on the hosts in the 10.2.1.0/24 network to inspect their ARP tables.
You can also use:
tcpdump -i "eth-name"
on the Open vSwitch machine to see what actually arrives when you send a non-ICMP packet on the 10.2.1.0/24 or 10.2.2.0/24 network.
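For example, to check whether the hosts are ARPing for a gateway rather than for the remote host, you can restrict the capture to ARP (the interface name is a placeholder):
tcpdump -n -i "eth-name" arp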
I am trying to understand how the bridged docker0 interface works.
When the docker daemon starts up, it creates a bridge device docker0;
When a container starts up, Docker creates a veth interface and attaches it to docker0;
Say we issue a ping command from inside the container to an external host:
[root@f505f022eb5b app]# ping 130.49.40.130
PING 130.49.40.130 (130.49.40.130) 56(84) bytes of data.
64 bytes from 130.49.40.130: icmp_seq=1 ttl=52 time=11.9 ms
So apparently my host's eth0 is receiving the ping reply, but how does this packet get forwarded to the container? There are several questions to ask:
eth0 and docker0 are not bridged, so how does docker0 get the packets from eth0?
Even if docker0 gets the packets, how does it send them on to veth0? Does it internally maintain some map so it can rewrite packets between different MAC addresses?
How is iptables related here?
Docker is not doing anything particularly magical here, and your question is not really Docker-specific.
docker0 is just a network bridge. As soon as this bridge is created (upon starting the docker service), you can assume that a new machine (in this case in VM/container form) has joined your network.
When pinging the docker container from the host, or vice versa, you are basically pinging another machine inside your network.
Regarding docker, unless you have created a new network interface (which I doubt, since you are pinging eth0), you are basically pinging yourself.
If you run the container as:
docker run -i -t --rm -p 10.0.0.99:80:8080 ubuntu:16.04
You are telling docker to create a NAT rule in iptables to forward any packets going to 10.0.0.99:80 to your docker container on port 8080.
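For illustration, such a rule would show up in the DOCKER chain of the NAT table looking roughly like this (the container address 172.17.0.2 is an assumption):
Chain DOCKER (2 references)
target prot opt source    destination
DNAT   tcp  --  anywhere  10.0.0.99    tcp dpt:80 to:172.17.0.2:8080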
When you run the container as:
docker run -i -t --rm --net=host ubuntu:16.04
Then you are saying that the docker container should share the host's network stack, so all packets going to the host are also seen by your docker container (in this mode traffic does not go through the docker0 bridge).
To answer your question of how a container pings an external host: this is also achieved via NAT.
If you list your iptables NAT rules using sudo iptables -t nat -L
you will likely see something similar to the below (the docker subnet may be different):
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE all -- 172.17.0.0/16 anywhere
This basically says: NAT any outgoing packets originating from the docker subnet, so the outgoing packets will appear to originate from the docker host machine. When the ping reply returns, the NAT table is used to determine that a container actually made the request, and the packet gets forwarded to that container's veth interface.
I am new to Mininet and created a topology. I need to enable ECN in the switch created in the mininet topology.
How to enable ECN in the switch?
Since you use OVS version 2.0.2, your switch supports at least up to OpenFlow version 1.3. The Explicit Congestion Notification (ECN) match fields are implemented from OpenFlow 1.1 onward. For the field to be applied, though, you have to tell Mininet that you are going to use a version above 1.0, which is the default. We launch the Mininet topo with a remote controller so we can pass flow modifications manually. To start Mininet, in the terminal we go with:
sudo mn --topo single,3 --mac --controller remote --switch ovsk,protocols=OpenFlow13
Mininet is up, but we also have to tell the switch that we will be passing it OpenFlow 1.3 flow modifications. To do that, in a new terminal we ssh into the Mininet VM and set the protocol version on the bridge with:
sudo ovs-vsctl set bridge s1 protocols=OpenFlow13
Now we can talk to the switch and pass our flow mods, in which we must specify the OpenFlow protocol version again. For a single mod we can do something like:
sudo ovs-ofctl -O OpenFlow13 add-flow s1 in_port=1,actions=output:2
and
sudo ovs-ofctl -O OpenFlow13 add-flow s1 in_port=2,actions=output:1
Now that we have passed two flow modifications manually, the ping between h1 and h2 should work perfectly. To install ECN flow mods we can do something like:
sudo ovs-ofctl -O OpenFlow13 add-flow s1 dl_type=0x0800,nw_ecn=3,actions=output:3
Notice that, as stated in the OpenFlow documentation:
When dl_type=0x0800 or 0x86dd is specified, matches the ECN bits in the IP ToS or IPv6 traffic class fields. When dl_type is wildcarded or set to a value other than 0x0800 or 0x86dd, the value of nw_ecn is ignored.
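To verify what was actually installed, you can dump the flow table using the same protocol flag:
sudo ovs-ofctl -O OpenFlow13 dump-flows s1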
I'm trying to send a broadcast message using netcat.
I have the firewalls open, and sending a regular message like this works for me:
host: nc -l 192.168.1.121 12101
client: echo "hello" | nc 192.168.1.121 12100
But I can't get something like this to work.
host: nc -lu 0.0.0.0 12101
client: echo "hello" | nc -u 255.255.255.255 12100
Am I using the right flags? Note, the host is on Mac and the client on Linux. Can you give me an example that works for broadcasting a message?
The GNU version of netcat might be broken. (I can't get it to work under 0.7.1, anyway.) See http://sourceforge.net/p/netcat/bugs/8/
I've gotten socat to work. The command below does a UDP broadcast to port 24000.
socat - UDP-DATAGRAM:255.255.255.255:24000,broadcast
(In socat-world "-" means "stdin".)
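For the receiving side, a socat sketch that prints incoming datagrams on port 24000 to stdout (option spellings may vary between socat versions):
socat -u UDP-RECV:24000 -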
You're not telling nc that you want to broadcast; broadcasting is enabled with the -b option to nc/netcat.
nc -h 2>&1 | grep -- -b
-b allow broadcasts
A simple example that works on Ubuntu. All the info is in the other answers, but I had to piece it together, so I thought I would share the result.
server
nc -luk 12101
client
echo -n "test data" | nc -u -b 255.255.255.255 12101
The client will hang until you press Ctrl-C.
Sorry if I am assuming wrong, but you mentioned that you have your firewalls set up correctly, so I am guessing that the host and client are not on the same subnet?
If that is the case, and the firewall is also acting as a router (or the packet has to go through a router), then it will process that packet but will not forward it out its other interfaces. If you want that to happen, you need to send a directed broadcast. For example, for the subnet 192.168.1.0/24 the directed broadcast address is 192.168.1.255, the last IP in the subnet. The firewall, assuming it has a route to 192.168.1.0/24 and is set up to forward directed broadcasts, will then forward that broadcast toward the destination or next hop. To configure your device to forward directed broadcasts you will need to consult its documentation; on Cisco IOS you would type "ip directed-broadcast" under the interface.
255.255.255.255 is a limited broadcast and will not get past your routers regardless; it is solely intended for the layer-2 link on which it resides.
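For what it's worth, enabling directed-broadcast forwarding on a Cisco IOS router looks roughly like this (the interface name and addressing are assumptions):
router(config)#interface GigabitEthernet0/1
router(config-if)#ip directed-broadcast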
As for how netcat is set up:
-l 0.0.0.0 12101 tells netcat to listen on port 12101 on all interfaces that are up and have an IP address assigned. The -u flag selects UDP rather than TCP; since broadcast traffic is UDP, it is needed on both ends.
The following should get a directed broadcast to another network via netcat:
server: nc -lu 0.0.0.0 12101
host: echo "hello" | nc -u -b 192.168.1.255 12101
Hope that helps, sorry if that was long winded or off from what you were looking for :)
I have 2 servers (serv1, serv2) that communicate, and I'm trying to sniff packets matching certain criteria that get transferred from serv1 to serv2. Tshark is installed on my desktop (desk1). I have written the following script:
while true; do
tshark -a duration:10 -i eth0 -R '(sip.CSeq.method == "OPTIONS") && (sip.Status-Code) && ip.src eq serv1' -Tfields -e sip.response-time > response.time.`date +%F-%T`
done
This script seems to run fine when run on serv1 (since serv1 is sending packets to serv2). However, when I try to run it on desk1, it can't capture any packets. They are all on the same LAN. What am I missing?
Assuming that either serv1 or serv2 is on the same physical Ethernet switch as desk1, you can sniff transit traffic between serv1 and serv2 by using a feature called SPAN (Switched Port Analyzer).
Assume your server is on FastEthernet4/2 and your desktop is on FastEthernet4/3 of the Cisco switch... you should telnet or ssh into the switch and enter these commands:
4507R#configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
4507R(config)#monitor session 1 source interface fastethernet 4/2
!--- This configures interface Fast Ethernet 4/2 as source port.
4507R(config)#monitor session 1 destination interface fastethernet 4/3
!--- This configures interface Fast Ethernet 4/3 as destination port.
4507R#show monitor session 1
Session 1
---------
Type : Local Session
Source Ports :
Both : Fa4/2
Destination Ports : Fa4/3
4507R#
This feature is not limited to Cisco devices... Juniper / HP / Extreme and other Enterprise ethernet switch vendors also support it.
How about using the (misnamed) tcpdump, which will capture all traffic from the wire? What I suggest is just capturing packets on the interface; do not filter at the capture level. Afterwards you can filter the pcap file. Something like this:
tcpdump -w myfile.pcap -nn -i eth0
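Once you have the capture file, you can apply the display filter from the question offline, along these lines (a sketch reusing the question's filter):
tshark -r myfile.pcap -R '(sip.CSeq.method == "OPTIONS") && (sip.Status-Code) && ip.src eq serv1' -Tfields -e sip.response-time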
If your LAN is a switched network (most are) or your desktop NIC doesn't support promiscuous mode, then you won't be able to see any of the packets. Verify both of those things.