Could someone kindly explain why the following flow config (these flows are the only flows on the bridge) does not work as expected?
I can ping the hosts on each side, but other traffic (e.g. web/SSH) does not pass.
ovs-ofctl add-flow xl1 dl_type=0x800,nw_src=10.2.0.0/20,nw_dst=10.2.1.0/24,actions=output:73
ovs-ofctl add-flow xl1 dl_type=0x800,nw_src=10.2.0.0/20,nw_dst=10.2.2.0/24,actions=output:76
ovs-ofctl add-flow xl1 arp,nw_dst=10.2.1.0/24,actions=output:73
ovs-ofctl add-flow xl1 arp,nw_dst=10.2.2.0/24,actions=output:76
The traces certainly seem to suggest the traffic should pass:
ovs-appctl ofproto/trace xl1 in_port=73,tcp,nw_src=10.2.1.1,nw_dst=10.2.2.1,tcp_dst=22
Bridge: xl1
Flow: tcp,metadata=0,in_port=73,vlan_tci=0x0000,dl_src=00:00:00:00:00:00,dl_dst=00:00:00:00:00:00,nw_src=10.2.1.1,nw_dst=10.2.2.1,nw_tos=0,nw_ecn=0,nw_ttl=0,tp_src=0,tp_dst=22,tcp_flags=0x000
Rule: table=0 cookie=0 ip,nw_src=10.2.0.0/20,nw_dst=10.2.2.0/24
OpenFlow actions=output:76
Final flow: unchanged
Megaflow: skb_priority=0,ip,in_port=73,nw_src=10.2.0.0/20,nw_dst=10.2.2.1,nw_frag=no
Datapath actions: 76
ovs-appctl ofproto/trace xl1 in_port=76,tcp,nw_src=10.2.2.1,nw_dst=10.2.1.1,tcp_dst=22
Bridge: xl1
Flow: tcp,metadata=0,in_port=76,vlan_tci=0x0000,dl_src=00:00:00:00:00:00,dl_dst=00:00:00:00:00:00,nw_src=10.2.2.1,nw_dst=10.2.1.1,nw_tos=0,nw_ecn=0,nw_ttl=0,tp_src=0,tp_dst=22,tcp_flags=0x000
Rule: table=0 cookie=0 ip,nw_src=10.2.0.0/20,nw_dst=10.2.1.0/24
OpenFlow actions=output:73
Final flow: unchanged
Megaflow: skb_priority=0,ip,in_port=76,nw_src=10.2.0.0/20,nw_dst=10.2.1.1,nw_frag=no
Datapath actions: 73
One issue is that 10.2.1.0/24 and 10.2.2.0/24 are not in the same network.
So if a host such as 10.2.1.1 is looking for a host on the 10.2.2.0/24 network, it may not send out an ARP request for that destination; it may instead send an ARP request for its own gateway on the 10.2.1.0/24 network.
You can use this command:
arp -n
on the hosts on the 10.2.1.0/24 network to inspect their ARP tables.
You can also use:
tcpdump -i "eth-name"
on the Open vSwitch host to see what actually arrives when you send a non-ICMP packet on the 10.2.1.0/24 or 10.2.2.0/24 network.
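tcpdump also takes a filter expression to narrow the capture down; a small sketch (eth1 here is just a placeholder for one of the bridge's ports):
tcpdump -ni eth1 arp or tcp port 22
This shows whether the hosts ARP for each other or for a gateway address, and whether any SSH traffic appears at all.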
Background
I have a strange use-case where my VPN cannot be on any of the private subnets, but also cannot use a TAP interface. The machine will be moving through different subnets, and requires access to the entire private address space by design. A single blocked IP would be considered a failure of design.
So, these are all off limits:
10.0.0.0/8
172.16.0.0/12
192.168.0.0/16
169.254.0.0/16
In searching for a solution, I came across RFC 5735, which defines:
192.0.2.0/24 TEST-NET-1
198.51.100.0/24 TEST-NET-2
203.0.113.0/24 TEST-NET-3
As:
For use in documentation and example code. It is often used in conjunction with domain names example.com or example.net in vendor and protocol documentation. As described in [RFC5737], addresses within this block do not legitimately appear on the public Internet and can be used without any coordination with IANA or an Internet registry.
Which was a "Jackpot" moment for me and my use case.
Config
I configured an OpenVPN server as follows:
local 0.0.0.0
port 443
proto tcp
dev tun
topology subnet
server 203.0.113.0 255.255.255.0 # TEST-NET-3 RFC 5735
push "route 203.0.113.0 255.255.255.0"
...[Snip]...
With the client config:
client
nobind
dev tun
proto tcp
...[Snip]...
And ufw rules:
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING -s 203.0.113.0/24 -o ens160 -j MASQUERADE
COMMIT
However, upon running I get "/sbin/ip route add 203.0.113.0/24 via 203.0.113.1 RTNETLINK answers: File exists" in the error logs, while the VPN completes the rest of its connection successfully.
No connection
Running the following commands:
Server: sudo python3 -m http.server 80
Client: curl -X GET 203.0.113.1
Results in:
curl: (28) Failed to connect to 203.0.113.1 port 80: Connection timed out
I have tried:
/sbin/ip route replace 203.0.113.0/24 dev tun0 on client and server.
/sbin/ip route change 203.0.113.0/24 dev tun0 on client and server.
Adding route 203.0.113.0 255.255.255.0 to the server.
Adding push "route 203.0.113.0 255.255.255.0 127.0.0.1" to server
And none of it seems to work.
Does anyone have any idea how I can force the client to push this traffic over the VPN to my server, instead of to the public IP?
This does actually work!
Just don't forget to allow connections within your firewall. I fixed my config with:
sudo ufw allow in on tun0
However, 198.18.0.0/15 and 100.64.0.0/10, defined as benchmarking and shared address space respectively, may be more appropriate choices, since being able to forward TEST-NET addresses may be considered a bug.
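If you switch ranges, only the addresses in the server config and the NAT rule need to change; a minimal sketch using an arbitrary /24 slice of the benchmarking range (the particular slice is an assumption):
server 198.18.0.0 255.255.255.0 # a /24 out of 198.18.0.0/15, RFC 2544 benchmarking range
push "route 198.18.0.0 255.255.255.0"
The -s 203.0.113.0/24 in the MASQUERADE rule would need the same substitution.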
I am starting to learn SDN with ovs-ofctl and Mininet, and I am configuring a switch following some tutorials, but there's something I don't get.
When I start my topology with:
sudo mn --topo single,2 --controller remote --switch ovsk
Now if I want to add a simple flow between h1 and h2, I do:
sh ovs-ofctl add-flow s1 in_port=1,actions=output:2
sh ovs-ofctl add-flow s1 in_port=2,actions=output:1
And if I test the connectivity between the hosts, all is OK.
But now, after deleting all flows, if I try:
sh ovs-ofctl add-flow s1 in_port=1,dl_type=0x806,nw_dst=10.0.0.2,actions=output:2
sh ovs-ofctl add-flow s1 in_port=2,dl_type=0x806,nw_dst=10.0.0.1,actions=output:1
Now if I try to ping, there is no reachability, but if I execute:
sh ovs-ofctl add-flow s1 action=NORMAL
Now I can ping again between hosts.
What am I missing here? Isn't specifying dl_type=0x806 in the command enough to allow the Ethernet frames carrying ARP traffic? Why does ping fail there?
I think the main reason is a confusion between all the protocols involved.
(1) Ping is done using ICMP, in particular ICMP echo request and ICMP echo reply messages. These messages are encapsulated in IP packets, which are in turn encapsulated in Ethernet frames. In this case the Ethernet next-header field (I think it is actually called EtherType in general, and dl_type here) is set to IP, which is 0x0800.
A more in-depth guide on how to read ICMP packets in Wireshark can be found here.
(2) ARP is necessary for end systems to map IP addresses to MAC addresses. ARP is encapsulated directly into Ethernet frames, where the EtherType is set to the value 0x806.
Thus
sh ovs-ofctl add-flow s1 in_port=1,dl_type=0x806,nw_dst=10.0.0.2,actions=output:2
will allow only ARP packets to pass through, while dropping every non-ARP Ethernet frame; so the ping packets are being dropped.
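If the intent is to forward ping as well, a sketch would be to add matching IPv4 flows alongside the ARP ones (same syntax as the commands above, with dl_type=0x800 for IPv4):
sh ovs-ofctl add-flow s1 in_port=1,dl_type=0x800,nw_dst=10.0.0.2,actions=output:2
sh ovs-ofctl add-flow s1 in_port=2,dl_type=0x800,nw_dst=10.0.0.1,actions=output:1
With both the ARP and IP flows installed, address resolution and the ICMP echo traffic each have a matching rule.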
(3) The last question is why this works:
sh ovs-ofctl add-flow s1 action=NORMAL
I am not familiar with the details of OVS. From what I understand from here, action=NORMAL will make OVS act as a normal Linux bridge, i.e. perform standard Ethernet bridging, forwarding all frames based on ordinary MAC-learning rules.
Also, since there is no match part in that rule, it should match every packet. I do not know how this one would behave, though:
sh ovs-ofctl add-flow s1 in_port=1,dl_type=0x806,nw_dst=10.0.0.2,actions=NORMAL
(4) This reference has a table at the bottom which lists OpenFlow rules to match common network protocols.
How do I allow a Cloud Composer Airflow DAG to connect to a REST API via a VPN gateway? The cluster is connected to the corresponding VPC.
The kube-proxy is able to reach the API, yet the containers cannot.
I have SSH'd into the kube-proxy and the containers and tried a traceroute. The containers' traceroute ends at the kube-proxy. The kube-proxy has 4 hops before reaching the destination.
I have dumped the iptables rules on the kube-proxy; they do not specify anything with regard to NATing between the VPC's subnet and the containers.
The VPC subnet also does not show up in the containers, which is by design.
Some reading material:
https://www.stackrox.com/post/2020/01/kubernetes-networking-demystified/
EDIT1: More info:
Let's assume the VPN connects the VPC to the remote 10.200.0.0 network.
The VPC has multiple subnets. The primary range is e.g. 10.10.0.0/20. For each Kubernetes cluster we have two more subnets, one for pods (10.16.0.0/14) and another for services (10.20.0.0/20). The gateway is 10.10.0.1.
Each pod again has its own range, where pod_1 is 10.16.0.0/14, pod_2 is 10.16.1.0/14, pod_3 is 10.16.3.0/14, and so on.
One of the kube-proxies has multiple network adapters. It resides in the 10.10.0.0 network on eth0 and has a cbr0 bridge to 10.16.0.0. The Airflow workers connect to the network through said kube-proxy via the bridge. One worker, e.g. 10.16.0.1, has only one network adapter.
The kube-proxy can reach the 10.200.0.0 network. The Airflow workers cannot.
How do we get the workers to access the 10.200.0.0 network? Do we need to change the iptables rules of the kube-proxy?
One of the possible solutions would be to forward the packets from the kube virtual interface to the node's real one, e.g. by adding the following iptables rules (the first two allow forwarding between the bridge and the uplink, and the MASQUERADE rule hides pod addresses behind the node's own IP):
iptables -A FORWARD -i cbr0 -o eth0 -d 10.200.0.0/25 -j ACCEPT
iptables -A FORWARD -i eth0 -o cbr0 -s 10.200.0.0/25 -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
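For the FORWARD rules to have any effect, IP forwarding must also be enabled on the node; verifying it is a standard check (not specific to this setup):
sysctl net.ipv4.ip_forward # should print net.ipv4.ip_forward = 1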
I am new to Mininet and created a topology. I need to enable ECN in the switch created in the mininet topology.
How to enable ECN in the switch?
Since you use OVS version 2.0.2, your switch supports at least up to OpenFlow version 1.3. The Explicit Congestion Notification (ECN) fields are implemented from OpenFlow version 1.1 and above. In order for the field to be applied, though, you have to tell Mininet that you are going to use a version above 1.0, which is the default. To launch the Mininet topo we have to go with a remote controller so we can pass flow modifications manually. To start Mininet, in the terminal we go with
sudo mn --topo single,3 --mac --controller remote --switch ovsk,protocols=OpenFlow13
Mininet is OK, but we also have to tell the switch itself that we will be passing it OpenFlow version 1.3 flow modifications. To do that, in a new terminal we SSH into the Mininet VM and set the protocol version on the bridge with
sudo ovs-vsctl set bridge s1 protocols=OpenFlow13
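To double-check that the setting took effect, ovs-vsctl can read it back (standard ovs-vsctl usage, not specific to this tutorial):
sudo ovs-vsctl get bridge s1 protocols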
So now we have a channel open to the switch through which we can pass our flow mods, in which we must specify the OpenFlow protocol version again. For a single mod we can do something like
sudo ovs-ofctl -O OpenFlow13 add-flow s1 in_port=1,actions=output:2
and
sudo ovs-ofctl -O OpenFlow13 add-flow s1 in_port=2,actions=output:1
Now we have passed 2 flow modifications manually and the ping between h1 and h2 should work perfectly. To install ECN flow mods we can do something like
sudo ovs-ofctl -O OpenFlow13 add-flow s1 dl_type=0x0800,nw_ecn=3,actions=output:3
Notice that, as stated in the ovs-ofctl documentation:
When dl_type=0x0800 or 0x86dd is specified, matches the ecn bits in IP ToS or IPv6 traffic class fields. When dl_type is wildcarded or set to a value other than 0x0800 or 0x86dd, the value of nw_ecn is ignored.
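To confirm the flow mods were installed (and that the switch accepted OpenFlow 1.3), a quick check is to dump the flow table:
sudo ovs-ofctl -O OpenFlow13 dump-flows s1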
I have X-Wrt, based on OpenWrt 8.09, on my router.
I have a home LAN of a few computers on which I run some network servers (SVN, web, etc.). For each service I set up port forwarding on my router (Linksys WRT54GL) to access it from the Internet (<my_external_ip>:<external_port> -> <some_internal_ip>:<internal_port>).
But from within my local network these resources are unreachable via that external address (so I have to fall back to <some_internal_ip>:<internal_port> to access them).
I added a line to my /etc/hosts:
<my_external_ip> localhost
So now all requests from the local network to <my_external_ip> reach my router, but the further redirection to the appropriate port does not work.
Please advise on the proper redirection.
You need to install an IP redirect for calls going out of the internal network and directed to the public IP. Normally these packets get discarded. You want to reroute them, DNATting them to the destination server, but also masquerade them: since you, the client, are on the same network as the server, the server would otherwise respond directly to you from its internal IP, and you (not having sent the packet to that address) would discard the reply.
I found this on OpenWRT groups:
iptables -t nat -A prerouting_rule -d YOURPUBLICIP -p tcp --dport PORT -j DNAT --to YOURSERVER
iptables -A forwarding_rule -p tcp --dport PORT -d YOURSERVER -j ACCEPT
iptables -t nat -A postrouting_rule -s YOURNETWORK -p tcp --dport PORT -d YOURSERVER -j MASQUERADE
https://forum.openwrt.org/viewtopic.php?id=4030
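As a concrete sketch with hypothetical values filled in (public IP 198.51.100.7, internal server 192.168.1.10, LAN 192.168.1.0/24, web traffic on port 80), the three rules would read:
iptables -t nat -A prerouting_rule -d 198.51.100.7 -p tcp --dport 80 -j DNAT --to 192.168.1.10
iptables -A forwarding_rule -p tcp --dport 80 -d 192.168.1.10 -j ACCEPT
iptables -t nat -A postrouting_rule -s 192.168.1.0/24 -p tcp --dport 80 -d 192.168.1.10 -j MASQUERADE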
If I remember correctly, OpenWrt allows you to define custom DNS entries. So maybe simply give proper local names to your services (e.g. svnserver.local) and map them to the specific local IPs. This way you do not even need to go through the router to access local resources from the local network.
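Since OpenWrt's DNS is served by dnsmasq, one way to pin such a name (the hostname and IP here are placeholders) is an address entry in /etc/dnsmasq.conf:
address=/svnserver.local/192.168.1.10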