I have a device that is connected to my laptop via USB ethernet. Because I have more than one device connected this way, all with the same IP address, I created network namespaces. I was able to enable internet access FROM the network namespace by using a virtual ethernet (veth) pair and routing to the interface that has internet access - something like this:
## Add radar interface to namespace
ip netns add net-test
ip link set dev enxxx netns net-test
ip netns exec net-test ip link set dev enxxx up
ip netns exec net-test ip address add 192.168.0.1/24 dev enxxx
ip netns exec net-test ip link set dev lo up
## Set up the DHCP server inside the created netns
ip netns exec net-test dnsmasq -C "/etc/dnsmasq.conf"
## Create and add virtual ethernet to namespace
ip link add veth0 type veth peer name veth1
ip link set veth1 netns net-test
ip addr add 10.0.0.1/24 dev veth0
ip netns exec net-test ip addr add 10.0.0.2/24 dev veth1
## Set up internet access through namespace
ip link set veth0 up
ip netns exec net-test ip link set veth1 up
## Setup ip forwarding
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -A FORWARD -o eth0 -i veth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o veth0 -j ACCEPT
iptables -t nat -A POSTROUTING -s 10.0.0.2/24 -o eth0 -j MASQUERADE
ip netns exec net-test ip route add default via 10.0.0.1
But I am still unable to access the internet from the device itself. When I SSH into it, I can ping the veth1 interface, but not the paired veth0 in the root namespace, and not 8.8.8.8, both of which I can reach from within the namespace.
How can I access the internet from the device in the namespace?
I needed to set up the same forwarding and NAT inside the namespace as well. So in addition to the script above, you also need to run the following inside the namespace:
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -A FORWARD -o veth1 -i enxxx -j ACCEPT
iptables -A FORWARD -i veth1 -o enxxx -j ACCEPT
iptables -t nat -A POSTROUTING -s 192.168.0.1/24 -o veth1 -j MASQUERADE
From then on, you can access the internet from the device.
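If it still does not work, it helps to test each hop separately. A quick sketch, using the addresses from the scripts above (adjust if yours differ):
## From the root namespace and from inside net-test
ping -c 3 10.0.0.2                                   ## root ns -> veth1
ip netns exec net-test ping -c 3 8.8.8.8             ## namespace -> internet
## Check that the MASQUERADE rules are actually matching packets
iptables -t nat -L POSTROUTING -n -v
ip netns exec net-test iptables -t nat -L POSTROUTING -n -v
## From the device (over SSH): ping 192.168.0.1, then 10.0.0.1, then 8.8.8.8
## to see where the path stops.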
Related
I have two network interfaces on a node. One is on the internal network and the other is on the external network. The internal network is 192.168.50.0/255.255.255.0 and the external network is 192.168.0.0/255.255.255.0. Kubernetes runs on 192.168.50.0/255.255.255.0. I want to reach the internal network from other local nodes that do not have an internal network interface. How can I solve this problem?
Without the subnet masks, I do not understand how they are different networks.
But in any case, you need to enable routing of packets from one interface to the other. I assume you are on a Linux node, where you can enable IP forwarding:
echo 1 >> /proc/sys/net/ipv4/ip_forward
Then set up some rules in iptables to perform the natting and forwarding:
Example rules:
# Always accept loopback traffic
iptables -A INPUT -i lo -j ACCEPT
# We allow traffic from the LAN side
iptables -A INPUT -i eth0 -j ACCEPT
######################################################################
#
# ROUTING
#
######################################################################
# eth0 is LAN
# eth1 is WAN
# Allow established connections
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# Masquerade.
iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
# Forwarding
iptables -A FORWARD -i eth1 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
# Allow outgoing connections from the LAN side.
iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT
https://serverfault.com/questions/453254/routing-between-two-networks-on-linux
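The nodes that only have an external-network interface also need a route telling them to reach the internal subnet via the dual-homed node. A minimal sketch, where 192.168.0.10 is an assumed external address for that node (substitute the real one):
# On each external-only node: reach 192.168.50.0/24 via the dual-homed node
ip route add 192.168.50.0/24 via 192.168.0.10 dev eth0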
I have OpenStack running on a Fedora laptop. OpenStack hates network interfaces that are managed by NetworkManager, so I set up a dummy interface that is used as the port for the br-ex interface through which OpenStack allows instances to communicate with the outside world. I can connect to the floating IPs fine, but they can't get past the subnet that br-ex is on. I'd like them to be able to reach addresses external to the laptop. I suspect some iptables NAT/masquerading magic is required. Does anyone have any ideas?
For a CentOS 7 OpenStack with 3 nodes you should use the network service, not NetworkManager:
Just install net-tools and disable NetworkManager:
yum install net-tools -y;
systemctl disable NetworkManager.service
systemctl stop NetworkManager.service
chkconfig network on
You also need iptables, not firewalld.
yum install -y iptables-services
systemctl enable iptables.service
systemctl disable firewalld.service
systemctl stop firewalld.service
The controller node has one NIC.
The network and compute nodes have 2 NICs.
Edit the interfaces on all nodes:
Network node: eth0: X.X.X.X (external), eth1: 10.0.0.1 - no gateway
Controller node: eth0: 10.0.0.2 - gateway 10.0.0.1
Compute node: eth0: 10.0.0.3 - gateway 10.0.0.1
Set up iptables like:
iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A INPUT -i lo -j ACCEPT
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
service iptables save
Also enable forwarding. In /etc/sysctl.conf, add the line:
net.ipv4.ip_forward = 1
And execute the command:
sysctl -p
Should work.
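For the single-laptop setup in the question, the same pattern applies with br-ex in place of the internal NIC: forward and masquerade the br-ex subnet out of the laptop's real uplink. A rough sketch, where 172.24.4.0/24 (the external/floating-IP subnet) and wlp3s0 (the laptop's uplink) are assumptions to be replaced with your own values:
# Allow forwarding between br-ex and the uplink
sysctl -w net.ipv4.ip_forward=1
# NAT traffic from the br-ex subnet out of the laptop's uplink
iptables -t nat -A POSTROUTING -s 172.24.4.0/24 -o wlp3s0 -j MASQUERADE
iptables -A FORWARD -i br-ex -o wlp3s0 -j ACCEPT
iptables -A FORWARD -i wlp3s0 -o br-ex -m state --state RELATED,ESTABLISHED -j ACCEPT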
I have a requirement in which I need to block certain processes from consuming network data over the VPN interface (tun0).
physical interface (cellular data) -> tun0 -> user space program -> physical interface -> destination.
Please correct me if I am wrong, but that is how the traffic flows when the VPN is enabled.
To block one particular process's packets from being forwarded to the tun0 interface, I applied iptables rules for both the physical interface and the tun0 interface. Still, the application is able to use network data over the tun0 interface.
Is there a way to block the traffic at the tun0 interface?
I don't know which rules you set, but maybe this fixes it
(allow only tun0, reject everything else):
iptables -A INPUT -i tun0 -j ACCEPT
iptables -A OUTPUT -o tun0 -j ACCEPT
iptables -A INPUT ! -i tun0 -j REJECT
iptables -A OUTPUT ! -o tun0 -j REJECT
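If the goal is to block a single process rather than whole interfaces, the iptables owner match can reject that process's packets before they leave via tun0. A sketch, assuming the process runs as UID 1000 (a placeholder; use the UID of the app you want to block):
# Reject packets generated by UID 1000 that would go out through tun0
iptables -A OUTPUT -o tun0 -m owner --uid-owner 1000 -j REJECT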
I installed docker on two hosts (Virtual Machines). I'd like to make the containers on different host to be able to connect each other.
Here's VM1's and VM2's ifconfig output:
VM1
bridge0 : inet addr:172.17.52.1 Bcast:172.17.52.255 Mask:255.255.255.0
docker0 : inet addr:172.17.42.1 Bcast:0.0.0.0 Mask:255.255.0.0
eth0 : inet addr:192.168.122.129 Bcast:192.168.122.255 Mask:255.255.255.0
VM2
bridge0 : inet addr:172.17.53.1 Bcast:172.17.53.255 Mask:255.255.255.0
docker0 : inet addr:172.17.42.1 Bcast:0.0.0.0 Mask:255.255.0.0
eth0 : inet addr:192.168.122.77 Bcast:192.168.122.255 Mask:255.255.255.0
bridge0 is used for the containers. I have made some network configurations:
iptables -t nat -A POSTROUTING -s 172.17.52.0/24 ! -d 172.17.0.0/16 -j MASQUERADE (on VM1)
iptables -t nat -A POSTROUTING -s 172.17.53.0/24 ! -d 172.17.0.0/16 -j MASQUERADE (on VM2)
route add -net 172.17.52.0 netmask 255.255.255.0 gw 192.168.122.129 (on VM2)
route add -net 172.17.53.0 netmask 255.255.255.0 gw 192.168.122.77 (on VM1)
VM1 can ping VM2 successfully, and the container on VM1 can also ping VM2 successfully, but I get no output when a container on VM1 pings a container on VM2 (172.17.52.X ping 172.17.53.X).
One very easy way to achieve this would be by using Weave.
You can install it with:
sudo wget -O /usr/local/bin/weave \
https://github.com/zettio/weave/releases/download/latest_release/weave
sudo chmod a+x /usr/local/bin/weave
VM1
sudo weave launch
C=$(sudo weave run 10.2.1.1/24 -t -i busybox)
VM2
sudo weave launch 192.168.122.129
C=$(sudo weave run 10.2.1.2/24 -t -i busybox)
docker exec $C ping -c 3 10.2.1.1
You have just created a virtual network with containers. The beauty is that these VMs can be anywhere, as long as at least one of them has a public IP with port 6783 open.
You can even enable NaCl crypto by running weave launch -password "<MySecret>" (or exporting WEAVE_PASSWORD="<MySecret>" prior to weave launch).
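Before suspecting the underlay network, it is worth checking that the two weave routers actually peered, and pinging in the other direction as well. A small sketch using the $C handle created on VM1 above:
# On VM1: show whether the weave routers established a connection to each other
sudo weave status
# Ping the container on VM2 from the container on VM1
docker exec $C ping -c 3 10.2.1.2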
I am trying to do some network testing with a 10G network card which has 2 ports (eth1, eth2).
In order to test, I would use something like iperf to do bandwidth testing:
I connect a cable directly from port 1(eth1) to port 2(eth2).
ip addresses:
eth1: 192.168.20.1/24
eth2: 192.168.20.2/24
Terminal 1:
user@host:~$ iperf -s -B 192.168.20.1
Terminal 2:
user@host:~$ iperf -c 192.168.20.1
Results:
------------------------------------------------------------
Client connecting to 192.168.20.1, TCP port 5001
TCP window size: 169 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.20.1 port 53293 connected with 192.168.20.1 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 41.6 GBytes 35.7 Gbits/sec
As you can see, the data is not going through the cable at all but just through localhost (memory), which is why I am getting speeds above 10G.
Is it possible to hide eth1 from the command "iperf -c 192.168.20.1" so that the data is forced through the cable?
Update 1:
I have now tried the following, after a reference made by @Mardanian:
Note: Ports are now eth2/eth3 (not eth1/eth2)
eth2 has mac address 00:40:c7:6c:01:12
eth3 has mac address 00:40:c7:6c:01:13
ifconfig eth2 192.168.20.1/24 up
ifconfig eth3 192.168.20.2/24 up
arp -s 192.168.20.3 00:40:c7:6c:01:12
arp -s 192.168.20.4 00:40:c7:6c:01:13
ip route add 192.168.20.3 dev eth3
ip route add 192.168.20.4 dev eth2
iptables -t nat -A POSTROUTING -d 192.168.20.4 -j SNAT --to-source 192.168.20.3
iptables -t nat -A POSTROUTING -d 192.168.20.3 -j SNAT --to-source 192.168.20.4
iptables -t nat -A PREROUTING -d 192.168.20.3 -j DNAT --to-destination 192.168.20.1
iptables -t nat -A PREROUTING -d 192.168.20.4 -j DNAT --to-destination 192.168.20.2
iperf -s -B 192.168.20.3
bind failed: Cannot assign requested address
These dummy addresses do not seem to work properly, I can't seem to bind or even ping them.
arp -an
? (192.168.20.3) at 00:40:c7:6c:01:12 [ether] PERM on eth2
? (192.168.20.4) at 00:40:c7:6c:01:13 [ether] PERM on eth3
As far as I understand, arp doesn't bind an IP address to an interface; it just tells the system which interface to go through in order to reach a certain IP - that is why I cannot bind to the dummy IP addresses. If I bind to the real IP addresses, then I would still be going through the local system.
iperf will always use the loopback path if it detects that the destination is local. Force the kernel to route it through the interface instead; see linux: disable using loopback and send data via wire between 2 eth cards of one comp.
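One concrete way to force the kernel off the loopback path, without the ARP/NAT tricks above, is to move one of the ports into its own network namespace so that its address is no longer local to the main stack. A sketch, assuming the eth2/eth3 names and 192.168.20.x addresses from the update:
# Put eth3 and its address into a separate namespace
ip netns add iperf-test
ip link set dev eth3 netns iperf-test
ip netns exec iperf-test ip addr add 192.168.20.2/24 dev eth3
ip netns exec iperf-test ip link set dev eth3 up
# In one terminal: server inside the namespace
ip netns exec iperf-test iperf -s -B 192.168.20.2
# In another terminal: client in the default namespace; traffic must cross the cable
iperf -c 192.168.20.2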