Why does forwarding a public IP to a Docker NAT IP fail? - networking

For example, on the physical machine:
# ip addr
5: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 ..
inet 10.32.230.90/24 scope global eth2
valid_lft forever preferred_lft forever
inet 10.32.230.61/24 scope global secondary eth2
valid_lft forever preferred_lft forever
8: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
link/ether 02:42:65:1b:b0:25 brd ff:ff:ff:ff:ff:ff
inet 172.17.42.1/16 scope global docker0
"10.32.230.90" is the main IP of this machine, and "10.32.230.61" is secondary added with "ip addr add 10.32.230.61/24 dev eth2".
After creating a docker instance, with IP = 172.17.0.10, I add the following rules to connect native IP with secondary IP:
# iptables -A POSTROUTING -j MASQUERADE
# iptables -t nat -A PREROUTING -d 10.32.230.61 -j DNAT --to 172.17.0.10
# echo 1 > /proc/sys/net/ipv4/ip_forward
But it doesn't work: an external PC still cannot reach 10.32.230.61, though it can reach 10.32.230.90. What's the solution?
(From another PC whose IP is, for example, 10.32.230.95)
# ping 10.32.230.90
PING 10.32.230.90 (10.32.230.90) 56(84) bytes of data.
64 bytes from 10.32.230.90: icmp_seq=1 ttl=52 time=280 ms
64 bytes from 10.32.230.90: icmp_seq=2 ttl=52 time=336 ms
^C
# ping 10.32.230.61
(Timeout..)
I am sure that there is no IP conflict: 10.32.230.61 is not used by any other host.
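One likely culprit: the first rule omits -t nat, and the POSTROUTING chain exists only in the nat table, so that command fails outright. A minimal sketch of rules that typically make such a DNAT setup work, assuming the addresses above, Docker's default docker0 bridge, and no other conflicting rules:
# echo 1 > /proc/sys/net/ipv4/ip_forward
# iptables -t nat -A PREROUTING -d 10.32.230.61 -j DNAT --to-destination 172.17.0.10
# iptables -t nat -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
# iptables -A FORWARD -d 172.17.0.10 -j ACCEPT   # let forwarded traffic reach the container
# iptables -A FORWARD -s 172.17.0.10 -j ACCEPT   # and let replies back out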

Related

How to get the name resolution working on an Ubuntu 19.04 VirtualBox VM?

I'm using multiple VirtualBox Ubuntu 18.10/19.04 VMs on a Windows 7 host. At some point, name resolution stopped working on one of them. The connection to the internet still works.
ax@buildvm:~$ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=51 time=40.5 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=51 time=35.5 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=51 time=42.4 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=51 time=36.2 ms
^C
--- 8.8.8.8 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 35.456/38.635/42.408/2.906 ms
ax@buildvm:~$ ping google.com
ping: google.com: Temporary failure in name resolution
How to get the name resolution working?
Additional info:
ax@buildvm:~$ ip address show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 08:00:27:fb:bc:af brd ff:ff:ff:ff:ff:ff
inet 10.0.2.5/24 brd 10.0.2.255 scope global dynamic enp0s3
valid_lft 947sec preferred_lft 947sec
inet6 fe80::a00:27ff:fefb:bcaf/64 scope link
valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 08:00:27:27:32:88 brd ff:ff:ff:ff:ff:ff
inet 192.168.56.106/24 brd 192.168.56.255 scope global dynamic enp0s8
valid_lft 947sec preferred_lft 947sec
inet6 fe80::a00:27ff:fe27:3288/64 scope link
valid_lft forever preferred_lft forever
ax@buildvm:~$ cat /etc/resolv.conf
# This file is managed by man:systemd-resolved(8). Do not edit.
#
# This is a dynamic resolv.conf file for connecting local clients to the
# internal DNS stub resolver of systemd-resolved. This file lists all
# configured search domains.
#
# Run "resolvectl status" to see details about the uplink DNS servers
# currently in use.
#
# Third party programs must not access this file directly, but only through the
# symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a different way,
# replace this symlink by a static file or a different symlink.
#
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.
nameserver 127.0.0.53
options edns0
search fritz.box
This blog article provides the solution:
$ sudo rm /etc/resolv.conf
$ sudo ln -s /var/run/systemd/resolve/resolv.conf /etc/resolv.conf
$ sudo systemctl restart systemd-resolved.service
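To verify the fix took effect (a quick check, not part of the original answer):
$ cat /etc/resolv.conf      # should now list the real upstream nameservers
$ resolvectl status         # shows the DNS servers in use per link
$ ping -c 1 google.com      # name resolution should work again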

Why can I ping the IP of a different network interface of my server?

I have my local machine (10.0.0.2/16) directly connected to the eth4 network interface of my server.
The connection works as expected, and I can traceroute the IP of eth4, namely 10.0.0.1.
However, I can also traceroute the IP 10.1.0.23 of the other interface (eth5), even though it is on the wrong subnet!
Below you see the settings of my local machine and my server.
On my local machine (Arch Linux)
Output of ip addr:
....
2: enp0s25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 3c:97:0e:8a:a1:5a brd ff:ff:ff:ff:ff:ff
inet 10.0.0.2/16 brd 10.0.255.255 scope global enp0s25
valid_lft forever preferred_lft forever
inet6 fe80::7a0b:adb3:2eef:a3a8/64 scope link
valid_lft forever preferred_lft forever
....
Traceroutes
% sudo traceroute -I 10.0.0.1
traceroute to 10.0.0.1 (10.0.0.1), 30 hops max, 60 byte packets
1 10.0.0.1 (10.0.0.1) 0.184 ms 0.170 ms 0.163 ms
% sudo traceroute -I 10.1.0.23
traceroute to 10.1.0.23 (10.1.0.23), 30 hops max, 60 byte packets
1 10.1.0.23 (10.1.0.23) 0.240 ms 0.169 ms 0.166 ms
On Server (Debian)
My /etc/network/interfaces.
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
source /etc/network/interfaces.d/*
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
#iface eth5 inet dhcp
auto eth5
allow-hotplug eth5
iface eth5 inet static
address 10.1.0.23
netmask 255.255.0.0
gateway 10.1.0.1
## Automatically load eth4 interface at boot
auto eth4
allow-hotplug eth4
# Configure network interface at eth4
iface eth4 inet static
address 10.0.0.1
netmask 255.255.0.0
gateway 10.0.0.1
Output of ip addr:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
...
6: eth4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:08:a2:0a:e8:86 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.1/16 brd 10.0.255.255 scope global eth4
valid_lft forever preferred_lft forever
inet6 fe80::208:a2ff:fe0a:e886/64 scope link
valid_lft forever preferred_lft forever
7: eth5: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether 00:08:a2:0a:e8:87 brd ff:ff:ff:ff:ff:ff
inet 10.1.0.23/16 brd 10.1.255.255 scope global eth5
valid_lft forever preferred_lft forever
Output of ip route:
default via 10.1.0.1 dev eth5
10.0.0.0/16 dev eth4 proto kernel scope link src 10.0.0.1
10.1.0.0/16 dev eth5 proto kernel scope link src 10.1.0.23
Why wouldn't you expect this behavior? As you can see from your Debian server's routing table, it knows how to route packets to your Arch Linux machine, so it can respond if it wants to.
I can see two likely questions you might be having:
Why does it choose to respond?
You haven't given us your firewall rules, or told us whether your server has IP forwarding enabled. Even without IP forwarding enabled, Linux will treat a locally received packet for any of its local addresses as an INPUT packet (in terms of iptables and access-control decisions), not a forwarded packet. So it will respond even if forwarding is disabled.
If you don't want this behavior you could add an iptables rule to the INPUT chain to drop the packet being received on the server.
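For example, a one-line sketch using the interface and address from this question:
# iptables -A INPUT -i eth4 -d 10.1.0.23 -j DROP   # drop packets for eth5's address arriving on eth4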
Why is there only one hop in the traceroute?
You might expect that in order to respond, the packet would need to traverse the server (be forwarded), so you would get two hops in your traceroute: one for eth4 and one for eth5. However, as mentioned above, any locally received packet is treated as input if it matches one of the local IPs. Your Arch Linux box presumably uses the Debian server as its default route, so it sends a packet with the Debian server's MAC address, hoping the Debian server will forward it. That makes it a locally received packet at the Ethernet level on the Debian server. The server then checks the IP address, finds it is local, doesn't care that it belongs to another Ethernet interface, and locally receives it at the IP layer.
If you don't want that behavior, fix it with firewall rules.

Ubuntu 16.04 reboots with a different IP address than the static one assigned in /etc/network/interfaces

When my server reboots, the IP address for eth0 is 192.168.1.2 when it should be 192.168.1.100 per the static IP address settings in /etc/network/interfaces. After boot, if I run service networking restart, it assigns 192.168.1.100 to eth0. Also, I don't know if this matters, but the hostname displayed in my router is different from the hostname in /etc/hosts.
/etc/network/interfaces
auto lo eth0
iface lo inet loopback
# IPv4 address
auto eth0
iface eth0 inet static
address 192.168.1.100
netmask 255.255.255.0
broadcast 192.168.1.255
network 192.168.1.0
ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether a4:1f:72:7c:61:8d brd ff:ff:ff:ff:ff:ff
inet 192.168.1.100/24 brd 192.168.1.255 scope global eth0
valid_lft forever preferred_lft forever
inet 192.168.1.2/24 brd 192.168.1.255 scope global secondary dynamic eth0
valid_lft 85312sec preferred_lft 85312sec
ip route show
10.8.0.0/24 dev tun0 proto kernel scope link src 10.8.0.1
169.254.0.0/16 dev eth0 scope link metric 1000
192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.100
192.168.1.1 dev eth0 proto dhcp scope link src 192.168.1.2 metric 1024
I don't know about 16.04, but in previous versions the NetworkManager daemon sets the IPs. Use the nm-applet applet to set up your static address: right-click it and go to 'Edit Connections'.
https://help.ubuntu.com/community/NetworkManager
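On a headless server, the same can be done with nmcli (a sketch, assuming the connection is named "Wired connection 1"):
$ nmcli connection modify "Wired connection 1" ipv4.method manual ipv4.addresses 192.168.1.100/24 ipv4.gateway 192.168.1.1 ipv4.dns 8.8.8.8
$ nmcli connection up "Wired connection 1"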
I fixed part of the problem; it was as simple as changing the file to the parameters outlined below.
/etc/network/interfaces
# IPv4 address
iface eth0 inet static
address 192.168.1.100
netmask 255.255.255.0
gateway 192.168.1.1
dns-nameservers 8.8.8.8
Now when the server boots it automatically assigns 192.168.1.100, although it still also assigns 192.168.1.2. If I find a way to stop it from assigning the second IP address, I will update my answer. Thanks.
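The remaining dynamic address, together with the proto dhcp route above, suggests a DHCP client is still running against eth0. A possible check (an assumption, not part of the original answer):
$ ps aux | grep -E 'dhclient|NetworkManager'   # is something else still managing eth0?
$ sudo dhclient -r eth0                        # if dhclient is, release its lease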

Network unreachable inside docker container without --net=host parameter

Problem: there is no internet connection in the docker container.
Symptoms: ping 8.8.8.8 doesn't work. Wireshark on the host system shows:
19 10.866212113 172.17.0.2 -> 8.8.8.8 ICMP 98 Echo (ping) request id=0x0009, seq=0/0, ttl=64
20 11.867231972 172.17.0.2 -> 8.8.8.8 ICMP 98 Echo (ping) request id=0x0009, seq=1/256, ttl=64
21 12.868331353 172.17.0.2 -> 8.8.8.8 ICMP 98 Echo (ping) request id=0x0009, seq=2/512, ttl=64
22 13.869400083 172.17.0.2 -> 8.8.8.8 ICMP 98 Echo (ping) request id=0x0009, seq=3/768, ttl=64
But! If the container is started with --net=host, the internet works perfectly.
What I've tried so far:
altering DNS
adding --ip-masq=true to /etc/default/docker (with a restart, of course)
enabling everything related to masquerade / ip_forward
altering default route
everything suggested here
Host config:
$ sudo route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 10.4.2.1 0.0.0.0 UG 0 0 0 eno1.3001
default 10.3.2.1 0.0.0.0 UG 100 0 0 eno2
10.3.2.0 * 255.255.254.0 U 100 0 0 eno2
10.4.2.0 * 255.255.254.0 U 0 0 0 eno1.3001
nerv8.i 10.3.2.1 255.255.255.255 UGH 100 0 0 eno2
172.17.0.0 * 255.255.0.0 U 0 0 0 docker0
The output of sudo iptables -L, cat /etc/network/interfaces, ifconfig, and iptables -t nat -L -nv all looks fine; forwarding is also enabled:
$ sudo sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1
This is not the full answer you are looking for, but I would like to give some explanation of why the internet is working:
If container was started with --net=host internet would work
perfectly.
Docker by default supports three networks. In this mode (host), the container shares the host's network stack, and all interfaces from the host are available to the container. The container's hostname matches the hostname of the host system.
# docker run -it --net=host ubuntu:14.04 /bin/bash
root@labadmin-VirtualBox:/# hostname
labadmin-VirtualBox
Even the IP configuration is the same as the host system's IP configuration:
root@labadmin-VirtualBox:/# ip addr | grep -A 2 eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:b5:82:2f brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global eth0
valid_lft forever preferred_lft forever
3: lxcbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
root@labadmin-VirtualBox:/# exit
exit
HOST SYSTEM IP CONFIGURATION
# ip addr | grep -A 2 eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:b5:82:2f brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global eth0
valid_lft forever preferred_lft forever
3: lxcbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
Refer to this for more information about Docker networking.
Can you run "sudo ifconfig" and see whether the IP range of your internet connection (typically wlan0) collides with the range of the docker0 interface, 172.17.0.0?
I had this issue with my office network (while it worked fine at home): the office network ran on 172.17.0.x, and Docker tried to pick exactly that range.
This might be of help: http://jpetazzo.github.io/2013/10/16/configure-docker-bridge-network/
I ended up creating my own bridge network for Docker.
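For example, a minimal sketch (the name and subnet here are arbitrary, picked to avoid the collision):
$ docker network create --subnet 192.168.77.0/24 mybridge
$ docker run --rm -it --net=mybridge ubuntu:14.04 ping -c 3 8.8.8.8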
Check that net.ipv4.conf.all.forwarding (not net.ipv4.ip_forward) is set to 1; if not, turn it on:
$ sysctl net.ipv4.conf.all.forwarding
net.ipv4.conf.all.forwarding = 0
$ sysctl net.ipv4.conf.all.forwarding=1
$ sysctl net.ipv4.conf.all.forwarding
net.ipv4.conf.all.forwarding = 1
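To make the setting survive a reboot (standard sysctl practice, not part of the original answer):
$ echo 'net.ipv4.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.conf
$ sudo sysctl -p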

SmartOS configuring zone networking issue - no connectivity

I am experimenting a bit with SmartOS on a spare dedicated server.
I have 2 IP addresses on the server,
for example 1.1.1.1 and 2.2.2.2 (they are not in the same range).
The global zone is configured to use the IP 1.1.1.1.
Here is the configuration of my global zone:
[root@global ~]# dladm show-link
LINK CLASS MTU STATE BRIDGE OVER
igb0 phys 1500 up -- --
igb1 phys 1500 up -- --
net0 vnic 1500 ? -- igb0
[root@global ~]# dladm show-phys
LINK MEDIA STATE SPEED DUPLEX DEVICE
igb0 Ethernet up 1000 full igb0
igb1 Ethernet up 1000 full igb1
[root@global ~]# ifconfig
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
igb0: flags=1004843<UP,BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 1500 index 2
inet 1.1.1.1 netmask ffffff00 broadcast 1.1.1.255
ether c:c4:7a:2:xx:xx
igb1: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
inet 0.0.0.0 netmask 0
ether c:c4:7a:2:xx:xx
lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
inet6 ::1/128
I configured my zone the following way:
[root@global ~]# vmadm get 84c201d4-806c-4677-97f9-bc6da7ad9375 | json nics
[
{
"interface": "net0",
"mac": "02:00:00:78:xx:xx",
"nic_tag": "admin",
"gateway": "2.2.2.254",
"ip": "2.2.2.2",
"netmask": "255.255.255.0",
"primary": true
}
]
[root@84c201d4-806c-4677-97f9-bc6da7ad9375 ~]# ifconfig
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
net0: flags=40001000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,L3PROTECT> mtu 1500 index 2
inet 2.2.2.2 netmask ffffff00 broadcast 2.2.2.255
ether 2:0:0:78:xx:xx
lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
inet6 ::1/128
[root@84c201d4-806c-4677-97f9-bc6da7ad9375 ~]# dladm show-link
LINK CLASS MTU STATE BRIDGE OVER
net0 vnic 1500 up -- ?
[root@84c201d4-806c-4677-97f9-bc6da7ad9375 ~]# netstat -rn
Routing Table: IPv4
Destination Gateway Flags Ref Use Interface
-------------------- -------------------- ----- ----- ---------- ---------
default 87.98.252.254 UG 2 47 net0
87.98.252.0 87.98.252.162 U 4 23 net0
127.0.0.1 127.0.0.1 UH 2 0 lo0
However, I have no connectivity to the internet in my zone.
Is there anything misconfigured?
It sounds like you want to pass your second real IP through to the guest zone.
According to the wiki, you should configure a NIC tag for the second NIC (igb1) and use it in your guest zone, as sketched below.
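A minimal sketch of that, assuming the tag name "external" is arbitrary and reusing the (anonymized) MACs from the output above:
[root@global ~]# nictagadm add external c:c4:7a:2:xx:xx
[root@global ~]# echo '{"update_nics": [{"mac": "02:00:00:78:xx:xx", "nic_tag": "external"}]}' | vmadm update 84c201d4-806c-4677-97f9-bc6da7ad9375
[root@global ~]# vmadm reboot 84c201d4-806c-4677-97f9-bc6da7ad9375
The first command tags igb1 (by its MAC) with a new NIC tag; the update then moves the zone's vnic from the admin tag onto that uplink.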
