SmartOS zone networking configuration issue - no connectivity

I am experimenting a bit with SmartOS on a spare dedicated server.
I have 2 IP addresses on the server,
for example 1.1.1.1 and 2.2.2.2 (they are not in the same range).
I configured my global zone to use the IP 1.1.1.1.
Here is the configuration of my global zone:
[root@global ~]# dladm show-link
LINK CLASS MTU STATE BRIDGE OVER
igb0 phys 1500 up -- --
igb1 phys 1500 up -- --
net0 vnic 1500 ? -- igb0
[root@global ~]# dladm show-phys
LINK MEDIA STATE SPEED DUPLEX DEVICE
igb0 Ethernet up 1000 full igb0
igb1 Ethernet up 1000 full igb1
[root@global ~]# ifconfig
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
igb0: flags=1004843<UP,BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 1500 index 2
inet 1.1.1.1 netmask ffffff00 broadcast 1.1.1.255
ether c:c4:7a:2:xx:xx
igb1: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
inet 0.0.0.0 netmask 0
ether c:c4:7a:2:xx:xx
lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
inet6 ::1/128
I configured my zone the following way:
[root@global ~]# vmadm get 84c201d4-806c-4677-97f9-bc6da7ad9375 | json nics
[
{
"interface": "net0",
"mac": "02:00:00:78:xx:xx",
"nic_tag": "admin",
"gateway": "2.2.2.254",
"ip": "2.2.2.2",
"netmask": "255.255.255.0",
"primary": true
}
]
[root@84c201d4-806c-4677-97f9-bc6da7ad9375 ~]# ifconfig
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
net0: flags=40001000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,L3PROTECT> mtu 1500 index 2
inet 2.2.2.2 netmask ffffff00 broadcast 2.2.2.255
ether 2:0:0:78:xx:xx
lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
inet6 ::1/128
[root@84c201d4-806c-4677-97f9-bc6da7ad9375 ~]# dladm show-link
LINK CLASS MTU STATE BRIDGE OVER
net0 vnic 1500 up -- ?
[root@84c201d4-806c-4677-97f9-bc6da7ad9375 ~]# netstat -rn
Routing Table: IPv4
Destination Gateway Flags Ref Use Interface
-------------------- -------------------- ----- ----- ---------- ---------
default 2.2.2.254 UG 2 47 net0
2.2.2.0 2.2.2.2 U 4 23 net0
127.0.0.1 127.0.0.1 UH 2 0 lo0
However, I have no connectivity to the internet in my zone.
Is there anything misconfigured?

It sounds like you want to pass your second public IP through to the guest zone.
According to the SmartOS wiki, you should configure a NIC tag for the second NIC (igb1) and use that tag in your guest zone, roughly as sketched below.
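A rough sketch of what that could look like from the global zone (the tag name external is just an example, and <MAC-of-igb1> is whatever dladm show-phys -m reports for igb1):
nictagadm add external <MAC-of-igb1>
vmadm update 84c201d4-806c-4677-97f9-bc6da7ad9375 <<EOF
{
  "update_nics": [
    { "mac": "02:00:00:78:xx:xx", "nic_tag": "external" }
  ]
}
EOF
vmadm reboot 84c201d4-806c-4677-97f9-bc6da7ad9375
After the reboot, the zone's net0 VNIC should be created over igb1 instead of igb0, so traffic for 2.2.2.2 leaves through the NIC your provider expects.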

Related

Nginx different IPs, same port - bind() fail

I'm trying to serve 2 different frontends on the same port 443 but with different IPs. However, nginx -t fails with nginx: [emerg] bind() to 10.10.1.1:443 failed (99: Cannot assign requested address). Here are my confs:
Conf 1:
server {
listen 10.10.0.1:443 ssl http2;
}
Conf 2:
server {
listen 10.10.1.1:443 ssl http2;
}
Port 443 is not open by any other process; netstat -tulpn | grep :443 gives nothing. I assume the second bind fails after the first block is bound. For example, if I change the second block to listen 133.10.1.1:443, I get no errors.
There are no default configs in my sites-enabled folder.
Please help sort it out =)
Upd:
# cat /etc/hosts
127.0.1.1 serv serv
127.0.0.1 localhost
#Custom
10.10.0.1 main.site
10.10.1.1 test.site
Upd:
# ip addr sh
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
inet *<external IP>*/20 brd *<external IP>* scope global eth0
valid_lft forever preferred_lft forever
inet 10.24.0.5/16 brd 10.24.255.255 scope global eth0
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
inet 10.150.0.2/16 brd 10.150.255.255 scope global eth1
valid_lft forever preferred_lft forever
4: int0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000
link/none
inet 10.10.0.1/16 scope global int0
valid_lft forever preferred_lft forever
10.10.0.0 is the tunnel network; server conf 1 works perfectly on 10.10.0.1 without conf 2 enabled.
Upd:
# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 <external IP> 0.0.0.0 UG 0 0 0 eth0
10.10.0.0 0.0.0.0 255.255.0.0 U 0 0 0 int0
10.24.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
10.150.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth1
<external IP> 0.0.0.0 255.255.240.0 U 0 0 0 eth0
This configuration produces the error; however, adding a separate 10.10.1.1/24 address to int0 (as opposed to just 10.10.0.1/16) solved the issue, like so:
# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 <external IP> 0.0.0.0 UG 0 0 0 eth0
10.10.0.0 0.0.0.0 255.255.255.0 U 0 0 0 int0
10.10.1.0 0.0.0.0 255.255.255.0 U 0 0 0 int0
10.24.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
10.150.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth1
<external IP> 0.0.0.0 255.255.240.0 U 0 0 0 eth0
Everything works fine now.
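For reference, the address change that produced this second routing table was presumably something along these lines (a sketch only; int0 is the tunnel interface from the ip addr output above):
ip addr del 10.10.0.1/16 dev int0
ip addr add 10.10.0.1/24 dev int0
ip addr add 10.10.1.1/24 dev int0
The underlying rule is that nginx can only bind() to an address that actually exists on some interface (unless net.ipv4.ip_nonlocal_bind is enabled), which is why the listen on 10.10.1.1:443 failed before that address was configured.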

How does Kubernetes assign an IP to fieldPath: status.hostIP on a host with multiple interfaces and IPs

The title says it all; how does Kubernetes assign an IP to fieldPath: status.hostIP on a host with multiple interfaces and IPs.
If My node has the following IPs
# ip a | grep "inet "
inet 127.0.0.1/8 scope host lo
inet 10.68.48.206/22 brd 10.68.51.255 scope global virbr0
inet 253.255.0.35/24 brd 253.255.0.255 scope global bond0.3900
inet 10.244.2.0/32 scope global flannel.1
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
Kube picks 10.68.48.206 when I want it to pick 253.255.0.35, so how does it decide?
Is it based off of DNS hostname resolution?
nslookup ca-rain03
Server: 10.68.50.60
Address: 10.68.50.60#53
Or default route?
# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.68.48.1 0.0.0.0 UG 0 0 0 virbr0
10.0.0.0 10.68.48.1 255.0.0.0 UG 0 0 0 virbr0
10.68.48.0 0.0.0.0 255.255.252.0 U 0 0 0 virbr0
10.244.0.0 10.244.0.0 255.255.255.0 UG 0 0 0 flannel.1
10.244.1.0 10.244.1.0 255.255.255.0 UG 0 0 0 flannel.1
169.254.0.0 0.0.0.0 255.255.0.0 U 1007 0 0 bond0
169.254.0.0 0.0.0.0 255.255.0.0 U 1045 0 0 bond0.3900
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
253.255.0.0 0.0.0.0 255.255.255.0 U 0 0 0 bond0.3900
Or something else? How can I pass the host IP of 253.255.0.35 into a pod?
Thanks
It's really picked up from the kubelet configuration. For example, on pretty much all *nix systems the kubelet is managed by systemd, so you can see it like this 👀:
systemctl cat kubelet
# Warning: kubelet.service changed on disk, the version systemd has loaded is outdated.
# This output shows the current version of the unit's original fragment and drop-in files.
# If fragments or drop-ins were added or removed, they are not properly reflected in this output.
# Run 'systemctl daemon-reload' to reload units.
# /lib/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=http://kubernetes.io/docs/
[Service]
ExecStart=/var/lib/minikube/binaries/v1.18.3/kubelet
Restart=always
StartLimitInterval=0
# Tuned for local dev: faster than upstream default (10s), but slower than systemd default (100ms)
RestartSec=600ms
[Install]
WantedBy=multi-user.target
# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.0.2 👈
[Install]
You can see the node IP is identified with the --node-ip=172.17.0.2 kubelet option. 💡
✌️☮️
OK, so there must have been something weird in my k8s config. It is working as expected now, and status.hostIP is returning the correct IP.
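For reference, once the kubelet advertises the address you want (for example via --node-ip), a pod reads it through the downward API. A minimal sketch, with an illustrative pod and variable name:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostip-demo
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP && sleep 3600"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
EOF
kubectl logs hostip-demo should then print whatever node address the kubelet registered.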

Using SNMP retrieve IP and MAC addresses of directly connected machines to a SNMP Device

How do I get the IP and MAC addresses of the machines connected to an SNMP device?
The ARP cache is not giving correct details.
An example with Linux shell commands (there was no tag for other languages or Windows at the time of writing).
Provided that the machine you want to query runs an SNMP daemon (generally snmpd from Net-SNMP under Linux) and that you know how, and are allowed, to talk to it (version 1, 2c or 3, with various community names or usernames/passwords/encryption for v3), you may issue the following SNMP requests.
For the test I started a snmpd on a CentOS 7 virtual machine whose main address was 192.168.174.128.
I chose port 1610 over the traditional 161 so as not to need sudo or setcap on snmpd. The contents of the snmpd.conf file are outside the scope of this question.
The first command, for IPs:
snmptable -v 2c -c private 192.168.174.128:1610 ipAddrTable
SNMP table: IP-MIB::ipAddrTable
ipAdEntAddr ipAdEntIfIndex ipAdEntNetMask ipAdEntBcastAddr ipAdEntReasmMaxSize
127.0.0.1 1 255.0.0.0 0 ?
192.168.122.1 3 255.255.255.0 1 ?
192.168.174.128 2 255.255.255.0 1 ?
The second command (with only 3 columns printed), for MAC addresses:
snmptable -v 2c -c private 192.168.174.128:1610 ifTable | awk -c '{print $1 "\t" $2 "\t\t" $6}'
SNMP table:
ifIndex ifDescr ifPhysAddress
1 lo up
2 ens33 0:c:29:53:aa:c6
3 virbr0 52:54:0:e6:6b:2f
4 virbr0-nic 52:54:0:e6:6b:2f
When we check on the CentOS 7 machine itself we get:
ifconfig
ens33: ... mtu 1500
inet 192.168.174.128 netmask 255.255.255.0 broadcast 192.168.174.255
inet6 ...
ether 00:0c:29:53:aa:c6 ...
...
lo: ... mtu 65536
inet 127.0.0.1
...
virbr0: ... mtu 1500
inet 192.168.122.1 netmask 255.255.255.0 broadcast 192.168.122.255
ether 52:54:00:e6:6b:2f ...
...
Bonus shell command:
snmptranslate -Oaf IF-MIB::ifTable
.iso.org.dod.internet.mgmt.mib-2.interfaces.ifTable
and
snmptranslate -Oaf IP-MIB::ipAddrTable
.iso.org.dod.internet.mgmt.mib-2.ip.ipAddrTable
I do not know why there is not (or whether there is) a single table holding both pieces of information.
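On that last point: the table that correlates IP and MAC addresses of neighbouring machines (as opposed to the device's own interfaces) is IP-MIB::ipNetToMediaTable, which is essentially the device's ARP cache exposed over SNMP, so it may suffer from the same staleness mentioned in the question. Against the same test daemon it would be queried as:
snmptable -v 2c -c private 192.168.174.128:1610 ipNetToMediaTable
Its columns include ipNetToMediaNetAddress (the neighbour's IP) and ipNetToMediaPhysAddress (the neighbour's MAC).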

Network unreachable inside docker container without --net=host parameter

Problem: there is no internet connection in the docker container.
Symptoms: ping 8.8.8.8 doesn't work. Wireshark on the host system shows:
19 10.866212113 172.17.0.2 -> 8.8.8.8 ICMP 98 Echo (ping) request id=0x0009, seq=0/0, ttl=64
20 11.867231972 172.17.0.2 -> 8.8.8.8 ICMP 98 Echo (ping) request id=0x0009, seq=1/256, ttl=64
21 12.868331353 172.17.0.2 -> 8.8.8.8 ICMP 98 Echo (ping) request id=0x0009, seq=2/512, ttl=64
22 13.869400083 172.17.0.2 -> 8.8.8.8 ICMP 98 Echo (ping) request id=0x0009, seq=3/768, ttl=64
But! If the container is started with --net=host, the internet works perfectly.
What I've tried so far:
altering DNS
adding --ip-masq=true to /etc/default/docker (with restart off)
enabling everything related to masquerade / ip_forward
altering default route
everything suggested here
Host config:
$ sudo route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 10.4.2.1 0.0.0.0 UG 0 0 0 eno1.3001
default 10.3.2.1 0.0.0.0 UG 100 0 0 eno2
10.3.2.0 * 255.255.254.0 U 100 0 0 eno2
10.4.2.0 * 255.255.254.0 U 0 0 0 eno1.3001
nerv8.i 10.3.2.1 255.255.255.255 UGH 100 0 0 eno2
172.17.0.0 * 255.255.0.0 U 0 0 0 docker0
The output of sudo iptables -L, cat /etc/network/interfaces, ifconfig, and iptables -t nat -L -nv all looks fine; forwarding is also enabled:
$ sudo sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1
This is not the full answer you are looking for, but I would like to give some explanation of why, as you noted, the internet works perfectly if the container is started with --net=host.
Docker by default supports three networks. In this mode (host), the container shares the host's network stack, and all interfaces from the host are available to the container. The container's hostname will match the hostname of the host system.
# docker run -it --net=host ubuntu:14.04 /bin/bash
root@labadmin-VirtualBox:/# hostname
labadmin-VirtualBox
Even the IP configuration is the same as the host system's IP configuration:
root@labadmin-VirtualBox:/# ip addr | grep -A 2 eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:b5:82:2f brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global eth0
valid_lft forever preferred_lft forever
3: lxcbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
root@labadmin-VirtualBox:/# exit
exit
HOST SYSTEM IP CONFIGURATION
# ip addr | grep -A 2 eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:b5:82:2f brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global eth0
valid_lft forever preferred_lft forever
3: lxcbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
Refer to this for more information about Docker networking.
Can you run "sudo ifconfig" and see if the range of IPs for your internet connection (typically wlan0) is colliding with the range of the docker0 interface, 172.17.0.0?
I had this issue with my office network (while it was working fine at home): it ran on 172.17.0.X and Docker tried to pick exactly that range.
This might be of help: http://jpetazzo.github.io/2013/10/16/configure-docker-bridge-network/
I ended up creating my own bridge network for Docker.
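If you end up doing the same, here is a sketch of the two usual approaches (the 172.30.0.0/16 subnet is only an example; pick one that does not collide with your LAN).
Option 1, a user-defined bridge used per container:
docker network create --subnet 172.30.0.0/16 mynet
docker run --rm --network mynet busybox ping -c 1 8.8.8.8
Option 2, moving the default docker0 bridge itself by setting bip in /etc/docker/daemon.json and restarting the daemon:
{
  "bip": "172.30.0.1/16"
}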
Check that net.ipv4.conf.all.forwarding (not net.ipv4.ip_forward) is set to 1; if not, turn it on:
$ sysctl net.ipv4.conf.all.forwarding
net.ipv4.conf.all.forwarding = 0
$ sysctl net.ipv4.conf.all.forwarding=1
$ sysctl net.ipv4.conf.all.forwarding
net.ipv4.conf.all.forwarding = 1
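To make that setting survive a reboot, a sketch (the file name is arbitrary):
echo 'net.ipv4.conf.all.forwarding = 1' | sudo tee /etc/sysctl.d/99-forwarding.conf
sudo sysctl --system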

why failed to forward public ip to docker NAT ip

For example, on the physical machine:
# ip addr
5: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 ..
inet 10.32.230.90/24 scope global eth2
valid_lft forever preferred_lft forever
inet 10.32.230.61/24 scope global secondary eth2
valid_lft forever preferred_lft forever
8: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
link/ether 02:42:65:1b:b0:25 brd ff:ff:ff:ff:ff:ff
inet 172.17.42.1/16 scope global docker0
"10.32.230.90" is the main IP of this machine, and "10.32.230.61" is secondary added with "ip addr add 10.32.230.61/24 dev eth2".
After creating a docker instance, with IP = 172.17.0.10, I add the following rules to connect native IP with secondary IP:
# iptables -A POSTROUTING -j MASQUERADE
# iptables -t nat -A PREROUTING -d 10.32.230.61 -j DNAT --to 172.17.0.10
# echo 1 > /proc/sys/net/ipv4/ip_forward
But it doesn't work: an external PC still cannot reach 10.32.230.61, though it can reach 10.32.230.90. What's the solution?
(From another PC whose IP is, for example, 10.32.230.95:)
# ping 10.32.230.90
PING 10.32.230.90 (10.32.230.90) 56(84) bytes of data.
64 bytes from 10.32.230.90: icmp_seq=1 ttl=52 time=280 ms
64 bytes from 10.32.230.90: icmp_seq=2 ttl=52 time=336 ms
^C
# ping 10.32.230.61
(Timeout..)
I am sure that there is no IP conflict: 10.32.230.61 is not used by any other host.
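For comparison, a sketch of how such a rule set usually looks (assuming eth2 is the interface the external PCs reach you on); note in particular that the MASQUERADE rule has to live in the nat table, which the plain -A POSTROUTING command above does not specify:
iptables -t nat -A PREROUTING -d 10.32.230.61 -j DNAT --to-destination 172.17.0.10
iptables -t nat -A POSTROUTING -s 172.17.0.10 -o eth2 -j MASQUERADE
echo 1 > /proc/sys/net/ipv4/ip_forward
The FORWARD chain policy and Docker's own iptables rules also have to permit the traffic.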
